Article

Computing the Moments of the Complex Gaussian: Full and Sparse Covariance Matrix

by
Claudia Fassino
1,*,†,
Giovanni Pistone
2,† and
Maria Piera Rogantin
1,†
1
Department of Mathematics, University of Genova, 16146 Genova, Italy
2
Department de Castro Statistics, Collegio Carlo Alberto, 10122 Torino, Italy
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2019, 7(3), 263; https://doi.org/10.3390/math7030263
Submission received: 11 January 2019 / Revised: 5 March 2019 / Accepted: 8 March 2019 / Published: 14 March 2019

Abstract:
Given a multivariate complex centered Gaussian vector $Z = (Z_1, \dots, Z_p)$ with non-singular covariance matrix $\Sigma$, we derive sufficient conditions for the nullity of the complex moments and we give a closed-form expression for the non-null complex moments. We present conditions for the factorisation of the complex moments. Computational consequences of these results are discussed.
MSC:
15A15; 60A10; 68W30

1. Introduction

The p-variate Complex Gaussian Distribution (CGD) is defined in [1] as the image under a complex affine transformation $z \mapsto \mu + Az$ of a standard CGD. In the Cartesian representation this class of distributions is a proper subset of the general $2p$-variate Gaussian distributions, and its elements are also called rotationally invariant Gaussian distributions. Statistical properties and applications are discussed in [2].
As in the real case, CGD's are characterised by a complex covariance matrix $\Sigma = \mathbb{E}[ZZ^*]$, which is a Hermitian operator, $\Sigma^* = \Sigma$. In some cases, we assume $\Sigma$ to be positive definite, $z^*\Sigma z > 0$ if $z \neq 0$; this assumption is equivalent to the existence of a density. We assume zero mean everywhere and we use the notation $\mathrm{CN}_p(\Sigma)$.
When the complex field $\mathbb{C}$ is identified with $\mathbb{R}^2$ it becomes an $\mathbb{R}$-vector space, and real monomials must be replaced by complex monomials. The object of this paper is the computation of the moments of a CGD, i.e., the expected values of the complex monomials $\prod_{j=1}^p z_j^{n_j}\bar z_j^{m_j}$ under the distribution $\mathrm{CN}_p(\Sigma)$.
The computation of Gaussian moments is a classical subject that relies on a result usually called Wick’s (or Isserlis’) theorem, see ([3], Ch. 1). The real and the complex cases are similar, but the complex case has a peculiar combinatorics. Actually, from many perspectives, the complex case is simpler, as observed in Section 2.3.
The paper is organised as follows. In Section 2 we offer a concise but complete overview of the basic results concerning the CGD. In particular, we give a proof of Wick's theorem based on a version of the Faà di Bruno formula. In Section 3 we present recurrence relations for the moments and apply them to derive a new closed-form equation for the moments. Other results are sufficient conditions for the nullity of a moment, a feature in which the complex case differs from the real case. The presentation is supported by a small running example. In Section 4 we present conditions on the moment of interest and on the sparsity of the correlation matrix that ensure the factorisation of that moment into a product of lower-degree moments. Both the closed form of the moment formula and the factorisation are expected to produce computational algorithms of interest. This issue is discussed in the final Section 5. The standard version of Wick's theorem reduces the computation of the moments to the computation of the permanent of a matrix, for which optimised algorithms have been designed. We have not performed this optimisation for the algorithms suggested by our results, so a full comparison is not yet possible. Some technical combinatorial computations are presented in Appendix A.

2. The Multivariate Complex Gaussian Distribution and Its Moments

The identification $\mathbb{C} \ni z = x + iy \leftrightarrow (x, y) \in \mathbb{R}^2$ turns $\mathbb{C}$ into a 2-dimensional real vector space with inner product $\langle z_1, z_2 \rangle = z_1 \bar z_2$. The image of the Lebesgue measure $dx\,dy$ is denoted $dz$.
If $\mathbb{C}$ is seen as a real vector space of dimension 2, then the space of linear operators has dimension 4. It is easy to verify that a generic linear operator $A$ has the form
$$A : z \mapsto \alpha z + \beta \bar z, \qquad \alpha, \beta \in \mathbb{C}.$$
Notice that the linear operator $z \mapsto \alpha z$ is just a special case. A generic $\mathbb{R}$-multilinear operator is of the form
$$(z_1, \dots, z_k) \mapsto \sum_{L \subseteq \{1, \dots, k\}} \alpha_L \prod_{i \in L} z_i \prod_{j \in \{1, \dots, k\} \setminus L} \bar z_j.$$
A general complex monomial $\prod_{j=1}^p z_j^{n_j}\bar z_j^{m_j}$ is obtained from a suitable multilinear form by identifying some of the variables, e.g., $z_1 \bar z_1 z_2 = t_1 \bar t_2 t_3$, with $t_1 = t_2 = z_1$, $t_3 = z_2$ and $L = \{1, 3\}$.
The set of p-variate complex monomials characterises probability distributions on $\mathbb{C}^p$.

2.1. Calculus on C

If $f : \mathbb{C} \to \mathbb{C}$ is differentiable, the derivative at $z$ in the direction $h$ is expressed in the form in Equation (1) as
$$df(z)[h] = D^- f(z)\, h + D^+ f(z)\, \bar h$$
and the derivative operators $D^-$ and $D^+$ are related to the Cartesian derivatives by the equations
$$D_j^- = \frac12\left(\frac{\partial}{\partial x_j} - i\,\frac{\partial}{\partial y_j}\right) \quad\text{and}\quad D_j^+ = \frac12\left(\frac{\partial}{\partial x_j} + i\,\frac{\partial}{\partial y_j}\right), \qquad j = 1, \dots, k.$$
The operators appearing in the equation above are sometimes called Wirtinger derivatives and denoted by $D^- = \partial/\partial z$ and $D^+ = \partial/\partial \bar z$. The Wirtinger derivatives act on complex monomials as follows:
$$D^- z^n \bar z^m = n\, z^{n-1} \bar z^m, \qquad D^+ z^n \bar z^m = m\, z^n \bar z^{m-1}.$$
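These rules are easy to verify numerically. The sketch below (our own, with an ad hoc step size and tolerance) approximates the Wirtinger derivatives by central differences in the Cartesian coordinates and checks them on $f(z) = z^2 \bar z$, for which $D^- f = 2 z \bar z$ and $D^+ f = z^2$.

```python
# Finite-difference check of the Wirtinger rules on f(z) = z^2 * conj(z):
# D- f = 2 z conj(z) and D+ f = z^2. Illustrative sketch only.
def f(z):
    return z * z * z.conjugate()

def wirtinger(f, z, h=1e-6):
    """Approximate (D- f, D+ f) at z via central differences in x and y."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # d/dx
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # d/dy
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)

z = 1 + 1j
dm, dp = wirtinger(f, z)
assert abs(dm - 2 * z * z.conjugate()) < 1e-6   # D- f = 2 z conj(z)
assert abs(dp - z * z) < 1e-6                   # D+ f = z^2
```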
If $f$ is $k$-times differentiable, the $k$-th derivative at $z$ in the directions $h_1, \dots, h_k$ is
$$d^k f(z)[h_1 \cdots h_k] = \sum_{c \in \{-, +\}^k} \prod_{j=1}^k D^{c(j)} f(z) \prod_{j=1}^k h_j^{c(j)},$$
where $h_j^- = h_j$ and $h_j^+ = \bar h_j$.
We are going to use the following form of the Faà di Bruno formula, see [4].
Proposition 1.
Let f : C C and g : R h C . For each set of commuting derivative operators D 1 , , D k ,
$$D_1 \cdots D_k\,(f \circ g)(x) = \sum_{\pi \in \Pi(k)} (d^{\#\pi} f) \circ g(x) \prod_{B \in \pi} D_B\, g(x),$$
where $\Pi(k)$ is the set of partitions of $\{1, \dots, k\}$, $d^{\#\pi}$ is defined in Equation (4) and $D_B = \prod_{b \in B} D_b$.
Proof. 
The proof easily follows by induction on $k$, using the fact that each partition of $\{1, \dots, k\}$ is derived from a partition of $\{1, \dots, k-1\}$ either by adding the singleton $\{k\}$ as a new block or by adding $k$ to one of the blocks of the $(k-1)$-partition. ☐
Remark 1.
The formula applies either when each derivation is a different partial derivation D j = x j or when D i = D j , for some i , j . In case of repeated derivations, some terms in the RHS appear more than once. If equal factors are collected, combinatorial counts appear. In the following, first we use the basic formula, then we switch, in the general case, to a different approach, based on the explicit use of recursion formulæ instead of closed form equations.

2.2. Complex Gaussian

The p-variate Complex Gaussian Distribution (CGD) is, when expressed in the real space R 2 p , a special case of a 2 p -variate Gaussian Distribution. The definition is given below.
The univariate CGD is the distribution of a complex random variable $Z = X + iY$ where the real and imaginary parts form a couple of independent, identically distributed, centered Gaussians $X$ and $Y$. As $x^2 + y^2 = z \bar z$, the density of $Z$ is $\varphi(z) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{1}{2\sigma^2} z \bar z\right)$. The complex variance is $\gamma = \mathbb{E}[Z \bar Z] = 2\sigma^2$ and we write $Z \sim \mathrm{CN}_1(\gamma)$. Notice that the standard Complex Gaussian has $\gamma = 1$, that is, $\sigma^2 = 1/2$. The complex moments of $U \sim \mathrm{CN}_1(1)$ are
$$\mathbb{E}[U^n \bar U^m] = \frac{1}{\pi} \int_{\mathbb{C}} u^n \bar u^m \exp(-u \bar u)\,du = \frac{1}{\pi} \int_0^\infty \rho^{n+m+1} e^{-\rho^2}\,d\rho \int_0^{2\pi} e^{i(n-m)\theta}\,d\theta,$$
hence 0 if $n \neq m$, otherwise
$$\mathbb{E}[U^n \bar U^n] = \mathbb{E}[|U|^{2n}] = 2 \int_0^\infty \rho^{2n+1} e^{-\rho^2}\,d\rho = n!.$$
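These univariate moments are easy to check numerically. The following Monte Carlo sketch (our own, with an arbitrary seed, sample size and tolerance) estimates $\mathbb{E}[|U|^{2n}]$ for a standard complex Gaussian and compares it with $n!$.

```python
# Monte Carlo check of E[U^n conj(U)^n] = n! for U ~ CN_1(1),
# i.e. U = X + iY with X, Y independent N(0, 1/2). Illustrative only.
import math
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
u = rng.normal(0.0, math.sqrt(0.5), N) + 1j * rng.normal(0.0, math.sqrt(0.5), N)

for n in range(4):
    est = np.mean(np.abs(u) ** (2 * n))    # estimate of E[|U|^{2n}]
    assert abs(est - math.factorial(n)) < 0.2, (n, est)
```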
If $Z \sim \mathrm{CN}_1(\gamma)$ and $\alpha \in \mathbb{C}$, then it is easy to prove that $\alpha Z \sim \mathrm{CN}_1(\alpha \bar\alpha \gamma)$. The univariate complex Gaussian is sometimes called "circularly symmetric" because $e^{i\theta} Z \sim Z$. Moreover, $\bar Z \sim Z$.
Consider $d$ independent standard Complex Gaussian random variables $U_j$, $j = 1, \dots, d$, and let $C = [c_{ij}]$ be a $p \times d$ complex matrix. As in the real case, the distribution of $Z = CU$, $U = (U_1, \dots, U_d)$, is a multivariate Complex Gaussian $Z \sim \mathrm{CN}_p(\Sigma)$, with covariance matrix $\Sigma = CC^*$. In the special case of a non-singular covariance matrix $\Sigma$, the density exists and is obtained by performing the change of variable $V = \Sigma^{-1/2} Z \sim \mathrm{CN}_p(I)$, to get
$$\varphi(z; \Sigma) = \frac{1}{\pi^p \det\Sigma} \exp(-z^* \Sigma^{-1} z),$$
see [2].
Our aim is to discuss various methods to compute the value of a given complex moment of a normal random vector Z C N p Σ namely,
ν ( n 1 , , n p ; m 1 , , m p ) = E Z 1 n 1 Z p n p Z ¯ 1 m 1 Z ¯ p m p ,
where n 1 , , m p are non-negative integers. In the case of independent standard CG’s, i.e., identity covariance matrix, we have
E U 1 n 1 U p n p U ¯ 1 m 1 U ¯ p m p = E U 1 n 1 U ¯ 1 m 1 E U p n p U ¯ p m p
which is zero unless $m_j = n_j$ for all $j = 1, \dots, p$. In the general case, each component $Z_j$ is a $\mathbb{C}$-linear combination of the independent standard $U_j$'s, so that each moment is the result of $\mathbb{C}$-multilinear and $\mathbb{C}$-anti-multilinear computations. The result will be a sum of products; hence, one should expect numerically hard computations.
Various approaches are available and their respective merits depend largely on the setting of the problem: number of variables, total degree of the complex monomial, sparsity of the covariance matrix. All these issues will be discussed in the following.

2.3. Wick’s Theorem

Let us first consider the case where all the exponents in the complex monomial are 1. The general case follows from this one by taking some of the variables equal. The unit-exponent case is solved by the classical Wick's theorem (or Isserlis' theorem). In the real case, see for example ([3], Th. 1.28), if $X_1, \dots, X_q$ is a centered (real) Gaussian vector with covariance matrix $\Sigma = [\sigma_{ij}]_{i,j=1}^q$ and $q$ is even, then
$$\mathbb{E}[X_1 \cdots X_q] = \sum_{\pi \in P(q)} \prod_{\{i,j\} \in \pi} \mathbb{E}[X_i X_j] = \sum_{\pi \in P(q)} \prod_{\{i,j\} \in \pi} \sigma_{ij},$$
where $P(q)$ is the set of all partitions of $\{1, \dots, q\}$ into subsets of two elements. If $q$ is odd, then $\mathbb{E}[X_1 \cdots X_q] = 0$.
The moment can be zero even in special cases depending on the sparsity of the covariance matrix. For example,
E ( X 1 X 2 X 3 X 4 ) = E ( X 1 X 2 ) E ( X 3 X 4 ) + E ( X 1 X 3 ) E ( X 2 X 4 ) + E ( X 1 X 4 ) E ( X 2 X 3 ) ,
and, if $\mathbb{E}(X_1 X_2) = \mathbb{E}(X_2 X_4) = \mathbb{E}(X_1 X_4) = 0$, then $\mathbb{E}(X_1 X_2 X_3 X_4) = 0$, even if the variables are not independent.
A similar equation applies to the complex case, see e.g., ([5], Lemma 4.5). For the sake of completeness, we give here a proof based on the Faà di Bruno formula.
Let us recall that the permanent of the $q \times q$ complex matrix $A = [a_{i,j}]_{i,j=1}^q$ is
$$\operatorname{per} A = \sum_{\pi \in S(q)} \prod_{i=1}^q a_{i, \pi(i)},$$
where S ( q ) is the symmetric group of permutations on 1 , , q . The properties of the permanent are discussed in [6].
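For the small matrices that appear in the examples of this paper, the permanent can be computed directly from this definition. The brute-force sketch below (our own helper, $O(q! \cdot q)$) is purely illustrative; the optimised algorithms mentioned in Section 5 should be preferred in practice.

```python
# Permanent of a square matrix by the defining sum over permutations.
# Brute force, O(q! * q): fine for illustration, not for large q.
from itertools import permutations

def permanent(A):
    """per A = sum over permutations pi of prod_i A[i][pi[i]]."""
    q = len(A)
    total = 0
    for pi in permutations(range(q)):
        term = 1
        for i in range(q):
            term *= A[i][pi[i]]
        total += term
    return total

assert permanent([[1, 2], [3, 4]]) == 10   # 1*4 + 2*3 (determinant is -2)
```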
Theorem 1 (Wick’s theorem).
Given a $2q$-variate complex Gaussian $(Z_1, \dots, Z_q, T_1, \dots, T_q)$, then
$$\mathbb{E}[Z_1 \cdots Z_q\, \bar T_1 \cdots \bar T_q] = \operatorname{per} \Sigma, \qquad \Sigma = [\mathbb{E}[Z_i \bar T_j]]_{i,j=1}^q.$$
In particular,
$$\mathbb{E}[Z_1 \cdots Z_q\, \bar Z_1 \cdots \bar Z_q] = \operatorname{per} \Sigma, \qquad \Sigma = [\mathbb{E}[Z_i \bar Z_j]]_{i,j=1}^q.$$
Proof. 
Let us consider first the case where $Z = Z_1 = \cdots = Z_q$ and $T = T_1 = \cdots = T_q$. There are two standard independent $U_1, U_2 \sim \mathrm{CN}_1(1)$ such that $Z = c_{11} U_1 + c_{12} U_2$ and $T = c_{21} U_1 + c_{22} U_2$. From straightforward algebra, the independence of $U_1, U_2$ and Equation (6), we get
$$\mathbb{E}[Z^q \bar T^q] = q!\,(\mathbb{E}[Z \bar T])^q.$$
Second, we apply a typical Gaussian argument. For each real $\lambda = (\lambda_1, \dots, \lambda_q)$ and $\mu = (\mu_1, \dots, \mu_q)$, we define the jointly complex Gaussian random variables $Z = \lambda_1 Z_1 + \cdots + \lambda_q Z_q$ and $T = \mu_1 T_1 + \cdots + \mu_q T_q$ to get
$$\mathbb{E}\left[(\lambda_1 Z_1 + \cdots + \lambda_q Z_q)^q (\mu_1 \bar T_1 + \cdots + \mu_q \bar T_q)^q\right] = q!\,(\mathbb{E}[Z \bar T])^q = q!\,(\lambda^t \Sigma \mu)^q, \qquad \Sigma = [\mathbb{E}[Z_i \bar T_j]]_{i,j=1}^q.$$
The LHS $\tilde\nu(\lambda; \mu)$ is a homogeneous polynomial in $\lambda$ and $\mu$, namely,
$$\tilde\nu(\lambda; \mu) = (q!)^2 \sum_{\substack{a_1 + \cdots + a_q = q \\ b_1 + \cdots + b_q = q}} \mathbb{E}\left[Z_1^{a_1} \cdots Z_q^{a_q}\, \bar T_1^{b_1} \cdots \bar T_q^{b_q}\right] \frac{\lambda_1^{a_1} \cdots \lambda_q^{a_q}\, \mu_1^{b_1} \cdots \mu_q^{b_q}}{a_1! \cdots a_q!\, b_1! \cdots b_q!}.$$
We have proved that for exponents such that $a_1 + \cdots + a_q = b_1 + \cdots + b_q = q$ it holds that
$$q!\,\mathbb{E}\left[Z_1^{a_1} \cdots Z_q^{a_q}\, \bar T_1^{b_1} \cdots \bar T_q^{b_q}\right] = \frac{\partial^{2q}}{\partial\lambda_1^{a_1} \cdots \partial\lambda_q^{a_q}\, \partial\mu_1^{b_1} \cdots \partial\mu_q^{b_q}}\,(\lambda^t \Sigma \mu)^q.$$
Finally, we can use the Faà di Bruno formula to compute the derivative in the RHS. Assume all exponents are equal to 1, $1 = a_1 = \cdots = b_q$. The $k$-th derivative of the power $z \mapsto z^q$ is $q!\, z^{q-k}/(q-k)!$, so that, writing $\lambda_i = \alpha_i$ and $\mu_j = \alpha_{j+q}$,
$$\frac{\partial^{2q}}{\partial\lambda_1 \cdots \partial\lambda_q\, \partial\mu_1 \cdots \partial\mu_q}(\lambda^t \Sigma \mu)^q = \sum_{\pi \in \Pi(2q)} \frac{q!}{(q - \#\pi)!} \Big(\sum_{i,j=1}^q \alpha_i \alpha_{j+q} \sigma_{ij}\Big)^{q - \#\pi} \prod_{B \in \pi} \frac{\partial^{\#B}}{\partial\alpha_B} \Big(\sum_{i,j=1}^q \alpha_i \alpha_{j+q} \sigma_{ij}\Big).$$
The factor $\prod_{B \in \pi} \frac{\partial^{\#B}}{\partial\alpha_B} \big(\sum_{i,j=1}^q \alpha_i \alpha_{j+q} \sigma_{ij}\big)$ is zero unless every block of the partition is of the form $B = \{i, q+j\}$ with $1 \le i \le q$ and $q+1 \le q+j \le 2q$. In such a case $\#\pi = q$, so that the vanishing base of the power is raised to the exponent $q - \#\pi = 0$, and the factor is equal to $\prod_{i=1}^q \sigma_{i, \pi(i)}$ for some permutation $\pi \in S(q)$. Cancellation of $q!$ concludes the proof. ☐
Remark 2.
We note that the same argument shows that unequal numbers of plain and conjugated factors give a zero moment. In the case of repeated components, i.e., non-unit exponents, the moment is null whenever the sums of the two blocks of exponents are different.
Remark 3.
We observe that the complex case can be considered, from some perspectives, simpler than the real one. For example, when summing over Wick pairings, the complex case considers only pairings matching $Z_i$ to $\bar T_j$ variables, while the real case considers all pairings (and thus the sums involved are more complicated).

3. Computation of the Moments via Recurrence Relations

In the previous section we offered a compact review of Wick's theorem, which is a tool for the computation of the moments of the CGD.
The case where the exponents in the complex moment are not all equal to 1 is reduced to the case of the theorem by considering repeated components. However, repeated components lead to identities between terms in the expansion of the permanent, which is always a homogeneous polynomial in the covariances. In this section we derive expressions of the permanent in which each monomial appears once, presented as a system of recurrence relations among the moments of $Z \sim \mathrm{CN}_p(\Sigma)$. In conclusion, we present an explicit formula for the moments, which is a homogeneous polynomial in the elements of $\Sigma$ in standard form.
Let us first introduce some definitions.
Definition 1.
Let α = ( n 1 , , n p ; m 1 , , m p ) Z 0 2 p be a multi-index, let Z C N p Σ , and let ν ( α ) = E ( Z 1 n 1 Z p n p Z ¯ 1 m 1 Z ¯ p m p ) be the α-moment of Z.
1.
The sets $N = \{h \in \{1, \dots, p\} \mid n_h \neq 0\}$ and $M = \{k \in \{1, \dots, p\} \mid m_k \neq 0\}$ are the supporting sets of the multi-index α.
2.
Each element of the matrix Σ is an elementary moment, σ h k = ν ( β h k ) , with β h k = e h + e p + k , h , k = 1 , , p and { e j } j = 1 , , 2 p the canonical basis of R 2 p .
3.
Given $h \in N$ and $k \in M$, $\alpha_{hk}$ is the multi-index $(n_1, \dots, n_h - 1, \dots, n_p; m_1, \dots, m_k - 1, \dots, m_p)$, that is, $\alpha_{hk} = \alpha - \beta_{hk}$.
We derive the recurrence relations among the moments by explicitly integrating by parts. Our proof of Proposition 2 requires the following lemma. Recall that $D^-$ and $D^+$ are the Wirtinger derivatives, as in Equation (3).
Lemma 1.
Let us assume that the covariance matrix Σ is not degenerate and let φ be the associated density of Equation (7). For each bounded function f : C C with Wirtinger derivatives bounded by a polynomial, the following relations hold:
1.
$\displaystyle\int D^+ f(z)\, \varphi(z)\, dz = -\int f(z)\, D^+ \varphi(z)\, dz$, and analogously for $D^-$.
2.
$\bar z\, \varphi(z; \Sigma) = -\Sigma^t D^- \varphi(z; \Sigma)$ and $z\, \varphi(z; \Sigma) = -\Sigma D^+ \varphi(z; \Sigma)$.
Proof of 1.
We prove the thesis component-wise, dropping the index j.
$$\int D^+ f(z)\, \varphi(z)\, dz = \int \frac12 \left(\partial_x f(x, y) + i\, \partial_y f(x, y)\right) \varphi(x, y)\, dx\, dy = -\int f(x, y)\, \frac12 \left(\partial_x + i\, \partial_y\right) \varphi(x, y)\, dx\, dy = -\int f(z)\, D^+ \varphi(z)\, dz.$$
Proof of 2.
Let $g(z) = z^* \Sigma^{-1} z$, so that $\varphi(z; \Sigma) = \frac{1}{\pi^p \det\Sigma} \exp(-g)$. We prove that $\bar z = \Sigma^t D^- g$ and $z = \Sigma D^+ g$; since $D^+ \varphi = -(D^+ g)\, \varphi$, the thesis then follows. We have $\partial_{x_k} z = e_k$ and $\partial_{y_k} z = i e_k$, with $\{e_1, \dots, e_p\}$ the canonical basis of $\mathbb{R}^p$. As the directional derivative of $g$ in the direction $r$ is $d_r g(z) = r^* \Sigma^{-1} z + z^* \Sigma^{-1} r$, then
x k g ( z ) = e k * Σ 1 z + z * Σ 1 e k and y k g ( z ) = ( i e k ) * Σ 1 z + z * Σ 1 ( i e k ) ,
and we have for each k = 1 , , p that
D k + g ( z ) = 1 2 e k * Σ 1 z + z * Σ 1 e k + i 2 ( i e k ) * Σ 1 z + z * Σ 1 ( i e k ) = 1 2 e k t Σ 1 z + z * Σ 1 e k + e k t Σ 1 z z * Σ 1 e k = e k t Σ 1 z ,
hence Σ D + g ( z ) = Σ Σ 1 z = z . ☐

3.1. Recurrence Relations

We prove in the following proposition recurrence relations in which a moment is expressed as a linear combination of moments with total degree reduced by 2. The proof is based on Lemma 1, hence it assumes that the covariance matrix is non-degenerate.
Proposition 2 (Recurrence relations for the moments).
Given the multi-index α with supporting sets $N$ and $M$, there are $\#N + \#M \le 2p$ recurrence relations for the moment $\nu(\alpha)$:
$$\nu(\alpha) = \sum_{k \in M} m_k\, \sigma_{hk}\, \nu(\alpha_{hk}), \quad h \in N, \qquad\text{and}\qquad \nu(\alpha) = \sum_{h \in N} n_h\, \sigma_{hk}\, \nu(\alpha_{hk}), \quad k \in M.$$
As a consequence, the moment is zero unless $\sum_{h=1}^p n_h = \sum_{k=1}^p m_k$.
Proof. 
Let f ( z , α ) = j = 1 p z j n j z ¯ j m j be the complex monomial with exponent α , and let h N . We denote by { e 1 , , e 2 p } , the canonical basis of R 2 p .
$$\nu(\alpha) = \int_{\mathbb{C}^p} f(z, \alpha)\, \varphi(z; \Sigma)\, dz = \int_{\mathbb{C}^p} f(z, \alpha - e_h)\, z_h\, \varphi(z; \Sigma)\, dz = -\int_{\mathbb{C}^p} f(z, \alpha - e_h)\, e_h^t\, \Sigma\, D^+ \varphi(z; \Sigma)\, dz \quad \text{(by Lemma 1.2)}$$
$$= -\sum_{k=1}^p \sigma_{hk} \int_{\mathbb{C}^p} f(z, \alpha - e_h)\, D_k^+ \varphi(z; \Sigma)\, dz = \sum_{k \in M} \sigma_{hk} \int_{\mathbb{C}^p} D_k^+ f(z, \alpha - e_h)\, \varphi(z; \Sigma)\, dz \quad \text{(by Lemma 1.1)}$$
$$= \sum_{k \in M} \sigma_{hk}\, m_k \int_{\mathbb{C}^p} f(z, \alpha - e_h - e_{p+k})\, \varphi(z; \Sigma)\, dz = \sum_{k \in M} \sigma_{hk}\, m_k\, \nu(\alpha_{hk}).$$
By considering $\bar z_k$ instead of $z_h$, we can prove that $\nu(\alpha) = \sum_{h \in N} n_h\, \sigma_{hk}\, \nu(\alpha_{hk})$ for each $k \in M$.
Notice that the proof holds without requiring any condition on $\sum_{h=1}^p n_h$ and $\sum_{k=1}^p m_k$. The stated sufficient condition for the nullity of the moment, $\sum_{h \in N} n_h \neq \sum_{k \in M} m_k$, is derived by considering a linear combination of the recurrence relations. In fact,
$$\sum_{h \in N} n_h\, \nu(\alpha) - \sum_{k \in M} m_k\, \nu(\alpha) = \sum_{h \in N} n_h \sum_{k \in M} m_k\, \sigma_{hk}\, \nu(\alpha_{hk}) - \sum_{k \in M} m_k \sum_{h \in N} n_h\, \sigma_{hk}\, \nu(\alpha_{hk}) = 0$$
implies $\nu(\alpha) \left(\sum_{h \in N} n_h - \sum_{k \in M} m_k\right) = 0$; hence, $\nu(\alpha) = 0$ if $\sum_{h \in N} n_h \neq \sum_{k \in M} m_k$. ☐
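The recurrence relations translate directly into a recursive algorithm. The sketch below (our own helper, with memoisation; the function and parameter names are ours) picks any $h \in N$ and applies the first relation of Proposition 2, returning 0 when the total degrees differ.

```python
# Compute nu(alpha) via the recurrence of Proposition 2:
# nu(alpha) = sum_{k in M} m_k * sigma[h][k] * nu(alpha_hk), for any h in N.
# Illustrative sketch; sigma is a p x p covariance given as a tuple of tuples.
from functools import lru_cache

def moment_recursive(sigma, n, m):
    p = len(sigma)

    @lru_cache(maxsize=None)
    def nu(n, m):
        if sum(n) != sum(m):
            return 0            # nullity condition of Proposition 2
        if sum(n) == 0:
            return 1            # empty monomial: E[1] = 1
        h = next(i for i in range(p) if n[i] > 0)   # any h in N
        n2 = list(n); n2[h] -= 1
        total = 0
        for k in range(p):
            if m[k] > 0 and sigma[h][k] != 0:
                m2 = list(m); m2[k] -= 1
                total += m[k] * sigma[h][k] * nu(tuple(n2), tuple(m2))
        return total

    return nu(tuple(n), tuple(m))

# Univariate check: E[|Z|^4] = 2! * gamma^2 for Z ~ CN_1(gamma); gamma = 3.
assert moment_recursive(((3,),), (2,), (2,)) == 18
```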
Remark 4.
If $\sum_{h=1}^p n_h = \sum_{k=1}^p m_k = q$, the recurrence relations in Proposition 2 coincide with the recurrence formula for computing the permanent of a $q \times q$ matrix Γ, derived from Σ by splitting the $h$-th row into $n_h$ copies and the $k$-th column into $m_k$ copies.
The recursive formula for the permanent of a $q \times q$ matrix Γ, developed with respect to the $r$-th row, is (see [6]):
$$\operatorname{per}(\Gamma) = \sum_{j=1}^q \gamma_{rj}\, \operatorname{per}(\Gamma_{rj}),$$
where $\Gamma_{rj}$ is obtained from Γ by deleting the $r$-th row and the $j$-th column. Suppose that the first $n_1$ rows and the first $m_1$ columns of Γ are obtained by repeating $n_1$ times the first row of Σ and $m_1$ times the first column of Σ, respectively. If $1 \le r \le n_1$ and $1 \le j \le m_1$, then $\gamma_{rj} = \sigma_{11}$, and so
$$\sum_{j=1}^{m_1} \gamma_{1j}\, \operatorname{per}(\Gamma_{1j}) = \sigma_{11} \sum_{j=1}^{m_1} \operatorname{per}(\Gamma_{1j}) = m_1\, \sigma_{11}\, \operatorname{per}(\Gamma_{11}),$$
since the matrices $\Gamma_{1j}$ all coincide for $1 \le j \le m_1$. The matrix $\Gamma_{11}$ is associated with the multi-index $\alpha_{11} = (n_1 - 1, n_2, \dots, n_p; m_1 - 1, m_2, \dots, m_p)$. Then $\operatorname{per}(\Gamma_{11}) = \nu(\alpha_{11})$ and so $\sum_{j=1}^{m_1} \gamma_{1j}\, \operatorname{per}(\Gamma_{1j}) = m_1\, \sigma_{11}\, \nu(\alpha_{11})$. The thesis follows by considering the sums associated with the successive blocks of repeated columns.
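Remark 4 can be illustrated in code: build Γ from Σ by repeating the $h$-th row $n_h$ times and the $k$-th column $m_k$ times, and take its permanent. The sketch below (our own helper names, brute-force permanent, usable only for small total degree $q$) does exactly that.

```python
# nu(n; m) as the permanent of the expanded matrix Gamma (Remark 4).
# Brute force over permutations: illustrative, not optimised.
import math
from itertools import permutations

def permanent(A):
    q = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(q))
               for p in permutations(range(q)))

def moment_via_permanent(sigma, n, m):
    """Gamma repeats row h of sigma n[h] times and column k m[k] times;
    the moment is zero when the total degrees differ."""
    if sum(n) != sum(m):
        return 0
    rows = [h for h, nh in enumerate(n) for _ in range(nh)]
    cols = [k for k, mk in enumerate(m) for _ in range(mk)]
    gamma = [[sigma[h][k] for k in cols] for h in rows]
    return permanent(gamma)

# p = 1: E[Z^n conj(Z)^n] = n! * gamma^n, e.g. gamma = 2, n = 3 gives 48.
assert moment_via_permanent([[2]], [3], [3]) == 48
```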
The nullity of a moment also depends on the sparsity of the covariance matrix. For example, here is a simple sufficient condition.
Corollary 1.
If there exists h N such that σ h k = 0 for all k M or if there exists k M such that σ h k = 0 for all h N then ν ( α ) = 0 .
Proof. 
In such cases the recurrence relations in Proposition 2 consist of null addends. ☐

3.2. Closed Form

The recurrence relations in Proposition 2 allow us to compute a complex moment as a linear function of lower-degree moments. Hence, each complex moment is a polynomial in the elementary moments $\sigma_{hk}$, $h \in N$, $k \in M$. For example, in the simple case where α is proportional to the multi-index of an elementary moment, $\alpha = n \beta_{hk}$, $n \in \mathbb{Z}_{\ge 0}$, $\beta_{hk} = e_h + e_{p+k}$, $h \in N$, $k \in M$, each recurrence relation contains one term only, so that
$$\nu(n \beta_{hk}) = n!\, \nu(\beta_{hk})^n = n!\, \sigma_{hk}^n.$$
In general, the value of $\nu(\alpha)$, with α such that $\sum_{h \in N} n_h = \sum_{k \in M} m_k$, can be obtained by considering that α is generated by the vectors $\beta_{hk}$ with integer coefficients:
$$\alpha = \sum_{h \in N,\, k \in M} a_{hk}\, \beta_{hk}, \qquad a_{hk} \in \mathbb{Z}_{\ge 0}.$$
The coefficient vector $a = [a_{hk}]_{h \in N,\, k \in M}$ is not uniquely determined. For instance, if $\alpha = (2, 2; 1, 3)$ then
$$(2, 2; 1, 3) = (1, 0; 1, 0) + (1, 0; 0, 1) + 2\,(0, 1; 0, 1) = 2\,(1, 0; 0, 1) + (0, 1; 1, 0) + (0, 1; 0, 1).$$
Considering all the possible integer coefficient vectors $a$ that produce the same α leads to the definition of the subset $I(\alpha) \subset \mathbb{Z}_{\ge 0}^{p^2}$ associated with each α-moment.
Definition 2.
Let $p \in \mathbb{N}$ and let $\alpha = (n_1, \dots, n_p; m_1, \dots, m_p)$ be a multi-index. The set $I(\alpha) \subset \mathbb{Z}_{\ge 0}^{p^2}$ contains all integer vectors $a = (a_{11}, \dots, a_{1p}, a_{21}, \dots, a_{2p}, \dots, a_{p1}, \dots, a_{pp}) \in \mathbb{Z}_{\ge 0}^{p^2}$ such that $l_{ij}(\alpha, a) \le a_{ij} \le L_{ij}(\alpha, a)$, $i, j = 1, \dots, p$, where the bounds are defined by
$$l_{ij}(\alpha, a) = 0 \vee \Big(\sum_{h=1}^i n_h - \sum_{h=j+1}^p m_h - \sum_{\substack{h \le i,\ k \le j \\ (h,k) \neq (i,j)}} a_{hk}\Big), \qquad L_{ij}(\alpha, a) = \Big(n_i - \sum_{k=1}^{j-1} a_{ik}\Big) \wedge \Big(m_j - \sum_{h=1}^{i-1} a_{hj}\Big),$$
with $\vee$ and $\wedge$ denoting maximum and minimum, respectively.
Some elements of a I ( α ) are uniquely determined, as shown in the following Proposition.
Proposition 3.
Let α = ( n 1 , , n p , m 1 , , m p ) be a multi-index.
1.
The free elements of the vector $a$ are $(p-1)^2$. In fact, for each $h, k = 1, \dots, p-1$, the elements $a_{hp}$, $a_{pk}$ and $a_{pp}$ are determined:
$$a_{hp} = n_h - \sum_{j=1}^{p-1} a_{hj}, \qquad a_{pk} = m_k - \sum_{i=1}^{p-1} a_{ik}, \qquad a_{pp} = n_p - \sum_{j=1}^{p-1} m_j + \sum_{i,j=1}^{p-1} a_{ij}.$$
2.
If there exists an index $r$ such that $n_r = 0$, then $a_{rk} = 0$ for $k = 1, \dots, p$. Analogously, if there exists an index $s$ such that $m_s = 0$, then $a_{hs} = 0$ for $h = 1, \dots, p$.
For simplicity, the arguments α and $a$ are omitted in the following. The proof is based on Proposition A1 in Appendix A, which states that $0 \le l_{ij} \le L_{ij}$.
Proof of 1.
We show, by induction, the thesis for $a_{ip}$.
Base step: $n_1 - \sum_{k=1}^{p-1} a_{1k} = l_{1p} \le L_{1p} = m_p \wedge \big(n_1 - \sum_{k=1}^{p-1} a_{1k}\big)$, and so $a_{1p} = n_1 - \sum_{k=1}^{p-1} a_{1k}$.
Induction step: assume $a_{hp} = n_h - \sum_{k=1}^{p-1} a_{hk}$ for $h < i$, that is, $n_h = \sum_{k=1}^p a_{hk}$. We obtain
$$l_{ip} = \sum_{h=1}^i n_h - \sum_{\substack{h \le i,\ k \le p \\ (h,k) \neq (i,p)}} a_{hk} = \sum_{h=1}^{i-1} \Big(n_h - \sum_{k=1}^p a_{hk}\Big) + n_i - \sum_{k=1}^{p-1} a_{ik} = n_i - \sum_{k=1}^{p-1} a_{ik}$$
and so, since $l_{ip} \le a_{ip} \le L_{ip} = \big(n_i - \sum_{k=1}^{p-1} a_{ik}\big) \wedge \big(m_p - \sum_{h=1}^{i-1} a_{hp}\big)$, we conclude that $a_{ip} = n_i - \sum_{k=1}^{p-1} a_{ik}$.
The proof for a p j is analogous. Finally, from a i p and a p j , we obtain, by direct computation, a p p . ☐
Proof of 2.
We have
$$0 \le l_{rp} \le L_{rp} \le n_r - \sum_{k=1}^{p-1} a_{rk} = -\sum_{k=1}^{p-1} a_{rk},$$
and so $a_{rk} = 0$ for $k = 1, \dots, p-1$. Since $a_{rp} = n_r - \sum_{k=1}^{p-1} a_{rk}$, we have $a_{rp} = 0$.
Analogously for a h s . ☐
Example 1.
If p = 3 we have
$$I(\alpha) = \Big\{ a = (a_{11}, a_{12}, a_{21}, a_{22}) \in \mathbb{Z}_{\ge 0}^4 \ \Big|\ \begin{aligned} &0 \vee (n_1 - m_2 - m_3) \le a_{11} \le n_1 \wedge m_1, \\ &0 \vee (n_1 - m_3 - a_{11}) \le a_{12} \le (n_1 - a_{11}) \wedge m_2, \\ &0 \vee (n_1 + n_2 - m_2 - m_3 - a_{11}) \le a_{21} \le n_2 \wedge (m_1 - a_{11}), \\ &0 \vee (n_1 + n_2 - m_3 - a_{11} - a_{12} - a_{21}) \le a_{22} \le (n_2 - a_{21}) \wedge (m_2 - a_{12}) \end{aligned} \Big\}.$$
The following theorem gives an explicit expression of the α -moment of C N p Σ . The proof is based on several propositions given in the Appendix A.
Theorem 2.
Let α be a multi-index with N and M supporting sets. Assume that h N n h = k M m k . Then the α-moment of C N p Σ is
$$\nu(\alpha) = \prod_{h=1}^p n_h!\, m_h! \sum_{a \in I(\alpha)} \prod_{h,k=1}^p \frac{\sigma_{hk}^{a_{hk}}}{a_{hk}!},$$
by setting $\sigma_{hk}^0 = 1$ also when $\sigma_{hk} = 0$. The set $I(\alpha)$ is as in Definition 2.
Proof. 
We denote by $P(a)$ the product
$$P(a) = \prod_{h,k=1}^p \frac{\sigma_{hk}^{a_{hk}}}{a_{hk}!}.$$
First, we show that $\prod_{h=1}^p n_h!\, m_h! \sum_{a \in I(\alpha)} P(a)$ is the elementary moment $\sigma_{hk}$ when $\alpha = \beta_{hk}$. Let $\alpha = \beta_{hk}$. Since $n_i = m_j = 0$ for each $i \neq h$ and $j \neq k$, Item 2 of Proposition 3 implies that $a_{ir} = a_{rj} = 0$ for $i \neq h$, $j \neq k$ and $r = 1, \dots, p$; that is, $a_{hk}$ is the unique element of the vector $a$ possibly different from zero. Furthermore, $a_{hk} = 1$ since $l_{hk} = 1$ and $L_{hk} = 1$, and the thesis follows.
Now, we show that h = 1 p n h ! m h ! a I ( α ) P ( a ) satisfies the recurrence relations for a general α . Let us consider k = 1 p m k σ h k ν ( α h k ) , with α h k = α e h e p + k . Let a h k + be the vector a containing the value a h k + 1 instead of a h k : a h k + = a + e ( h 1 ) p + k . From Equation (12), for a I ( α h k ) , we have
σ h k P ( a ) = σ h k a h k + 1 a h k ! r , s = 1 ( r , s ) ( h , k ) p σ r s a r s a r s ! = ( a h k + 1 ) σ h k a h k + 1 ( a h k + 1 ) ! r , s = 1 ( r , s ) ( h , k ) p σ r s a r s a r s ! = ( a h k + 1 ) P ( a h k + ) .
Let δ ( α ) = r = 1 p n r ! m r ! a I ( α ) P ( a ) . From Equations (12) and (13) it follows
k = 1 p m k σ h k δ ( α h k ) = r = 1 p n r ! m r ! k = 1 p m k σ h k m k n h a I ( α h k ) P ( a ) = 1 n h r = 1 p n r ! m r ! k = 1 p a I ( α h k ) ( a h k + 1 ) P ( a h k + )
In Proposition A3 we have shown that a I ( α h k ) ( a h k + 1 ) P ( a h k + ) = a I ( α ) a h k P ( a ) . Then
k = 1 p m k σ h k δ ( α h k ) = 1 n h r = 1 p n r ! m r ! k = 1 p a I ( α ) a h k P ( a ) = r = 1 p n r ! m r ! a I ( α ) k = 1 p a h k n h P ( a ) ,
and so, since Proposition 3 shows $\sum_{k=1}^p a_{hk} = n_h$, we obtain $\sum_{k=1}^p m_k\, \sigma_{hk}\, \delta(\alpha_{hk}) = \delta(\alpha)$. The thesis follows because the function $\delta(\alpha)$ satisfies the recurrence relations and coincides with the elementary moments, so that $\delta(\alpha) = \nu(\alpha)$.
Analogously if we consider h = 1 p n h σ h k ν ( α h k ) . ☐
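Theorem 2 can be turned into a small algorithm by enumerating $I(\alpha)$. In the sketch below (our own helper names) we enumerate, naively, all nonnegative integer $p \times p$ matrices $a$ with row sums $n_h$ and column sums $m_k$; by the decomposition $\alpha = \sum a_{hk} \beta_{hk}$ this describes the same set as the chained bounds $l_{hk}, L_{hk}$, although far less efficiently. Python's convention `0 ** 0 == 1` matches the convention $\sigma_{hk}^0 = 1$ of the theorem.

```python
# nu(alpha) from the closed form of Theorem 2, by naive enumeration of
# I(alpha) as the nonnegative integer matrices with row sums n and
# column sums m. Illustrative only: exponential in p^2.
import math
from itertools import product

def moment_closed_form(sigma, n, m):
    p = len(sigma)
    if sum(n) != sum(m):
        return 0
    coef = math.prod(math.factorial(x) for x in n + m)  # prod n_h! m_h!
    ranges = [range(min(n[h], m[k]) + 1) for h in range(p) for k in range(p)]
    total = 0
    for flat in product(*ranges):
        a = [flat[h * p:(h + 1) * p] for h in range(p)]
        if any(sum(a[h]) != n[h] for h in range(p)):
            continue                                    # wrong row sums
        if any(sum(a[h][k] for h in range(p)) != m[k] for k in range(p)):
            continue                                    # wrong column sums
        total += math.prod(sigma[h][k] ** a[h][k] / math.factorial(a[h][k])
                           for h in range(p) for k in range(p))
    return coef * total
```

For instance, `moment_closed_form([[2]], [3], [3])` reproduces the univariate value $3!\, 2^3 = 48$, and for $p = 2$ the result agrees with the permanent-based computation of Remark 4.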
Remark 5.
For each elementary moment $\sigma_{hk} = 0$, any addend of the sum over $I(\alpha)$ with $a_{hk} \neq 0$ is null. Hence, the only addends present in the sum over $I(\alpha)$ are those with $a_{hk} = 0$ whenever $\sigma_{hk} = 0$, since we assume $0^0 = 1$.
Remark 6.
It should be noted that Equation (11) of Theorem 2 contains the multinomial coefficients
$$\prod_{h=1}^p \binom{n_h}{a_{h1} \cdots a_{hp}} \quad\text{and}\quad \prod_{h=1}^p \binom{m_h}{a_{1h} \cdots a_{ph}},$$
which are related to the number of the special permutations of equal terms in the permanent. This remark suggests a purely combinatorial proof of the equation for the moments. However, it should be noted that the specific value of α induces on each $a_{hk}$ the constraints provided by the limits $l_{hk}$ and $L_{hk}$ stated in Definition 2 and Proposition 3. We do not follow this line of investigation here. We thank an anonymous referee for suggesting this remark.
Example 2.
Let us consider the case with p = 2 and p = 3 .
  • If $p = 2$ and $n_1 + n_2 = m_1 + m_2$, then $\nu(\alpha) = n_1!\, n_2!\, m_1!\, m_2! \sum_{a \in I(\alpha)} P(a)$, where
$$I(\alpha) = \{ a_{11} \in \mathbb{Z}_{\ge 0} \mid 0 \vee (n_1 - m_2) \le a_{11} \le n_1 \wedge m_1 \} \quad\text{and}\quad P(a) = \frac{\sigma_{11}^{a_{11}}}{a_{11}!}\, \frac{\sigma_{12}^{n_1 - a_{11}}}{(n_1 - a_{11})!}\, \frac{\sigma_{21}^{m_1 - a_{11}}}{(m_1 - a_{11})!}\, \frac{\sigma_{22}^{n_2 - m_1 + a_{11}}}{(n_2 - m_1 + a_{11})!}.$$
  • If p = 3 and n 1 + n 2 + n 3 = m 1 + m 2 + m 3 , then ν ( α ) = n 1 ! n 2 ! n 3 ! m 1 ! m 2 ! m 3 ! a I ( α ) P ( a ) , where I ( α ) is shown in Example 1 and
$$P(a) = \frac{\sigma_{11}^{a_{11}}}{a_{11}!}\, \frac{\sigma_{12}^{a_{12}}}{a_{12}!}\, \frac{\sigma_{21}^{a_{21}}}{a_{21}!}\, \frac{\sigma_{22}^{a_{22}}}{a_{22}!}\, \frac{\sigma_{13}^{n_1 - \sum_{h=1}^2 a_{1h}}}{(n_1 - \sum_{h=1}^2 a_{1h})!}\, \frac{\sigma_{23}^{n_2 - \sum_{h=1}^2 a_{2h}}}{(n_2 - \sum_{h=1}^2 a_{2h})!}\, \frac{\sigma_{31}^{m_1 - \sum_{h=1}^2 a_{h1}}}{(m_1 - \sum_{h=1}^2 a_{h1})!}\, \frac{\sigma_{32}^{m_2 - \sum_{h=1}^2 a_{h2}}}{(m_2 - \sum_{h=1}^2 a_{h2})!}\, \frac{\sigma_{33}^{n_3 - m_1 - m_2 + \sum_{h,k=1}^2 a_{hk}}}{(n_3 - m_1 - m_2 + \sum_{h,k=1}^2 a_{hk})!}.$$
Example 3 (Running example).
Let Z C N 5 Σ , with Σ such that
σ 13 = σ 14 = σ 24 = σ 25 = σ 35 = σ 45 = 0 .
Let α = ( 1 , 2 , 0 , 2 , 1 ; 1 , 1 , 1 , 1 , 2 ) , where h = 1 5 n h = k = 1 5 m k = 6 , so that the condition of Theorem 2 is satisfied. Here N = { 1 , 2 , 4 , 5 } and M = { 1 , 2 , 3 , 4 , 5 } .
Since n 3 = 0 , from Proposition 3 it follows that a 3 k = 0 , k = 1 , , 5 . Denoting by R i ( j ) = k = 1 j 1 a i k and by C j ( i ) = h = 1 i 1 a h j , with R i ( 1 ) = 0 and C j ( 1 ) = 0 , the set I ( α ) is defined by:
For $i = 1$ ($n_1 = 1$):
$$0 \le a_{11} \le 1, \quad 0 \le a_{12} \le 1 - R_1(2), \quad 0 \le a_{13} \le 1 - R_1(3), \quad 0 \le a_{14} \le 1 - R_1(4), \quad a_{15} = 1 - R_1(5);$$
for $i = 2$ ($n_2 = 2$):
$$0 \le a_{21} \le 1 - C_1(2), \quad 0 \le a_{22} \le (2 - R_2(2)) \wedge (1 - C_2(2)), \quad 0 \le a_{23} \le (2 - R_2(3)) \wedge (1 - C_3(2)),$$
$$0 \vee (1 - R_1(4) - R_2(4) - C_4(2)) \le a_{24} \le (2 - R_2(4)) \wedge (1 - C_4(2)), \quad a_{25} = 2 - R_2(5);$$
for $i = 4$ ($n_4 = 2$):
$$0 \le a_{41} \le 1 - C_1(4), \quad 0 \vee (1 - R_1(2) - R_2(2) - R_4(2) - C_2(4)) \le a_{42} \le (2 - R_4(2)) \wedge (1 - C_2(4)),$$
$$0 \vee (2 - R_1(3) - R_2(3) - R_4(3) - C_3(4)) \le a_{43} \le (2 - R_4(3)) \wedge (1 - C_3(4)),$$
$$0 \vee (3 - R_1(4) - R_2(4) - R_4(4) - C_4(4)) \le a_{44} \le (2 - R_4(4)) \wedge (1 - C_4(4)), \quad a_{45} = 2 - R_4(5);$$
for $i = 5$:
$$a_{51} = 1 - C_1(5), \quad a_{52} = 1 - C_2(5), \quad a_{53} = 1 - C_3(5), \quad a_{54} = 1 - C_4(5), \quad a_{55} = \sum_{k=1}^4 C_k(5) - 3.$$
The moment is:
$$\nu(\alpha) = 8 \sum_{a \in I(\alpha)} \frac{\sigma_{11}^{a_{11}}}{a_{11}!}\, \frac{\sigma_{12}^{a_{12}}}{a_{12}!}\, \frac{\sigma_{13}^{a_{13}}}{a_{13}!}\, \frac{\sigma_{14}^{a_{14}}}{a_{14}!}\, \frac{\sigma_{15}^{1 - R_1(5)}}{(1 - R_1(5))!} \cdot \frac{\sigma_{21}^{a_{21}}}{a_{21}!}\, \frac{\sigma_{22}^{a_{22}}}{a_{22}!}\, \frac{\sigma_{23}^{a_{23}}}{a_{23}!}\, \frac{\sigma_{24}^{a_{24}}}{a_{24}!}\, \frac{\sigma_{25}^{2 - R_2(5)}}{(2 - R_2(5))!} \cdot \frac{\sigma_{41}^{a_{41}}}{a_{41}!}\, \frac{\sigma_{42}^{a_{42}}}{a_{42}!}\, \frac{\sigma_{43}^{a_{43}}}{a_{43}!}\, \frac{\sigma_{44}^{a_{44}}}{a_{44}!}\, \frac{\sigma_{45}^{2 - R_4(5)}}{(2 - R_4(5))!} \cdot \frac{\sigma_{51}^{1 - C_1(5)}}{(1 - C_1(5))!}\, \frac{\sigma_{52}^{1 - C_2(5)}}{(1 - C_2(5))!}\, \frac{\sigma_{53}^{1 - C_3(5)}}{(1 - C_3(5))!}\, \frac{\sigma_{54}^{1 - C_4(5)}}{(1 - C_4(5))!}\, \frac{\sigma_{55}^{\sum_{k=1}^4 C_k(5) - 3}}{(\sum_{k=1}^4 C_k(5) - 3)!},$$
where the null elementary moments are those listed above together with their Hermitian conjugates.
As specified in Remark 5, we set $a_{hk} = 0$ when $\sigma_{hk} = 0$. In this example we obtain $a_{15} = a_{21} = a_{22} = a_{43} = a_{44} = a_{55} = 1$ and all the other $a_{hk} = 0$, and so the moment $\nu(\alpha)$ is
$$\nu(\alpha) = 8\, \sigma_{15}\, \sigma_{21}\, \sigma_{22}\, \sigma_{43}\, \sigma_{44}\, \sigma_{55}.$$

4. Factorisation

In this section we present a factorisation of the complex moments which depends on the null elements of Σ and on the supporting sets $N$ and $M$ of the given moment. The argument we use is not based on independence assumptions. Such a factorisation is possible when there exists a non-trivial partition of the supporting sets induced by the non-null elements of the covariance matrix Σ.
Definition 3.
Let $N$ and $M$ be the supporting sets of a multi-index α. The graph induced by α is the bipartite graph $G = (N, M, E)$, where the edges are defined by $E = \{(h, k) \in N \times M \mid \sigma_{hk} \neq 0\}$. Let $\{C_s\}_{s=1}^Q$ be the connected components of $G$, $C_s = (N_s, M_s, E_s)$. The connected components of $G$ define partitions of $N$ and $M$, which we call the partitions induced by Σ.
Notice that the partitions of $M$ and $N$ are uniquely defined, and they can be trivial.
Theorem 3.
Let $\{N_r\}_{r=1}^Q$ and $\{M_r\}_{r=1}^Q$ be the partitions induced by Σ. Let $\alpha_r$ be the multi-index α restricted to $N_r \cup M_r$, $\alpha_r = (n_h; m_k)_{h \in N_r,\, k \in M_r}$. The moment $\nu(\alpha)$ is given by
ν ( α ) = ν ( α 1 ) ν ( α 2 ) ν ( α Q ) .
Proof. 
We use the notation of Theorem 2. Let $A_r = N_r \times M_r$. There are no edges in $G$ outside the connected components: $\sigma_{hk} = 0$ if $(h, k) \notin \bigcup_r A_r$. According to the argument in Remark 5, for each $a \in I(\alpha)$ the entry $a_{hk}$ is zero unless $(h, k) \in \bigcup_r A_r$. By cancelling trivial factors, we have
a I ( α ) h , k = 1 p σ h k a h k a h k ! = a I ( α ) r = 1 Q ( h , k ) A r σ h k a h k a h k ! .
It follows that
a I ( α ) h , k = 1 p σ h k a h k a h k ! = a I * ( α ) ( h , k ) A 1 σ h k a h k a h k ! ( h , k ) A Q σ h k a h k a h k ! , I * ( α ) = { a I ( α ) | a h k = 0 , ( h , k ) r A r } ,
so that
ν ( α ) = r = 1 Q h N r n h ! k M r m k ! a h k , ( h , k ) A 1 a h k , ( h , k ) A Q ( h , k ) A 1 σ h k a h k a h k ! ( h , k ) A Q σ h k a h k a h k ! = r = 1 Q h N r n h ! k M r m k ! a h k , ( h , k ) A r ( h , k ) A r σ h k a h k a h k ! = ν ( α 1 ) ν ( α 2 ) ν ( α Q ) .
The factorisation of the previous theorem reduces the computational complexity of computing $\nu(\alpha)$, since each factor is a moment of lower order. The computation of the connected components of a graph requires linear time in the number of its nodes and edges, see [7].
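The induced partition itself is cheap to compute. The sketch below (our own, with 0-based indices unlike the paper) builds the bipartite graph of Definition 3 from the sparsity pattern of Σ and extracts its connected components by depth-first search, reproducing the partition of the running example.

```python
# Connected components (N_r, M_r) of the bipartite graph of Definition 3,
# with E = {(h, k) in N x M : sigma[h][k] != 0}. Illustrative sketch.
def induced_partition(sigma, N, M):
    nodes = [('N', h) for h in N] + [('M', k) for k in M]
    adj = {v: [] for v in nodes}
    for h in N:
        for k in M:
            if sigma[h][k] != 0:
                adj[('N', h)].append(('M', k))
                adj[('M', k)].append(('N', h))
    seen, components = set(), []
    for v in nodes:                      # depth-first search over components
        if v in seen:
            continue
        comp, stack = [], [v]
        while stack:
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            comp.append(w)
            stack.extend(adj[w])
        components.append((sorted(h for t, h in comp if t == 'N'),
                           sorted(k for t, k in comp if t == 'M')))
    return sorted(components)

# Sparsity pattern of the running example (0-based): sigma is Hermitian,
# so each listed zero and its transpose are both zero.
zeros = {(0, 2), (0, 3), (1, 3), (1, 4), (2, 4), (3, 4)}
sigma = [[0 if (h, k) in zeros or (k, h) in zeros else 1
          for k in range(5)] for h in range(5)]
parts = induced_partition(sigma, range(5), [0, 3, 4])
assert parts == [([0, 1, 4], [0, 4]), ([2, 3], [3])]   # {1,2,5}x{1,5}, {3,4}x{4}
```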
In the presence of a non-trivial induced partition of the supporting sets, the following corollary gives a sufficient condition for the nullity of the moment $\nu(\alpha)$.
Corollary 2.
Under the hypotheses of Theorem 3, the moment $\nu(\alpha)$ is null if there exists $r \in \{1, \dots, Q\}$ such that
$$\sum_{h \in N_r} n_h \neq \sum_{k \in M_r} m_k.$$
Proof. 
If there exists $r \in \{1, \dots, Q\}$ such that $\sum_{h \in N_r} n_h \neq \sum_{k \in M_r} m_k$, then $\nu(\alpha_r)$ is null and the thesis follows. ☐
Remark 7.
If there exists an $h \in N$ such that $\sigma_{hk} = 0$ for each $k \in M$, or a $k \in M$ such that $\sigma_{hk} = 0$ for each $h \in N$, then Corollary 1 shows that $\nu(\alpha) = 0$. This is a degenerate case, where the bipartite graph $G$ has a connected component $(\emptyset, M_r)$ or $(N_r, \emptyset)$ for some $r$. For example, if there exists a connected component $(\emptyset, M_r)$, then $\sum_{k \in M_r} m_k \neq 0$ while the corresponding sum of the $n_h$ is zero, since no element of $N$ belongs to the component. In such a case, we have $\nu(\alpha) = 0$.
Example 4 (Running example—continued).
As in Example 3, let $Z \sim CN_5(\Sigma)$ with $\Sigma$ such that $\sigma_{13} = \sigma_{14} = \sigma_{24} = \sigma_{25} = \sigma_{35} = \sigma_{45} = 0$.
Let $N = \{1,2,3,4,5\}$ and $M = \{1,4,5\}$. Then there exists the non-trivial induced partition of $N$ and $M$ given by $N_1 = \{1,2,5\}$, $M_1 = \{1,5\}$ and $N_2 = \{3,4\}$, $M_2 = \{4\}$. In this case the matrix $[\sigma_{hk}]_{h \in N,\, k \in M}$ is
$$ [\sigma_{hk}]_{h \in N,\, k \in M} = \begin{pmatrix} \sigma_{11} & 0 & \sigma_{15} \\ \sigma_{21} & 0 & 0 \\ 0 & \sigma_{34} & 0 \\ 0 & \sigma_{44} & 0 \\ \sigma_{51} & 0 & \sigma_{55} \end{pmatrix} \xrightarrow{\ \text{row/column permutations}\ } \begin{pmatrix} \sigma_{11} & \sigma_{15} & 0 \\ \sigma_{21} & 0 & 0 \\ \sigma_{51} & \sigma_{55} & 0 \\ 0 & 0 & \sigma_{34} \\ 0 & 0 & \sigma_{44} \end{pmatrix} . $$
The permuted matrix highlights the induced partition. The conditions of Corollary 2 for the nullity of $\nu(\alpha)$ are
$$ n_1 + n_2 + n_5 \neq m_1 + m_5 \quad \text{or} \quad n_3 + n_4 \neq m_4 . $$
We compute the moments for two different α.
1.
Let $\alpha = (2,2,2,2,2;\ 2,0,0,2,6)$. In this case $\nu(\alpha) = 0$ even if $\sum_{h \in N} n_h = \sum_{k \in M} m_k = 10$. In fact, $n_3 + n_4 \neq m_4$.
2.
Let $\alpha = (2,1,1,1,1;\ 2,0,0,2,2)$. From Theorem 3 the moment $\nu(\alpha)$ factorises as $\nu(\alpha_1)\,\nu(\alpha_2)$, where $\alpha_1 = (n_1, n_2, n_5;\ m_1, m_2, m_5) = (2,1,1;\ 2,0,2)$ and $\alpha_2 = (n_3, n_4;\ m_3, m_4) = (1,1;\ 0,2)$.
We compute $\nu(\alpha_1)$ and $\nu(\alpha_2)$ separately, applying the restrictions on $I(\alpha_1)$ and $I(\alpha_2)$ specified in Proposition 3 and in Remark 5, and the formulæ for $p = 3$ and $p = 2$ of Example 2:
$$ \nu(\alpha_1) = 4\,\sigma_{15}^{2}\sigma_{21}\sigma_{51} + 8\,\sigma_{11}\sigma_{15}\sigma_{21}\sigma_{55} \quad \text{and} \quad \nu(\alpha_2) = 2\,\sigma_{34}\sigma_{44} . $$
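The check of Corollary 2 on an induced partition can be carried out mechanically. The Python sketch below (the function name and the dictionary encoding of the exponents are ours, not from the paper) tests, for each connected component $(N_r, M_r)$, whether the exponent sums over $N_r \cup M_r$ balance; any unbalanced component certifies $\nu(\alpha) = 0$:

```python
def may_be_nonnull(components, n, m):
    """Contrapositive of Corollary 2: nu(alpha) is surely zero when some
    connected component (N_r, M_r) has unbalanced exponent sums, i.e.
    the sum of n_h over N_r | M_r differs from the sum of m_k over it."""
    for Nr, Mr in components:
        support = Nr | Mr
        if sum(n.get(h, 0) for h in support) != sum(m.get(k, 0) for k in support):
            return False  # Corollary 2 applies: the moment is null
    return True  # no component rules the moment out

# Induced partition of the running example (Sigma with
# sigma_13 = sigma_14 = sigma_24 = sigma_25 = sigma_35 = sigma_45 = 0)
components = [({1, 2, 5}, {1, 5}), ({3, 4}, {4})]

# Item 1: alpha = (2,2,2,2,2; 2,0,0,2,6) -> null, e.g. n_3 + n_4 != m_4
n_a = {1: 2, 2: 2, 3: 2, 4: 2, 5: 2}
m_a = {1: 2, 2: 0, 3: 0, 4: 2, 5: 6}

# Item 2: alpha = (2,1,1,1,1; 2,0,0,2,2) -> the check passes
n_b = {1: 2, 2: 1, 3: 1, 4: 1, 5: 1}
m_b = {1: 2, 2: 0, 3: 0, 4: 2, 5: 2}
```

Note that a balanced partition does not guarantee a non-null moment: the test is one-sided, exactly as in Corollary 2.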

5. Discussion

This piece of research on the moments of the CGD was originally motivated by the desire to evaluate the approximation error of a cubature formula with nodes on a suitable subset of the complex roots of unity, first introduced in [8].
In this paper, we discussed particular cases of Wick's theorem for the CGD. When the exponent α of the complex moment is not 0–1 valued, the permanent has repeated terms in the sum. By collecting them, one obtains the form of Theorem 2, which is a homogeneous polynomial in the covariances. From this expression, cases of factorisation of the moments have been derived in Theorem 3.
The relevance of Theorem 2 is mainly theoretical. On the one hand, it allows for the analysis of the moment's behaviour as a function of the elementary moments $\sigma_{ij}$. On the other hand, our proof of the factorisation depends on it. The factorisation, when it exists, reduces the computational complexity of $\nu(\alpha)$.
Let us briefly discuss the complexity of computing moments that do not factorise. We compare the method presented in this paper, based on Equation (11), with the method based on the permanent, see Equation (9). For simplicity, we consider $Z \sim CN_p(\Sigma)$, with $\Sigma$ without null elements, and all the exponents of the moment equal to $n$, i.e., $\alpha = (n, \ldots, n;\ n, \ldots, n)$.
The computation of the moment as the permanent of an $np \times np$ matrix requires $(np)!$ products using a raw algorithm, and $np\,2^{np-1}$ products using the Ryser algorithm, see [6].
The optimisation of the algorithm for Equation (11) is outside the scope of this paper. A raw implementation of the formula first computes, only once, $j!$ for $j = 1, \ldots, p$. Then, for each $a \in I(\alpha)$, the computation of $P(a)$ requires $2np$ products, since $\sum_{h,k=1}^{p} a_{hk} = np$. A conservative upper bound on $\# I(\alpha)$ is obtained by assuming that each $a_{ij}$ ranges in $[0, n]$: it follows that $\# I(\alpha) < n^{(p-1)^2}$, and so the complexity of Equation (11) is less than $2p\,n^{1+(p-1)^2}$. Actually, $\# I(\alpha)$ is much smaller in most cases. For instance, if $p = 5$ and $n = 6$, then the effective $\# I(\alpha) = 1.6 \times 10^{8}$, while $n^{(p-1)^2} = 2.8 \times 10^{12}$.
Notice that the complexity of our formula, $2p\,n^{1+(p-1)^2}$, is much smaller than the complexity of the raw version of the permanent, $(np)!$. Furthermore, comparing the complexity of the Ryser algorithm to the upper bound of our algorithm, we find, by direct computation, that $np\,2^{np-1} > 2np\,n^{(p-1)^2}$ when $p$ is small and $n$ is large. For example, our algorithm is less expensive for $p = 5$ and $n \ge 12$. We expect that the Ryser optimisation techniques applied to Equation (11) will lead to a further reduction in complexity.
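The crossover between the Ryser cost and our upper bound can be checked numerically. In the small Python sketch below, the function names are ours; the formulas are exactly the operation counts quoted above:

```python
from math import factorial

def raw_cost(n, p):
    """Raw permanent: (np)! products."""
    return factorial(n * p)

def ryser_cost(n, p):
    """Ryser algorithm: np * 2**(np - 1) products."""
    return n * p * 2 ** (n * p - 1)

def eq11_bound(n, p):
    """Upper bound for Equation (11): 2p * n**(1 + (p - 1)**2) products."""
    return 2 * p * n ** (1 + (p - 1) ** 2)

# Smallest n >= 2 at which the Equation (11) bound beats Ryser, for p = 5
p = 5
crossover = next(n for n in range(2, 50) if ryser_cost(n, p) > eq11_bound(n, p))
print(crossover)  # -> 12
```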

Author Contributions

Conceptualization, C.F., G.P. and M.P.R.; Methodology, C.F., G.P. and M.P.R.; Writing, review and editing, C.F., G.P. and M.P.R.

Funding

The research was funded by FRA 2017, University of Genova.

Acknowledgments

The authors thank G. Peccati (Luxembourg University) for suggesting relevant references and E. Di Nardo (Università di Torino) for providing useful insights in the application of Faà di Bruno formula. G. Pistone acknowledges the support of de Castro Statistics and Collegio Carlo Alberto. C. Fassino is member of GNCS-INDAM, M.P. Rogantin and G. Pistone are members of GNAMPA-INDAM.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Properties of the bounds $l_{ij}$ and $L_{ij}$

The following proposition states that the definition of I ( α ) in Definition 2 is consistent.
Proposition A1.
Let $\alpha$, $a$, $l_{ij}(\alpha, a)$ and $L_{ij}(\alpha, a)$ be as in Definition 2. If the entries $a_{hk}$ of the vector $a$, with $h \le i$, $k \le j$, $(h,k) \neq (i,j)$, satisfy the bounds $l_{hk}(\alpha, a) \le a_{hk} \le L_{hk}(\alpha, a)$, then $0 \le l_{ij}(\alpha, a) \le L_{ij}(\alpha, a)$.
Proof. 
For simplicity α and a are omitted. Obviously, for each $i, j$, we have $l_{ij} \ge 0$ and so $a_{ij} \ge 0$. If $(i,j) = (1,1)$, $l_{11} = 0 \vee (n_1 - \sum_{k=2}^{p} m_k)$ and $L_{11} = n_1 \wedge m_1$. The thesis follows straightforwardly. The case $(i,j) \neq (1,1)$ is proved by induction.
  • Base steps. We prove that $l_{1j} \le L_{1j}$, by induction on $j$. The inequalities hold for $(i,j) = (1,1)$. We assume $l_{1,j-1} \le a_{1,j-1} \le L_{1,j-1}$.
    We have $a_{1,j-1} \le L_{1,j-1} \le n_1 - \sum_{k=1}^{j-2} a_{1k}$, that is $n_1 - \sum_{k=1}^{j-1} a_{1k} \ge 0$, and so $L_{1j} = (n_1 - \sum_{k=1}^{j-1} a_{1k}) \wedge m_j \ge 0$. We show that $L_{1j} \ge n_1 - \sum_{k=j+1}^{p} m_k - \sum_{k=1}^{j-1} a_{1k}$ and so, since $L_{1j} \ge 0$, we conclude $L_{1j} \ge l_{1j}$. If $L_{1j} = n_1 - \sum_{k=1}^{j-1} a_{1k}$, obviously $L_{1j} \ge n_1 - \sum_{k=j+1}^{p} m_k - \sum_{k=1}^{j-1} a_{1k}$. The case $L_{1j} = m_j$ also implies $l_{1j} \le L_{1j}$. In fact, from the inductive hypothesis, we have $a_{1,j-1} \ge l_{1,j-1} \ge n_1 - \sum_{k=j}^{p} m_k - \sum_{k=1}^{j-2} a_{1k}$, that is $m_j \ge n_1 - \sum_{k=j+1}^{p} m_k - \sum_{k=1}^{j-1} a_{1k}$. Analogously, the relation between $l_{i1}$ and $L_{i1}$ can be shown.
  • Induction step. We show that $l_{ij} \le L_{ij}$, by assuming that $l_{hk} \le a_{hk} \le L_{hk}$ for $h < i$, $k \le p$, so that $l_{i-1,j} \le a_{i-1,j} \le L_{i-1,j}$, and for $h \le p$, $k < j$, so that $l_{i,j-1} \le a_{i,j-1} \le L_{i,j-1}$. It follows that $L_{ij} \ge 0$, since the inequalities
    $a_{i-1,j} \le m_j - \sum_{h=1}^{i-2} a_{hj}$ and $a_{i,j-1} \le n_i - \sum_{k=1}^{j-2} a_{ik}$ imply $m_j - \sum_{h=1}^{i-1} a_{hj} \ge 0$ and $n_i - \sum_{k=1}^{j-1} a_{ik} \ge 0$.
    Furthermore, from $l_{i,j-1} \le a_{i,j-1}$ and from $l_{i-1,j} \le a_{i-1,j}$ it follows
    $$ \sum_{h=1}^{i} n_h - \sum_{k=j}^{p} m_k - \sum_{h=1}^{i} \sum_{k=1}^{j-1} a_{hk} \le 0 \quad \text{and} \quad \sum_{h=1}^{i-1} n_h - \sum_{k=j+1}^{p} m_k - \sum_{h=1}^{i-1} \sum_{k=1}^{j} a_{hk} \le 0 $$
    and so, by adding $m_j - \sum_{h=1}^{i-1} a_{hj}$ and $n_i - \sum_{k=1}^{j-1} a_{ik}$ to both sides of the first and of the second relation, respectively,
    $$ m_j - \sum_{h=1}^{i-1} a_{hj} \ge \sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k - \sum_{\substack{h \le i;\ k \le j \\ (h,k) \neq (i,j)}} a_{hk} \quad \text{and} \quad n_i - \sum_{k=1}^{j-1} a_{ik} \ge \sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k - \sum_{\substack{h \le i;\ k \le j \\ (h,k) \neq (i,j)}} a_{hk} . $$
    We conclude that $L_{ij} \ge l_{ij}$. In fact $L_{ij} \ge 0$ and
    $$ L_{ij} = \left( n_i - \sum_{k=1}^{j-1} a_{ik} \right) \wedge \left( m_j - \sum_{h=1}^{i-1} a_{hj} \right) \ge \sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k - \sum_{\substack{h \le i;\ k \le j \\ (h,k) \neq (i,j)}} a_{hk} . \qquad \square $$
Proposition A1 also holds when some values n r and m s are equal to zero.
Proposition A2.
Let $\alpha = (n_1, m_1, \ldots, n_p, m_p) \in \mathbb{Z}_{\ge 0}^{2p}$ and let $a = (a_{1,1}, \ldots, a_{1,p}, \ldots, a_{p,p}) \in \mathbb{Z}_{\ge 0}^{p^2}$ be a vector belonging to $I(\alpha)$. If $a_{ij}$ coincides with a bound, then some of the $a_{hk}$ are uniquely determined.
1.
Let $a_{ij} = L_{ij}(\alpha, a) = m_j - \sum_{h=1}^{i-1} a_{hj}$. Then $a_{qj} = 0$ for $q = i+1, \ldots, p$.
2.
Let $a_{ij} = L_{ij}(\alpha, a) = n_i - \sum_{k=1}^{j-1} a_{ik}$. Then $a_{it} = 0$ for $t = j+1, \ldots, p$.
3.
Let $a_{ij} = l_{ij}(\alpha, a) = \sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k - \sum_{h \le i;\ k \le j;\ (h,k) \neq (i,j)} a_{hk}$.
Then $a_{it} = m_t - \sum_{h=1}^{i-1} a_{ht}$, $a_{qj} = n_q - \sum_{k=1}^{j-1} a_{qk}$ and $a_{qt} = 0$, for $q = i+1, \ldots, p$, $t = j+1, \ldots, p$.
Proof. 
For simplicity α and a are omitted.
  • Let $a_{ij} = L_{ij} = m_j - \sum_{h=1}^{i-1} a_{hj}$. Then $m_j = \sum_{h=1}^{i} a_{hj}$. We show, by induction, that $a_{qj} = 0$ for $q > i$.
    • Base step: $q = i+1$. We have $0 \le a_{i+1,j} \le L_{i+1,j} \le m_j - \sum_{h=1}^{i} a_{hj} = 0$.
    • Induction step: let $a_{hj} = 0$ for $h = i+1, \ldots, q-1$. Then $0 \le a_{qj} \le L_{qj} \le m_j - \sum_{h=1}^{q-1} a_{hj} = m_j - \sum_{h=1}^{i} a_{hj} = 0$.
  • Analogously to the previous item.
  • Since $a_{ij} = \sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k - \sum_{h \le i;\ k \le j;\ (h,k) \neq (i,j)} a_{hk}$, then $\sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k = \sum_{h \le i;\ k \le j} a_{hk}$ and so, for each $t > j$, it holds
    $$ l_{it} = \sum_{h=1}^{i} n_h - \sum_{k=t+1}^{p} m_k - \sum_{\substack{h \le i;\ k \le t \\ (h,k) \neq (i,t)}} a_{hk} = \sum_{h=1}^{i} n_h - \sum_{k=j+1}^{p} m_k + \sum_{k=j+1}^{t} m_k - \sum_{\substack{h \le i;\ k \le t \\ (h,k) \neq (i,t)}} a_{hk} = \sum_{k=j+1}^{t} m_k + \sum_{h=1}^{i} \sum_{k=1}^{j} a_{hk} - \sum_{\substack{h \le i;\ k \le t \\ (h,k) \neq (i,t)}} a_{hk} = \sum_{k=j+1}^{t} m_k - \sum_{h=1}^{i-1} \sum_{k=j+1}^{t} a_{hk} - \sum_{k=j+1}^{t-1} a_{ik} = \sum_{k=j+1}^{t} \left( m_k - \sum_{h=1}^{i-1} a_{hk} \right) - \sum_{k=j+1}^{t-1} a_{ik} . \tag{A1} $$
    We show by induction that $a_{it} = m_t - \sum_{h=1}^{i-1} a_{ht}$ for each $t > j$.
    • Base step: $t = j+1$. By Equation (A1) with $t = j+1$ we have
      $$ l_{i,j+1} = m_{j+1} - \sum_{h=1}^{i-1} a_{h,j+1} \le L_{i,j+1} \le m_{j+1} - \sum_{h=1}^{i-1} a_{h,j+1}, $$
      that is $l_{i,j+1} = L_{i,j+1}$, and so the thesis follows since $l_{i,j+1} \le a_{i,j+1} \le L_{i,j+1}$.
    • Induction step: let $a_{ik} = m_k - \sum_{h=1}^{i-1} a_{hk}$ for $j+1 \le k < t$. By Equation (A1),
      $$ l_{it} = m_t - \sum_{h=1}^{i-1} a_{ht} + \sum_{k=j+1}^{t-1} \left( m_k - \sum_{h=1}^{i-1} a_{hk} \right) - \sum_{k=j+1}^{t-1} a_{ik} = m_t - \sum_{h=1}^{i-1} a_{ht} + \sum_{k=j+1}^{t-1} a_{ik} - \sum_{k=j+1}^{t-1} a_{ik} = m_t - \sum_{h=1}^{i-1} a_{ht} \ge L_{it} \ge l_{it}, $$
      and so $l_{it} = L_{it}$, and the thesis follows since $l_{it} \le a_{it} \le L_{it}$.
    Analogously, we can show $a_{qj} = n_q - \sum_{k=1}^{j-1} a_{qk}$ for $q > i$.
    Furthermore, since $a_{qj} = L_{qj} = n_q - \sum_{k=1}^{j-1} a_{qk}$ for $q = i+1, \ldots, p$, from Item 2 it follows that $a_{qt} = 0$ for $t = j+1, \ldots, p$ and so, by varying $q \ge i+1$, we obtain the thesis. ☐
Proposition A3.
Let $\alpha$ be a multi-index, let $\alpha_{hk} = \alpha - e_h - e_{p+k}$, and let $I(\alpha)$ be as in Definition 2. Let $P(a)$ be as in Equation (12). Then
$$ \sum_{a \in I(\alpha_{hk})} (a_{hk} + 1)\, P(a_{11}, \ldots, a_{hk} + 1, \ldots, a_{pp}) = \sum_{a \in I(\alpha)} a_{hk}\, P(a) . $$
Proof. 
We define the set $I^*_{hk} = \left\{ b \mid b_{ij} = a_{ij},\ (i,j) \neq (h,k);\ b_{hk} = a_{hk} + 1;\ a \in I(\alpha_{hk}) \right\}$. We consider the bounds for $b \in I^*_{hk}$. Using $n_h - 1$ and $m_k - 1$ instead of $n_h$ and $m_k$, by direct computation we obtain the following conditions.
  • Let $i < h$ and $j < k$. Since $j + 1 \le k$, we have
    $$ 0 \vee \left( \sum_{r=1}^{i} n_r - \sum_{s=j+1}^{p} m_s + 1 - \sum_{\substack{r \le i;\ s \le j \\ (r,s) \neq (i,j)}} b_{rs} \right) \le b_{ij} \le \left( n_i - \sum_{s=1}^{j-1} b_{is} \right) \wedge \left( m_j - \sum_{r=1}^{i-1} b_{rj} \right) $$
  • Let $i = h$ and $j < k$. Since $j + 1 \le k$, we have
    $$ 0 \vee \left( \sum_{r=1}^{h} n_r - \sum_{s=j+1}^{p} m_s - \sum_{\substack{r \le h;\ s \le j \\ (r,s) \neq (h,j)}} b_{rs} \right) \le b_{hj} \le \left( n_h - 1 - \sum_{s=1}^{j-1} b_{hs} \right) \wedge \left( m_j - \sum_{r=1}^{h-1} b_{rj} \right) $$
  • Let $i < h$ and $j = k$. We have
    $$ 0 \vee \left( \sum_{r=1}^{i} n_r - \sum_{s=k+1}^{p} m_s - \sum_{\substack{r \le i;\ s \le k \\ (r,s) \neq (i,k)}} b_{rs} \right) \le b_{ik} \le \left( n_i - \sum_{s=1}^{k-1} b_{is} \right) \wedge \left( m_k - 1 - \sum_{r=1}^{i-1} b_{rk} \right) $$
  • Let $i = h$ and $j = k$. Since $b_{hk} = a_{hk} + 1$, we have
    $$ 1 \vee \left( \sum_{r=1}^{h} n_r - \sum_{s=k+1}^{p} m_s - \sum_{\substack{r \le h;\ s \le k \\ (r,s) \neq (h,k)}} b_{rs} \right) \le b_{hk} \le \left( n_h - \sum_{s=1}^{k-1} b_{hs} \right) \wedge \left( m_k - \sum_{r=1}^{h-1} b_{rk} \right) $$
  • Let $i > h$ or $j > k$. Analysing each case ($i > h$ with $j < k$, $j = k$, $j > k$; $j > k$ with $i < h$, $i = h$) as in the previous items, we have
    $$ 0 \vee \left( \sum_{r=1}^{i} n_r - \sum_{s=j+1}^{p} m_s - \sum_{\substack{r \le i;\ s \le j \\ (r,s) \neq (i,j)}} b_{rs} \right) \le b_{ij} \le \left( n_i - \sum_{s=1}^{j-1} b_{is} \right) \wedge \left( m_j - \sum_{r=1}^{i-1} b_{rj} \right) $$
It follows that the set $I^*_{hk}$ is strictly contained in $I(\alpha)$. The vectors $b \in I(\alpha) \setminus I^*_{hk}$ are such that $b_{hk} = 0$ or at least one component coincides with a bound $l_{ij}$ or $L_{ij}$. This latter condition also implies that $b_{hk} = 0$. In fact, if $b \in I(\alpha) \setminus I^*_{hk}$ then
$$ b_{ij} = \sum_{r=1}^{i} n_r - \sum_{s=j+1}^{p} m_s - \sum_{\substack{r \le i;\ s \le j \\ (r,s) \neq (i,j)}} b_{rs} = l_{ij}(\alpha) \ \text{ if } i < h,\ j < k, \quad \text{or} \quad b_{hj} = n_h - \sum_{s=1}^{j-1} b_{hs} = L_{hj}(\alpha) \ \text{ if } j < k, \quad \text{or} \quad b_{ik} = m_k - \sum_{r=1}^{i-1} b_{rk} = L_{ik}(\alpha) \ \text{ if } i < h . $$
From Proposition A2, it follows that $b_{hk} = 0$. We conclude that
$$ \sum_{a \in I(\alpha_{hk})} (a_{hk} + 1)\, P(a_{11}, \ldots, a_{hk} + 1, \ldots, a_{pp}) = \sum_{b \in I^*_{hk}} b_{hk}\, P(b) = \sum_{b \in I(\alpha)} b_{hk}\, P(b), $$
since the vectors $b \in I(\alpha) \setminus I^*_{hk}$ correspond to null addends, $b_{hk}$ being zero. ☐

References

  1. Itô, K. Complex multiple Wiener integral. Jpn. J. Math. 1952, 22, 63–86.
  2. Goodman, N.R. Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction). Ann. Math. Stat. 1963, 34, 152–177.
  3. Janson, S. Gaussian Hilbert Spaces; Cambridge University Press: Cambridge, UK, 1997.
  4. Fraenkel, L.E. Formulae for high derivatives of composite functions. Math. Proc. Camb. Philos. Soc. 1978, 83, 159–165.
  5. Barvinok, A. Integration and optimization of multivariate polynomials by restriction onto a random subspace. Found. Comput. Math. 2007, 7, 229–244.
  6. Ryser, H.J. Combinatorial Mathematics; The Carus Mathematical Monographs, No. 14; John Wiley and Sons, Inc.: New York, NY, USA, 1963.
  7. Hopcroft, J.; Tarjan, R. Algorithm 447: Efficient algorithms for graph manipulation. Commun. ACM 1973, 16, 372–378.
  8. Fassino, C.; Riccomagno, E.; Rogantin, M.P. Cubature rules and expected value of some complex functions. arXiv 2017, arXiv:1709.08910.
