Article

The Differential Entropy of the Joint Distribution of Eigenvalues of Random Density Matrices

1 School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
2 School of Applied Science, Department of Mathematics, Harbin University of Science and Technology, Harbin 150080, China
3 Department of Mathematics, Anhui University of Technology, Ma'anshan 243032, China
4 Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018, China
5 College of Mathematics and Computer Science, Fujian Normal University, Fuzhou 350018, China
* Author to whom correspondence should be addressed.
Entropy 2016, 18(9), 342; https://doi.org/10.3390/e18090342
Submission received: 5 August 2016 / Revised: 11 September 2016 / Accepted: 19 September 2016 / Published: 21 September 2016
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: We derive exactly the differential entropy of the joint distribution of eigenvalues of Wishart matrices. Based on this result, we calculate the differential entropy of the joint distribution of eigenvalues of random mixed quantum states, which is induced by taking the partial trace over the environment of Haar-distributed bipartite pure states. Then, we investigate the differential entropy of the joint distribution of diagonal entries of random mixed quantum states. Finally, we investigate the relative entropy between these two kinds of distributions.

1. Introduction

The notion of entropy is ubiquitous in diverse fields such as physics, mathematics, and information theory, and it has been given a number of meanings in various contexts. The entropy functions have their roots in statistical mechanics [1]; they originated in the work of Boltzmann, who studied the relation between entropy and probability in physical systems in the 1870s. In thermodynamics, entropy is commonly understood as a measure of disorder [2]. According to the second law of thermodynamics, the entropy of an isolated system never decreases; such a system eventually attains its maximum entropy. In information theory, the Shannon entropy
$$H(p) := -\sum_{j} p_j \ln p_j$$
of a discrete probability distribution was introduced by Shannon in his seminal papers in 1948 [3]. Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source [4].
Differential entropy (also referred to as continuous entropy) is a measure of the average surprisal of a continuous random variable. The differential entropy of a continuous random variable with probability density function $p(x)$ is defined by:
$$h(p) := -\int p(x)\ln p(x)\,dx.$$
This seems to be the natural extension of discrete Shannon entropy. However, the differential entropy is a very different quantity, since it can be positive or negative.
For a given density matrix ρ (i.e., a positive semidefinite matrix with unit trace), its von Neumann entropy is defined as:
$$S(\rho) := -\operatorname{Tr}(\rho\ln\rho),$$
where $\ln\rho$ is understood in the sense of the functional calculus of ρ. In fact, $S(\rho) = -\sum_j \lambda_j(\rho)\ln\lambda_j(\rho)$, where $\lambda_j(\rho)$ denote the eigenvalues of ρ.
There also exist generalized entropies, such as the classical and quantum Rényi entropies, the Tsallis entropy, and the unified $(r,s)$-relative entropy [5]; they are generalizations of the Shannon and von Neumann entropies and have many applications in statistical physics and information theory.
Recently, a handbook of differential entropy has been published [6]. This book is intended as a practical introduction to differential entropy in information theory. Although it computes the differential entropies of many continuous probability densities, the differential entropy of the joint eigenvalue distribution of Wishart matrices is still missing. More recently, differential entropy has been used as a tool to study uncertainty relations based on entropy power [7]. In view of this, we derive exactly the differential entropy of the joint distribution of eigenvalues of Wishart matrices. Based on this result, we calculate the differential entropy of the joint distribution of eigenvalues of random mixed quantum states induced by partial tracing over the environment of uniformly distributed bipartite pure states. We also derive the joint distribution of the diagonal entries of random mixed quantum states induced in this way. Moreover, we investigate the differential entropy of the joint distribution of the diagonal entries and find that, in high dimension, it is not less than the differential entropy of the joint distribution of the eigenvalues. This fact can be related to Schur's theorem [8], which states that the vector of diagonal entries of a Hermitian matrix is majorized by the vector of its eigenvalues; hence, the diagonal entropy of a quantum state ρ is no less than its von Neumann entropy, i.e., $S(\rho_{\mathrm{diag}}) \ge S(\rho)$. This is the analogy between von Neumann entropy and differential entropy. Note that the entropy difference $S(\rho_{\mathrm{diag}}) - S(\rho)$ has very recently been used as a figure of merit for quantifying quantum coherence [9].
This paper is organized as follows. In Section 2, we introduce the relevant random matrix model, the Wishart ensemble, together with the induced random quantum states derived from it; this model is of central importance in quantum information theory. We present the joint distribution of eigenvalues of the Wishart random matrix ensemble. Section 3 deals with the calculation of the differential entropy of the joint distribution of eigenvalues of the Wishart ensemble. The differential entropies of the joint distributions of the eigenvalues and of the diagonal entries of random quantum states are then calculated in Section 4 and Section 5, respectively. Finally, in Section 6, we investigate the relative entropy between these two kinds of distributions. We conclude the paper in Section 7. Some necessary materials are provided in Appendix A and Appendix B.

2. Wishart Matrices

We use the notation $x \sim \mathcal{N}(\mu,\sigma^2)$ to indicate a real Gaussian random variable x with mean μ and variance $\sigma^2$. Let Z denote an $m \times n$ ($m \le n$) complex random matrix [10,11], whose entries are independent complex random variables distributed as $\mathcal{N}_{\mathbb{C}}(0,1) = \mathcal{N}(0,\tfrac{1}{2}) + \mathrm{i}\,\mathcal{N}(0,\tfrac{1}{2})$, with Gaussian densities:
$$\frac{1}{\pi}\exp\left(-|z_{ij}|^2\right),$$
where $\operatorname{Re}(z_{ij})$ and $\operatorname{Im}(z_{ij})$ are independent identically distributed (i.i.d.) real Gaussian random variables with mean zero and variance $\tfrac{1}{2}$. Such a random matrix Z is called a Ginibre random matrix.
Definition 1
(Wishart matrix [12]). With $m \times n$ random matrices Z specified as above, the complex Wishart ensemble consists of the matrices $W = ZZ^\dagger$. The matrices $W = ZZ^\dagger$ are referred to as (uncorrelated) Wishart matrices; all such matrices form the so-called Wishart ensemble.
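As a quick numerical illustration (a minimal sketch using NumPy; the sizes m and n below are arbitrary choices, not values used in the paper), one can sample a Ginibre matrix and form the corresponding Wishart matrix:

```python
import numpy as np

def ginibre(m, n, rng=np.random.default_rng()):
    """Sample an m x n Ginibre matrix: i.i.d. entries ~ N_C(0,1)."""
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

m, n = 3, 5                       # example sizes with m <= n
Z = ginibre(m, n)
W = Z @ Z.conj().T                # Wishart matrix W = Z Z^dagger
mu = np.linalg.eigvalsh(W)        # its real, nonnegative eigenvalues
print(np.sort(mu)[::-1])
```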

2.1. Joint Probability Density of the Eigenvalues of the Wishart Ensemble

Consider the entries of the matrices Z constituting the Wishart ensemble $ZZ^\dagger$ as complex zero-mean i.i.d. Gaussian variables with variance $\sigma^2$. This case leads to the invariant class of ensembles referred to as Gaussian Unitary Ensembles (GUE). As chosen previously, $m \le n$ for definiteness. The probability distribution followed by Z is:
$$\Pr(Z) \propto \exp\left(-\frac{1}{2\sigma^2}\operatorname{Tr}\left(ZZ^\dagger\right)\right),$$
where $\sigma^2$ is the variance of each distinct real component of the matrix elements of Z. Indeed, let $Z = [z_{ij}]$ be a complex random matrix, where $z_{ij} = \operatorname{Re}(z_{ij}) + \sqrt{-1}\,\operatorname{Im}(z_{ij})$ with $\operatorname{Re}(z_{ij}),\operatorname{Im}(z_{ij}) \sim \mathcal{N}(0,\sigma^2)$. The probability distribution of Z is just the joint distribution of all matrix elements $z_{ij}$ of Z. Thus:
$$\Pr(Z) = \prod_{i=1}^m\prod_{j=1}^n \Pr(z_{ij}) = \prod_{i=1}^m\prod_{j=1}^n \Pr\left(\operatorname{Re}(z_{ij}),\operatorname{Im}(z_{ij})\right) \propto \prod_{i=1}^m\prod_{j=1}^n \exp\left(-\frac{\operatorname{Re}(z_{ij})^2}{2\sigma^2}\right)\exp\left(-\frac{\operatorname{Im}(z_{ij})^2}{2\sigma^2}\right) = \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^m\sum_{j=1}^n |z_{ij}|^2\right) = \exp\left(-\frac{1}{2\sigma^2}\operatorname{Tr}\left(ZZ^\dagger\right)\right).$$
Let $\sigma^2 = \tfrac{1}{2}$; then $\Pr(Z)\propto\exp\left(-\operatorname{Tr}(ZZ^\dagger)\right)$. The respective joint probability density of the eigenvalues ($\mu_j\in[0,\infty)$, $j=1,\ldots,m$) of $ZZ^\dagger$ comes out as [13]:
$$q(\mu_1,\ldots,\mu_m) = C_q \prod_{1\le i<j\le m}(\mu_i-\mu_j)^2 \prod_{j=1}^m \mu_j^{n-m} e^{-\mu_j}.$$
The normalization C q can be evaluated using Selberg’s integral [10] and turns out to be:
$$C_q = \frac{1}{\prod_{j=1}^m j!\,(n-j)!}.$$
Since the density q is symmetric (i.e., $q(\mu_{\tau(1)},\ldots,\mu_{\tau(m)}) = q(\mu_1,\ldots,\mu_m)$ for all permutations $\tau\in S_m$, the symmetric group of $\{1,\ldots,m\}$), we are dealing with integrals of the form:
$$I_m^{(n)}(f) = \int_0^\infty\cdots\int_0^\infty f(\mu_1)\,q(\mu_1,\ldots,\mu_m)\,d\mu_1\cdots d\mu_m.$$
We shall perform the computations for an arbitrary test function f and then specialize the final result to the cases $f(\mu)=\mu^k$ for nonnegative integers k.
Proposition 1
([14]). Assume that $\alpha = n - m$. The following representation of $I_m^{(n)}(f)$ holds for any suitable function f:
$$I_m^{(n)}(f) = \int_0^\infty f(\mu)\,\phi(\mu)\,d\mu,$$
where the function ϕ admits the following expression by means of generalized Laguerre polynomials:
$$\phi(\mu) = \frac{(m-1)!}{(m+\alpha-1)!}\left\{\left[L_{m-1}^{(\alpha+1)}(\mu)\right]^2 - L_{m-2}^{(\alpha+1)}(\mu)\,L_{m}^{(\alpha+1)}(\mu)\right\}\mu^{\alpha}e^{-\mu}.$$
Here, the generalized Laguerre polynomials with parameter α [15] are defined by:
$$L_k^{(\alpha)}(\mu) = \sum_{j=0}^{k}(-1)^j\binom{k+\alpha}{k-j}\frac{\mu^j}{j!}.$$
Remark 1.
For example, for $f(\mu) = \mu$, we have:
$$\int_0^\infty \mu\,\phi(\mu)\,d\mu = \frac{\Gamma(m)}{\Gamma(n)}\int_0^\infty \left\{\left[L_{m-1}^{(\alpha+1)}(\mu)\right]^2 - L_{m-2}^{(\alpha+1)}(\mu)\,L_{m}^{(\alpha+1)}(\mu)\right\}\mu^{\alpha+1}e^{-\mu}\,d\mu = \frac{\Gamma(m)}{\Gamma(n)}\int_0^\infty \left[L_{m-1}^{(\alpha+1)}(\mu)\right]^2\mu^{\alpha+1}e^{-\mu}\,d\mu = \frac{\Gamma(m)}{\Gamma(n)}\cdot\frac{\Gamma(n+1)}{\Gamma(m)} = n,$$
where the second equality follows from the orthogonality of the generalized Laguerre polynomials with respect to the weight $\mu^{\alpha+1}e^{-\mu}$.
Because $\phi(\mu)$ is also a probability density function, i.e., the distribution density of a single eigenvalue of the Wishart random matrix ensemble, we can still consider its differential entropy:
$$-\int_0^\infty \phi(\mu)\ln\phi(\mu)\,d\mu.$$
However, an exact calculation of this integral seems out of reach.
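The integral can nevertheless be evaluated numerically. The following sketch (assuming SciPy; the function names are ours, and it requires m ≥ 2 so that all Laguerre degrees are nonnegative) implements the single-eigenvalue density of Proposition 1 and checks the normalization and the mean from Remark 1:

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln
from scipy.integrate import quad

def phi(mu, m, n):
    """Single-eigenvalue density of the Wishart ensemble (Proposition 1), alpha = n - m."""
    a = n - m
    bracket = (eval_genlaguerre(m - 1, a + 1, mu) ** 2
               - eval_genlaguerre(m - 2, a + 1, mu) * eval_genlaguerre(m, a + 1, mu))
    return np.exp(gammaln(m) - gammaln(n)) * bracket * mu ** a * np.exp(-mu)

m, n = 3, 5
print(quad(lambda x: phi(x, m, n), 0, 60)[0])        # ~ 1 (normalization)
print(quad(lambda x: x * phi(x, m, n), 0, 60)[0])    # ~ n (Remark 1)

def surprisal(x):
    p = phi(x, m, n)
    return -p * np.log(p) if p > 0 else 0.0

print(quad(surprisal, 0, 60)[0])                     # numerical single-eigenvalue entropy
```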

2.2. Joint Probability Density of the Eigenvalues of Random Density Matrices

For the mathematical treatment of a quantum system, one usually associates with it a Hilbert space whose vectors describe the states of that system. In our situation, we associate with A and B two complex Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$ with respective dimensions m and n, which are assumed here to satisfy $m \le n$. In this setting, the vectors of the spaces $\mathcal{H}_A$ and $\mathcal{H}_B$ describe the states of the systems A and B; those of the tensor product $\mathcal{H}_A\otimes\mathcal{H}_B$ (of dimension $mn$) then describe the states of the combined system AB.
It will be helpful throughout this paper to make use of a simple correspondence between the linear operator space $\mathcal{L}(\mathcal{X},\mathcal{Y})$ and $\mathcal{Y}\otimes\mathcal{X}$, for given complex Euclidean spaces $\mathcal{X}$ and $\mathcal{Y}$. We define the mapping
$$\operatorname{vec}: \mathcal{L}(\mathcal{X},\mathcal{Y}) \to \mathcal{Y}\otimes\mathcal{X}$$
to be the linear mapping that represents a change of basis from the standard basis of $\mathcal{L}(\mathcal{X},\mathcal{Y})$ to the standard basis of $\mathcal{Y}\otimes\mathcal{X}$. Specifically, in the Dirac notation, this mapping amounts to flipping a bra into a ket; we define
$$\operatorname{vec}(|i\rangle\langle j|) = |i\rangle\otimes|j\rangle = |ij\rangle;$$
at this point, the mapping is determined for every $A = \sum_{i,j} A_{ij}|i\rangle\langle j| \in \mathcal{L}(\mathcal{X},\mathcal{Y})$ by linearity [16]. For convenience, we write $\operatorname{vec}(A) =: |A\rangle$. Clearly, $\operatorname{Tr}_{\mathcal{X}}(|A\rangle\langle B|) = AB^\dagger$ for $A, B \in \mathcal{L}(\mathcal{X},\mathcal{Y})$. In this problem, we take $\mathcal{X} = \mathcal{H}_B = \mathbb{C}^n$ and $\mathcal{Y} = \mathcal{H}_A = \mathbb{C}^m$.
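As a small numerical sanity check of the identity $\operatorname{Tr}_{\mathcal{X}}(|A\rangle\langle B|) = AB^\dagger$ (a sketch with NumPy; the helper names are ours and assume the row-major ordering of the computational basis):

```python
import numpy as np

def vec(A):
    """vec(|i><j|) = |i> (x) |j>, extended by linearity: row-major flattening."""
    return A.reshape(-1)

def partial_trace_X(M, m, n):
    """Trace out the second factor X = C^n of an operator M on Y (x) X = C^m (x) C^n."""
    return np.trace(M.reshape(m, n, m, n), axis1=1, axis2=3)

m, n = 2, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
lhs = partial_trace_X(np.outer(vec(A), vec(B).conj()), m, n)
print(np.allclose(lhs, A @ B.conj().T))   # True
```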
If the bipartite pure state $|\psi\rangle\in\mathcal{H}_A\otimes\mathcal{H}_B$ is randomized, it can be considered as a random vector $|G\rangle$ with random coordinates $G_{ij}$, $i=1,\ldots,m$, $j=1,\ldots,n$, whose probability distribution is the uniform distribution on the unit sphere S of $\mathcal{H}_A\otimes\mathcal{H}_B$. That is, for any test function f:
$$\mathbb{E}\left[f(G)\right] = \int_S f(\psi)\,ds(\psi),$$
where $ds(\psi)$ denotes the unique, normalized, unitarily invariant measure on the manifold of normalized state vectors $|\psi\rangle$. The most transparent realization of $ds(\psi)$ is offered by the following delta-function prescription (in the sense of distributions) [17]:
$$\int f(\psi)\,ds(\psi) = \frac{\Gamma(N)}{\pi^N}\int f(\psi)\,\delta\left(1-\langle\psi|\psi\rangle\right)[d\psi],$$
where $[d\psi]$ stands for the volume element, defined by:
$$[d\psi] := [d(\operatorname{Re}\psi)]\,[d(\operatorname{Im}\psi)] = \prod_{j=1}^N dx_j\,dy_j$$
for $|\psi\rangle = (\psi_1,\ldots,\psi_N)^{\mathsf{T}}$ and $\psi_j = x_j + \sqrt{-1}\,y_j \in \mathbb{R} + \sqrt{-1}\,\mathbb{R}$ ($j=1,\ldots,N$). Besides, if we denote $r_j = |\psi_j|^2$, we see that the joint probability density function of $(r_1,\ldots,r_N)$ is given by [18]:
$$\Gamma(N)\,\delta\!\left(1-\sum_{j=1}^N r_j\right)\prod_{j=1}^N dr_j.$$
Consider the matrix elements $G_{ij}$ as i.i.d. complex Gaussian variables with zero mean, such that the fixed-trace condition $\operatorname{Tr}(GG^\dagger)=1$ is satisfied for $G=[G_{ij}]$. The random reduced density matrix $\rho_A^G = \operatorname{Tr}_B(|G\rangle\langle G|)$ associated with the random pure state $|G\rangle$ is of the matrix form $GG^\dagger$, i.e., $\rho_A^G = GG^\dagger$ for $|G\rangle\in\mathcal{H}_A\otimes\mathcal{H}_B$, where $G:\mathcal{H}_B\to\mathcal{H}_A$ is an $m \times n$ random rectangular matrix of the random coordinates $G_{ij}$. The corresponding random eigenvalues of $\rho_A^G$ are denoted by $\lambda_1,\ldots,\lambda_m$. The probability distribution of G is given by:
$$\Pr(G) \propto \delta\left(1-\operatorname{Tr}\left(GG^\dagger\right)\right).$$
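In practice, a random induced state of this kind and its eigenvalues can be sampled by rescaling a Ginibre matrix to unit trace (a minimal sketch with NumPy; the function name is ours):

```python
import numpy as np

def random_induced_state(m, n, rng=np.random.default_rng()):
    """Sample rho = G G^dagger with Tr(G G^dagger) = 1, G an m x n rescaled Ginibre matrix."""
    Z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    G = Z / np.sqrt(np.trace(Z @ Z.conj().T).real)
    return G @ G.conj().T

rho = random_induced_state(2, 3)
lam = np.linalg.eigvalsh(rho)      # eigenvalues lambda_1, ..., lambda_m (sum to 1)
diag = np.diag(rho).real           # diagonal entries rho_11, ..., rho_mm (also sum to 1)
print(lam, diag)
```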
Correspondingly, the joint probability density function of the eigenvalues ($\lambda_j\in[0,1]$, $j=1,\ldots,m$) is obtained [19] as:
$$p(\lambda_1,\ldots,\lambda_m) = C_p\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\prod_{j=1}^m\lambda_j^{n-m},$$
where:
$$\lambda = (\lambda_1,\ldots,\lambda_m)\in\Delta_{m-1} := \left\{\lambda\in\mathbb{R}_+^m : \sum_j\lambda_j = 1\right\}.$$
The normalization constant in the above equation is:
$$C_p = \Gamma(mn)\,C_q,$$
where $C_q$ is given in (3) and $\Gamma(z) = \int_0^\infty t^{z-1}e^{-t}\,dt$ is the Gamma function, defined for $\operatorname{Re}(z)>0$.
In the following, we always let $\Gamma(x)$ and $\psi(x)$ denote the Gamma function and the Digamma function, respectively. It is well known that:
$$\psi(x) = \frac{d\ln\Gamma(x)}{dx} = \frac{\Gamma'(x)}{\Gamma(x)},$$
$$\psi(k+1) = H_k - \gamma,$$
where $H_k = \sum_{j=1}^k \frac{1}{j}$ and $\gamma\approx 0.57721$ is the Euler constant. We emphasize that the restriction $m \le n$ is in force throughout.
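Since the closed forms below are built from harmonic numbers, the Digamma function and log-factorials, the following small helper sketch (assuming SciPy; the function names are ours) illustrates these identities and could be reused for numerical checks of the formulas in the later sections:

```python
import math
import numpy as np
from scipy.special import digamma, gammaln

def H(k):
    """k-th harmonic number H_k = sum_{j=1}^k 1/j (with H_0 = 0)."""
    return sum(1.0 / j for j in range(1, k + 1))

gamma_euler = -digamma(1)                                   # Euler's constant, ~0.57721
print(np.isclose(digamma(5 + 1), H(5) - gamma_euler))       # psi(k+1) = H_k - gamma
print(np.isclose(gammaln(6), math.log(math.factorial(5))))  # Gamma(6) = 5!
```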

3. The Differential Entropy of the Joint Distribution of the Eigenvalues of the Wishart Ensemble

In this section, we calculate the differential entropy of the joint distribution of eigenvalues of the Wishart ensemble. This differential entropy is given by the following integral:
$$h(q) = -\int_0^\infty\cdots\int_0^\infty q(\mu_1,\ldots,\mu_m)\ln q(\mu_1,\ldots,\mu_m)\prod_{k=1}^m d\mu_k,$$
where:
$$q(\mu_1,\ldots,\mu_m) = C_q\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k},\qquad C_q = \frac{1}{\prod_{k=1}^m k!\,(n-k)!}.$$
Theorem 1.
The differential entropy of the joint distribution of eigenvalues of the Wishart ensemble is given by the following:
$$h(q) = mn + m\,\psi(2) - \ln C_q - \sum_{k=n-m}^{n-1} k\,\psi(k+1) - \sum_{k=1}^{m} k\,\psi(k+1).$$
Proof. 
Combining (12) and (13), we get that:
$$h(q) = -\int_0^\infty\cdots\int_0^\infty \prod_{k=1}^m d\mu_k\; C_q\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k}\left[\ln C_q + \ln\!\!\prod_{1\le i<j\le m}\!\!(\mu_i-\mu_j)^2 + \ln\prod_{k=1}^m\mu_k^{n-m} + \ln\prod_{k=1}^m e^{-\mu_k}\right] = -C_q I_1\ln C_q - C_q I_2 - C_q I_3 - C_q I_4,$$
where:
$$\begin{aligned}
I_1 &= \int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\,\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k},\\
I_2 &= \int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\right]\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k},\\
I_3 &= \int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\,\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k}\,\ln\prod_{k=1}^m\mu_k^{n-m},\\
I_4 &= \int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\,\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k}\,\ln\prod_{k=1}^m e^{-\mu_k}.
\end{aligned}$$
Let:
$$I_m(\alpha,r) = \int_0^\infty\cdots\int_0^\infty\prod_{1\le i<j\le m}|\mu_i-\mu_j|^{2r}\prod_{k=1}^m\mu_k^{\alpha}e^{-\mu_k}\,d\mu_k.$$
It is well known [10] that:
$$I_m(\alpha,r) = \prod_{k=0}^{m-1}\frac{\Gamma(\alpha+1+kr)\,\Gamma(1+(k+1)r)}{\Gamma(1+r)}.$$
Thus:
$$I_1 = I_m(n-m,1) = \frac{1}{C_q},\qquad I_2 = \left.\frac{\partial}{\partial r}I_m(\alpha,r)\right|_{(\alpha,r)=(n-m,1)}.$$
By the equalities (10) and (11), we get that:
$$\begin{aligned}
\frac{\partial}{\partial r}I_m(\alpha,r) &= \frac{\partial}{\partial r}\prod_{k=0}^{m-1}\frac{\Gamma(\alpha+1+kr)\,\Gamma(1+(k+1)r)}{\Gamma(1+r)}\\
&= \Gamma(\alpha+1)\,\frac{\partial}{\partial r}\!\left[\prod_{k=1}^{m-1}\Gamma(\alpha+1+kr)\right]\prod_{k=1}^{m}\frac{\Gamma(1+kr)}{\Gamma(1+r)} + \Gamma(\alpha+1)\prod_{k=1}^{m-1}\Gamma(\alpha+1+kr)\,\frac{\partial}{\partial r}\!\left[\prod_{k=1}^{m}\frac{\Gamma(1+kr)}{\Gamma(1+r)}\right]\\
&= I_m(\alpha,r)\left[\sum_{k=1}^{m-1}k\,\psi(\alpha+1+kr) + \sum_{k=1}^{m}k\,\psi(1+kr) - m\,\psi(1+r)\right].
\end{aligned}$$
Thus:
$$I_2 = \left.\frac{\partial}{\partial r}I_m(\alpha,r)\right|_{(\alpha,r)=(n-m,1)} = I_m(n-m,1)\left[\sum_{k=1}^{m-1}k\,\psi(n-m+1+k) + \sum_{k=1}^{m}k\,\psi(1+k) - m\,\psi(2)\right] = \frac{1}{C_q}\left[\sum_{k=n-m+1}^{n}(k-n+m-1)\,\psi(k) + \sum_{k=1}^{m}k\,\psi(1+k) - m\,\psi(2)\right].$$
Similarly,
$$I_3 = (n-m)\int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\,\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k}\,\ln\prod_{k=1}^m\mu_k = (n-m)\left.\frac{\partial}{\partial\alpha}I_m(\alpha,r)\right|_{(\alpha,r)=(n-m,1)} = (n-m)\,I_m(n-m,1)\sum_{k=0}^{m-1}\psi(n-m+1+k) = \frac{n-m}{C_q}\sum_{k=n-m+1}^{n}\psi(k).$$
By the equalities (4) and (6), we have:
$$I_4 = -\int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\,\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k}\sum_{k=1}^m\mu_k = -m\int_0^\infty\cdots\int_0^\infty\prod_{k=1}^m d\mu_k\;\mu_1\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{k=1}^m\mu_k^{n-m}e^{-\mu_k} = -\frac{m}{C_q}\int \mu_1\,q(\mu_1,\ldots,\mu_m)\prod_{k=1}^m d\mu_k = -\frac{mn}{C_q}.$$
Therefore:
$$h(q) = -C_q I_1\ln C_q - C_q I_2 - C_q I_3 - C_q I_4 = -\ln C_q - \left[\sum_{k=n-m+1}^{n}(k-n+m-1)\,\psi(k) + \sum_{k=1}^{m}k\,\psi(k+1) - m\,\psi(2)\right] - (n-m)\sum_{k=n-m+1}^{n}\psi(k) + mn = mn + m\,\psi(2) - \ln C_q - \sum_{k=n-m}^{n-1}k\,\psi(k+1) - \sum_{k=1}^{m}k\,\psi(k+1).$$
That is, we have obtained the result. ☐
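The closed form in Theorem 1 is easy to evaluate and to test against a Monte Carlo estimate of $-\mathbb{E}[\ln q]$. The sketch below (assuming SciPy; the function names and the sample sizes are ours) implements both:

```python
import numpy as np
from scipy.special import digamma, gammaln

def h_q(m, n):
    """Closed form of Theorem 1 for the Wishart joint eigenvalue entropy h(q)."""
    ln_Cq = -sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    s1 = sum(k * digamma(k + 1) for k in range(n - m, n))
    s2 = sum(k * digamma(k + 1) for k in range(1, m + 1))
    return m * n + m * digamma(2) - ln_Cq - s1 - s2

def h_q_monte_carlo(m, n, samples=20_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of h(q) = -E[ln q(mu)] over Wishart eigenvalues."""
    ln_Cq = -sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    total = 0.0
    for _ in range(samples):
        Z = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
        mu = np.linalg.eigvalsh(Z @ Z.conj().T)
        diffs = mu[:, None] - mu[None, :]
        log_vdm = np.sum(np.log(np.abs(diffs[np.triu_indices(m, 1)])))
        ln_q = ln_Cq + 2 * log_vdm + np.sum((n - m) * np.log(mu) - mu)
        total += -ln_q
    return total / samples

print(h_q(2, 3), h_q_monte_carlo(2, 3))   # the two values should agree to a few decimals
```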
Remark 2.
The formula in (15) is key in the proof of the Theorem above. It is a direct consequence of Selberg’s integral formula [10]:
$$\int_0^\infty\cdots\int_0^\infty\prod_{1\le i<j\le m}|x_i-x_j|^{2\beta}\prod_{k=1}^m x_k^{\alpha}e^{-x_k}\,dx_k = \prod_{k=0}^{m-1}\frac{\Gamma(\alpha+1+k\beta)\,\Gamma(1+(k+1)\beta)}{\Gamma(1+\beta)}.$$
Remark 3.
For m = n , we have:
$$h(q) = \ln\prod_{k=1}^m k!\,(m-k)! + m^2 + m(1-\gamma) + m\,\psi(m+1) - 2\sum_{k=1}^m k\,\psi(k+1).$$
In order to simplify the above formula, we will use the following equality:
$$\sum_{k=1}^n k\,H_k = \frac{n(n+1)}{2}H_n - \frac{n(n-1)}{4},$$
which is the result (B4) in Appendix B. Then:
$$h(q) = \ln\prod_{k=1}^m k!\,(m-k)! + m^2 + m(1-\gamma) + m(H_m-\gamma) - 2\sum_{k=1}^m k(H_k-\gamma) = \ln\prod_{k=1}^m k!\,(m-k)! - m^2H_m + \left(\frac{3}{2}+\gamma\right)m^2 + \left(\frac{1}{2}-\gamma\right)m.$$
Remark 4.
By (B5), we have:
$$\frac{1}{2}H_k - k < \ln k! - k\,\psi(k+1) < \frac{1}{2}H_k - k + \frac{1}{2}\cdot\frac{k}{k+1}.$$
Taking the sum from 1 to m, we have:
$$\sum_{k=1}^m\left(\frac{1}{2}H_k - k\right) < \sum_{k=1}^m\ln k! - \sum_{k=1}^m k\,\psi(k+1) < \sum_{k=1}^m\left(\frac{1}{2}H_k - k\right) + \frac{1}{2}\sum_{k=1}^m\frac{k}{k+1}.$$
Similarly,
$$\sum_{k=n-m}^{n-1}\left(\frac{1}{2}H_k - k\right) < \sum_{k=1}^m\ln (n-k)! - \sum_{k=n-m}^{n-1} k\,\psi(k+1) < \sum_{k=n-m}^{n-1}\left(\frac{1}{2}H_k - k\right) + \frac{1}{2}\sum_{k=n-m}^{n-1}\frac{k}{k+1}.$$
By (B2) and a simple calculation, we have:
$$\frac{1}{2}(m+1)H_m - \left(\frac{m^2}{2}+m\right) < \sum_{k=1}^m\ln k! - \sum_{k=1}^m k\,\psi(k+1) < \frac{1}{2}mH_m - \frac{m^2}{2} - \frac{m}{2} + \frac{m}{2(m+1)},$$
$$\frac{1}{2}nH_n - \frac{1}{2}(n-m)H_{n-m} - \frac{m(2n-m)}{2} < \sum_{k=1}^m\ln (n-k)! - \sum_{k=n-m}^{n-1} k\,\psi(k+1) < \frac{1}{2}(n-1)H_n - \frac{1}{2}(n-m-1)H_{n-m} + \frac{m(m-2n+1)}{2}.$$
Combining (23), (24) and (14), we have:
$$\left(\frac{1}{2}(m+1)H_m - m\gamma\right) + \left(\frac{1}{2}nH_n - \frac{1}{2}(n-m)H_{n-m}\right) < h(q) < \frac{1}{2}mH_m + \frac{1}{2}(n-1)H_n - \frac{1}{2}(n-m-1)H_{n-m} + m(1-\gamma) + \frac{m}{2(m+1)}.$$
Again:
$$\left(\frac{1}{2}(m+1)H_m - m\gamma\right) + \left(\frac{1}{2}nH_n - \frac{1}{2}(n-m)H_{n-m}\right) = \left(\frac{1}{2}mH_n + \frac{1}{2}(m+1)H_m - m\gamma\right) + \frac{1}{2}(n-m)\left(H_n - H_{n-m}\right) \ge m(H_m-\gamma) + \frac{1}{2}(n-m)\left(H_n - H_{n-m}\right) > 0,$$
hence h ( q ) > 0 .

4. The Differential Entropy of the Joint Distribution of the Eigenvalues of Random Quantum States

In this section, we calculate the differential entropy of the joint distribution of eigenvalues of random quantum states. This differential entropy is given by the following integral:
$$h(p) = -\int_{\Delta_{m-1}} p(\lambda_1,\ldots,\lambda_m)\ln p(\lambda_1,\ldots,\lambda_m)\prod_{k=1}^m d\lambda_k,$$
where:
$$p(\lambda_1,\ldots,\lambda_m) = C_p\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\prod_{k=1}^m\lambda_k^{n-m},\quad (\lambda_1,\ldots,\lambda_m)\in\Delta_{m-1},\qquad C_p = \frac{(mn-1)!}{\prod_{k=1}^m k!\,(n-k)!}.$$
Theorem 2.
The differential entropy of the joint distribution of eigenvalues of random density matrices induced by partial tracing over Haar-distributed bipartite pure states is given by the following:
$$h(p) = \ln\prod_{k=1}^m k!\,(n-k)! + m\,\psi(2) - \ln\Gamma(mn) + m(n-1)\,\psi(mn) - \sum_{k=n-m}^{n-1}k\,\psi(k+1) - \sum_{k=1}^{m}k\,\psi(k+1).$$
Proof. 
In fact,
$$h(p) = -\int_0^\infty\cdots\int_0^\infty \delta\!\left(1-\sum_{j=1}^m\lambda_j\right) p(\lambda_1,\ldots,\lambda_m)\ln p(\lambda_1,\ldots,\lambda_m)\prod_{k=1}^m d\lambda_k.$$
Let:
$$F(t) = -\int_0^\infty\cdots\int_0^\infty \delta\!\left(t-\sum_{j=1}^m\lambda_j\right) p(\lambda_1,\ldots,\lambda_m)\ln p(\lambda_1,\ldots,\lambda_m)\prod_{k=1}^m d\lambda_k.$$
Performing Laplace transform ( t s ) of F ( t ) leads to:
$$\widetilde{F}(s) = -\int_0^\infty\cdots\int_0^\infty \exp\!\left(-s\sum_{j=1}^m\lambda_j\right) p(\lambda_1,\ldots,\lambda_m)\ln p(\lambda_1,\ldots,\lambda_m)\prod_{k=1}^m d\lambda_k.$$
By (8) and (13),
$$\frac{C_p}{C_q}\prod_{i=1}^m e^{\mu_i}\,q(\mu_1,\ldots,\mu_m) = p(\mu_1,\ldots,\mu_m),\qquad p\!\left(\frac{\mu_1}{s},\ldots,\frac{\mu_m}{s}\right) = s^{m-mn}\,p(\mu_1,\ldots,\mu_m).$$
Let $\lambda_k = \mu_k/s$ for each $k = 1,2,\ldots,m$. By the relations (28), we have:
$$\begin{aligned}
\widetilde{F}(s) &= -\int_0^\infty\cdots\int_0^\infty \frac{C_p}{C_q}\,s^{m-mn}\,q(\mu_1,\ldots,\mu_m)\,\ln\!\left[\frac{C_p}{C_q}\,s^{m-mn}\,q(\mu_1,\ldots,\mu_m)\,e^{\sum_{j=1}^m\mu_j}\right]\prod_{j=1}^m\frac{d\mu_j}{s}\\
&= -\frac{C_p}{C_q}\,s^{-mn}\int_0^\infty\cdots\int_0^\infty q(\mu_1,\ldots,\mu_m)\left[\ln\!\left(\frac{C_p}{C_q}s^{m-mn}\right) + \ln q(\mu_1,\ldots,\mu_m) + \sum_{j=1}^m\mu_j\right]\prod_{j=1}^m d\mu_j\\
&= \Gamma(mn)\,s^{-mn}\left[-\ln\!\left(\frac{C_p}{C_q}s^{m-mn}\right) + h(q) - m\int_0^\infty\cdots\int_0^\infty \mu_1\,q(\mu_1,\ldots,\mu_m)\prod_{j=1}^m d\mu_j\right].
\end{aligned}$$
The last equality follows by Equation (9).
By Equations (6) and (9), we get:
$$\widetilde{F}(s) = \Gamma(mn)\,s^{-mn}\left[-\ln\!\left(\frac{C_p}{C_q}s^{m-mn}\right) + h(q) - mn\right] = \Gamma(mn)\,s^{-mn}\left[(mn-m)\ln s + h(q) - \ln\Gamma(mn) - mn\right] = (mn-m)\,\Gamma(mn)\,s^{-mn}\ln s + \Gamma(mn)\,s^{-mn}\left(h(q) - \ln\Gamma(mn) - mn\right).$$
By the properties of the Laplace transform in Appendix A, we have:
$$\mathcal{L}\{\ln t + \gamma\}(s) = \mathcal{L}\{\ln t\}(s) + \gamma\,\mathcal{L}\{1\}(s) = -\frac{\ln s + \gamma}{s} + \frac{\gamma}{s} = -\frac{\ln s}{s}.$$
Taking the inverse Laplace transform and using the convolution property of the inverse Laplace transform, the following equalities hold:
$$\mathcal{L}^{-1}\left\{s^{-mn}\ln s\right\}(t) = \mathcal{L}^{-1}\left\{s^{1-mn}\cdot\frac{\ln s}{s}\right\}(t) = \mathcal{L}^{-1}\left\{s^{1-mn}\right\} * \mathcal{L}^{-1}\left\{\frac{\ln s}{s}\right\}.$$
By:
$$\mathcal{L}^{-1}\left\{s^{-m}\right\}(t) = \frac{t^{m-1}}{\Gamma(m)},$$
we get:
$$\mathcal{L}^{-1}\left\{s^{-mn}\ln s\right\}(t) = -\frac{t^{mn-2}}{\Gamma(mn-1)} * \left(\ln t+\gamma\right) = -\frac{1}{\Gamma(mn-1)}\int_0^t x^{mn-2}\left(\ln(t-x)+\gamma\right)dx = -\frac{1}{\Gamma(mn-1)}\int_0^t x^{mn-2}\ln(t-x)\,dx - \frac{\gamma\,t^{mn-1}}{\Gamma(mn)}.$$
Let $t-x = ty$; then we get that:
$$\mathcal{L}^{-1}\left\{s^{-mn}\ln s\right\}(t) = -\frac{t^{mn-1}\ln t}{\Gamma(mn)} - \frac{t^{mn-1}}{\Gamma(mn-1)}\int_0^1(1-y)^{mn-2}\ln y\,dy - \frac{\gamma\,t^{mn-1}}{\Gamma(mn)} = -\frac{t^{mn-1}}{\Gamma(mn)}\left[\ln t + \gamma + (mn-1)\int_0^1(1-y)^{mn-2}\ln y\,dy\right].$$
Next, we calculate the following integral:
$$\int_0^1(1-y)^{mn-2}\ln y\,dy.$$
Consider the Beta function:
$$B(\alpha,\beta) = \int_0^1(1-y)^{\alpha-1}y^{\beta-1}\,dy = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}.$$
We have now:
$$\frac{\partial B(\alpha,\beta)}{\partial\beta} = \int_0^1(1-y)^{\alpha-1}y^{\beta-1}\ln y\,dy,$$
and note that:
$$\frac{\partial B(\alpha,\beta)}{\partial\beta} = \Gamma(\alpha)\,\frac{\partial}{\partial\beta}\frac{\Gamma(\beta)}{\Gamma(\alpha+\beta)} = \left(\psi(\beta)-\psi(\alpha+\beta)\right)B(\alpha,\beta);$$
then:
$$\int_0^1(1-y)^{mn-2}\ln y\,dy = \left.\frac{\partial B(\alpha,\beta)}{\partial\beta}\right|_{(\alpha,\beta)=(mn-1,1)} = \left(\psi(1)-\psi(mn)\right)B(mn-1,1) = \frac{\psi(1)-\psi(mn)}{mn-1}.$$
Substituting the above equality into (31), we have:
$$\mathcal{L}^{-1}\left\{s^{-mn}\ln s\right\}(t) = \frac{t^{mn-1}}{\Gamma(mn)}\left(\psi(mn) - \ln t\right).$$
Combining Equalities (29), (30) and (32), we have:
$$F(t) = t^{mn-1}\left[(mn-m)\left(\psi(mn)-\ln t\right) + h(q) - \ln\Gamma(mn) - mn\right].$$
Thus, we derive that:
$$h(p) = F(1) = (mn-m)\,\psi(mn) + h(q) - \ln\Gamma(mn) - mn;$$
by the relation (14), we obtain:
$$h(p) = \ln\prod_{k=1}^m k!\,(n-k)! + m\,\psi(2) - \ln\Gamma(mn) + m(n-1)\,\psi(mn) - \sum_{k=n-m}^{n-1}k\,\psi(k+1) - \sum_{k=1}^{m}k\,\psi(k+1).$$
 ☐
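The formula of Theorem 2 is straightforward to evaluate. The sketch below (assuming SciPy; the function name is ours) implements it together with two sanity checks: for m = 1 the simplex reduces to a point and h(p) = 0, and for m = n = 2 a direct integration of $-p\ln p$ over the segment gives $\tfrac{2}{3}-\ln 3$:

```python
import numpy as np
from scipy.special import digamma, gammaln

def h_p(m, n):
    """Closed form of Theorem 2 for the eigenvalue entropy h(p) of induced random states."""
    log_prod = sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    s1 = sum(k * digamma(k + 1) for k in range(n - m, n))
    s2 = sum(k * digamma(k + 1) for k in range(1, m + 1))
    return (log_prod + m * digamma(2) - gammaln(m * n)
            + m * (n - 1) * digamma(m * n) - s1 - s2)

print(np.isclose(h_p(1, 7), 0.0))                       # m = 1: point mass, zero entropy
print(np.isclose(h_p(2, 2), 2 / 3 - np.log(3)))         # m = n = 2: direct integration
```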
Remark 5.
In particular, if m = n , by (B4) in Appendix B, we have:
$$h(p) = \ln\prod_{k=1}^m k!\,(m-k)! - m^2 H_m + (m^2-m)\,H_{m^2-1} - \ln\Gamma(m^2) + \frac{1}{2}m^2 + \frac{1}{2}m.$$
We already know that the discrete version of differential entropy is nothing but the Shannon entropy. However, differential entropy is not a proper entropy, but rather an information gain [3]. Analogously, we could follow Shannon and consider the entropy power for the joint distribution of eigenvalues of random density matrices; however, this is not the goal of this paper.
Remark 6.
Next, we give two bounds of h ( p ) . It follows from (20) that:
$$-\left(mn-\frac{1}{2}\right)H_{mn-1} + (mn-1)(\gamma+1) - \frac{1}{2}\cdot\frac{mn-1}{mn} < -\ln\Gamma(mn) < -\left(mn-\frac{1}{2}\right)H_{mn-1} + (mn-1)(\gamma+1).$$
Combining (23), (24), (34) and (27), we obtain:
$$\frac{1}{2}(m+1)H_m + \frac{1}{2}nH_n - \frac{1}{2}(n-m)H_{n-m} + \left(\frac{1}{2}-m\right)H_{mn-1} - \left(\gamma+\frac{3}{2}\right) + \frac{1}{2mn} < h(p) < \left(\frac{1}{2}-m\right)H_{mn-1} + \frac{m}{2}H_m + \frac{1}{2}(n-1)H_n - \frac{1}{2}(n-m-1)H_{n-m} + (m-\gamma-1) + \frac{m}{2(m+1)}.$$
In particular, if m = n, we obtain:
$$\left(\frac{1}{2}-m\right)H_{m^2-1} + \left(m+\frac{1}{2}\right)H_m - \left(\gamma+\frac{3}{2}\right) + \frac{1}{2m^2} < h(p) < \left(\frac{1}{2}-m\right)H_{m^2-1} + \left(m-\frac{1}{2}\right)H_m + (m-\gamma-1) + \frac{m}{2(m+1)}.$$

5. The Differential Entropy of the Joint Distribution of the Diagonal Entries of Random Density Matrices

The joint distribution of the diagonal entries of random quantum states ρ was presented in [20] as:
$$\Psi(\rho_{11},\ldots,\rho_{mm}) = \frac{\Gamma(mn)}{\Gamma(n)^m}\prod_{j=1}^m\rho_{jj}^{\,n-1} = C_\Psi\prod_{j=1}^m\rho_{jj}^{\,n-1},$$
where $\sum_{j=1}^m\rho_{jj}=1$ and $C_\Psi = \frac{\Gamma(mn)}{\Gamma(n)^m}$. Next, we calculate the differential entropy of the joint distribution of the diagonal part of random quantum states. This differential entropy is given by the following integral:
$$h(\Psi) = -\int\delta\!\left(1-\sum_{j=1}^m\rho_{jj}\right)\Psi(\rho_{11},\ldots,\rho_{mm})\ln\Psi(\rho_{11},\ldots,\rho_{mm})\,d\rho_{11}\cdots d\rho_{mm} = -\ln C_\Psi - (n-1)\,C_\Psi\int\delta\!\left(1-\sum_{j=1}^m\rho_{jj}\right)\ln\!\left(\prod_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\left(\rho_{jj}^{\,n-1}\,d\rho_{jj}\right).$$
Theorem 3.
The differential entropy of the joint distribution of the diagonal entries of random density matrices induced by partial tracing over Haar-distributed bipartite pure states is given by the following:
$$h(\Psi) = m(n-1)\left[\psi(mn)-\psi(n)\right] - \ln\frac{\Gamma(mn)}{\Gamma(n)^m}.$$
In particular, if m = n, then:
$$h(\Psi) = m(m-1)\left[\psi(m^2)-\psi(m)\right] - \ln\frac{\Gamma(m^2)}{\Gamma(m)^m}.$$
Proof. 
Let us calculate:
$$\int\delta\!\left(1-\sum_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\left(\rho_{jj}^{\,n-1}\,d\rho_{jj}\right).$$
Now, we define:
$$F(t) := \int\delta\!\left(t-\sum_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\left(\rho_{jj}^{\,\alpha-1}\,d\rho_{jj}\right).$$
Performing the Laplace transform ($t \to s$) of $F(t)$ and letting $x_j = s\rho_{jj}$, $j = 1,2,\ldots,m$, we get:
$$\widetilde{F}(s) := \int_0^\infty\cdots\int_0^\infty \exp\!\left(-s\sum_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\left(\rho_{jj}^{\,\alpha-1}\,d\rho_{jj}\right) = s^{-\alpha m}\int_0^\infty\cdots\int_0^\infty \exp\!\left(-\sum_{j=1}^m x_j\right)\prod_{j=1}^m x_j^{\alpha-1}\,dx_j = s^{-\alpha m}\,\Gamma(\alpha)^m,$$
implying that:
$$F(t) = \frac{t^{\alpha m-1}}{\Gamma(\alpha m)}\,\Gamma(\alpha)^m.$$
Letting t = 1 in Relations (38) and (39), we obtain:
$$\int\delta\!\left(1-\sum_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\rho_{jj}^{\,\alpha-1}\,d\rho_{jj} = \frac{\Gamma(\alpha)^m}{\Gamma(\alpha m)}.$$
By taking the derivative with respect to α, we get:
$$\int\delta\!\left(1-\sum_{j=1}^m\rho_{jj}\right)\ln\!\left(\prod_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\rho_{jj}^{\,\alpha-1}\prod_{j=1}^m d\rho_{jj} = \frac{\partial}{\partial\alpha}\frac{\Gamma(\alpha)^m}{\Gamma(\alpha m)} = \frac{m\,\Gamma(\alpha)^m}{\Gamma(\alpha m)}\left[\psi(\alpha)-\psi(\alpha m)\right].$$
Taking α = n in the above equality, we have:
$$\int\delta\!\left(1-\sum_{j=1}^m\rho_{jj}\right)\ln\!\left(\prod_{j=1}^m\rho_{jj}\right)\prod_{j=1}^m\rho_{jj}^{\,n-1}\prod_{j=1}^m d\rho_{jj} = \frac{m\,\Gamma(n)^m}{\Gamma(nm)}\left[\psi(n)-\psi(nm)\right].$$
Substituting the above equality into (35), we have:
$$h(\Psi) = m(n-1)\left[\psi(mn)-\psi(n)\right] - \ln C_\Psi,$$
where $C_\Psi = \frac{\Gamma(mn)}{\Gamma(n)^m}$. ☐
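Since the density Ψ is a symmetric Dirichlet(n, ..., n) law on the probability simplex, Theorem 3 can be checked directly against SciPy's Dirichlet entropy (a minimal sketch; the function name is ours):

```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.stats import dirichlet

def h_Psi(m, n):
    """Closed form of Theorem 3 for the diagonal-entry entropy h(Psi)."""
    ln_C_Psi = gammaln(m * n) - m * gammaln(n)
    return m * (n - 1) * (digamma(m * n) - digamma(n)) - ln_C_Psi

m, n = 3, 4
print(np.isclose(h_Psi(m, n), dirichlet(np.full(m, float(n))).entropy()))   # True
```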
Remark 7.
In the following, we give upper and lower bounds for $h(p) - h(\Psi)$. By Equations (27) and (36), we see that:
$$h(p) - h(\Psi) = -m\ln\Gamma(n) + m(n-1)\,\psi(n) + m(1-\gamma) - \sum_{k=n-m}^{n-1}k\,\psi(k+1) - \sum_{k=1}^{m}k\,\psi(k+1) + \ln\prod_{k=1}^m k!\,(n-k)!.$$
It follows from (20) that:
$$-m\left(n-\frac{1}{2}\right)H_{n-1} + m(n-1)(\gamma+1) - \frac{1}{2}\cdot\frac{m(n-1)}{n} < -m\ln\Gamma(n) < -m\left(n-\frac{1}{2}\right)H_{n-1} + m(n-1)(\gamma+1).$$
Combining (23), (24) and (41) with (40), we have:
$$\frac{1}{2}(m+1)H_m + \frac{1}{2}(n-m)\left(H_n - H_{n-m}\right) + \left(\frac{1}{n}-\gamma-\frac{3}{2}\right)m < h(p)-h(\Psi) < \frac{1}{2}mH_m + \frac{1}{2}(n-m-1)\left(H_n-H_{n-m}\right) - \gamma m + \frac{m}{2n} + \frac{m}{2(m+1)}.$$
In particular, if m = n, then:
$$\frac{1}{2}(m+1)H_m - \left(\gamma+\frac{3}{2}\right)m + 1 < h(p)-h(\Psi) < \frac{1}{2}(m-1)H_m - \gamma m + 1 - \frac{1}{2(m+1)}.$$
We see that $\frac{1}{2}\left(1+\frac{1}{m}\right)H_m - \left(\gamma+\frac{3}{2}\right) + \frac{1}{m}$ becomes nonnegative when m is large enough; that is, $h(p)-h(\Psi) > 0$ for large m, e.g., $m \ge 36$. Note that for m = n = 2 we have $h(p)-h(\Psi) = \ln 2 - 1 < 0$, and for m = n = 3 we have $h(p)-h(\Psi) = \ln 3 - \frac{3}{2} < 0$. In summary, when the dimension m = n lies within the set $\{1,\ldots,36\}$, the positivity of the entropy difference is not guaranteed by these bounds; in higher dimensions, positivity always holds. The reason for this phenomenon is beyond the scope of this paper.
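The entropy difference itself is easy to tabulate from the closed form (40); the following sketch (assuming SciPy; the function name is ours) reproduces the two small-dimension values quoted above and prints the difference for a few larger dimensions:

```python
import numpy as np
from scipy.special import digamma, gammaln

def entropy_gap(m, n):
    """h(p) - h(Psi) from the closed-form difference in Remark 7."""
    gamma = -digamma(1)
    log_prod = sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    s1 = sum(k * digamma(k + 1) for k in range(n - m, n))
    s2 = sum(k * digamma(k + 1) for k in range(1, m + 1))
    return (-m * gammaln(n) + m * (n - 1) * digamma(n) + m * (1 - gamma)
            - s1 - s2 + log_prod)

print(np.isclose(entropy_gap(2, 2), np.log(2) - 1))     # ln 2 - 1 < 0
print(np.isclose(entropy_gap(3, 3), np.log(3) - 1.5))   # ln 3 - 3/2 < 0
print([round(entropy_gap(m, m), 3) for m in (5, 10, 20, 36)])
```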
Remark 8.
We should at least briefly mention the body of work on the so-called "Page conjecture" [21], which was subsequently solved in [22,23,24]. Although Page's question is different from the one studied here, the two are related: Page asked for the average von Neumann entropy of a random quantum state from the induced ensemble, whereas we ask for the differential entropy of the whole ensemble of random density matrices. Similarly, a compact formula for the average diagonal entropy of induced random quantum states is given in [20,25].
Remark 9.
In [26], the authors presented an exact relation between p and Ψ, which is described as follows:
$$p(\lambda) = \frac{1}{\prod_{k=1}^m k!}\,\Delta(\lambda)\,\Delta(\partial_\lambda)\,\Psi(\lambda),$$
where $\Delta(\lambda) = \prod_{i<j}(\lambda_j-\lambda_i)$ is the Vandermonde determinant and $\Delta(\partial_\lambda) = \prod_{i<j}\left(\partial_{\lambda_i}-\partial_{\lambda_j}\right)$ the corresponding differential operator. Furthermore, we can consider their respective Fourier transforms $\hat{p}$ and $\hat{\Psi}$ (see below for the meaning of the notation) and establish uncertainty relations for Fourier transforms. Indeed, denote the joint probability density of the entries of induced random density matrices, given by [27]:
$$P(\rho) \propto \delta\left(1-\operatorname{Tr}\rho\right)\det{}^{\,n-m}(\rho).$$
Theoretically, we can also calculate the differential entropy of P as:
$$-\int P(\rho)\ln P(\rho)\,[d\rho].$$
Moreover, we can also ask for the relation between $h(p)$ and $h(P)$. Denote the Fourier transform of P by:
$$\mathcal{F}(P)(K) := \hat{P}(K) = \int e^{\mathrm{i}\operatorname{Tr}(K\rho)}P(\rho)\,[d\rho],$$
where K is Hermitian. Clearly, $\hat{P}(K) = \hat{P}(UKU^\dagger)$ for any unitary U, by unitary invariance; thus, we can write $\hat{P}(\kappa)$, where κ denotes the eigenvalues of the Hermitian matrix K. In [26], Mejía et al. show that:
$$\Psi(z) = \frac{1}{(2\pi)^m}\int e^{-\mathrm{i}\langle\kappa, z\rangle}\,\hat{P}(\kappa)\,[d\kappa].$$
We find that ([7], Equation (11)):
$$h(\Psi) + h\big(\hat{P}\big) \ge m\ln\frac{e}{2}.$$
All potential problems mentioned here will be considered in future research.

6. Relative Differential Entropy between p and Ψ

The relative differential entropy between two continuous probability densities $p_1(x)$ and $p_2(x)$ is defined by:
$$h(p_1\,\|\,p_2) := \int p_1(x)\ln\frac{p_1(x)}{p_2(x)}\,dx.$$
Similarly, we define the relative differential entropy between p and Ψ. That is,
$$h(p\,\|\,\Psi) := \int p(\lambda)\ln\frac{p(\lambda)}{\Psi(\lambda)}\,d\lambda = -h(p) - \int p(\lambda)\ln\Psi(\lambda)\,d\lambda,$$
$$h(\Psi\,\|\,p) := \int \Psi(\lambda)\ln\frac{\Psi(\lambda)}{p(\lambda)}\,d\lambda = -h(\Psi) - \int \Psi(\lambda)\ln p(\lambda)\,d\lambda.$$
Theorem 4.
The relative differential entropy of the joint distribution of the eigenvalues relative to that of the diagonal entries of random density matrices induced by partial tracing over Haar-distributed bipartite pure states is given by the following:
$$h(p\,\|\,\Psi) = -\ln\prod_{k=1}^m k!\,(n-k)! - m\,\psi(2) + m\ln\Gamma(n) - \sum_{k=n-m}^{n-1}(n-k-1)\,\psi(k+1) + \sum_{k=1}^{m}k\,\psi(k+1).$$
In particular, if m = n , then:
$$h(p\,\|\,\Psi) = -\ln\prod_{k=1}^m k!\,(m-k)! + m\ln(m-1)! + m\,H_m + \frac{1}{2}m^2 - \frac{3}{2}m.$$
Proof. 
It suffices to calculate the second term on the right-hand side of Equation (49):
$$\int p(\lambda)\ln\Psi(\lambda)\,d\lambda = \ln C_\Psi + (n-1)\,C_p\int\delta\!\left(1-\sum_{j=1}^m\lambda_j\right)\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\,\ln\!\left(\prod_{j=1}^m\lambda_j\right)\prod_{j=1}^m\lambda_j^{n-m}\,d\lambda_j.$$
Let:
$$f(t) = \int\delta\!\left(t-\sum_{j=1}^m\lambda_j\right)\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\,\ln\!\left(\prod_{j=1}^m\lambda_j\right)\prod_{j=1}^m\lambda_j^{n-m}\,d\lambda_j.$$
As in Section 5, we have:
$$\widetilde{f}(s) = \int_0^\infty\cdots\int_0^\infty\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\,\ln\!\left(\prod_{j=1}^m\lambda_j\right)\prod_{j=1}^m e^{-s\lambda_j}\lambda_j^{n-m}\,d\lambda_j = s^{-mn}\int_0^\infty\cdots\int_0^\infty\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\,\ln\!\left(\prod_{j=1}^m\mu_j\right)\prod_{j=1}^m e^{-\mu_j}\mu_j^{n-m}\,d\mu_j - m\,s^{-mn}\ln s\int_0^\infty\cdots\int_0^\infty\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{j=1}^m e^{-\mu_j}\mu_j^{n-m}\,d\mu_j.$$
Thus:
$$\widetilde{f}(s) = \frac{1}{n-m}\,s^{-mn}\,I_3 - m\,I_1\,s^{-mn}\ln s = C_q^{-1}\sum_{k=n-m+1}^{n}\psi(k)\,s^{-mn} - m\,C_q^{-1}\,s^{-mn}\ln s.$$
By (30) and (32), we have:
$$f(t) = C_q^{-1}\sum_{k=n-m+1}^{n}\psi(k)\,\mathcal{L}^{-1}\!\left(s^{-mn}\right)(t) - m\,C_q^{-1}\,\mathcal{L}^{-1}\!\left(s^{-mn}\ln s\right)(t) = \sum_{k=n-m+1}^{n}\psi(k)\,\frac{t^{mn-1}}{\Gamma(mn)\,C_q} - m\,\frac{t^{mn-1}}{\Gamma(mn)\,C_q}\left(H_{mn-1} - \ln t - \gamma\right).$$
Furthermore, by (9), we obtain:
$$f(1) = \frac{\sum_{k=n-m+1}^{n}\psi(k)}{\Gamma(mn)\,C_q} - \frac{m\,\psi(mn)}{\Gamma(mn)\,C_q}.$$
This indicates that:
$$\int p(\lambda)\ln\Psi(\lambda)\,d\lambda = \ln C_\Psi + (n-1)\,C_p\,f(1) = \ln C_\Psi + (n-1)\left[\sum_{k=n-m+1}^{n}\psi(k) - m\,\psi(mn)\right].$$
Substituting (27) and (52) into (49), we obtain (51). ☐
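The closed form of Theorem 4 can be evaluated directly (a minimal sketch assuming SciPy; the function names are ours). Two simple checks are that the relative entropy is nonnegative and that the general formula matches the m = n special case stated above:

```python
import numpy as np
from scipy.special import digamma, gammaln

def h_p_rel_Psi(m, n):
    """Closed form of Theorem 4 for the relative entropy h(p || Psi)."""
    log_prod = sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    s1 = sum((n - k - 1) * digamma(k + 1) for k in range(n - m, n))
    s2 = sum(k * digamma(k + 1) for k in range(1, m + 1))
    return -log_prod - m * digamma(2) + m * gammaln(n) - s1 + s2

def h_p_rel_Psi_square(m):
    """The m = n special case of Theorem 4."""
    log_prod = sum(gammaln(k + 1) + gammaln(m - k + 1) for k in range(1, m + 1))
    H_m = sum(1.0 / j for j in range(1, m + 1))
    return -log_prod + m * gammaln(m) + m * H_m + 0.5 * m * m - 1.5 * m

print(h_p_rel_Psi(4, 6) >= 0)                                # relative entropy is nonnegative
print(np.isclose(h_p_rel_Psi(5, 5), h_p_rel_Psi_square(5)))  # consistency of the two forms
```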
Theorem 5.
The relative differential entropy of the joint distribution of the diagonal entries relative to that of the eigenvalues of random density matrices induced by partial tracing over Haar-distributed bipartite pure states is given by the following:
$$h(\Psi\,\|\,p) = \frac{m(m-1)}{2}\left(\psi(n)+\gamma\right) - m\ln\Gamma(n) + \ln\prod_{k=1}^m k!\,(n-k)!.$$
Proof. 
It suffices to calculate the second term on the right-hand side of Equation (50):
$$\int\Psi(\lambda)\ln p(\lambda)\,d\lambda = \int C_\Psi\prod_{j=1}^m\lambda_j^{\,n-1}\,\ln\!\left[C_p\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\prod_{j=1}^m\lambda_j^{\,n-m}\right]d\lambda = \ln C_p + C_\Psi\int\delta\!\left(1-\sum_{j=1}^m\lambda_j\right)\prod_{j=1}^m\lambda_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\prod_{j=1}^m\lambda_j^{\,n-m}\right]d\lambda.$$
Let:
$$f(t) = \int\delta\!\left(t-\sum_{j=1}^m\lambda_j\right)\prod_{j=1}^m\lambda_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\prod_{j=1}^m\lambda_j^{\,n-m}\right]d\lambda;$$
we have:
$$\widetilde{f}(s) = \int \exp\!\left(-s\sum_{j=1}^m\lambda_j\right)\prod_{j=1}^m\lambda_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^2\prod_{j=1}^m\lambda_j^{\,n-m}\right]d\lambda.$$
Let $\lambda_k = \mu_k/s$ for each $k = 1,2,\ldots,m$. Then:
$$\widetilde{f}(s) = s^{-mn}\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[s^{-(mn-m)}\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{j=1}^m\mu_j^{\,n-m}\right]d\mu = -(mn-m)\,s^{-mn}\ln s\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,d\mu + s^{-mn}\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{j=1}^m\mu_j^{\,n-m}\right]d\mu.$$
Therefore, we have:
$$f(t) = -(mn-m)\,\mathcal{L}^{-1}\!\left(s^{-mn}\ln s\right)(t)\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,d\mu + \mathcal{L}^{-1}\!\left(s^{-mn}\right)(t)\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{j=1}^m\mu_j^{\,n-m}\right]d\mu.$$
Since $\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,d\mu = I_m(n-1,0) = \Gamma(n)^m$, we obtain:
$$f(t) = -(mn-m)\,\frac{t^{mn-1}}{\Gamma(mn)}\left(\psi(mn)-\ln t\right)\Gamma(n)^m + \frac{t^{mn-1}}{\Gamma(mn)}\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{j=1}^m\mu_j^{\,n-m}\right]d\mu.$$
Now, we calculate the following integral:
$$\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\prod_{j=1}^m\mu_j^{\,n-m}\right]d\mu = \int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\right]d\mu + (n-m)\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left(\prod_{j=1}^m\mu_j\right)d\mu.$$
By (16) and (17), we have:
$$\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left[\prod_{1\le i<j\le m}(\mu_i-\mu_j)^2\right]d\mu = \left.\frac{\partial I_m(\alpha,r)}{\partial r}\right|_{(\alpha,r)=(n-1,0)} = \frac{m(m-1)}{2}\,\Gamma(n)^m\left(\psi(n)-\gamma\right),$$
$$\int\prod_{j=1}^m e^{-\mu_j}\mu_j^{\,n-1}\,\ln\!\left(\prod_{j=1}^m\mu_j\right)d\mu = \left.\frac{\partial I_m(\alpha,r)}{\partial\alpha}\right|_{(\alpha,r)=(n-1,0)} = m\,\psi(n)\,\Gamma(n)^m.$$
Furthermore,
$$f(1) = \frac{\Gamma(n)^m}{\Gamma(mn)}\left[\left(mn-\frac{1}{2}m^2-\frac{1}{2}m\right)\psi(n) - (mn-m)\,\psi(mn) - \frac{m(m-1)}{2}\,\gamma\right].$$
This indicates that:
$$\begin{aligned}
h(\Psi\,\|\,p) &= -h(\Psi) - \ln C_p - C_\Psi\,f(1)\\
&= -m(n-1)\left[\psi(mn)-\psi(n)\right] + \ln\frac{\Gamma(mn)}{\Gamma(n)^m} - \ln\Gamma(mn) + \ln\prod_{k=1}^m k!\,(n-k)! - \left(mn-\frac{1}{2}m^2-\frac{1}{2}m\right)\psi(n) + (mn-m)\,\psi(mn) + \frac{m(m-1)}{2}\,\gamma\\
&= \frac{m(m-1)}{2}\left(\psi(n)+\gamma\right) - m\ln\Gamma(n) + \ln\prod_{k=1}^m k!\,(n-k)!.
\end{aligned}$$
 ☐
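A quick numerical check of Theorem 5 (a minimal sketch; the function names are ours, and the Monte Carlo sampler exploits the fact that Ψ is a symmetric Dirichlet(n, ..., n) law):

```python
import numpy as np
from scipy.special import digamma, gammaln

def h_Psi_rel_p(m, n):
    """Closed form of Theorem 5 for the relative entropy h(Psi || p)."""
    log_prod = sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    return 0.5 * m * (m - 1) * (digamma(n) - digamma(1)) - m * gammaln(n) + log_prod

def h_Psi_rel_p_mc(m, n, samples=200_000, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of E_Psi[ln Psi - ln p] with lambda ~ Dirichlet(n, ..., n)."""
    lam = rng.dirichlet(np.full(m, float(n)), size=samples)
    ln_C_Psi = gammaln(m * n) - m * gammaln(n)
    ln_C_p = gammaln(m * n) - sum(gammaln(k + 1) + gammaln(n - k + 1) for k in range(1, m + 1))
    ln_Psi = ln_C_Psi + (n - 1) * np.sum(np.log(lam), axis=1)
    i, j = np.triu_indices(m, 1)
    ln_p = (ln_C_p + 2 * np.sum(np.log(np.abs(lam[:, i] - lam[:, j])), axis=1)
            + (n - m) * np.sum(np.log(lam), axis=1))
    return np.mean(ln_Psi - ln_p)

print(h_Psi_rel_p(2, 2), 1 + np.log(2))            # both ~1.6931 for m = n = 2
print(h_Psi_rel_p(3, 4), h_Psi_rel_p_mc(3, 4))     # closed form vs. Monte Carlo
```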
Remark 10.
Here, we have only calculated the differential relative entropies between p and Ψ. Clearly, they are different from the quantum relative entropy: the differential relative entropy is just a number, whereas the quantum relative entropy is a trace quantity of functions of the density matrices ρ and σ, i.e., $S(\rho\,\|\,\sigma) := \operatorname{Tr}\left[\rho(\ln\rho-\ln\sigma)\right]$. Moreover, we cannot obtain a Pinsker-like inequality for the differential relative entropy of two probability distributions, as one has for the quantum relative entropy.

7. Conclusions

The present paper deals with different entropic quantities associated with the Wishart random matrix model. More precisely, we computed the differential entropy (also known as the Gibbs–Boltzmann entropy) of the joint probability distribution of the eigenvalues of the Wishart ensemble. We then considered a related random matrix model, that of the induced random quantum states, which is of central importance in quantum information theory. The differential entropy of the joint eigenvalue distribution of the random induced states was also computed, as well as that of the diagonal part. Finally, the relative entropy between the distribution of the diagonal elements and that of the eigenvalues of random density matrices was computed. In future research, we will focus on how to use differential entropy to quantify coherence. We hope that the methods and results in this paper can shed new light on related problems in quantum information theory.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Nos. 11401007, 11301124, 11371114 and 11301077).

Author Contributions

Lin Zhang designed the study. Laizhen Luo, Jiamei Wang and Shifang Zhang performed the calculations and wrote the paper. Laizhen Luo reviewed and edited the manuscript. All of the authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Laplace Transform

The Laplace transform [28] is a frequency-domain method for continuous-time signals, irrespective of whether the underlying system is stable or unstable.
Definition A1.
The Laplace transform of a function f ( t ) , defined for all real numbers t 0 , is the function F ( s ) , which is a unilateral transform defined by:
$$F(s) = \int_0^\infty e^{-st}f(t)\,dt.$$
The parameter s is a complex frequency: $s = x + \sqrt{-1}\,y$ ($x, y \in \mathbb{R}$). Other notations for the Laplace transform include $\mathcal{L}\{f\}$ or, alternatively, $\mathcal{L}\{f(t)\}$ instead of F.
The inverse Laplace transform is given by the following complex integral:
Definition A2.
The inverse Laplace transform is defined as follows:
$$f(t) = \mathcal{L}^{-1}\{F\}(t) = \frac{1}{2\pi i}\lim_{T\to\infty}\int_{r-iT}^{r+iT}e^{st}F(s)\,ds,$$
where r is a real number chosen so that the contour of integration lies in the region of convergence of $F(s)$.
Given functions $f(t)$ and $g(t)$ with respective Laplace transforms $\mathcal{L}\{f(t)\}(s) = F(s)$ and $\mathcal{L}\{g(t)\}(s) = G(s)$, i.e.,
$$f(t) = \mathcal{L}^{-1}\{F(s)\}(t),\qquad g(t) = \mathcal{L}^{-1}\{G(s)\}(t),$$
the following is a list of the properties of the unilateral Laplace transform used in the present paper:
  • Linearity: $\mathcal{L}\{af(t)+bg(t)\}(s) = aF(s)+bG(s)$.
  • Convolution: $\mathcal{L}\{(f*g)(t)\}(s) = F(s)\,G(s)$.
  • q-th power (for complex q): $\mathcal{L}\{t^q H(t)\}(s) = \frac{\Gamma(q+1)}{s^{q+1}}$ for $\operatorname{Re}(s)>0$ and $\operatorname{Re}(q) > -1$, where $H(t)$ is the Heaviside step function.
  • Natural logarithm: $\mathcal{L}\{\ln t\cdot H(t)\}(s) = -\frac{\ln s+\gamma}{s}$ for $\operatorname{Re}(s)>0$, where γ is the Euler constant; a quick numerical check of this entry follows.
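A minimal sketch (assuming SciPy's quadrature; the sample point s = 2 is an arbitrary choice) that verifies the logarithm entry above:

```python
import numpy as np
from scipy.integrate import quad

# Check L{ln t}(s) = -(ln s + gamma)/s at s = 2.
s, gamma_euler = 2.0, 0.5772156649015329
numeric = quad(lambda t: np.exp(-s * t) * np.log(t), 0, np.inf)[0]
print(numeric, -(np.log(s) + gamma_euler) / s)   # both ~ -0.6352
```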

Appendix B. An Estimate on $\ln\prod_{k=1}^m k!\,(n-k)!$

First, let us note two properties of the k-th harmonic number. The first one is:
$$\sum_{k=1}^n H_k = (n+1)\left(H_{n+1}-1\right).$$
Indeed, $H_k = H_n - (H_n - H_k) = H_n - \left(\frac{1}{k+1}+\cdots+\frac{1}{n}\right)$ for $1\le k\le n-1$; thus:
$$\sum_{k=1}^n H_k = H_n + \sum_{k=1}^{n-1}H_k = H_n + \sum_{k=1}^{n-1}\left[H_n - \left(\frac{1}{k+1}+\cdots+\frac{1}{n}\right)\right] = nH_n - \sum_{k=2}^{n}\frac{k-1}{k} = (n+1)H_n - n;$$
therefore,
$$\sum_{k=1}^n H_k = (n+1)\left(H_{n+1}-\frac{1}{n+1}\right) - n = (n+1)\left(H_{n+1}-1\right).$$
The second one is:
$$\sum_{k=1}^n k\,H_k = \frac{n(n+1)}{2}H_n - \frac{n(n-1)}{4} = \frac{n(n+1)}{2}\left(H_{n+1}-\frac{1}{2}\right).$$
Let:
$$s_k = \sum_{j=1}^k j = \frac{k(k+1)}{2}.$$
Then:
$$\sum_{k=1}^n k\,H_k = s_n + \frac{1}{2}(s_n-s_1) + \cdots + \frac{1}{n}(s_n-s_{n-1}) = \left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)s_n - \sum_{k=2}^n\frac{s_{k-1}}{k} = s_n H_n - \sum_{k=2}^n\frac{k-1}{2} = s_n H_n - \frac{1}{2}s_{n-1}.$$
Therefore:
$$\sum_{k=1}^n k\,H_k = \frac{n(n+1)}{2}H_n - \frac{n(n-1)}{4}.$$
By $H_{n+1} = H_n + \frac{1}{n+1}$, we obtain the equality (B4).
Next, we will give an estimate on $\ln\prod_{k=1}^m k!\,(n-k)!$. The following inequality [29] will be used: for any positive integer k,
$$\frac{1}{2}\cdot\frac{1}{k+1} < H_k - \ln k - \gamma < \frac{1}{2}\cdot\frac{1}{k}.$$
By taking the sum from 1 to m in the above inequality, we get:
$$\frac{1}{2}\sum_{k=1}^m\frac{1}{k+1} < \sum_{k=1}^m H_k - \sum_{k=1}^m\ln k - m\gamma < \frac{1}{2}\sum_{k=1}^m\frac{1}{k}.$$
It is equivalent to the following:
$$\frac{1}{2}\left(H_{m+1}-1\right) < \sum_{k=1}^m H_k - \sum_{k=1}^m\ln k - m\gamma < \frac{1}{2}H_m.$$
Moreover, by (B1) and $H_{m+1} = H_m + \frac{1}{m+1}$, we have $\sum_{k=1}^m H_k = (m+1)H_m - m$; hence:
$$\frac{1}{2}H_m - \frac{1}{2}\cdot\frac{m}{m+1} < (m+1)H_m - m(1+\gamma) - \ln(m!) < \frac{1}{2}H_m.$$
Therefore:
$$\left(m+\frac{1}{2}\right)H_m - m(1+\gamma) < \ln(m!) < \left(m+\frac{1}{2}\right)H_m - m(1+\gamma) + \frac{1}{2}\cdot\frac{m}{m+1}.$$
Again, for any positive integer k, we obtain:
$$\left(k+\frac{1}{2}\right)H_k - k(1+\gamma) < \ln(k!) < \left(k+\frac{1}{2}\right)H_k - k(1+\gamma) + \frac{1}{2}\cdot\frac{k}{k+1}.$$
By taking the sum from 1 to m of the above inequalities, we get that:
$$\frac{(m+1)^2}{2}H_m - \frac{m(m+1)}{2}\left(\frac{3}{2}+\gamma\right) < \ln\prod_{k=1}^m k! < \frac{(m+1)^2}{2}H_m - \frac{m(m+1)}{2}\left(\frac{3}{2}+\gamma\right) + \frac{1}{2}\left(m+1-H_{m+1}\right).$$
In a similar way, we have:
$$\frac{1}{2}n^2H_n - \frac{1}{2}(n-m)^2H_{n-m} + \frac{1}{4}m(3m-6n+1) + \frac{\gamma}{2}m(m-2n+1) < \ln\prod_{k=1}^m (n-k)! < \frac{1}{2}\left(n^2-1\right)H_n - \frac{1}{2}\left[(n-m)^2-1\right]H_{n-m} + \frac{3}{4}m(m-2n+1) + \frac{\gamma}{2}m(m-2n+1).$$
Since $\ln\prod_{k=1}^m k!\,(n-k)! = \ln\prod_{k=1}^m k! + \ln\prod_{k=1}^m (n-k)!$ and $H_{m+1} = H_m + \frac{1}{m+1}$, by (B6) and (B7) we find that:
$$\frac{1}{2}n^2H_n - \frac{1}{2}(n-m)^2H_{n-m} + \frac{1}{2}(m+1)^2H_m - \frac{1}{2}m(3n+1) - \gamma mn < \ln\prod_{k=1}^m k!\,(n-k)! < \frac{1}{2}\left(n^2-1\right)H_n - \frac{1}{2}\left[(n-m)^2-1\right]H_{n-m} + \frac{1}{2}m(m+2)H_m - \frac{1}{2}m(3n+2\gamma n-1) + \frac{1}{2}\cdot\frac{m}{m+1}.$$
In particular, if m = n, then:
$$\left(m^2+m+\frac{1}{2}\right)H_m - \left(\frac{3}{2}+\gamma\right)m^2 - \frac{1}{2}m < \ln\prod_{k=1}^m k!\,(m-k)! < \left(m^2+m-\frac{1}{2}\right)H_m - \left(\frac{3}{2}+\gamma\right)m^2 + \frac{1}{2}m + \frac{1}{2}\cdot\frac{m}{m+1}.$$
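The two-sided estimate (B8) is easy to verify numerically (a minimal sketch in Python; the function name and the tested ranges of m and n are ours):

```python
import math

def H(k):
    """k-th harmonic number (H_0 = 0)."""
    return sum(1.0 / j for j in range(1, k + 1))

gamma = 0.5772156649015329

def check_estimate(m, n):
    """Check the two-sided bound on ln prod_k k!(n-k)! from Appendix B."""
    exact = sum(math.lgamma(k + 1) + math.lgamma(n - k + 1) for k in range(1, m + 1))
    lower = (0.5 * n**2 * H(n) - 0.5 * (n - m)**2 * H(n - m)
             + 0.5 * (m + 1)**2 * H(m) - 0.5 * m * (3 * n + 1) - gamma * m * n)
    upper = (0.5 * (n**2 - 1) * H(n) - 0.5 * ((n - m)**2 - 1) * H(n - m)
             + 0.5 * m * (m + 2) * H(m) - 0.5 * m * (3 * n + 2 * gamma * n - 1)
             + 0.5 * m / (m + 1))
    return lower < exact < upper

print(all(check_estimate(m, n) for n in range(2, 30) for m in range(1, n + 1)))  # expected: True
```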

References

  1. Garbaczewski, P. Differential entropy and dynamics of uncertainty. J. Stat. Phys. 2006, 123, 315–355. [Google Scholar] [CrossRef]
  2. Brandao, F.; Horodecki, M.; Ng, N.; Oppenheim, J.; Wehner, S. The second laws of quantum thermodynamics. Proc. Natl. Acad. Sci. USA 2015, 112, 3275–3279. [Google Scholar] [CrossRef] [PubMed]
  3. Shannon, C.E. A Mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423; 623–656. [Google Scholar] [CrossRef]
  4. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006; p. 792. [Google Scholar]
  5. Wang, J.M.; Wu, J.D.; Cho, M. Unified (r,s)-relative entropy. Int. J. Theor. Phys. 2011, 50, 1282–1295. [Google Scholar] [CrossRef]
  6. Michalowic, J.V.; Nichols, J.M.; Bucholtz, F. Handbook of Differential Entropy; CRC Press: Boca Raton, FL, USA, 2013; p. 244. [Google Scholar]
  7. Jizba, P.; Ma, Y.; Hayes, A.; Dunningham, J.A. One-parameter class of uncertainty relations based on entropy power. 2016; arXiv: 1606.01094. [Google Scholar]
  8. Bhatia, R. Matrix Analysis; Springer: Berlin/Heidelberg, Germany, 2013; p. 349. [Google Scholar]
  9. Baumgratz, T.; Cramer, M.; Plenio, M.B. Quantifying coherence. Phys. Rev. Lett. 2014, 113, 140401. [Google Scholar] [CrossRef] [PubMed]
  10. Mehta, M.L. Random Matrices, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2004; p. 706. [Google Scholar]
  11. Hiai, F.; Petz, D. The Semicircle Law, Free Random Variables and Entropy; American Mathematical Society: Providence, RI, USA, 2000. [Google Scholar]
  12. Wishart, J. The generalised product moment distribution in samples from a normal multivariate population. Biometrika 1928, 20, 32–52. [Google Scholar] [CrossRef]
  13. James, A.T. Distributions of Matrix Variates and Latent Roots Derived from Normal Samples. Ann. Math. Stat. 1964, 35, 475–501. [Google Scholar] [CrossRef]
  14. Lachal, A. Probabilistic approach to Page's formula for the entropy of a quantum system. Stochastics 2008, 78, 157–178. [Google Scholar]
  15. Andrews, G.E.; Askey, R.; Roy, R. Special Functions; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  16. Watrous, J. Theory of Quantum Information. Available online: https://cs.uwaterloo.ca/watrous/TQI/ (accessed on 12 September 2016).
  17. Jones, K.R.W. Riemann–Liouville fractional integration and reduced distributions on hyperspheres. J. Phys. A Math. Gen. 1991, 24, 1237. [Google Scholar] [CrossRef]
  18. Singh, U.; Zhang, L.; Pati, A.K. Average coherence and its typicality for random pure states. Phys. Rev. A 2016, 93, 032125. [Google Scholar] [CrossRef]
  19. Życzkowski, K.; Sommers, H.-J. Hilbert–Schmidt volume of the set of mixed quantum states. J. Phys. A Math. Gen. 2003, 36, 10115. [Google Scholar] [CrossRef]
  20. Zhang, L. Average coherence and its typicality for random mixed quantum states. 2016; arXiv:1607.02294. [Google Scholar]
  21. Page, D.N. Average entropy of a subsystem. Phys. Rev. Lett. 1993, 71, 1291. [Google Scholar] [CrossRef] [PubMed]
  22. Foong, S.K.; Kanno, S. Proof of Page's conjecture on the average entropy of a subsystem. Phys. Rev. Lett. 1994, 72, 1148. [Google Scholar] [CrossRef] [PubMed]
  23. Sánchez-Ruiz, J. Simple proof of Page’s conjecture on the average entropy of a subsystem. Phys. Rev. E 1994, 52, 5653. [Google Scholar] [CrossRef]
  24. Sen, S. Average entropy of a quantum subsystem. Phys. Rev. Lett. 1996, 77, 1. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, L.; Singh, U.; Pati, A.K. Average subentropy and coherence of random mixed quantum states. 2015; arXiv:1510.08859. [Google Scholar]
  26. Mejía, J.; Zapata, C.; Botero, A. The difference between two random mixed quantum states: Exact and asymptotic spectral analysis. 2015; arXiv:1511.07278. [Google Scholar]
  27. Życzkowski, K.; Sommers, H.J. Induced measures in the space of mixed quantum states. J. Phys. A Math. Gen. 2001, 34, 7111. [Google Scholar] [CrossRef]
  28. Williams, J. Laplace Transforms; Allen & Unwin: Sydney, Australia, 1973. [Google Scholar]
  29. Young, R.M. Euler’s constant. Math. Gaz. 1991, 75, 187–190. [Google Scholar] [CrossRef]
