Article

From Rényi Entropy Power to Information Scan of Quantum States

1 FNSPE, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1, Czech Republic
2 Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2021, 23(3), 334; https://doi.org/10.3390/e23030334
Submission received: 18 February 2021 / Revised: 8 March 2021 / Accepted: 9 March 2021 / Published: 12 March 2021
(This article belongs to the Special Issue The Statistical Foundations of Entropy)

Abstract

In this paper, we generalize the notion of Shannon’s entropy power to the Rényi-entropy setting. With this, we propose generalizations of the de Bruijn identity, isoperimetric inequality, and Stam inequality. This framework not only allows for finding new estimation inequalities, but it also provides a convenient technical framework for the derivation of a one-parameter family of Rényi-entropy-power-based quantum-mechanical uncertainty relations. To illustrate the usefulness of the proposed Rényi entropy power, we show how the information probability distribution associated with a quantum state can be reconstructed in a process that is akin to quantum-state tomography. We illustrate the inner workings of this with the so-called “cat states”, which are of fundamental interest and practical use in schemes such as quantum metrology. Salient issues, including the extension of the notion of entropy power to Tsallis entropy and ensuing implications in estimation theory, are also briefly discussed.

1. Introduction

The notion of entropy is undoubtedly one of the most important concepts in modern science. Very few other concepts can compete with it in respect to the number of attempts to clarify its theoretical and philosophical meaning [1]. Originally, the notion of entropy stemmed from thermodynamics, where it was developed to quantify the annoying inefficiency of steam engines. It then transmuted into a description of the amount of disorder or complexity in physical systems. Though many such attempts were initially closely connected with the statistical interpretation of the phenomenon of heat, in the course of time, they expanded their scope far beyond their original incentives. Along those lines, several approaches have been developed in attempts to quantify and qualify the entropy paradigm. These have been formulated largely independently and with different applications and goals in mind. For instance, in statistical physics, entropy counts the number of distinct microstates compatible with a given macrostate [2], in mathematical statistics, it corresponds to the inference functional for an updating procedure [3], and in information theory, it determines a limit on the shortest attainable encoding scheme [2,4].
Particularly distinct among these are the information-theoretic entropies (ITEs). This is not only because they discern themselves through their firm operational prescriptions in terms of coding theorems and communication protocols [5,6,7,8,9], but because they also offer an intuitive measure of disorder phrased in terms of missing information about a system. Apart from innate issues in communication theory, ITEs have also proved to be indispensable tools in other branches of science. Typical examples are provided by chaotic dynamical systems and multifractals (see, e.g., [10] and citations therein). Fully developed turbulence, earthquake analysis, and generalized dimensions of strange attractors provide further examples [11]. An especially important arena for ITEs in the past two decades has been quantum mechanics (QM) with applications ranging from quantum estimation and coding theory to quantum entanglement. The catalyst has been an infusion of new ideas from (quantum) information theory [12,13,14,15], functional analysis [16,17], condensed matter theory [18,19], and cosmology [20,21]. On the experimental front, the use of ITEs has been stimulated not only by new high-precision instrumentation [22,23] but also by, e.g., recent advances in stochastic thermodynamics [24,25] or observed violations of Heisenberg’s error-disturbance uncertainty relations [26,27,28,29,30].
In his seminal 1948 paper, Shannon laid down the foundations of modern information theory [5]. He was also instrumental in pointing out that, in contrast with discrete signals or messages where information is quantified by (Shannon’s) entropy, the cases with continuous variables are less satisfactory. The continuous version of Shannon’s entropy (SE), the so-called differential entropy, may take negative values [5,31], and so it does not have the same status as its discrete-variable counterpart. To solve a number of information-theoretic problems related to continuous cases, Shannon shifted the emphasis from the differential entropy to yet another object—entropy power (EP). The EP describes the variance of a would-be Gaussian random variable with the same differential entropy as the random variable under investigation. EP was used by Shannon [5,6] to bound the capacity of non-Gaussian additive noise channels. Since then, the EP has proved to be essential in a number of applications ranging from interference channels to secrecy capacity [32,33,34,35,36]. It has also led to new advances in information parametric statistics [37,38] and network information theory [39]. Apart from its significant role in information theory, the EP has found wide use in pure mathematics, namely in the theory of inequalities [39] and in mathematical statistics and estimation theory [40].
Recent developments in information theory [41], quantum theory [42,43], and complex dynamical systems in particular [10,44,45] have brought about the need for a further extension of the concept of ITE beyond Shannon’s conventional type. Consequently, numerous generalizations have started to proliferate in the literature, ranging from additive entropies [31,46] through a rich class of non-additive entropies [47,48,49,50,51,52] to more exotic types of entropies [53]. Particularly prominent among such generalizations are the ITEs of Rényi and Tsallis, which both belong to a broader class of so-called Uffink entropic functionals [54,55]. Both Rényi entropy (RE) and Tsallis entropy (TE) represent one-parameter families of deformations of Shannon’s entropy. An important point related to the RE is that the RE is not just a theoretical construct, but it has a firm operational meaning in terms of various coding theorems [8,9]. Consequently, REs, along with their associated Rényi entropy powers (REPs), are, in principle, experimentally accessible [8,56,57]. That is indeed the case in specific quantum protocols [58,59,60]. In addition, REPs of various orders are often used as convenient measures of entanglement: e.g., the REP of order 2, i.e., N_2, represents the tangle τ (the square of the concurrence) [61], N_{1/2} is related to both the fidelity F and the robustness R of a pure state [62], N_∞ quantifies the Bures distance to the closest separable pure state [63], etc. Even though our main focus here will be on REs and REPs since they are more pertinent in information theory, we will include some discussion related to Tsallis entropy powers at the end of this paper.
The aim of this paper is twofold. First, we wish to appropriately extend the notion of SE-based EP to the RE setting. In contrast to our earlier works on the topic [13,64], we will do it now by framing REP in the context of RE-based estimation theory. This will be done by judiciously generalizing such key notions as the De Bruijn identity, isoperimetric inequality (and ensuing Cramér–Rao inequality), and Stam inequality. In contrast to other similar works on the subject [65,66,67,68], our approach is distinct in three key respects: (a) we consistently use the notion of escort distribution and escort score vector in setting up the generalized De Bruijn identity and Fisher information matrix, (b) we generalize Stam’s uncertainty principle, and (c) Rényi EP is related to variance of the reference Gaussian distribution rather than the Rényi maximizing distribution. As a byproduct, we derive within such a generalized estimation theory framework the Rényi-EP-based quantum uncertainty relations (REPUR) of Schrödinger–Roberston type. The REPUR obtained coincides with our earlier result [13] that was obtained in a very different context by means of the Beckner–Babenko theorem. This in turn serves as a consistency check of the proposed generalized estimation theory. Second, we identify interesting new playgrounds for the Rényi EPs obtained. In particular, we asked ourselves a question: assuming one is able in specific quantum protocols to measure Rényi EPs of various orders, how does this constrain the underlying quantum state distribution? To answer this question, we invoke the concept of the information distribution associated with a given quantum state. The latter contains a complete “information scan” of the underlying state distribution. We set up a reconstruction method based on Hausdorff’s moment problem [69] to show explicitly how the information probability distribution associated with a given quantum state can be numerically reconstructed from EPs. This is a process that is analogous to quantum-state tomography. However, whereas tomography extracts the full density matrix from an ensemble using many measurements on a tomographically complete basis, the EP reconstruction method extracts the probability density on a given basis. This is an alternative approach that may be advantageous, for example, in quantum metrology schemes, where only knowledge of the local probability density rather than the full quantum state is needed [70].
The paper is structured as follows. In Section 2, we introduce the concept of Rényi’s EP. With quantum metrology applications in mind, we discuss this in the framework of estimation theory. First, we duly generalize the notion of Fisher information (FI) by using a Rényi entropy version of De Bruijn’s identity. In this connection, we emphasize the role of the so-called escort distribution, which appears naturally in the definition of higher-order score functions. Second, we prove the RE-based isoperimetric inequality and ensuing Cramér–Rao inequality and find how knowledge of the Fisher information matrix restricts possible values of Rényi’s EP. Finally, we further illuminate the role of Rényi’s EP by deriving (through the Stam inequality) Rényi’s EP-based quantum uncertainty relations for conjugate observables. To flesh this out, the second part of the paper is devoted to the development of the use of Rényi EPs to extract the quantum state from incomplete data. This is of particular interest in various quantum metrology protocols. To this end, we introduce in Section 3 the concept of the information distribution, and, in Section 4, we show how cumulants of the information distribution can be obtained from knowledge of the EPs. With the cumulants at hand, one can reconstruct the underlying information distribution in a process which we call an information scan. Details of how one could explicitly realize such an information scan for quantum state PDFs are provided in Section 5. There we employ generalized versions of the Gram–Charlier A and Edgeworth expansions. In Section 6, we illustrate the inner workings of the information scan using the example of a so-called cat state. This state is of interest in applications of quantum physics such as quantum-enhanced metrology, which is concerned with the optimal extraction of information from measurements subject to quantum mechanical effects. The cat state we consider is a superposition of the vacuum state and a coherent state of the electromagnetic field; two cases are studied comprising different probabilistic weightings of the superposition state corresponding to balanced and unbalanced cat states. Section 7 is dedicated to EPs based on Tsallis entropy. In particular, we show that Rényi and Tsallis EPs coincide with each other. This, in turn, allows us to phrase various estimation theory inequalities in terms of TE. In Section 8, we end with conclusions. For the reader’s convenience, we relegate some technical issues concerning the generalized De Bruijn identity and associated isoperimetric and Stam inequalities to three appendices.

2. Rényi Entropy Based Estimation Theory and Rényi Entropy Powers

In this section, we introduce the concept of Rényi’s EP. With quantum metrology applications in mind, we discuss this in the framework of estimation theory. This will not only allow us to find new estimation inequalities, such as the Rényi-entropy-based De Bruijn identity, isoperimetric inequality, or Stam inequality, but it will also provide a convenient technical and conceptual frame for deriving a one-parameter family of Rényi-entropy-power-based quantum-mechanical uncertainty relations.

2.1. Fisher Information—Shannon’s Entropy Approach

First, we recall that the Fisher information matrix J ( X ) of a random vector { X i } in R D with the PDF F ( x ) is defined as [38]
$$ \mathbf{J}(X) = \operatorname{cov}\big(\mathbf{V}(X)\big), \tag{1} $$
where the covariance matrix is associated with the random zero-mean vector—the so-called score vector, as
$$ \mathbf{V}(\mathbf{x}) = \frac{\nabla F(\mathbf{x})}{F(\mathbf{x})}. \tag{2} $$
A corresponding trace of J ( X ) , i.e.,
$$ J(X) = \operatorname{Tr}\big(\mathbf{J}(X)\big) = \operatorname{var}\big(\mathbf{V}(X)\big) = \mathbb{E}\big(\mathbf{V}^2(X)\big), \tag{3} $$
is known as the Fisher information. Both the FI and FI matrix can be conveniently related to Shannon’s differential entropy via De Bruijn’s identity [66,67].
De Bruijn’s identity: Let { X i } be a random vector in R D with the PDF F ( x ) and let { Z i G } be a Gaussian random vector (noise vector) with zero mean and unit-covariance matrix, independent of { X i } . Then,
$$ \left.\frac{d}{d\epsilon}\, H\big(X + \sqrt{\epsilon}\, Z_G\big)\right|_{\epsilon=0} = \frac{1}{2}\, J(X), \tag{4} $$
where
$$ H(X) = -\int_{\mathbb{R}^D} F(\mathbf{x}) \log F(\mathbf{x})\, d\mathbf{x}, \tag{5} $$
is Shannon’s differential entropy (measured in nats). In the case when the independent additive noise { Z i } is non-Gaussian with zero mean and covariance matrix Σ = cov ( Z ) , then the following generalization holds [67]:
$$ \left.\frac{d}{d\epsilon}\, H\big(X + \sqrt{\epsilon}\, Z\big)\right|_{\epsilon=0} = \frac{1}{2}\, \operatorname{Tr}\big[\mathbf{J}(X)\,\Sigma\big]. \tag{6} $$
The key point about De Bruijn’s identity is that it provides a very useful intuitive interpretation of FI, namely, FI quantifies the sensitivity of transmitted (Shannon type) information to an arbitrary independent additive noise. An important aspect that should be stressed in this context is that FI as a quantifier of sensitivity depends only on the covariance of the noise vector, and thus it is independent of the shape of the noise distribution. This is because De Bruijn’s identity remains unchanged for both Gaussian and non-Gaussian additive noise with the same covariance matrix.
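The identity is easy to probe numerically. The following sketch is our own illustration, not part of the original analysis: it checks Equation (4) in one dimension for an arbitrarily chosen two-component Gaussian mixture; the grid, the mixture parameters, and the smoothing step ε are assumptions made purely for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Grid and a non-Gaussian test density (equal-weight mixture of two unit-variance Gaussians)
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
F = 0.5 * np.exp(-(x - 2)**2 / 2) / np.sqrt(2 * np.pi) \
  + 0.5 * np.exp(-(x + 2)**2 / 2) / np.sqrt(2 * np.pi)
F /= np.trapz(F, x)                                   # enforce normalization on the grid

def shannon_entropy(p):
    """Differential entropy in nats; zero bins contribute nothing."""
    return -np.trapz(p * np.log(np.maximum(p, 1e-300)), x)

# Fisher information J(X) = E[(F'/F)^2], Eq. (3) with D = 1
dF = np.gradient(F, dx)
J = np.trapz(dF**2 / np.maximum(F, 1e-300), x)

# Left-hand side of De Bruijn's identity: convolve F with a Gaussian of variance eps
eps = 1e-3
F_eps = gaussian_filter1d(F, sigma=np.sqrt(eps) / dx, mode='constant')
lhs = (shannon_entropy(F_eps) - shannon_entropy(F)) / eps

print(f"dH/d(eps) ~ {lhs:.5f}   J(X)/2 = {J / 2:.5f}")   # the two numbers should nearly agree
```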

2.2. Fisher Information—Rényi’s Entropy Approach

We now extend the notion of the FI matrix to the Rényi entropy setting. A natural way to do it is via an extension of De Bruijn’s identity to Rényi entropies. In particular, the following statement holds:
Generalized De Bruijn’s identity: Let { X i } be a random vector in R D with the PDF F ( x ) and let { Z i } be an independent (generally non-Gaussian) noise vector with the zero mean and covariance matrix Σ = cov ( Z ) , then, for any q > 0
$$ \left.\frac{d}{d\epsilon}\, I_q\big(X + \sqrt{\epsilon}\, Z\big)\right|_{\epsilon=0} = \frac{1}{2q}\, \operatorname{Tr}\big[\mathbf{J}_q(X)\,\Sigma\big], \tag{7} $$
where
$$ I_q = \frac{1}{1-q}\, \log \int_{\mathbb{R}^D} F^{q}(\mathbf{x})\, d\mathbf{x}, \qquad q > 0, \tag{8} $$
is Rényi’s differential entropy (measured in nats) with I 1 = H . The ensuing FI matrix of order q has the explicit form
$$ \mathbf{J}_q(X) = \operatorname{cov}_q\big(\mathbf{V}_q(X)\big), \tag{9} $$
with the score vector
$$ \mathbf{V}_q(\mathbf{x}) = \frac{\nabla \rho_q(\mathbf{x})}{\rho_q(\mathbf{x})} = q\,\frac{\nabla F(\mathbf{x})}{F(\mathbf{x})} = q\,\mathbf{V}(\mathbf{x}). \tag{10} $$
Here, ρ_q = F^q / ∫_{ℝ^D} F^q dx is the so-called escort distribution [71]. The “cov_q” denotes the covariance matrix computed with respect to ρ_q. Proofs of both the conventional (i.e., Shannon entropy based) and generalized (i.e., Rényi entropy based) De Bruijn’s identity are provided in Appendix A. There we also discuss some further useful generalizations of De Bruijn’s identity. Finally, as in the Shannon case, we define the FI of order q, denoted as J_q(X), as
$$ \operatorname{Tr}\mathbf{J}_q(X) \equiv J_q(X). \tag{11} $$
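As a quick plausibility check of Equations (7)–(11), one can compute the escort distribution, the escort score, and J_q on a grid and compare the finite-difference derivative of I_q under added Gaussian noise with J_q/(2q). The sketch below is our own one-dimensional illustration with unit noise covariance; the test PDF, the order q, and the step ε are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
# Asymmetric two-peak test PDF (any smooth, well-behaved density would do)
F = 0.7 * np.exp(-(x - 1.5)**2 / 2) / np.sqrt(2 * np.pi) \
  + 0.3 * np.exp(-(x + 3.0)**2 / 1.0) / np.sqrt(np.pi)
F /= np.trapz(F, x)

q = 1.3

def renyi_entropy(p):
    return np.log(np.trapz(p**q, x)) / (1.0 - q)      # Eq. (8), nats, D = 1

# Escort distribution, escort score V_q = q F'/F and q-Fisher information J_q = cov_q(V_q)
rho_q = F**q / np.trapz(F**q, x)
Vq = q * np.gradient(F, dx) / np.maximum(F, 1e-300)
Jq = np.trapz(rho_q * Vq**2, x) - np.trapz(rho_q * Vq, x)**2

# Generalized De Bruijn identity (7) with unit noise variance: dI_q/d(eps) = J_q / (2q)
eps = 1e-3
F_eps = gaussian_filter1d(F, sigma=np.sqrt(eps) / dx, mode='constant')
lhs = (renyi_entropy(F_eps) - renyi_entropy(F)) / eps

print(f"dI_q/d(eps) ~ {lhs:.5f}   J_q/(2q) = {Jq / (2 * q):.5f}")
```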

2.3. Rényi’s Entropy Power and Generalized Isoperimetric Inequality

Similarly as in conventional estimation theory, one can expect that there should exist a close connection between the FI matrix J q ( X ) and the corresponding Rényi entropy power N p ( X ) . In Shannon’s information theory, such a connection is phrased in terms of isoperimetric inequality [67]. Here, we prove that a similar relationship works also in Rényi’s information theory.
Let us start by introducing the concept of Rényi’s entropy power. This is defined as the solution of the equation [13,64]
$$ I_p(X) = I_p\Big(\sqrt{N_p(X)}\cdot Z_G\Big), \tag{12} $$
where {Z_i^G} represents a Gaussian random vector with zero mean and unit covariance matrix. Thus, N_p(X) denotes the variance of a would-be Gaussian distribution that has the same Rényi information content as the random vector {X_i} described by the PDF F(x). Expression (12) was studied in [13,64,72], where it was shown that the only class of solutions of (12) is
$$ N_p(X) = \frac{1}{2\pi}\; p^{-p'/p}\, \exp\!\left(\frac{2}{D}\, I_p(X)\right), \tag{13} $$
with 1/p + 1/p′ = 1 and p ∈ ℝ⁺. In addition, when p → 1⁺, one has N_p(X) → N(X), where N(X) is the conventional Shannon entropy power [5]. In this latter case, one can use the asymptotic equipartition property [55,73] to identify N(X) with the “typical size” of a state set, which in the present context is the effective support set size for a random vector. This, in turn, is equivalent to Einstein’s entropic principle [74]. In passing, it should be noted that the form of the Rényi EP expressed in (13) is not the universally accepted version. In a number of works, it is defined merely as an exponent of RE, see, e.g., [75,76]. Our motivation for the form (13) is twofold: first, it has a clear interpretation in terms of variances of Gaussian distributions and, second, it leads to simpler formulas, cf., e.g., Equation (22).
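A concrete way to see the meaning of (13) is to evaluate it for a Gaussian: all Rényi entropy powers then collapse onto the variance. The snippet below is a minimal numerical sketch of this statement (D = 1, entropies in nats); the grid and the choice σ² = 2.5 are our own arbitrary assumptions.

```python
import numpy as np

def renyi_entropy_power(F, x, p):
    """Rényi entropy power N_p of Eq. (13) for a 1-D PDF sampled on grid x (nats)."""
    if np.isclose(p, 1.0):
        Ip = -np.trapz(F * np.log(np.maximum(F, 1e-300)), x)
        prefactor = 1.0 / (2 * np.pi * np.e)                  # Shannon limit p -> 1
    else:
        Ip = np.log(np.trapz(F**p, x)) / (1.0 - p)
        prefactor = p**(-1.0 / (p - 1.0)) / (2 * np.pi)       # = p^{-p'/p} / (2 pi)
    return prefactor * np.exp(2.0 * Ip)

x = np.linspace(-30, 30, 60001)
sigma2 = 2.5
G = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

for p in [0.7, 1.0, 1.5, 2.0, 3.0]:
    print(p, renyi_entropy_power(G, x, p))    # every order returns sigma2 = 2.5
```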
Generalized isoperimetric inequality: Let { X i } be a random vector in R D with the PDF F ( x ) . Then,
$$ \frac{1}{D}\, N_q(X)\, J_q(X) \;\geq\; N_q(X)\, \big[\det\big(\mathbf{J}_q(X)\big)\big]^{1/D} \;\geq\; 1, \tag{14} $$
where the Rényi parameter q ≥ 1. We relegate the proof of the generalized isoperimetric inequality to Appendix B.
It is also worth noting that the relation (14) implies another important inequality. By using the fact that the Shannon entropy is maximized (among all PDFs with identical covariance matrix Σ) by the Gaussian distribution, we have N₁(X) ≤ [det(Σ)]^{1/D} (see, e.g., [77]). If we further employ the fact that I_q is a monotonically decreasing function of q, see, e.g., [31,78], we can write (recall that q ≥ 1)
$$ \frac{q^{1/(q-1)}}{e}\, N_q \;\leq\; N_1 = \frac{\exp\!\big(\tfrac{2}{D} I_1\big)}{2\pi e} \;\leq\; \big[\det(\Sigma)\big]^{1/D}. \tag{15} $$
The isoperimetric inequality (14) then implies
$$ \det\big(\Sigma(X)\big) \;\geq\; \frac{q^{D/(q-1)}}{e^{D}\, \det\big(\mathbf{J}_q(X)\big)} \;\geq\; \frac{1}{e^{D}\, \det\big(\mathbf{J}_q(X)\big)}. \tag{16} $$
We can further use the inequality
$$ \frac{1}{D}\, \operatorname{Tr}(\mathbf{A}) \;\geq\; \big[\det(\mathbf{A})\big]^{1/D}, \tag{17} $$
(valid for any positive semi-definite D × D matrix A ) to write
$$ \sigma^2(X) = \frac{1}{D}\, \operatorname{Tr}\big(\Sigma(X)\big) = \frac{1}{D} \sum_{i=1}^{D} \operatorname{Var}(X_i) \;\geq\; \frac{D\, q^{1/(q-1)}}{e\, J_q(X)} \;\geq\; \frac{D}{e\, J_q(X)}, \tag{18} $$
where σ 2 is an average variance per component.
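To see the chain (16)–(18) at work, one can evaluate the two sides of (18) for a simple non-Gaussian density. The sketch below is our own example (the logistic test PDF and q = 1.5 are arbitrary choices): it computes the escort-based J_q and confirms that the variance stays above both lower bounds.

```python
import numpy as np

x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
F = np.exp(-x) / (1 + np.exp(-x))**2          # standard logistic PDF as a test case
F /= np.trapz(F, x)

q = 1.5
var = np.trapz(F * x**2, x) - np.trapz(F * x, x)**2           # sigma^2(X), D = 1

rho_q = F**q / np.trapz(F**q, x)                              # escort distribution
Vq = q * np.gradient(F, dx) / np.maximum(F, 1e-300)           # escort score
Jq = np.trapz(rho_q * Vq**2, x) - np.trapz(rho_q * Vq, x)**2  # J_q = cov_q(V_q)

bound_q = q**(1.0 / (q - 1.0)) / (np.e * Jq)                  # middle term of Eq. (18), D = 1
print(f"Var(X) = {var:.4f}  >=  {bound_q:.4f}  >=  {1.0 / (np.e * Jq):.4f}")
```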
Relations (16)–(18) represent the q-generalizations of the celebrated Cramér–Rao information inequality. In the limit q → 1, we recover the standard Cramér–Rao inequality that is widely used in statistical inference theory [38,79]. A final logical step needed to complete the proof of REPURs is represented by the so-called generalized Stam inequality. To this end, we first define the concept of conjugate random variables. We say that random vectors {X_i} and {Y_i} in ℝ^D are conjugate if their respective PDFs F(x) and G(y) can be written as
$$ F(\mathbf{x}) = \frac{|\varphi_F(\mathbf{x})|^2}{\|\varphi_F\|_2^2}, \qquad G(\mathbf{y}) = \frac{|\varphi_G(\mathbf{y})|^2}{\|\varphi_G\|_2^2}, \tag{19} $$
where the (generally complex) probability amplitudes φ_F(x) ∈ L²(ℝ^D) and φ_G(y) ∈ L²(ℝ^D) are mutual Fourier images, i.e.,
$$ \varphi_F(\mathbf{x}) = \hat{\varphi}_G(\mathbf{x}) = \int_{\mathbb{R}^D} e^{-2\pi i\, \mathbf{x}\cdot\mathbf{y}}\, \varphi_G(\mathbf{y})\, d\mathbf{y}, \tag{20} $$
and analogously for φ G ( y ) = φ ^ F ( y ) . With this, we can state the generalized Stam inequality.
Generalized Stam inequality (Stam’s uncertainty principle): Let { X i } and { Y i } be conjugate random vectors in R D . Then,
$$ 16\pi^{2}\, N_q(Y) \;\geq\; \big[\det\big(\mathbf{J}_r(X)\big)\big]^{1/D}, \tag{21} $$
is valid for any r ∈ [1, ∞) and q ∈ [1/2, 1] that are connected via the relation 1/r + 1/q = 2. In particular, if we define r̄ = 2r and q̄ = 2q, then r̄ and q̄ are Hölder conjugates. A proof of the generalized Stam inequality is provided in Appendix C.
Let us now consider Hölder conjugate indices p and q with p ∈ [2, ∞) (so that q ∈ [1, 2]). Combining the isoperimetric inequality (14) together with the generalized Stam inequality (21), we obtain the following one-parameter class of REP-based inequalities
$$ N_{p/2}(X)\, N_{q/2}(Y) = N_{p/2}(X)\,\big[\det(\mathbf{J}_{p/2}(X))\big]^{1/D}\; \frac{N_{q/2}(Y)}{\big[\det(\mathbf{J}_{p/2}(X))\big]^{1/D}} \;\geq\; \frac{N_{q/2}(Y)}{\big[\det(\mathbf{J}_{p/2}(X))\big]^{1/D}} \;\geq\; \frac{1}{16\pi^{2}}. \tag{22} $$
By symmetry, the role of q and p can be reversed. In Refs. [13,64], we presented an alternative derivation of the inequalities (22) that was based on the Beckner–Babenko theorem. There it was also proved that the inequality saturates if and only if the distributions involved are Gaussian. The only exception to this rule is for the asymptotic values p = 1 and q = ∞ (or vice versa), where the saturation happens whenever the peak of F(x) and the tail of G(y) (or vice versa) are Gaussian.
The passage to quantum mechanics is quite straightforward. First, we realize that, in QM, the Fourier conjugate wave functions are related via two reciprocal relations
$$ \psi_F(\mathbf{x}) = \int_{\mathbb{R}^D} e^{-i\, \mathbf{y}\cdot\mathbf{x}/\hbar}\, \psi_G(\mathbf{y})\, \frac{d\mathbf{y}}{(2\pi\hbar)^{D/2}}, \qquad \psi_G(\mathbf{y}) = \int_{\mathbb{R}^D} e^{\,i\, \mathbf{y}\cdot\mathbf{x}/\hbar}\, \psi_F(\mathbf{x})\, \frac{d\mathbf{x}}{(2\pi\hbar)^{D/2}}. \tag{23} $$
The Plancherel (or Riesz–Fischer) equality implies that, when ‖ψ_F‖₂ = 1, then also automatically ‖ψ_G‖₂ = 1 (and vice versa). Thus, the connection between the amplitudes φ_F and φ_G from (19) and the amplitudes ψ_F and ψ_G from (23) is
$$ \varphi_F(\mathbf{x}) = (2\pi\hbar)^{D/4}\, \psi_F\big(\sqrt{2\pi\hbar}\,\mathbf{x}\big), \qquad \varphi_G(\mathbf{y}) = (2\pi\hbar)^{D/4}\, \psi_G\big(\sqrt{2\pi\hbar}\,\mathbf{y}\big). \tag{24} $$
The factor (2πℏ)^{D/4} ensures that the functions φ_F and φ_G are also normalized (in the ‖·‖₂ sense) to unity; however, due to Equation (19), it might be easily omitted. The corresponding Rényi EPs change according to
$$ N_{p/2}(X) \equiv N_{p/2}(F) \;\mapsto\; N_{p/2}\big(|\psi_F|^2\big) = 2\pi\hbar\, N_{p/2}(F), \qquad N_{q/2}(Y) \equiv N_{q/2}(G) \;\mapsto\; N_{q/2}\big(|\psi_G|^2\big) = 2\pi\hbar\, N_{q/2}(G), \tag{25} $$
and hence REP-based inequalities (22) acquire in the QM setting a simple form
$$ N_{p/2}\big(|\psi_F|^2\big)\, N_{q/2}\big(|\psi_G|^2\big) \;\geq\; \frac{\hbar^{2}}{4}. \tag{26} $$
This represents an infinite tower of mutually distinct (generally irreducible) REPURs [13].
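The tower (26) can be checked directly on any normalizable wave function. The sketch below is our own numerical illustration (ℏ = 1): it builds a simple even superposition of the vacuum and a displaced Gaussian in the y₀ quadrature (a balanced-cat-like state with α = 4, assumed for the example), obtains the conjugate distribution by FFT, and verifies that the product of Rényi entropy powers stays above ℏ²/4 for several Hölder pairs.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-25, 25, 2**14)
dx = x[1] - x[0]

# Cat-like state: vacuum + coherent component displaced by sqrt(2)*alpha (alpha real, nu = 1)
alpha = 4.0
psi = np.exp(-x**2 / 2) + np.exp(-(x - np.sqrt(2) * alpha)**2 / 2)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

# Conjugate amplitude psi_G(k) ~ (2 pi hbar)^(-1/2) Int e^{-ikx/hbar} psi(x) dx (overall phase dropped)
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * hbar
psi_k = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi * hbar)
Fx = np.abs(psi)**2
Gk = np.abs(psi_k)**2 / np.trapz(np.abs(psi_k)**2, k)

def N(F, grid, p):
    """Rényi entropy power of Eq. (13), D = 1, nats."""
    if np.isclose(p, 1.0):
        Ip = -np.trapz(F * np.log(np.maximum(F, 1e-300)), grid)
        return np.exp(2 * Ip) / (2 * np.pi * np.e)
    Ip = np.log(np.trapz(F**p, grid)) / (1.0 - p)
    return p**(-1.0 / (p - 1.0)) * np.exp(2 * Ip) / (2 * np.pi)

for p in [2.0, 3.0, 4.0, 6.0]:          # p and its Hoelder conjugate q
    q = p / (p - 1.0)
    print(f"p = {p}: N_(p/2) N_(q/2) = {N(Fx, x, p/2) * N(Gk, k, q/2):.4f}  (bound {hbar**2 / 4})")
```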
At this point, some comments are in order. First, historically, the most popular quantifier of quantum uncertainty has been variance because it is conceptually simple and relatively easily extractable from experimental data. The variance determines the measure of uncertainty in terms of the fluctuation (or spread) around the mean value, which, while useful for many distributions, does not provide a sensible measure of uncertainty in a number of important situations including multimodal [12,13,64] and heavy-tailed distributions [13,14,64]. To deal with this, a multitude of alternative (non-variance based) measures of uncertainty in quantum mechanics (QM) have emerged. Among these, a particularly prominent role is played by information entropies such as the Shannon entropy [63], Rényi entropy [63,64], Tsallis entropy [80], associated differential entropies, and their quantum-information generalizations [13,15,64]. REPURs (26) fit into this framework of entropic QM URs. In connection with (26), one might observe that the conventional URs based on variances—so-called Robertson–Schrödinger URs [81,82]) and Shannon differential entropy based URs (e.g., Hirschman or Białynicki–Birula URs [15,83]) naturally appear as special cases in this hierarchy. Second, the ITEs enter quantum information theory typically in three distinct ways: (a) as a measure of the quantum information content (e.g., how many qubits are needed to encode the message without loss of information), (b) as a measure of the classical information content (e.g., amount of information in bits that can be recovered from the quantum system) and (c) to quantify the entanglement of pure and mixed bipartite quantum states. Logarithms in base 2 are used because, in quantum information, one quantifies entropy in bits and qubits (rather than nats). This in turn also modifies Rényi’s EP as
$$ \frac{1}{2\pi}\, p^{-p'/p}\, e^{\frac{2}{D} I_p} \;\;\longmapsto\;\; \frac{1}{2\pi}\, p^{-p'/p}\, 2^{\frac{2}{D} I_p}. \tag{27} $$
In the following, we will employ this QM practice.

3. Information Distribution

To put more flesh on the concept of Rényi’s EP, we devote the rest of this paper to the development of the methodology and application of Rényi EPs in extracting quantum states from incomplete data. The technique of quantum tomography is widely used for this purpose and involves making many different measurements on an ensemble of identical copies of a quantum state with a tomographically complete measurement basis [84,85]. This process is very measurement-intensive, scaling exponentially with the number of particles and so methods have been developed to approximate it with fewer measurements [86].
However, the method of Rényi EPs provides an efficient alternative approach. Instead of reconstructing the full quantum state, this process extracts the PDF of the quantum state in a given basis. For a broad class of quantum metrology problems, local rather than global approaches are preferred [70] and, for these, the local PDF of the state at each sensor is needed rather than the full density matrix. With this in mind, we first start with the notion of the information distribution.
Let F(x) be the PDF for the random variable X. We define the information random variable i_X(X) so that i_X(x) = log₂[1/F(x)] = −log₂F(x). In other words, i_X(x) represents the information in x with respect to F(x). In this connection, it is expedient to introduce the cumulative distribution function for i_X(X) as
$$ \wp(y) = \int_{-\infty}^{y} d\wp(i_X) = \int_{\mathbb{R}^D} F(\mathbf{x})\, \theta\big(\log_2 F(\mathbf{x}) + y\big)\, d\mathbf{x}. \tag{28} $$
The function ℘(y) thus represents the probability that the random variable i_X(X) is less than or equal to y; here, θ denotes the Heaviside step function. We have denoted the corresponding probability measure as d℘(i_X). Taking the Laplace transform of both sides of (28), we get
$$ \mathcal{L}\{\wp\}(s) = \int_{\mathbb{R}^D} F(\mathbf{x})\, \frac{e^{\,s \log_2 F(\mathbf{x})}}{s}\, d\mathbf{x} = \frac{\mathbb{E}\big[e^{\,s\log_2 F}\big]}{s}, \tag{29} $$
where E denotes the mean value with respect to F. Assuming that ℘(y) is smooth, the PDF associated with i_X(X)—the so-called information PDF—is
$$ g(y) = \frac{d\wp(y)}{dy} = \mathcal{L}^{-1}\Big\{\mathbb{E}\big[e^{\,s\log_2 F}\big]\Big\}(y). \tag{30} $$
Setting s = ( p 1 ) log 2 , we have
$$ \mathcal{L}\{g\}\big(s = (p-1)\log 2\big) = \mathbb{E}\Big[2^{(1-p)\, i_X}\Big]. \tag{31} $$
The mean here is taken with respect to the PDF g. Equation (31) can also be written explicitly as
$$ \int_{\mathbb{R}^D} d\mathbf{x}\; F^{p}(\mathbf{x}) = \int_{\mathbb{R}} g(y)\, 2^{(1-p)\,y}\, dy. \tag{32} $$
Note that, when F^p is integrable for p ∈ [1, 2], then (32) ensures that the moment-generating function for the PDF g(x) exists. Thus, in particular, the moment-generating function exists when F(x) represents Lévy α-stable distributions, including the heavy-tailed stable distributions (i.e., PDFs with the Lévy stability parameter α ∈ (0, 2]). The same holds for F̂ and p ∈ [2, ∞) due to the Beckner–Babenko theorem [13,87,88].
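Relation (32) is easy to verify by sampling: drawing X from F and averaging 2^{(1−p) i_X} reproduces the integral of F^p. The following sketch is our own Monte Carlo illustration for an arbitrarily chosen bimodal mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    """Equal-weight mixture of two unit-variance Gaussians (illustrative choice)."""
    return 0.5 * np.exp(-(x - 2)**2 / 2) / np.sqrt(2 * np.pi) \
         + 0.5 * np.exp(-(x + 2)**2 / 2) / np.sqrt(2 * np.pi)

# Sample X ~ F and form the information random variable i_X = -log2 F(X)
n = 400_000
centres = np.where(rng.integers(0, 2, n) == 0, 2.0, -2.0)
X = rng.normal(centres, 1.0)
iX = -np.log2(F(X))

xg = np.linspace(-15, 15, 20001)
for p in [1.3, 1.7, 2.0]:
    lhs = np.trapz(F(xg)**p, xg)            # integral of F^p (left-hand side of (32))
    rhs = np.mean(2.0**((1.0 - p) * iX))    # E_g[ 2^{(1-p) i_X} ] estimated from samples
    print(f"p = {p}:  int F^p = {lhs:.5f}   E[2^((1-p) i_X)] = {rhs:.5f}")
```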

4. Reconstruction Theorem

Since L { g } ( s ) is the moment-generating function of the random variable i X ( X ) , one can generate all moments of the PDF g ( x ) (if they exist) by taking the derivatives of L { g } with respect to s. From a conceptual standpoint, it is often more useful to work with cumulants rather than moments. Using the fact that the cumulant generating function is simply the (natural) logarithm of the moment-generating function, we see from (32) that the differential RE is a reparametrized version of the cumulant generating function of the information random variable i X ( X ) . In fact, from (31), we have
$$ I_p(X) = \frac{1}{1-p}\, \log_2 \mathbb{E}\Big[2^{(1-p)\, i_X}\Big]. \tag{33} $$
To understand the meaning of REPURs, we begin with the cumulant expansion of (33), i.e.,
$$ p\, I_{1-p}(X) = \log_2 e \sum_{n=1}^{\infty} \frac{\kappa_n(X)}{n!} \left(\frac{p}{\log_2 e}\right)^{\! n}, \tag{34} $$
where κ_n(X) ≡ κ_n(i_X) denotes the n-th cumulant of the information random variable i_X(X) (in units of bitsⁿ). We note that
$$ \kappa_1(X) = \mathbb{E}\big[i_X(X)\big] = H(X), \qquad \kappa_2(X) = \mathbb{E}\big[i_X(X)^2\big] - \big(\mathbb{E}\big[i_X(X)\big]\big)^{2}, \tag{35} $$
i.e., they represent the Shannon entropy and varentropy, respectively. By employing the identity
$$ I_{1-p}(X) = \frac{D}{2}\, \log_2\!\Big[2\pi\, (1-p)^{-1/p}\, N_{1-p}(X)\Big], \tag{36} $$
we can rewrite (34) in the form
$$ \log_2 N_{1-p}(X) = \log_2 \frac{(1-p)^{1/p}}{2\pi} + \frac{2}{D} \sum_{n=1}^{\infty} \frac{\kappa_n(X)}{n!} \left(\frac{p}{\log_2 e}\right)^{\! n-1}. \tag{37} $$
From (37), one can see that
$$ \kappa_n(X) = \frac{n D}{2}\, (\log_2 e)^{n-1} \left.\frac{d^{\,n-1} \log_2 N_{1-p}(X)}{d p^{\,n-1}}\right|_{p=0} + \frac{D}{2}\Big[(\log_2 e)^{n}\, (n-1)! + \delta_{1n}\, \log_2 2\pi\Big], \tag{38} $$
where δ 1 n is the Kronecker delta function that has a value of one if n = 1 , or zero otherwise. In terms of the Grünwald–Letnikov derivative formula (GLDF) [89], we can rewrite (38) as
$$ \kappa_n(X) = \lim_{\Delta\to 0}\; \frac{n D}{2}\, \frac{(\log_2 e)^{n}}{\Delta^{\,n-1}} \sum_{k=0}^{n-1} (-1)^{k} \binom{n-1}{k}\, \log N_{1+k\Delta}(X) + \frac{D}{2}\Big[(\log_2 e)^{n}\, (n-1)! + \delta_{1n}\, \log_2 2\pi\Big]. \tag{39} $$
Thus, in order to determine the first m cumulants of i_X(X), we need to know the entropy powers N₁, N_{1+Δ}, …, N_{1+(m−1)Δ}. In practice, Δ corresponds to a characteristic resolution scale for the entropy index, which will be chosen appropriately for the task at hand, but is typically of the order of 10⁻². Note that the last term in (38) and (39) can also be written as
$$ \frac{D}{2}\Big[(\log_2 e)^{n}\, (n-1)! + \delta_{1n}\, \log_2 2\pi\Big] = \kappa_n\big(Z_G^{\mathbb{1}}\big) \equiv \kappa_n(i_Y), \tag{40} $$
with Y being the random variable distributed according to the Gaussian distribution Z_G^{𝟙} with unit covariance matrix.
When all the cumulants exist, then the problem of recovering the underlying PDF for i_X(X) is equivalent to the Stieltjes moment problem [90]. Using this connection, there are a number of ways to proceed; the PDF in question can be reconstructed, e.g., in terms of sums involving orthogonal polynomials (e.g., the Gram–Charlier A series or the Edgeworth series [91]), the inverse Mellin transform [92], or via various maximum entropy techniques [93]. Pertaining to this, the theorem of Marcinkiewicz [94] implies that there are no PDFs for which κ_m = κ_{m+1} = ⋯ = 0 for m ≥ 3. In other words, the cumulant generating function cannot be a finite-order polynomial of degree greater than 2. The important exceptions, and indeed the only exceptions to Marcinkiewicz’s theorem, are the Gaussian PDFs, which can have the first two cumulants nontrivial and κ₃ = κ₄ = ⋯ = 0. Thus, apart from the special case of Gaussian PDFs where only N₁ and N_{1+Δ} are needed, one needs to work with as many entropy powers N_{1+kΔ}, k ∈ ℕ (or ensuing REPURs) as possible to receive as much information as possible about the structure of the underlying PDF. In theory, the whole infinite tower of REPURs would be required to uniquely specify a system’s information PDF. Note that, for Gaussian information PDFs, one needs only N₁ and N_{1+Δ} to reconstruct the PDF uniquely. From (37) and (39), we see that knowledge of N₁ corresponds to κ₁(X) = H(X), while N_{1+Δ} further determines κ₂, i.e., the varentropy. Since N₁ is involved (via (39)) in the determination of all cumulants, it is the most important entropy power in the tower. Thus, the entropy powers of a given process have an equivalent meaning to the PDF: they describe the morphology of uncertainty of the observed phenomenon.
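A minimal numerical sketch of Equation (39) is given below; it is our own illustration, and the test PDF, the resolution Δ = 0.01, and D = 1 are assumptions. It extracts κ₁ and κ₂ from a handful of entropy powers N_{1+kΔ} and compares them with the Shannon entropy and the varentropy computed directly on the grid; the finite difference is only O(Δ)-accurate, so N_p must be evaluated precisely.

```python
import numpy as np
from math import comb, factorial

log2e = np.log2(np.e)

x = np.linspace(-25, 25, 50001)
F = 0.6 * np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi) \
  + 0.4 * np.exp(-(x + 2)**2 / 3) / np.sqrt(3 * np.pi)
F /= np.trapz(F, x)

def N(p):
    """Rényi entropy power N_p (D = 1), with the entropy taken in bits."""
    if np.isclose(p, 1.0):
        Ip = -np.trapz(F * np.log2(np.maximum(F, 1e-300)), x)
        return 2.0**(2.0 * Ip) / (2 * np.pi * np.e)
    Ip = np.log2(np.trapz(F**p, x)) / (1.0 - p)
    return p**(-1.0 / (p - 1.0)) * 2.0**(2.0 * Ip) / (2 * np.pi)

def kappa(n, delta=0.01, D=1):
    """Cumulant kappa_n of i_X from entropy powers via the finite-difference form of Eq. (39)."""
    diff = sum((-1)**k * comb(n - 1, k) * np.log(N(1.0 + k * delta)) for k in range(n))
    return (n * D / 2) * log2e**n * diff / delta**(n - 1) \
         + (D / 2) * (log2e**n * factorial(n - 1) + (n == 1) * np.log2(2 * np.pi))

# Direct check: first two moments of i_X = -log2 F(X) on the grid
iX = -np.log2(np.maximum(F, 1e-300))
k1 = np.trapz(F * iX, x)                    # Shannon entropy (bits)
k2 = np.trapz(F * iX**2, x) - k1**2         # varentropy (bits^2)

print("kappa_1:", kappa(1), "direct:", k1)
print("kappa_2:", kappa(2), "direct:", k2)
```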
We should stress that the focus of the reconstruction theorem we present is on cumulants κ_n, which can be directly used for a shape estimation of g(x) but not of F(x). However, by knowing g(y), we have a complete “information scan” of F(x). Such an information scan is, however, not unique; indeed, two PDFs that are rearrangements of each other (i.e., equimeasurable PDFs) have identical ℘(y) and g(y). Even though equimeasurable PDFs cannot be distinguished via their entropy powers, they can, as a rule, be distinguished via their respective momentum-space PDFs and associated entropy powers. Thus, the information scan has a tomographic flavor to it. From the multi-peak structure of g(y), one can determine the number and height of the stationary points. These are invariant characteristics of a given family of equimeasurable PDFs. This will be further illustrated in Section 6.

5. Information Scan of Quantum-State PDF

With knowledge of the entropy powers, the question now is how we can reconstruct the information distribution g ( x ) . The inner workings of this will now be explicitly illustrated with the (generalized) Gram-Charlier A expansion. However, other—often more efficient methods—are also available [91]. Let κ n be cumulants obtained from entropy powers and let G ( x ) be some reference PDF whose cumulants are γ k . The information PDF g ( x ) can be then written as [91]
$$ g(x) = \exp\!\left[\sum_{k=1}^{\infty} (\kappa_k - \gamma_k)\, \frac{(-1)^{k}\, (d^{k}/dx^{k})}{k!}\right] G(x). \tag{41} $$
With hindsight, we choose the reference PDF G ( x ) to be a shifted gamma PDF, i.e.,
$$ G(x) \equiv G(x\,|\,a,\alpha,\beta) = \frac{e^{-(x-a)/\beta}\, (x-a)^{\alpha-1}}{\beta^{\alpha}\, \Gamma(\alpha)}, \tag{42} $$
with a < x < ∞, β > 0, α > 0. In doing so, we have implicitly assumed that the PDF F(y) is, in the first approximation, equimeasurable with a Gaussian PDF. To reach a corresponding matching, we should choose a = ½ log₂(2πσ²), α = 1/2, and β = log₂ e. Using the fact that [95]
$$ \frac{\beta^{k}}{k!}\, \frac{d^{k} G(x\,|\,a,1/2,\beta)}{dx^{k}} = \left(\frac{x-a}{\beta}\right)^{\!-k} L_k^{(-1/2-k)}\!\left(\frac{x-a}{\beta}\right) G(x\,|\,a,1/2,\beta), \tag{43} $$
(where L_k^{(δ)} is the associated Laguerre polynomial of order k with parameter δ) and given that κ₁ = γ₁ = αβ + a = ½ log₂(2πeσ²) and γ_k = Γ(k) α β^k = (k − 1)! (log₂ e)^k / 2 for k > 1, we can write (41) as
$$ g(x) = G(x\,|\,a,1/2,\beta)\left[1 + \frac{\kappa_2 - \gamma_2}{\beta^{2}}\left(\frac{x-a}{\beta}\right)^{\!-2} L_2^{(-5/2)}\!\left(\frac{x-a}{\beta}\right) - \frac{\kappa_3 - \gamma_3}{\beta^{3}}\left(\frac{x-a}{\beta}\right)^{\!-3} L_3^{(-7/2)}\!\left(\frac{x-a}{\beta}\right) + \cdots\right]. \tag{44} $$
If needed, one can use a relationship between the moments and the cumulants (Faà di Bruno’s formula [94]) to recast the expansion (44) into more familiar language. For the Gram–Charlier A expansion, various formal convergence criteria exist (see, e.g., [91]). In particular, the expansion for nearly Gaussian equimeasurable PDFs F ( y ) converges quite rapidly and the series can be truncated fairly quickly. Since in this case one needs fewer κ k ’s in order to determine the information PDF g ( x ) , only EPs in the small neighborhood of the index 1 will be needed. On the other hand, the further the F ( y ) is from Gaussian (e.g., heavy-tailed PDFs), the higher the orders of κ k are required to determine g ( x ) , and hence a wider neighborhood of the index 1 will be needed for EPs.
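The choice of the shifted gamma reference (42) can also be motivated with a quick simulation: for a Gaussian F, the information variable i_X = −log₂F(X) is exactly a gamma variate with shape 1/2, scale log₂e, and shift a, so the reference already matches the Gaussian case term by term. The sketch below is our own sanity check of this fact (σ² = 2 is an arbitrary choice).

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
sigma2 = 2.0

# For Gaussian F: i_X = a + (log2 e) * X^2/(2 sigma^2), i.e. a rescaled and shifted chi^2_1 variable
a = 0.5 * np.log2(2 * np.pi * sigma2)
beta = np.log2(np.e)

X = rng.normal(0.0, np.sqrt(sigma2), 500_000)
iX = -np.log2(np.exp(-X**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2))

# Histogram of i_X against the reference G(x | a, 1/2, beta); skip the first bin
# (integrable singularity of the gamma PDF at x = a)
hist, edges = np.histogram(iX, bins=100, range=(a, a + 10), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
G = gamma.pdf(mid, 0.5, loc=a, scale=beta)
print("max |histogram - G| away from the singularity:", np.max(np.abs(hist[1:] - G[1:])))
```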

6. Example—Reconstruction Theorem and (Un)Balanced Cat State

We now demonstrate an example of the reconstruction in the context of a quantum system. Specifically, we consider cat states that are often considered in the foundations of quantum physics as well as in various applications, including solid state physics [96] and quantum metrology [97]. The form of the state we consider is |ψ⟩ = N(|0⟩ + ν|α/ν⟩), where N = [1 + 2ν exp(−α²/2ν²) + ν²]^{−1/2} is the normalization factor, |0⟩ is the vacuum state, ν ∈ ℝ is a weighting factor, and |α⟩ is the coherent state given by
$$ |\alpha\rangle = e^{-\alpha^{2}/2} \sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}}\, |n\rangle, \tag{45} $$
(taking α ∈ ℝ). For ν = 1, we refer to the state as a balanced cat state (BCS) and, for ν ≠ 1, as an unbalanced cat state (UCS). Changing the basis of |ψ⟩ to the eigenstates of the general quadrature operator
$$ \hat{Y}_{\theta} = \frac{1}{\sqrt{2}}\left(\hat{a}\, e^{i\theta} + \hat{a}^{\dagger} e^{-i\theta}\right), \tag{46} $$
where â† and â are the creation and annihilation operators of the electromagnetic field, we find the PDF for the general quadrature variable y_θ to be
$$ F(y_\theta) = \frac{N^{2}}{\sqrt{\pi}}\; e^{-y_\theta^{2}} \left| 1 + \nu\, \exp\!\left(-\frac{\alpha^{2}}{2\nu^{2}}\big(1 + e^{2i\theta}\big) + \sqrt{2}\, e^{i\theta}\, \frac{\alpha}{\nu}\, y_\theta \right) \right|^{2}, \tag{47} $$
where N is the normalization constant. Setting θ = 0 and ν = 1 returns the PDF of the BCS for the position-like variable y₀. With this, the Rényi EPs N_{1−p} are calculated and found to be constant across varying p. This is because F(y₀) for the BCS is in fact a piecewise rearrangement of a Gaussian PDF (yet has an overall non-Gaussian structure), as depicted in Figure 1; thus N_{1−p} = σ² for all p, where σ² is the variance of the ‘would-be’ Gaussian. Taking the reference PDF to be G(x) = G(x | a, α, β), with a = ½ log₂(2πσ²), α = 1/2 and β = log₂ e, it is evident that (κ_k − γ_k) = 0 for all k ≥ 1, and, from the Gram–Charlier A series (41), a perfect matching in the reconstruction is achieved. Furthermore, it can be shown that the variance of (47) increases with α, i.e., the variance increases as the peaks of the PDF diverge, which is in stark contrast to the Rényi EPs, which remain constant for increasing α. This reveals the shortcomings of variance as a measure of uncertainty for non-Gaussian PDFs.
The peaks, located at heights F(y_θ) = 2^{−a_j}, where j is an index labelling the distinct peaks, give rise to sharp singularities in the target g(x). With regard to the BCS position PDF, distributions of the conjugate parameter F(y_{π/2}) distinguish F(y₀) from its equimeasurable Gaussian PDF, and hence the Rényi EPs also distinguish the different cases. The number of available cumulants k is computationally limited but, as this grows, information about the singularities will be recovered in the reconstruction. In the following, we show how the tail convergence and the location of a singularity of g(x) can be reconstructed using k = 5.
We consider the case of a UCS with ν = 0.97 and α = 10, and we take θ = 0 in Equation (47) to find the PDF in the y₀ quadrature, which is non-Gaussian for all piecewise rearrangements. As such, all REPs N_{1−p} vary with p and consequently all cumulants κ_k carry information on g(x). Here, we choose to reconstruct the UCS information distribution by means of the Edgeworth series [91], so that
$$ g(x) = \exp\!\left[\, n \sum_{j=2}^{\infty} (\kappa_j - \gamma_j)\, \frac{(-1)^{j}}{j!}\, \frac{d^{j}}{dx^{j}}\; n^{-j/2} \right] G(x), \tag{48} $$
where the reference PDF G(x) is again the shifted gamma distribution. Using the Edgeworth series, the information PDF is approximated by expanding in orders of n, which has the advantage over the Gram–Charlier A expansion discussed above of bounding the errors of the approximation. For the particular UCS of interest, expanding to order n^{−3/2} reveals convergence toward the analytic form of the information PDF shown as the target line in Figure 2. This shows that, for a given characteristic resolution, control over the first five Rényi EPs can be enough for a useful information scan of a quantum state with an underlying non-Gaussian PDF. In the example shown in Figure 2, we see that the information scan accurately predicts the tail behavior as well as the location of the singularity, which corresponds to the second (lower) peak of F(y₀).
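The statements above about the BCS are straightforward to reproduce numerically from (47). The sketch below is our own illustration (the values of α, the grid, and the chosen orders are assumptions): it evaluates the balanced-cat PDF F(y₀) and shows that its Rényi entropy powers are essentially order-independent while its variance grows with α, which is exactly the dichotomy discussed in the text.

```python
import numpy as np

def cat_pdf(y, alpha, nu=1.0, theta=0.0):
    """Quadrature PDF F(y_theta) of N(|0> + nu|alpha/nu>) for real alpha, cf. Eq. (47)."""
    amp = 1.0 + nu * np.exp(-(alpha**2 / (2 * nu**2)) * (1 + np.exp(2j * theta))
                            + np.sqrt(2) * np.exp(1j * theta) * (alpha / nu) * y)
    F = np.exp(-y**2) * np.abs(amp)**2
    return F / np.trapz(F, y)                     # numerical normalization absorbs N^2/sqrt(pi)

def renyi_ep(F, y, p):
    """Rényi entropy power, D = 1, nats."""
    if np.isclose(p, 1.0):
        Ip = -np.trapz(F * np.log(np.maximum(F, 1e-300)), y)
        return np.exp(2 * Ip) / (2 * np.pi * np.e)
    Ip = np.log(np.trapz(F**p, y)) / (1 - p)
    return p**(-1 / (p - 1)) * np.exp(2 * Ip) / (2 * np.pi)

y = np.linspace(-25, 25, 200001)
for alpha in [4.0, 6.0]:
    F = cat_pdf(y, alpha)                          # balanced cat state, y_0 quadrature
    var = np.trapz(F * y**2, y) - np.trapz(F * y, y)**2
    eps = [renyi_ep(F, y, p) for p in [0.8, 1.0, 1.5, 2.0, 3.0]]
    # variance grows with alpha, the Rényi EPs stay (nearly) the same for every order p
    print(f"alpha = {alpha}: Var = {var:.2f},  N_p = " + ", ".join(f"{e:.3f}" for e in eps))
```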

7. Entropy Powers Based on Tsallis Entropy

Let us now briefly comment on the entropy powers associated with yet another important differential entropy, namely Tsallis differential entropy, which is defined as [47]
$$ S_q(F) = \frac{1}{1-q} \int_{\mathbb{R}^D} \Big[ F^{q}(\mathbf{x}) - F(\mathbf{x}) \Big]\, d\mathbf{x}, \tag{49} $$
where, as before, the PDF F ( x ) is associated with a random vector { X i } in R D .
Similarly to the RE case, the Tsallis entropy power N_q^T(X) is defined as the solution of the equation
$$ S_q(X) = S_q\Big(\sqrt{N_q^{T}(X)}\cdot Z_G\Big). \tag{50} $$
The ensuing entropy power has not been studied in the literature yet, but it can be easily derived by observing that the following scaling property for differential Tsallis entropy holds, namely
$$ S_q(a X) = S_q(X) \oplus_q \ln_q |a|^{D}, \tag{51} $$
where a ∈ ℝ and the q-deformed sum and logarithm are defined as [11]: x ⊕_q y = x + y + (1 − q)xy and ln_q x = (x^{1−q} − 1)/(1 − q), respectively. Relation (51) results from the following chain of identities:
$$ \begin{aligned} S_q(aX) &= \frac{1}{1-q}\left[\int_{\mathbb{R}^D} d^{D}y \left(\int_{\mathbb{R}^D} d^{D}x\; \delta(\mathbf{y} - a\mathbf{x})\, F(\mathbf{x})\right)^{\! q} - 1\right] = \frac{1}{1-q}\left[|a|^{D(1-q)} \int_{\mathbb{R}^D} d^{D}y\, F^{q}(\mathbf{y}) - 1\right] \\ &= |a|^{D(1-q)}\left[S_q(X) + \frac{1}{1-q}\right] - \frac{1}{1-q} = |a|^{D(1-q)}\, S_q(X) + \ln_q |a|^{D} \\ &= \Big[(1-q)\ln_q|a|^{D} + 1\Big]\, S_q(X) + \ln_q|a|^{D} = S_q(X) \oplus_q \ln_q|a|^{D}. \end{aligned} \tag{52} $$
We can further use the simple fact that
$$ S_q(Z_G) = \ln_q\Big[\big(2\pi\, q^{q'/q}\big)^{D/2}\Big]. \tag{53} $$
Here, q and q′ satisfy 1/q + 1/q′ = 1 with q ∈ ℝ⁺. By combining (50), (51), and (53), we get
$$ S_q(X) = \ln_q\Big[\big(2\pi\, q^{q'/q}\big)^{D/2}\Big] \oplus_q \ln_q\Big[\big(N_q^{T}\big)^{D/2}\Big] = \ln_q\Big[\big(2\pi\, q^{q'/q}\, N_q^{T}\big)^{D/2}\Big], \tag{54} $$
where we have used the sum rule from the q-deformed calculus: ln_q x ⊕_q ln_q y = ln_q(xy). Equation (54) can be resolved for N_q^T by employing the q-exponential, i.e., e_q^x = [1 + (1 − q)x]^{1/(1−q)}, which (among others) satisfies the relation e_q^{ln_q x} = ln_q(e_q^x) = x. With this, we have
$$ N_q^{T}(X) = \frac{1}{2\pi\, q^{q'/q}}\, \Big[\exp_q\!\big(S_q(X)\big)\Big]^{2/D} = \frac{1}{2\pi\, q^{q'/q}}\, \exp_{1-(1-q)D/2}\!\left(\frac{2}{D}\, S_q(X)\right). \tag{55} $$
In addition, when q → 1⁺, one has
$$ \lim_{q\to 1} N_q^{T}(X) = \frac{1}{2\pi e}\, \exp\!\left(\frac{2}{D}\, H(X)\right) = N(X), \tag{56} $$
where N ( X ) is the conventional Shannon entropy power and H ( X ) is the Shannon entropy [5].
In connection with Tsallis EP, we might notice one interesting fact, namely by starting from Rényi’s EP (considering RE in nats), we can write
$$ N_q(X) = \frac{1}{2\pi\, q^{q'/q}}\, \exp\!\left(\frac{2}{D}\, I_q(X)\right) = \frac{1}{2\pi\, q^{q'/q}} \left[\int d^{D}x\; F^{q}(\mathbf{x})\right]^{2/(D(1-q))} = \frac{1}{2\pi\, q^{q'/q}}\Big[e_q^{\,S_q^{T}(X)}\Big]^{2/D} = N_q^{T}(X). \tag{57} $$
Here, we have used a simple identity
$$ \left[\int d^{D}x\; F^{q}(\mathbf{x})\right]^{1/(1-q)} = \Big[(1-q)\, S_q^{T}(X) + 1\Big]^{1/(1-q)} = e_q^{\,S_q^{T}(X)}. \tag{58} $$
Thus, we have obtained that Rényi and Tsallis EPs coincide with each other. In particular, Rényi’s EPI (22) can be equivalently written in the form
$$ N_{p/2}^{T}(X)\, N_{q/2}^{T}(Y) \;\geq\; \frac{1}{16\pi^{2}}. \tag{59} $$
Similarly, we could rephrase the generalized Stam inequality (21) and generalized isoperimetric inequality (14) in terms of Tsallis EPs. Though such inequalities are quite interesting from a mathematical point of view, it is not yet clear how they could be practically utilized in the estimation theory as there is no obvious operational meaning associated with Tsallis entropy (e.g., there is no coding theorem for Tsallis entropy). On the other hand, Tsallis entropy is an important concept in the description of entanglement [98]. For instance, Tsallis entropy of order 2 (also known as linear entropy) directly quantifies state purity [63].
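The identity (57) is also easy to confirm numerically. The short sketch below is our own check, using a Laplace PDF as an arbitrary non-Gaussian example: it evaluates both definitions on a grid and shows that they agree order by order.

```python
import numpy as np

x = np.linspace(-20, 20, 40001)
F = 0.5 * np.exp(-np.abs(x))            # standard Laplace PDF
F /= np.trapz(F, x)

def renyi_ep(q):
    Iq = np.log(np.trapz(F**q, x)) / (1.0 - q)                        # Eq. (8)
    return q**(-1.0 / (q - 1.0)) * np.exp(2.0 * Iq) / (2 * np.pi)     # Eq. (13), D = 1

def tsallis_ep(q):
    Sq = (np.trapz(F**q, x) - 1.0) / (1.0 - q)                        # Tsallis entropy, Eq. (49)
    eq = (1.0 + (1.0 - q) * Sq)**(1.0 / (1.0 - q))                    # q-exponential of S_q
    return q**(-1.0 / (q - 1.0)) * eq**2 / (2 * np.pi)                # Eq. (55), D = 1

for q in [0.7, 1.3, 2.0, 3.0]:
    print(q, renyi_ep(q), tsallis_ep(q))                              # the two coincide, Eq. (57)
```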

8. Conclusions

In the first part of this paper, we have introduced the notion of Rényi’s EP. With quantum metrology applications in mind, we carried out our discussion in the framework of estimation theory. In doing so, we have generalized the notion of Fisher information (FI) by using a Rényi entropy version of De Bruijn’s identity. The key role of the escort distribution in this context was highlighted. With Rényi’s EP at hand, we proved the RE-based isoperimetric and Stam inequalities. We have further clarified the role of Rényi’s EP by deriving (through the generalized Stam inequality) a one-parameter family of Rényi EP-based quantum mechanical uncertainty relations. Conventional variance-based URs of Robertson-Schrödinger and Shannon differential entropy-based URs of Hirschman or Białynicki-Birula naturally appear as special cases in this hierarchy of URs. Interestingly, we found that the Tsallis entropy-based EP coincided with Rényi’s EP provided that the order is the same. This might open quite a new, hitherto unknown role for Tsallis entropy in estimation theory.
The second part of the paper was devoted to developing the application of Rényi’s EP for extracting quantum states from incomplete data. This is of particular interest in various quantum metrology protocols. To that end, we introduced the concepts of information distribution and showed how cumulants of the information distribution can be obtained from knowledge of EPs of various orders. With cumulants thus obtained, one can reconstruct the underlying information distribution in a process which we call an information scan. A numerical implementation of this reconstruction procedure was technically realized via Gram-Charlier A and Edgeworth expansion. For an explicit illustration of the information scan, we used the non-Gaussian quantum states—(un)balanced cat states. In this case, it was found that control of the first five significant Rényi EPs gave enough information for a meaningful reconstruction of the information PDF and brought about non-trivial information on the original balanced cat state PDF, such as asymptotic tail behavior or the heights of the peaks.
Finally, let us stress one more point. Rényi EP-based quantum mechanical uncertainty relations (26) basically represent a one-parameter class of inequalities that constrain higher-order cumulants of state distributions for conjugate observables [13]. In connection with this, the following two questions are in order. Assuming one is able to control Rényi EPs of various orders: (i) how do such Rényi EPs constrain the underlying state distribution and (ii) how do the ensuing REPURs restrict the state distributions of conjugate observables? The first question was tackled in this paper in terms of the information distribution and reconstruction theorem. The second question is more intriguing and has not yet been properly addressed. Work along these lines is presently under investigation.

Author Contributions

Conceptualization, P.J. and J.D.; Formal analysis, M.P.; Methodology, P.J. and M.P.; Validation, M.P.; Visualization, J.D.; Writing—original draft, P.J.; Writing—review & editing, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

P.J. and M.P. were supported by the Czech Science Foundation Grant No. 19-16066S. J.D. acknowledges support from DSTL and the UK EPSRC through the NQIT Quantum Technology Hub (EP/M013243/1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ITE	Information-theoretic entropy
UR	Uncertainty relation
RE	Rényi entropy
TE	Tsallis entropy
REPUR	Rényi entropy-power-based quantum uncertainty relation
QM	Quantum mechanics
EP	Entropy power
FI	Fisher information
PDF	Probability density function
EPI	Entropy power inequality
REP	Rényi entropy power
BCS	Balanced cat state
UCS	Unbalanced cat state

Appendix A

Here, we provide an intuitive proof of the generalized De Bruijn identity.
Generalized De Bruijn identity I: By denoting the PDF associated with a random vector { X i } as F ( x ) and the noise PDF as G ( z ) , we might write the LHS of (7) as
$$ \begin{aligned} \left.\frac{d}{d\epsilon}\, I_q\big(X + \sqrt{\epsilon}\, Z\big)\right|_{\epsilon=0} &= \frac{1}{1-q} \left.\frac{d}{d\epsilon}\, \log \int_{\mathbb{R}^D} d\mathbf{y} \left[\int_{\mathbb{R}^D} d\mathbf{x} \int_{\mathbb{R}^D} d\mathbf{z}\; \delta^{(D)}\big(\mathbf{y} - (\mathbf{x} + \sqrt{\epsilon}\,\mathbf{z})\big)\, F(\mathbf{x})\, G(\mathbf{z})\right]^{q}\right|_{\epsilon=0} \\ &= \frac{1}{1-q} \left.\frac{d}{d\epsilon}\, \log \int_{\mathbb{R}^D} d\mathbf{y} \left[\int_{\mathbb{R}^D} d\mathbf{z}\; F(\mathbf{y} - \sqrt{\epsilon}\,\mathbf{z})\, G(\mathbf{z})\right]^{q}\right|_{\epsilon=0} \\ &= \frac{1}{1-q} \left.\frac{d}{d\epsilon}\, \log \int_{\mathbb{R}^D} d\mathbf{y} \left[\int_{\mathbb{R}^D} d\mathbf{z}\left( F(\mathbf{y}) - \sqrt{\epsilon}\, z_i\, \partial_i F(\mathbf{y}) + \tfrac{1}{2}\epsilon\, z_i z_j\, \partial_i\partial_j F(\mathbf{y}) + \mathcal{O}(\epsilon^{3/2}) \right) G(\mathbf{z})\right]^{q}\right|_{\epsilon=0} \\ &= \frac{q}{1-q} \int_{\mathbb{R}^D} d\mathbf{y}\; \rho_q(\mathbf{y})\, \Sigma_{ij}\, \frac{\partial_i\partial_j F(\mathbf{y})}{2 F(\mathbf{y})} = \frac{q}{2} \int_{\mathbb{R}^D} d\mathbf{y}\; \rho_q(\mathbf{y})\, \Sigma_{ij}\, V_i(\mathbf{y})\, V_j(\mathbf{y}) \\ &= \frac{q}{2}\, \operatorname{Tr}\big[\operatorname{cov}_q(\mathbf{V})\,\Sigma\big] = \frac{1}{2q}\, \operatorname{Tr}\big[\operatorname{cov}_q(\mathbf{V}_q)\,\Sigma\big] = \frac{1}{2q}\, \operatorname{Tr}\big(\mathbf{J}_q\,\Sigma\big). \end{aligned} \tag{A1} $$
It should be noted that our manipulations make sense only for q > 0, as only in these cases are the RE and the escort distributions well defined. The right-hand side of (A1) can also be equivalently written as
$$ \frac{1}{2q}\, \mathbb{E}_q\Big[\big((\mathbf{V}_q)_i - \mathbb{E}_q((\mathbf{V}_q)_i)\big)\, \Sigma_{ij}\, \big((\mathbf{V}_q)_j - \mathbb{E}_q((\mathbf{V}_q)_j)\big)\Big] = \frac{1}{2q}\, \mathbb{E}\Big[\big(Z_i - \mathbb{E}(Z_i)\big)\, (\mathbf{J}_q)_{ij}(X)\, \big(Z_j - \mathbb{E}(Z_j)\big)\Big], \tag{A2} $$
where the mean E_q{·} is taken with respect to the escort distribution ρ_q, while E is taken with respect to the noise distribution G.
We note in passing that the conventional De Bruijn identity (6) emerges as a special case when q → 1. For the Gaussian noise vector, we can generalize the previous derivation in the following way:
Generalized De Bruijn’s identity II: Let { X i } be a random vector in R D with the PDF F ( x ) and let { Z i } be an independent Gaussian noise vector with the zero mean and covariance matrix Σ = cov ( Z G ) , then,
$$ \left.\frac{d}{d\Sigma_{ij}}\, I_q\big(X + Z_G\big)\right|_{\Sigma=0} = \frac{q}{1-q} \int_{\mathbb{R}^D} d\mathbf{y}\; \rho_q(\mathbf{y})\, \frac{\partial_i\partial_j F(\mathbf{y})}{2 F(\mathbf{y})} = \frac{1}{2q} \int_{\mathbb{R}^D} d\mathbf{y}\; \rho_q(\mathbf{y})\, (\mathbf{V}_q)_i\, (\mathbf{V}_q)_j = \frac{1}{2q}\, (\mathbf{J}_q)_{ij}. \tag{A3} $$
The right-hand-side is equivalent to
$$ \frac{1}{2q}\, \mathbb{E}_q\Big[\big((\mathbf{V}_q)_i - \mathbb{E}_q((\mathbf{V}_q)_i)\big)\big((\mathbf{V}_q)_j - \mathbb{E}_q((\mathbf{V}_q)_j)\big)\Big]. \tag{A4} $$
To prove the identity (A3), we might follow the same line of reasoning as in (A1). The only difference is that, while in (A1) we had a small parameter ε in which one could expand to all orders of correlation functions and easily perform the differentiation and the limit ε → 0 for any noise distribution (with zero mean), the same procedure cannot be done in the present context for a generic noise distribution. In fact, only the Gaussian distribution has the property that the higher-order correlation functions and their derivatives with respect to Σ_{ij} are small when Σ is small. The latter is a consequence of the Marcinkiewicz theorem [99].

Appendix B

Here, we prove the generalized isoperimetric inequality from Section 2. The starting point is the entropy-power inequality (EPI) [64]: Let X₁ and X₂ be two independent continuous random vectors in ℝ^D with probability densities F^{(1)} ∈ L^q(ℝ^D) and F^{(2)} ∈ L^p(ℝ^D), respectively. Suppose further that λ ∈ (0, 1) and r > 1, and let
$$ q = \frac{r}{(1-\lambda) + \lambda\, r}\,, \qquad p = \frac{r}{\lambda + (1-\lambda)\, r}\,, \tag{A5} $$
then the following inequality holds:
$$ N_r(X_1 + X_2) \;\geq\; \left[\frac{N_q(X_1)}{1-\lambda}\right]^{1-\lambda} \left[\frac{N_p(X_2)}{\lambda}\right]^{\lambda}. \tag{A6} $$
Let us now consider a Gaussian noise vector Z G (independent of X ) with zero mean and covariance matrix Σ . Within this setting, we can write the following EPIs:
$$ N_r(X + Z_G) \;\geq\; \frac{\epsilon^{\lambda}}{(1-\lambda)^{1-\lambda}\, \lambda^{\lambda}}\, \big[N_q(X)\big]^{1-\lambda}, \tag{A7} $$
$$ N_r(X + Z_G) \;\geq\; \frac{\epsilon^{1-\lambda}}{(1-\lambda)^{1-\lambda}\, \lambda^{\lambda}}\, \big[N_p(X)\big]^{\lambda}, \tag{A8} $$
with ε ≡ [det(Σ)]^{1/D}. Here, we have used the simple fact that N_r(Z_G) = [det(Σ)]^{1/D}, irrespective of the value of r.
Let us now fix r and maximize the RHS of inequality (A7) with respect to λ and q, provided we keep the constraint condition (A5). This yields the extremum condition
$$ \lambda = \frac{\epsilon}{N_q(X)}\, \exp\!\left[q\,(1-q)\,\frac{d\log N_q(X)}{dq}\right] + \mathcal{O}(\epsilon^{2}). \tag{A9} $$
With this, q turns out to be
$$ q = r + \epsilon\, \frac{(1-r)\, r}{N_r(X)}\, \exp\!\left[(1-r)\,r\,\frac{d\log N_r(X)}{dr}\right] + \mathcal{O}(\epsilon^{2}), \tag{A10} $$
which in the limit ε → 0 reduces to q = r > 1. The latter implies that p = 1. The result (A10) implies that the RHS of (A7) reads
$$ N_q(X) + \epsilon\, \exp\!\left[(1-r)\,r\,\frac{d\log N_r(X)}{dr}\right]\left[1 - (1-r)\,r\,\frac{d\log N_r(X)}{dr}\right] + \mathcal{O}(\epsilon^{2}). \tag{A11} $$
Had we started with the p index, we would have arrived at an analogous conclusion. To proceed, we stick, without loss of generality, to the inequality (A7). This implies that
$$ \begin{aligned} N_r(X + Z_G) &\geq N_q(X) + \epsilon\, \exp\!\left[(1-r)\,r\,\frac{d\log N_r(X)}{dr}\right]\left[1 - (1-r)\,r\,\frac{d\log N_r(X)}{dr}\right] + \mathcal{O}(\epsilon^{2}) \\ &= N_r(X) + \big[N_q(X) - N_r(X)\big] + \epsilon\, \exp\!\left[(1-r)\,r\,\frac{d\log N_r(X)}{dr}\right]\left[1 - (1-r)\,r\,\frac{d\log N_r(X)}{dr}\right] + \mathcal{O}(\epsilon^{2}) \\ &\geq\; N_r(X) + \epsilon\, \exp\!\left[(1-r)\,r\,\frac{d\log N_r(X)}{dr}\right] + \mathcal{O}(\epsilon^{2}). \end{aligned} \tag{A12} $$
To proceed, we employ the identity log N_r(X) = (2/D)[I_r(X) − I_r(Z_G^{𝟙})], with Z_G^{𝟙} representing a Gaussian random vector with zero mean and unit covariance matrix, and the fact that I_r is a monotonically decreasing function of r, i.e., dI_r/dr ≤ 0 (see, e.g., Ref. [78]). With this, we have
$$ \exp\!\left[(1-r)\,r\,\frac{d\log N_r(X)}{dr}\right] \;\geq\; \exp\!\left[\frac{2(r-1)\,r}{D}\, \frac{d I_r(Z_G^{\mathbb{1}})}{dr}\right] = \exp\!\left[(r-1)\,r\, \frac{d}{dr}\!\left(\frac{\log r}{r-1}\right)\right] = e\, r^{-r/(r-1)} \;\geq\; e^{-2r}. \tag{A13} $$
Consequently, Equation (A12) can be rewritten as
$$ \frac{N_r(X + Z_G) - N_q(X)}{\Sigma_{ij}} \;\geq\; \frac{\epsilon}{\Sigma_{ij}}\, e^{-2r} + \mathcal{O}\big(\epsilon^{2}/\Sigma_{ij}\big). \tag{A14} $$
At this stage, we are interested in the Σ_{ij} → 0 limit. In order to find the ensuing leading-order behavior of ε/Σ_{ij}, we can use L’Hospital’s rule, namely
$$ \frac{\epsilon}{\Sigma_{ij}} \;\simeq\; \frac{d\epsilon}{d\Sigma_{ij}} = \frac{d}{d\Sigma_{ij}} \exp\!\left[\frac{1}{D}\, \operatorname{Tr}(\log\Sigma)\right] = \frac{\epsilon}{D}\, \big(\Sigma^{-1}\big)_{ij}. \tag{A15} $$
Now, we neglect the sub-leading term of order O(ε²/Σ_{ij}) in (A14) and take det(·)^{1/D} on both sides. This gives
$$ \left.\left[\det\!\left(\frac{d N_r(X+Z_G)}{d\Sigma_{ij}}\right)\right]^{1/D}\right|_{\Sigma=0} = \frac{1}{rD}\, N_r(X)\, \big[\det\big(\mathbf{J}_r(X)\big)\big]^{1/D} \;\geq\; \frac{1}{rD}, \tag{A16} $$
or equivalently
$$ N_r(X)\, \big[\det\big(\mathbf{J}_r(X)\big)\big]^{1/D} \;\geq\; 1. \tag{A17} $$
At this stage, we can use the inequality of arithmetic and geometric means to write (note that J r = cov r ( V r ) is a positive semi-definite matrix)
$$ \frac{1}{D}\, \operatorname{Tr}\big(\mathbf{J}_r(X)\big) \;\geq\; \big[\det\big(\mathbf{J}_r(X)\big)\big]^{1/D}. \tag{A18} $$
Consequently, we have
$$ \frac{1}{D}\, N_r(X)\, \operatorname{Tr}\big(\mathbf{J}_r(X)\big) = \frac{1}{D}\, N_r(X)\, J_r(X) \;\geq\; N_r(X)\, \big[\det\big(\mathbf{J}_r(X)\big)\big]^{1/D} \;\geq\; 1, \tag{A19} $$
as stated in Equation (14).

Appendix C

In this appendix, we prove the Generalized Stam inequality from Section 2. We start with the defining relation (13), i.e.,
$$ N_q(Y) = \frac{1}{2\pi}\; q^{1/(1-q)}\; \|G\|_q^{\,2q/[(1-q)D]}, \tag{A20} $$
and consider q ∈ [1/2, 1], so that q/(1 − q) > 0. For the q-norm, we can write
$$ \|G\|_q = \left[\int_{\mathbb{R}^D} d\mathbf{y}\; |\psi_G(\mathbf{y})|^{2q}\right]^{1/q} = \|\psi_G\|_{2q}^{2} \;\geq\; \|\hat{\psi}_G\|_{2r}^{2} = \|\psi_F\|_{2r}^{2} = \|F\|_r. \tag{A21} $$
Here, 2r and 2q are Hölder conjugates, so that r ∈ [1, ∞]. The inequality employed is due to the Hausdorff–Young inequality (which in turn is a simple consequence of the Hölder inequality [64]). We further have
$$ \begin{aligned} \|F\|_r &= \left[\int_{\mathbb{R}^D} d\mathbf{x}\, |\psi_F(\mathbf{x})|^{2r}\right]^{1/r} \;\geq\; \left[\int_{\mathbb{R}^D} d\mathbf{x}\, |\psi_F(\mathbf{x})|^{2r}\, \frac{\partial_i\partial_i\, e^{i\mathbf{a}\cdot\mathbf{x}}}{-a_i^{2}}\right]^{1/r} \\ &= \left\{-r\int_{\mathbb{R}^D} d\mathbf{x}\left[(r-1)\,\rho_r(\mathbf{x})\,V_i(\mathbf{x})\,V_i(\mathbf{x}) + \rho_r(\mathbf{x})\,\frac{\partial_i\partial_i F(\mathbf{x})}{F(\mathbf{x})}\right] e^{i\mathbf{a}\cdot\mathbf{x}}\right\}^{1/r} \left[\int_{\mathbb{R}^D} d\mathbf{x}\, |\psi_F(\mathbf{x})|^{2r}\right]^{1/r} a_i^{-2/r} \\ &\geq\; \left\{-r\int_{\mathbb{R}^D} d\mathbf{x}\left[(r-1)\,\rho_r(\mathbf{x})\,V_i(\mathbf{x})\,V_i(\mathbf{x}) + \rho_r(\mathbf{x})\,\frac{\partial_i\partial_i F(\mathbf{x})}{F(\mathbf{x})}\right] \cos(\mathbf{a}\cdot\mathbf{x})\right\}^{1/r} \left[\int_{\mathbb{R}^D} d\mathbf{x}\, |\psi_F(\mathbf{x})|^{2r}\right]^{1/r} a_i^{-2/r} \\ &\geq\; \left\{-r\int_{V_D} d\mathbf{x}\;\rho_r(\mathbf{x})\,\frac{\partial_i\partial_i F(\mathbf{x})}{F(\mathbf{x})}\,\cos(\mathbf{a}\cdot\mathbf{x})\right\}^{1/r} \left[\int_{\mathbb{R}^D} d\mathbf{x}\, |\psi_F(\mathbf{x})|^{2r}\right]^{1/r} a_i^{-2/r}, \end{aligned} \tag{A22} $$
where a ∈ ℝ^D is an arbitrary x-independent vector, ∂_i ≡ ∂/∂x_i, and V_D denotes a regularized volume of ℝ^D, i.e., a D-dimensional ball of a very large (but finite) radius R. In the first line of (A22), we have employed the triangle inequality |E_r[e^{i a·x}]| ≤ 1 (with equality if and only if a = 0), namely
$$ \left|\int_{\mathbb{R}^D} d\mathbf{x}\; |\psi_F(\mathbf{x})|^{2r}\, e^{i\mathbf{a}\cdot\mathbf{x}}\right| = \left|\int_{\mathbb{R}^D} d\mathbf{x}\; \rho_r(\mathbf{x})\, e^{i\mathbf{a}\cdot\mathbf{x}}\right| \int_{\mathbb{R}^D} d\mathbf{x}\; |\psi_F(\mathbf{x})|^{2r} \;\leq\; \int_{\mathbb{R}^D} d\mathbf{x}\; |\psi_F(\mathbf{x})|^{2r}. \tag{A23} $$
The inequality in the last line holds for a_i = π/(2R) (for all i), since, in this case, cos(a·x) ≥ 0 for all x from the D-dimensional ball. In this case, one may further estimate the integral from below by neglecting the positive integrand (r − 1)ρ_r(x)[V_i(x)]².
Note that (A22) implies
$$ r\, \mathbb{E}_r\!\left[-\frac{\partial_i\partial_i F}{F}\, \cos(\mathbf{a}\cdot\mathbf{x})\right] a_i^{-2} \;\leq\; 1, \tag{A24} $$
with equality if and only if a → 0 (to see this, one should apply L’Hospital’s rule). Equation (A24) allows for writing
$$ \|F\|_r \;\geq\; \frac{r^{\gamma}\, \Big(\mathbb{E}_r\big[-F^{-1}\partial_i\partial_i F\, \cos(\mathbf{a}\cdot\mathbf{x})\big]\Big)^{\gamma}}{a_i^{2\gamma}} \left[\int_{\mathbb{R}^D} d\mathbf{x}\, |\psi_F(\mathbf{x})|^{2r}\right]^{1/r} \;\geq\; \frac{r^{\gamma}\, \Big(\mathbb{E}_r\big[-F^{-1}\partial_i\partial_i F\, \cos(\mathbf{a}\cdot\mathbf{x})\big]\Big)^{\gamma}}{a_i^{2\gamma}}\; \frac{1}{V_D^{\,1-1/r}} = \frac{r^{\gamma}\, \Big(\mathbb{E}_r\big[-F^{-1}\partial_i\partial_i F\, \cos(\mathbf{a}\cdot\mathbf{x})\big]\Big)^{\gamma}}{a_i^{2\gamma}}\; \frac{1}{C_D^{\,1-1/r}\, R^{\,D - D/r}}, \tag{A25} $$
where γ > 0 is some as yet unspecified constant and C_D = π^{D/2}/Γ(D/2 + 1). In deriving (A25), we have used the Hölder inequality
$$ 1 = \int_{\mathbb{R}^D} d\mathbf{x}\; 1\cdot |\psi_F(\mathbf{x})|^{2} \;\leq\; \left[\int_{V_D} d\mathbf{x}\; 1^{\,r'}\right]^{1/r'} \left[\int_{\mathbb{R}^D} d\mathbf{x}\; |\psi_F(\mathbf{x})|^{2r}\right]^{1/r} = V_D^{\,1-1/r} \left[\int_{\mathbb{R}^D} d\mathbf{x}\; |\psi_F(\mathbf{x})|^{2r}\right]^{1/r}. \tag{A26} $$
Here, and also in (A22) and (A25), V_D = C_D R^D denotes the regularized volume of ℝ^D.
As already mentioned, the best estimate of the inequality (A25) is obtained for a → 0. As we have seen, a_i goes to zero as π/(2R), which allows for choosing the constant γ so that the denominator in (A25) stays finite in the limit R → ∞. This implies that γ = D/2 − D/(2r). Consequently, (A25) acquires, in the large-R limit, the form
$$ \|F\|_r \;\geq\; \frac{\big[4(r-1)/r\big]^{D/2 - D/2r}}{\big[\Gamma(D/2+1)\big]^{1-1/r}\; \pi^{\,3D/2 - 3D/2r}}\; \big[(\mathbf{J}_r)_{ii}(X)\big]^{D/2 - D/2r}. \tag{A27} $$
With this, we can write [see Equations (A20)–(A21)]
$$ N_q(Y) \;\geq\; \frac{1}{(2\pi)^{2}}\; q^{1/(1-q)}\; \big[(\mathbf{J}_r)_{ii}(X)\big] \;\geq\; \frac{1}{16\pi^{2}}\; \big[(\mathbf{J}_r)_{ii}(X)\big], \tag{A28} $$
where, in the last inequality, we have used the fact that q^{1/(1−q)} ≥ 1/4 for q ∈ [1/2, 1] and that [Γ(D/2 + 1)]^{2/D} ≥ π/4. As a final step, we employ Equations (A18) and (A28) to write
$$ N_q(Y) \;\geq\; \frac{1}{16\pi^{2}\, D}\, \operatorname{Tr}\big(\mathbf{J}_r(X)\big) \;\geq\; \frac{1}{16\pi^{2}}\, \big[\det\big(\mathbf{J}_r(X)\big)\big]^{1/D}, \tag{A29} $$
which completes the proof of the generalized Stam’s inequality.

References

  1. Bennaim, A. Information, Entropy, Life in addition, the Universe: What We Know Amnd What We Do Not Know; World Scientific: Singapore, 2015. [Google Scholar]
  2. Jaynes, E.T. Papers on Probability and Statistics and Statistical Physics; D. Reidel Publishing Company: Boston, MA, USA, 1983. [Google Scholar]
  3. Millar, R.B. Maximum Likelihood Estimation and Infrence; John Wiley and Soms Ltd.: Chichester, UK, 2011. [Google Scholar]
  4. Leff, H.S.; Rex, A.F. (Eds.) Maxwell’s Demon 2: Entropy, Classical and Quantum Information, Computing; Institute of Physics: London, UK, 2002. [Google Scholar]
  5. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423; 623–656. [Google Scholar] [CrossRef] [Green Version]
  6. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: New York, NY, USA, 1949. [Google Scholar]
  7. Feinstein, A. Foundations of Information Theory; McGraw Hill: New York, NY, USA, 1958. [Google Scholar]
  8. Campbell, L.L. A Coding Theorem and Rényi’s Entropy. Inf. Control 1965, 8, 423–429. [Google Scholar] [CrossRef] [Green Version]
9. Bercher, J.-F. Source Coding with Escort Distributions and Rényi Entropy Bounds. Phys. Lett. A 2009, 373, 3235–3238. [Google Scholar] [CrossRef] [Green Version]
  10. Thurner, S.; Hanel, R.; Klimek, P. Introduction to the Theory of Complex Systems; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
11. Tsallis, C. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World; Springer: New York, NY, USA, 2009. [Google Scholar]
  12. Bialynicki-Birula, I. Rényi Entropy and the Uncertainty Relations. AIP Conf. Proc. 2007, 889, 52–61. [Google Scholar]
  13. Jizba, P.; Ma, Y.; Hayes, A.; Dunningham, J.A. One-parameter class of uncertainty relations based on entropy power. Phys. Rev. E 2016, 93, 060104-1(R)–060104-5(R). [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Maassen, H.; Uffink, J.B.M. Generalized entropic uncertainty relations. Phys. Rev. Lett. 1988, 60, 1103–1106. [Google Scholar] [CrossRef] [PubMed]
  15. Bialynicki-Birula, I.; Mycielski, J. Uncertainty relations for information entropy in wave mechanics. Commun. Math. Phys. 1975, 44, 129–132. [Google Scholar] [CrossRef]
  16. Dang, P.; Deng, G.-T.; Qian, T. A sharper uncertainty principle. J. Funct. Anal. 2013, 265, 2239–2266. [Google Scholar] [CrossRef]
  17. Ozawa, T.; Yuasa, K. Uncertainty relations in the framework of equalities. J. Math. Anal. Appl. 2017, 445, 998–1012. [Google Scholar] [CrossRef] [Green Version]
18. Zeng, B.; Chen, X.; Zhou, D.-L.; Wen, X.-G. Quantum Information Meets Quantum Matter: From Quantum Entanglement to Topological Phases in Many-Body Systems; Springer: New York, NY, USA, 2018. [Google Scholar]
  19. Melcher, B.; Gulyak, B.; Wiersig, J. Information-theoretical approach to the many-particle hierarchy problem. Phys. Rev. A 2019, 100, 013854-1–013854-5. [Google Scholar] [CrossRef]
  20. Ryu, S.; Takayanagi, T. Holographic derivation of entanglement entropy from AdS/CFT. Phys. Rev. Lett. 2006, 96, 181602-1–181602-4. [Google Scholar] [CrossRef] [Green Version]
  21. Eisert, J.; Cramer, M.; Plenio, M.B. Area laws for the entanglement entropy—A review. Rev. Mod. Phys. 2010, 82, 277–306. [Google Scholar] [CrossRef] [Green Version]
22. Pikovski, I.; Vanner, M.R.; Aspelmeyer, M.; Kim, M.S.; Brukner, Č. Probing Planck-Scale Physics with Quantum Optics. Nat. Phys. 2012, 8, 393–397. [Google Scholar] [CrossRef]
  23. Marin, F.; Marino, F.; Bonaldi, M.; Cerdonio, M.; Conti, L.; Falferi, P.; Mezzena, R.; Ortolan, A.; Prodi, G.A.; Taffarello, L.; et al. Gravitational bar detectors set limits to Planck-scale physics on macroscopic variables. Nat. Phys. 2013, 9, 71–73. [Google Scholar]
  24. An, S.; Zhang, J.-N.; Um, M.; Lv, D.; Lu, Y.; Zhang, J.; Yin, Z.-Q.; Quan, H.T.; Kim, K. Experimental test of the quantum Jarzynski equality with a trapped-ion system. Nat. Phys. 2014, 11, 193–199. [Google Scholar] [CrossRef] [Green Version]
  25. Campisi, M.; Hänggi, P.; Talkner, P. Quantum fluctuation relations: Foundations and applications. Rev. Mod. Phys. 2011, 83, 771–791. [Google Scholar] [CrossRef] [Green Version]
26. Erhart, J.; Sponar, S.; Sulyok, G.; Badurek, G.; Ozawa, M.; Hasegawa, Y. Experimental demonstration of a universally valid error-disturbance uncertainty relation in spin measurements. Nat. Phys. 2012, 8, 185–189. [Google Scholar] [CrossRef] [Green Version]
  27. Sulyok, G.; Sponar, S.; Erhart, J.; Badurek, G.; Ozawa, M.; Hasegawa, Y. Violation of Heisenberg’s error-disturbance uncertainty relation in neutron-spin measurements. Phys. Rev. A 2013, 88, 022110-1–022110-15. [Google Scholar] [CrossRef] [Green Version]
  28. Baek, S.Y.; Kaneda, F.; Ozawa, M.; Edamatsu, K. Experimental violation and reformulation of the Heisenberg’s error-disturbance uncertainty relation. Sci. Rep. 2013, 3, 2221-1–2221-5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Dressel, J.; Nori, F. Certainty in Heisenberg’s uncertainty principle: Revisiting definitions for estimation errors and disturbance. Phys. Rev. A 2014, 89, 022106-1–022106-14. [Google Scholar] [CrossRef] [Green Version]
  30. Busch, P.; Lahti, P.; Werner, R.F. Proof of Heisenberg’s Error-Disturbance Relation. Phys. Rev. Lett. 2013, 111, 160405-1–160405-5. [Google Scholar] [CrossRef] [Green Version]
  31. Jizba, P.; Arimitsu, T. The world according to Rényi: Thermodynamics of multifractal systems. Ann. Phys. 2004, 312, 17–59. [Google Scholar] [CrossRef]
  32. Liu, R.; Liu, T.; Poor, H.V.; Shamai, S. A Vector Generalization of Costa’s Entropy-Power Inequality with Applications. IEEE Trans. Inf. Theory 2010, 56, 1865–1879. [Google Scholar]
  33. Costa, M.H.M. On the Gaussian interference channel. IEEE Trans. Inf. Theory 1985, 31, 607–615. [Google Scholar] [CrossRef]
  34. Polyanskiy, Y.; Wu, Y. Wasserstein continuity of entropy and outer bounds for interference channels. arXiv 2015, arXiv:1504.04419. [Google Scholar] [CrossRef] [Green Version]
  35. Bagherikaram, G.; Motahari, A.S.; Khandani, A.K. The Secrecy Capacity Region of the Gaussian MIMO Broadcast Channel. IEEE Trans. Inf. Theory 2013, 59, 2673–2682. [Google Scholar] [CrossRef] [Green Version]
  36. De Palma, G.; Mari, A.; Lloyd, S.; Giovannetti, V. Multimode quantum entropy power inequality. Phys. Rev. A 2015, 91, 032320-1–032320-6. [Google Scholar] [CrossRef] [Green Version]
  37. Costa, M.H. A new entropy power inequality. IEEE Trans. Inf. Theory 1985, 31, 751–760. [Google Scholar] [CrossRef]
  38. Frieden, B.R. Science from Fisher Information: A Unification; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  39. Courtade, T.A. Strengthening the Entropy Power Inequality. arXiv 2016, arXiv:1602.03033. [Google Scholar]
  40. Barron, A.R. Entropy and the Central Limit Theorem. Ann. Probab. 1986, 14, 336–342. [Google Scholar] [CrossRef]
  41. Pardo, L. New Developments in Statistical Information Theory Based on Entropy and Divergence Measures. Entropy 2019, 21, 391. [Google Scholar] [CrossRef] [Green Version]
42. Biró, T.; Barnaföldi, G.; Ván, P. New entropy formula with fluctuating reservoir. Physica A 2015, 417, 215–220. [Google Scholar] [CrossRef] [Green Version]
  43. Bíró, G.; Barnaföldi, G.G.; Biró, T.S.; Ürmössy, K.; Takács, Á. Systematic Analysis of the Non-Extensive Statistical Approach in High Energy Particle Collisions—Experiment vs. Theory. Entropy 2017, 19, 88. [Google Scholar] [CrossRef]
  44. Hanel, R.; Thurner, S. When do generalized entropies apply? How phase space volume determines entropy. Europhys. Lett. 2011, 96, 50003-1–50003-6. [Google Scholar] [CrossRef] [Green Version]
  45. Hanel, R.; Thurner, S.; Gell-Mann, M. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems. Proc. Natl. Acad. Sci. USA 2014, 111, 6905–6910. [Google Scholar] [CrossRef] [PubMed] [Green Version]
46. Burg, J.P. The Relationship Between Maximum Entropy Spectra and Maximum Likelihood Spectra. Geophysics 1972, 37, 375–376. [Google Scholar] [CrossRef]
  47. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  48. Havrda, J.; Charvát, F. Quantification Method of Classification Processes: Concept of Structural α-Entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
49. Frank, T.; Daffertshofer, A. Exact time-dependent solutions of the Renyi Fokker–Planck equation and the Fokker–Planck equations related to the entropies proposed by Sharma and Mittal. Physica A 2000, 285, 351–366. [Google Scholar] [CrossRef]
  50. Sharma, B.D.; Mitter, J.; Mohan, M. On measures of “useful” information. Inf. Control 1978, 39, 323–336. [Google Scholar] [CrossRef] [Green Version]
51. Jizba, P.; Korbel, J. On q-non-extensive statistics with non-Tsallisian entropy. Physica A 2016, 444, 808–827. [Google Scholar] [CrossRef] [Green Version]
52. Jizba, P.; Arimitsu, T. Generalized statistics: Yet another generalization. Physica A 2004, 340, 110–116. [Google Scholar] [CrossRef] [Green Version]
  53. Vos, G. Generalized additivity in unitary conformal field theories. Nucl. Phys. B 2015, 899, 91–111. [Google Scholar] [CrossRef] [Green Version]
  54. Uffink, J. Can the maximum entropy principle be explained as a consistency requirement? Stud. Hist. Phil. Mod. Phys. 1995, 26, 223–261. [Google Scholar] [CrossRef] [Green Version]
  55. Jizba, P.; Korbel, J. Maximum Entropy Principle in Statistical Inference: Case for Non-Shannonian Entropies. Phys. Rev. Lett. 2019, 122, 120601-1–120601-6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Jizba, P.; Arimitsu, T. Observability of Rényi’s entropy. Phys. Rev. E 2004, 69, 026128-1–026128-12. [Google Scholar] [CrossRef] [Green Version]
  57. Elben, A.; Vermersch, B.; Dalmonte, M.; Cirac, J.I.; Zoller, P. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models. Phys. Rev. Lett. 2018, 120, 050406-1–050406-6. [Google Scholar] [CrossRef] [Green Version]
  58. Bacco, D.; Canale, M.; Laurenti, N.; Vallone, G.; Villoresi, P. Experimental quantum key distribution with finite-key security analysis for noisy channels. Nat. Commun. 2013, 4, 2363-1–2363-8. [Google Scholar] [CrossRef] [PubMed]
  59. Müller-Lennert, M.; Dupuis, F.; Szehr, O.; Fehr, S.; Tomamichel, M. On quantum Renyi entropies: A new generalization and some properties. J. Math. Phys. 2013, 54, 122203-1–122203-20. [Google Scholar] [CrossRef] [Green Version]
  60. Coles, P.J.; Colbeck, R.; Yu, L.; Zwolak, M. Uncertainty Relations from Simple Entropic Properties. Phys. Rev. Lett. 2012, 108, 210405-1–210405-5. [Google Scholar] [CrossRef] [Green Version]
61. Mintert, F.; Kuś, M.; Buchleitner, A. Concurrence of Mixed Bipartite Quantum States in Arbitrary Dimensions. Phys. Rev. Lett. 2004, 92, 167902-1–167902-4. [Google Scholar]
  62. Vidal, G.; Tarrach, R. Robustness of entanglement. Phys. Rev. A 1999, 59, 141–155. [Google Scholar] [CrossRef] [Green Version]
  63. Bengtsson, I.; Życzkowski, K. Geometry of Quantum States. An Introduction to Quantum Entanglement; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  64. Jizba, P.; Dunningham, J.A.; Joo, J. Role of information theoretic uncertainty relations in quantum theory. Ann. Phys. 2015, 355, 87–114. [Google Scholar] [CrossRef] [Green Version]
  65. Toranzo, I.V.; Zozor, S.; Brossier, J.-M. Generalization of the de Bruijn Identity to General ϕ-Entropies and ϕ-Fisher Informations. IEEE Trans. Inf. Theory 2018, 64, 6743–6758. [Google Scholar] [CrossRef]
  66. Rioul, O. Information Theoretic Proofs of Entropy Power Inequalities. IEEE Trans. Inf. Theory 2011, 57, 33–55. [Google Scholar] [CrossRef] [Green Version]
67. Dembo, A.; Cover, T.M. Information Theoretic Inequalities. IEEE Trans. Inf. Theory 1991, 37, 1501–1517. [Google Scholar] [CrossRef] [Green Version]
  68. Lutwak, E.; Lv, S.; Yang, D.; Zhang, G. Extensions of Fisher Information and Stam’s Inequality. IEEE Trans. Inf. Theory 2012, 58, 1319–1327. [Google Scholar] [CrossRef]
  69. Widder, D.V. The Laplace Transform; Princeton University Press: Princeton, NJ, USA, 1946. [Google Scholar]
  70. Knott, P.A.; Proctor, T.J.; Hayes, A.J.; Ralph, J.F.; Kok, P.; Dunningham, J.A. Local versus Global Strategies in Multi-parameter Estimation. Phys. Rev. A 2016, 94, 062312-1–062312-7. [Google Scholar] [CrossRef] [Green Version]
  71. Beck, C.; Schlögl, F. Thermodynamics of Chaotic Systems; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  72. Gardner, R.J. The Brunn-Minkowski inequality. Bull. Am. Math. Soc. 2002, 39, 355–405. [Google Scholar] [CrossRef] [Green Version]
  73. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: Hoboken, NJ, USA, 2006. [Google Scholar]
  74. Einstein, A. Theorie der Opaleszenz von homogenen Flüssigkeiten und Flüssigkeitsgemischen in der Nähe des kritischen Zustandes. Ann. Phys. 1910, 33, 1275–1298. [Google Scholar] [CrossRef] [Green Version]
  75. De Palma, G. The entropy power inequality with quantum conditioning. J. Phys. A Math. Theor. 2019, 52, 08LT03-1–08LT03-12. [Google Scholar] [CrossRef] [Green Version]
  76. Ram, E.; Sason, I. On Rényi Entropy Power Inequalities. IEEE Trans. Inf. Theory 2016, 62, 6800–6815. [Google Scholar] [CrossRef] [Green Version]
77. Stam, A. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 1959, 2, 101–112. [Google Scholar] [CrossRef] [Green Version]
78. Rényi, A. Probability Theory; Selected Papers of Alfréd Rényi; Akadémiai Kiadó: Budapest, Hungary, 1976; Volume 2. [Google Scholar]
  79. Cramér, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, NJ, USA, 1946. [Google Scholar]
  80. Wilk, G.; Włodarczyk, Z. Uncertainty relations in terms of the Tsallis entropy. Phys. Rev. A 2009, 79, 062108-1–062108-6. [Google Scholar] [CrossRef] [Green Version]
  81. Schrödinger, E. About Heisenberg Uncertainty Relation. Sitzungsber. Preuss. Akad. Wiss. 1930, 24, 296–303. [Google Scholar]
  82. Robertson, H.P. The Uncertainty Principle. Phys. Rev. 1929, 34, 163–164. [Google Scholar] [CrossRef]
  83. Hirschman, I.I., Jr. A Note on Entropy. Am. J. Math. 1957, 79, 152–156. [Google Scholar] [CrossRef]
  84. D’Ariano, M.G.; De Laurentis, M.; Paris, M.G.A.; Porzio, A.; Solimeno, S. Quantum tomography as a tool for the characterization of optical devices. J. Opt. B 2002, 4, 127–132. [Google Scholar] [CrossRef]
  85. Lvovsky, A.I.; Raymer, M.G. Continuous-variable optical quantum-state tomography. Rev. Mod. Phys. 2009, 81, 299–332. [Google Scholar] [CrossRef]
  86. Gross, D.; Liu, Y.-K.; Flammia, S.T.; Becker, S.; Eisert, J. Quantum State Tomography via Compressed Sensing. Phys. Rev. Lett. 2010, 105, 150401-1–150401-4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Beckner, W. Inequalities in Fourier Analysis. Ann. Math. 1975, 102, 159–182. [Google Scholar] [CrossRef]
  88. Babenko, K.I. An inequality in the theory of Fourier integrals. Am. Math. Soc. Transl. 1962, 44, 115–128. [Google Scholar]
  89. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach: New York, NY, USA, 1993. [Google Scholar]
  90. Reed, M.; Simon, B. Methods of Modern Mathematical Physics; Academic Press: New York, NY, USA, 1975; Volume XI. [Google Scholar]
  91. Wallace, D.L. Asymptotic Approximations to Distributions. Ann. Math. Stat. 1958, 29, 635–654. [Google Scholar] [CrossRef]
92. Zolotarev, V.M. Mellin-Stieltjes Transforms in Probability Theory. Theory Probab. Appl. 1957, 2, 444–469. [Google Scholar] [CrossRef]
  93. Tagliani, A. Inverse two-sided Laplace transform for probability density functions. J. Comp. Appl. Math. 1998, 90, 157–170. [Google Scholar] [CrossRef] [Green Version]
  94. Lukacs, E. Characteristic Functions; Charles Griffin: London, UK, 1970. [Google Scholar]
  95. Pal, N.; Jin, C.; Lim, W.K. Handbook of Exponential and Related Distributions for Engineers and Scientists; Taylor & Francis Group: New York, NY, USA, 2005. [Google Scholar]
  96. Kira, M.; Koch, S.W.; Smith, R.P.; Hunter, A.E.; Cundiff, S.T. Quantum spectroscopy with Schrödinger-cat states. Nat. Phys. 2011, 7, 799–804. [Google Scholar]
  97. Knott, P.A.; Cooling, J.P.; Hayes, A.; Proctor, T.J.; Dunningham, J.A. Practical quantum metrology with large precision gains in the low-photon-number regime. Phys. Rev. A 2016, 93, 033859-1–033859-7. [Google Scholar] [CrossRef] [Green Version]
  98. Wei, L. On the Exact Variance of Tsallis Entanglement Entropy in a Random Pure State. Entropy 2019, 21, 539. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  99. Marcinkiewicz, J. On a Property of the Gauss law. Math. Z. 1939, 44, 612–618. [Google Scholar] [CrossRef]
Figure 1. Probability distribution function of a balanced cat state (BCS) for the quantum-mechanical state's position-like quadrature variable with $\alpha = 5$. This clearly displays an overall non-Gaussian structure; however, since this is a piecewise rearrangement of a Gaussian PDF for all $\alpha$, we have that $N_1(p) = \sigma^2$ for all $p$ and $\alpha$.
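The claim that $N_1(p) = \sigma^2$ for every $\alpha$ rests on the fact that the Shannon entropy power is invariant under piecewise (measure-preserving) rearrangements of a PDF. A minimal numerical sketch of this mechanism is given below; it uses a generic Gaussian cut in two and translated apart, not the actual cat-state quadrature distribution, and all parameter values are arbitrary:

```python
import numpy as np

def diff_entropy(p, dx):
    p = np.clip(p, 1e-300, None)       # avoid log(0); empty bins contribute ~0
    return -np.sum(p * np.log(p)) * dx

sigma, d = 1.0, 5.0
x, dx = np.linspace(-20.0, 20.0, 400001, retstep=True)
gauss = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# Cut the Gaussian at x = 0 and translate the two halves apart by +/- d:
# the PDF values are merely rearranged, so h and N_1 = exp(2h)/(2*pi*e) are unchanged.
split = np.zeros_like(x)
split[x < -d] = np.exp(-(x[x < -d] + d)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
split[x >= d] = np.exp(-(x[x >= d] - d)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

for label, p in (("Gaussian  ", gauss), ("rearranged", split)):
    h = diff_entropy(p, dx)
    print(label, np.sum(p) * dx, np.exp(2 * h) / (2 * np.pi * np.e))   # N_1 -> sigma^2 = 1
```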
Figure 2. Reconstructed information distribution of an unbalanced cat state with $\nu = 0.97$ and $\alpha = 10$. The Edgeworth expansion has been used here to order $n^{-3/2}$, requiring control of the first five REPs. Good convergence of the tail behavior is evident, as well as the location of the singularity corresponding to the second peak; $a_2^{+}$ corresponds to the value of $x$ at the point of intersection with the second (lower) peak of $F(y_0)$.
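For readers who wish to experiment with the reconstruction idea, the sketch below illustrates a generic Edgeworth-type approximation. It is not the paper's cat-state computation (there the expansion coefficients are controlled by the REPs); instead, it approximates the PDF of the standardized sum of $n$ i.i.d. exponential random variables from its low-order cumulants, purely because the exact (gamma) answer is available as a benchmark:

```python
import numpy as np
from scipy.special import eval_hermitenorm
from scipy.stats import gamma

n = 8
g1, g2 = 2.0 / np.sqrt(n), 6.0 / n              # skewness and excess kurtosis of the standardized sum

z = np.linspace(-4.0, 4.0, 9)
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
edgeworth = phi * (1 + g1 / 6 * eval_hermitenorm(3, z)
                     + g2 / 24 * eval_hermitenorm(4, z)
                     + g1**2 / 72 * eval_hermitenorm(6, z))

exact = np.sqrt(n) * gamma.pdf(n + z * np.sqrt(n), a=n)   # PDF of (S_n - n)/sqrt(n), S_n ~ Gamma(n, 1)
for zi, e1, e2 in zip(z, edgeworth, exact):
    print(f"z = {zi:+.1f}   Edgeworth = {e1:.4f}   exact = {e2:.4f}")
```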
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
