Useful Dual Functional of Entropic Information Measures

There are entropic functionals galore, but not simple objective measures to distinguish between them. We remedy this situation here by appeal to Born’s proposal, of almost a hundred years ago, that the square modulus of any wave function |ψ|2 be regarded as a probability distribution P. the usefulness of using information measures like Shannon’s in this pure-state context has been highlighted in [Phys. Lett. A1993, 181, 446]. Here we will apply the notion with the purpose of generating a dual functional [FαR:{SQ}⟶R+], which maps entropic functionals onto positive real numbers. In such an endeavor, we use as standard ingredients the coherent states of the harmonic oscillator (CHO), which are unique in the sense of possessing minimum uncertainty. This use is greatly facilitated by the fact that the CHO can be given analytic, compact closed form as shown in [Rev. Mex. Fis. E 2019, 65, 191]. Rewarding insights are to be obtained regarding the comparison between several standard entropic measures.


Introduction
Dual functionals map ordinary functionals onto real numbers. We are here interested in entropic functionals (EF). There are EFs galore, but no simple objective measures to distinguish between them. We remedy this situation in this work by appealing to Born's proposal, of almost a hundred years ago, that the square modulus of any wave function |ψ|² ought to be regarded as a probability distribution P.
We begin by reminding the reader that the notion of appealing to just a small number of expectation values so as to describe the main features of physical systems underlies statistical mechanics, particularly in its information theory version, called MaxEnt by its creator (Jaynes). Indeed, theoretical developments of the last century led Jaynes to formulate his MaxEnt approach, which is known to yield the least biased representation consistent with the available amount of data [1][2][3][4][5][6][7][8].
For a similar, but purely quantum, treatment in the style of Born, aimed at describing pure quantum states, important advances were made in References [9][10][11][12][13][14][15][16], in which a "quantum entropy functional" S_Q was utilized and the MaxEnt approach profitably employed. As an aside, we mention that, precisely, the MaxEnt approach has become, till now, the main comparison vehicle to ascertain whether a certain entropic measure is more or less apt than another in describing a given scientific phenomenon.
Returning to the pure-states entropic measure S_Q; Q = (q_1, q_2, ..., q_n): the MaxEnt methodology was demonstrated to be very useful in describing both ground and excited states of variegated many-body problems [9][10][11][12][13]. It constituted a reasonable alternative to the celebrated Gutzwiller ansatz [15], and paved the way for rather interesting semi-classical treatments [16]. It has been shown to provide one with many-body wave functions of better quality in several distinct scenarios, like the Hartree-Fock [10], the BCS [11], or the random phase approximation [13] ones. One appeals there to Shannon's logarithmic ignorance measure [4], with a special choice for the probability distribution (PD) p_i in an arbitrary basis |i⟩, in self-explanatory notation.
The Quantum Entropic Functional S_Q

Several important properties of the quantum entropy S_Q were demonstrated in Reference [16], namely:
• S_Q is a true ignorance function, in the sense of Brillouin. For a normalized, discrete probability distribution p_i, for instance, Shannon's measure represents the missing information that one would need to possess so as to be in a "complete information" situation (CIS). In a CIS, just one p_i = 1, while the remaining ones vanish [4,5].
• There is a unique global minimum for S_Q subject to appropriate MaxEnt constraints.
• S_Q obeys an H-theorem.
• Ground state wave functions that maximize S_Q satisfy the virial theorem and the hypervirial ones [17].
We see then that our ignorance measure [4] S_Q exhibits adequate credentials to be seriously considered. The wave function (wf) we will be interested in here is that advanced in References [18,19], which compactly describes in simple analytic terms the coherent states of the harmonic oscillator (HO), advantageously replacing the usual, cumbersome infinite sum.

A Recently Developed Analytic, Compact Expression for Coherent States
Reference [18] introduced for the first time an analytic, compact expression for coherent states, which was a posteriori extensively discussed in Reference [19]. The new compact expression for coherent states advantageously replaces Glauber's customary infinite expansion in terms of the harmonic oscillator eigenstates |n⟩. These ψ_α(x) are eigenfunctions of the annihilation operator a corresponding to the one-dimensional HO, so that a|α⟩ = α|α⟩.
For α = 0 one recovers the wave function (wf) for the HO ground state, which is a coherent state itself. For simplicity, in what follows we set mω/ℏ = 1.
Given a certain operator A, it is certainly much easier to compute ⟨α|A|α⟩ (just one integral) than an infinite number of ⟨n|A|n⟩ (for n phonons) and then sum over them.
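As an illustration of this one-integral convenience, the following sketch is a hypothetical numerical check. It assumes the coherent-state probability density takes the standard displaced-Gaussian form in our units mω/ℏ = 1 (the paper's compact expression itself is not reproduced here) and evaluates ⟨x²⟩ directly from |ψ_α(x)|²:

```python
import math

# Assumed form (units m*omega/hbar = 1): the coherent-state probability
# density is the displaced Gaussian |psi_alpha(x)|^2
#   = pi^(-1/2) * exp(-(x - sqrt(2)*a_R)^2), with a_R the real part of alpha.
def coherent_pd(x, a_R):
    return math.exp(-(x - math.sqrt(2.0) * a_R) ** 2) / math.sqrt(math.pi)

def expectation(f, a_R, lo=-20.0, hi=20.0, n=200001):
    """<A> = integral of f(x) |psi_alpha(x)|^2 dx, by a simple Riemann sum."""
    h = (hi - lo) / (n - 1)
    return sum(f(lo + i * h) * coherent_pd(lo + i * h, a_R)
               for i in range(n)) * h

a_R = 0.7
x2 = expectation(lambda x: x * x, a_R)
# For a Gaussian of variance 1/2 centered at sqrt(2)*a_R: <x^2> = 2*a_R^2 + 1/2
print(x2)  # close to 2*0.7**2 + 0.5 = 1.48
```

One integral per observable, instead of a sum over an infinite tower of ⟨n|A|n⟩ terms.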
Our ψ_α(x), eigenfunctions of the annihilation operator a corresponding to the one-dimensional HO, exhibit a special property that is of the essence for our present purposes: they are states of minimum Heisenberg uncertainty. Actually, this is their principal feature, to such an extent that it becomes their defining trait: a coherent state is one of minimum uncertainty (with regard to canonically conjugate variables). This translates into the fact that their associated quantal entropy S_Q, a measure of ignorance [4], is unique in the sense that the associated quantum ignorance is minimal.
Our central proposal here emerges in this context: associate to any entropic functional S_Q(P) a numerical real value. This value emerges when the P input of S_Q is a coherent state. The idea is viable because, as we will see, the functional's associated numerical value m is independent of the displacement factor α of the coherent state. m is the same for any arbitrary α and thus uniquely characterizes the corresponding dual functional.

Some Different Monoparametric Ignorance Measures
Shannon's logarithmic measure (1) does not possess any parameter. Generalized entropic measures (GEMs) do; the best summary of them is, in our view, Reference [20] (and references therein). They have become quite popular in the last 30 years, being applied to variegated scientific areas of endeavor, from high energy physics to economics. There are many GEMs [21], but we will limit ourselves in this Section to four monoparametric ones.

The Main Mathematical Tool of This Paper
The coherent state PD is, for complex α, a Gaussian that obviously depends only on the real part α_R of α; in our units it reads F_{α_R}(x) = (1/√π) exp[−(x − √2 α_R)²]. Given the probability density F for our coherent state, our fundamental tool is introduced at this point: the dual functional F of a given ignorance measure S(F) (S is a functional of F). In practice, to evaluate F we just compute the functional S(F). We apply it now to our current five ignorance measures, beginning with Shannon's S_S, which turns out to be independent of α_R! This feature is common to all of our five measures, and can be extended to other generalized measures.
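A minimal numerical sketch of this α_R-independence, assuming the displaced-Gaussian coherent-state density above, evaluates Shannon's ignorance measure for several displacements:

```python
import math

SQRT_PI = math.sqrt(math.pi)

# Assumed coherent-state density (units m*omega/hbar = 1):
# F(x) = pi^(-1/2) * exp(-(x - sqrt(2)*a_R)^2)
def F(x, a_R):
    return math.exp(-(x - math.sqrt(2.0) * a_R) ** 2) / SQRT_PI

def shannon(a_R, lo=-25.0, hi=25.0, n=100001):
    """S_S[F] = -integral of F ln F dx, by a simple Riemann sum."""
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        p = F(lo + i * h, a_R)
        if p > 0.0:           # skip underflowed tail points
            s -= p * math.log(p)
    return s * h

values = [shannon(a) for a in (0.0, 0.5, 1.3, -2.0)]
# Every alpha_R yields the same number, (1 + ln(pi))/2, about 1.0724
print(values)
```

The displacement shifts the Gaussian but leaves the value of the functional untouched.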

Important Comment on the Meaning of Equation (18)
Let us consider now the specific real number N_S associated with Shannon's measure. N_S is the minimum amount of ignorance displayed by Shannon's entropy. It could perhaps be thought of as a kind of information theory counterpart of the uncertainty ℏ/2 of quantum mechanics, although the units are different in the two cases. This least ℏ/2 amount of ignorance (with regard to canonically conjugate variables) is physically unavoidable, of course. The Shannon quantum entropic functional S_Q, instead, reflects an altogether distinct ignorance-amount (IA), that pertaining to the Born probability density |ψ(x)|². Can this IA be diminished if one chooses a different entropic measure? This is a seemingly interesting question, which will be answered in the affirmative in the next Section. Let us make the following notion perfectly clear. A given minimum IA for an entropic functional (EF)
• in no way makes an EF "better" or "worse" than another EF,
• but it serves the purpose of classifying EFs, and
• classification is the starting step of any scientific discipline [25].

Ignorance-Amount (IA) for Generalized Entropies
Our integrals over the variable x run always between −∞ and ∞.
Tsallis' entropy is the paradigmatic example [20]. In such a case we obtain a function N_T(q) of q rather than a pure number. N_T(q) depends on the specific value of the parameter q, so that, after a straightforward computation, we get a real number N_T for each q value. This real number arises from applying the super-functional F_{α_R} to the functional S_{T_q}[F_{α_R}]. In Rényi's case [20] we face instead the real numbers N_R(q). Finally, for N_K(q) (Kaniadakis) we find analogous results [22][23][24]. The values of the super-functional F are indeed independent of α_R and are all functions of π [and, for all but Shannon's, also of q]. The π-dependence comes, of course, from integrating a Gaussian function for the coherent states. We insist on the fact that we are facing here pure numbers. No physical units are involved.
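These π-dependent pure numbers can be checked with a short sketch. It assumes the standard textbook definitions of the Tsallis, Rényi, and Kaniadakis measures (the paper's normalizations may differ) together with the Gaussian integral ∫F^s dx = π^((1−s)/2)/√s for the coherent-state density:

```python
import math

# Assumed Gaussian coherent-state density F(x) = pi^(-1/2) exp(-(x - sqrt(2)*a_R)^2),
# for which integral of F^s dx = pi^((1-s)/2) / sqrt(s), independently of a_R.
def I(s):
    return math.pi ** ((1.0 - s) / 2.0) / math.sqrt(s)

# Standard textbook definitions (assumptions, not necessarily the paper's forms):
def N_T(q):   # Tsallis: (1 - integral F^q) / (q - 1)
    return (1.0 - I(q)) / (q - 1.0)

def N_R(q):   # Renyi: ln(integral F^q) / (1 - q)
    return math.log(I(q)) / (1.0 - q)

def N_K(k):   # Kaniadakis: (integral F^(1-k) - integral F^(1+k)) / (2k)
    return (I(1.0 - k) - I(1.0 + k)) / (2.0 * k)

# All three recover Shannon's value (1 + ln pi)/2 in the q -> 1 (k -> 0) limit:
shannon = 0.5 * (1.0 + math.log(math.pi))
print(N_T(1.001), N_R(1.001), N_K(0.001), shannon)
```

Every value is a function of π (and q) only; no α_R and no physical units appear.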
If we carefully inspect the above equations, we will appreciate that, in some cases, Shannon's IA is diminished for the generalized functionals. This will be clearly seen in the graphs displayed below.

Generalizing the α R -Independence to Arbitrary Entropic Measures
Let G_Q be an arbitrary entropic measure that depends upon a set of parameters Q = (q_1, q_2, ..., q_n) and involves the coherent-state probability density F. Forming the functional F_{α_R}(G_Q), we see that the α_R dependence is gone, absorbed by the change of variables that one makes in performing the Gaussian integrations, as above.
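The change-of-variables argument can be illustrated numerically: for any functional of the form G[F] = ∫ g(F(x)) dx, the shift x → x + √2 α_R drops out. The function g below is an arbitrary, hypothetical choice (with g(0) = 0 for integrability), not one of the paper's measures:

```python
import math

# Assumed displaced-Gaussian coherent-state density (units m*omega/hbar = 1):
def F(x, a_R):
    return math.exp(-(x - math.sqrt(2.0) * a_R) ** 2) / math.sqrt(math.pi)

def G(a_R, g, lo=-25.0, hi=25.0, n=100001):
    """G[F] = integral of g(F(x)) dx, by a simple Riemann sum."""
    h = (hi - lo) / (n - 1)
    return sum(g(F(lo + i * h, a_R)) for i in range(n)) * h

g = lambda p: p ** 1.7 - p ** 3.2   # arbitrary smooth g with g(0) = 0
vals = [G(a, g) for a in (0.0, 1.0, -3.5)]
print(vals)  # identical up to quadrature error: the displacement is absorbed
```

Any entropic measure built this way inherits the α_R-independence automatically.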

Results: Four Numerical Quantities Associated with Each of Our Monoparametric Ignorance Measures
These N quantities are (1) N_S, (2) N_T(q), (3) N_R(q), and (4) N_K(q), associated respectively with Shannon, Tsallis, Rényi, and Kaniadakis. We plot and compare them. We see that Shannon's ignorance amount can indeed be diminished by other entropic measures. Figure 1 clearly demonstrates that Shannon's ignorance amount is indeed decreased for q > 1 in both the Tsallis and Rényi instances. Instead, Kaniadakis' functional achieves the same feat for q near zero. In the figure, one curve corresponds to Tsallis' N_T(q), the red one to Rényi's N_R(q), and the blue one to Kaniadakis' N_K(q).
In Figure 2 we compare the ignorance amounts (IA) associated with the Tsallis (horizontal) and Rényi (vertical) entropic forms. The black curve displays N_R(q) (vertical axis) versus N_T(q) (horizontal one). A monotonic dependence is observed, as one should expect from the associated mathematical expressions for these entropic forms. The red curve tells us that the Tsallis IA is smaller than Rényi's one for q > 1, and vice versa for q < 1. Figure 3 makes the same comparison as Figure 2, but now relates (black curve) the Kaniadakis (vertical) to the Rényi (horizontal) functional. Here the black curve depicts the highly non-trivial relationship between them: green for N_R(q), red for N_K(q), and the black curve displays N_R(q) versus N_K(q).

Sharma-Mittal Biparametric Ignorance Measure
It is defined in terms of two parameters r and q as in [26,27], where we have (arbitrarily, for ease of comparison) selected r = 2q − 1. Particular choices, such as r = 2 or r = 0.5, yield corresponding closed forms for N_SM(q, r) = F_{α_R}(S_SM(q,r)). The following graph (Figure 4) depicts our functional in terms of the pair (q, r). In the accompanying Sharma-Mittal plot the independent variable is q: the green curve represents F_{α_R}(S_{T_q}), while the blue one displays F_{α_R}(S_SM(q,2q−1)). The black curve is different: it plots F_{α_R}(S_{T_q}) versus F_{α_R}(S_SM(q,2q−1)).
We appreciate the fact that the Sharma-Mittal measure exhibits a smaller ignorance amount than the Tsallis one for 0 ≤ q < ∞. This is to be expected, since it possesses two free parameters.
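A sketch of this comparison, under assumed standard forms: the Sharma-Mittal entropy S_SM(q, r) = [(∫F^q)^((1−r)/(1−q)) − 1]/(1−r) (which reduces to Tsallis at r = q), the Gaussian integral ∫F^q dx = π^((1−q)/2)/√q, and the text's choice r = 2q − 1. We only probe q > 1 here; the paper's normalization may differ:

```python
import math

# Gaussian coherent-state integral (assumption, units m*omega/hbar = 1):
def I(q):
    return math.pi ** ((1.0 - q) / 2.0) / math.sqrt(q)

def N_T(q):
    """Tsallis ignorance amount, (1 - integral F^q) / (q - 1)."""
    return (1.0 - I(q)) / (q - 1.0)

def N_SM(q, r):
    """Sharma-Mittal, ((integral F^q)^((1-r)/(1-q)) - 1) / (1 - r)."""
    return (I(q) ** ((1.0 - r) / (1.0 - q)) - 1.0) / (1.0 - r)

for q in (1.5, 2.0, 3.0):
    r = 2.0 * q - 1.0
    print(q, N_T(q), N_SM(q, r))  # Sharma-Mittal lies below Tsallis for q > 1
```

With r = 2q − 1 the Sharma-Mittal exponent (1−r)/(1−q) equals 2, so N_SM = N_T · (1 + ∫F^q)/2, which is below N_T whenever ∫F^q < 1, i.e., for q > 1.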

Value of Our Dual Functional When the S Q -Argument Is Not a Coherent State
For the sake of completeness, we show now that the numerical value m of F[S_Q], when we deal with S_Q[F_1] (with F_1 the probability density for the HO first excited state), is larger than that for the same dual functional when the argument of S_Q is a coherent state. This should lend credibility to the statement that coherent states' information measures yield minimum values for the dual functional.
Starting from the standard expression for the first excited state wave function, a straightforward computation shows that (29) acquires a closed form in which C = 0.5772156649..., Euler's constant, appears. Comparing with (18), we see that m_1 > m(coherent state).
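The inequality can be verified numerically, assuming the standard first-excited-state density F_1(x) = (2/√π) x² e^(−x²) in our units mω/ℏ = 1, and comparing against the coherent-state Shannon value (1 + ln π)/2:

```python
import math

SQRT_PI = math.sqrt(math.pi)

# Assumed HO first-excited-state probability density (units m*omega/hbar = 1):
def F1(x):
    return 2.0 * x * x * math.exp(-x * x) / SQRT_PI

def shannon(pd, lo=-25.0, hi=25.0, n=200001):
    """-integral of p ln p dx, by a simple Riemann sum."""
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        p = pd(lo + i * h)
        if p > 0.0:           # p ln p -> 0 as p -> 0, so skipping is safe
            s -= p * math.log(p)
    return s * h

m1 = shannon(F1)
m_coh = 0.5 * (1.0 + math.log(math.pi))
print(m1, m_coh)   # m1 is about 1.343, above m_coh of about 1.072
```

The excited-state ignorance indeed exceeds the coherent-state minimum, consistent with a closed form involving Euler's constant.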

Application to a Statistical Complexity (SC) Measure
Our entropy S_Q can be viewed as the measure of the uncertainty associated with the basis states on which the wave function (wf) is expanded (cf. Equation (3)). We can regard the situation as that of a probabilistic physical process described by the probability distribution p_j = |c_j|², j = 1, ..., N, P ≡ (p_1, p_2, ..., p_N), where P is a vector in a probability space. For S_Q[P] = 0 the situation is that prevailing immediately after performing an experiment (wf "collapse" and minimum ignorance). On the other hand, our ignorance is maximal if S[P] = ln N (uniform probability). These two extreme circumstances of (i) maximum knowledge (or "perfect order") and (ii) maximum ignorance (or maximum "randomness") are regarded by many authors [1][2][3][4][5][6][7][8][9][10][11][28][29][30][31][32][33][34][35] as "trivial" ones. These authors have conceived the idea of devising a "measure" of the "statistical complexity" (SC) contained in P that would vanish in the two extreme situations described above. We will analyze here the quantum SC of which S_Q is a basic ingredient. We will apply the quantifier C to the probability distribution (PD) P = |ψ_α|² corresponding to coherent states. Accordingly, if C = 0, the PD P contains only trivial information; the larger C, the larger the amount of "non-triviality". At this stage of our discussion an important and well known observation emerges: not all the available information measures are equally able to detect non-triviality, i.e., they are not equally 'informative.' This is why we will analyze the PD P above with different C-measures, entailing distinct information measures (IM). In turn, we study two different C-definitions.

Shiner-Davison-Landsberg Complexity Measure for Distinct IM
We appeal to the simplest SC measure T_SDL, devised by Shiner, Davison, and Landsberg (SDL) [36]. We first introduce the ratio H between S_Q and the specific maximum value that S_Q can attain (S_Q^maximal). What are we looking at with this definition in our particular instance? Remember that here P = |ψ_α|² corresponds to coherent states, but all our present entropic measures yield results that are independent of α, as we have seen above. Thus, T_SDL^Shannon = 0, not detecting any salient feature in P. Tsallis' measure, instead, introduces another parameter, namely q, and correspondingly T_SDL^Tsallis(q) yields different values for different q, producing a q-parametrized curve. We plot in Figure 6 T_SDL versus q ∈ [0, ∞) for Shannon's (q = 1), Tsallis' (red, q ≥ 1), and Rényi's (brown, q ≥ 1) measures S_Q^q. As expected, the statistical complexity T vanishes at q = 1, as we have just explained. For the q-entropies it grows first and then stabilizes. The Tsallis curve displays a maximum at q ∼ 2.3, entailing a special q-value ∼ 2.3 of maximum complexity. What do we make of this maximum? That there are salient peculiarities in the distribution P above that the Tsallis SDL measure best detects with this q value. The Rényi measure's detection ability grows with q at first, but eventually its non-triviality sensor stabilizes. Thus, if one is to apply P in computing some physical quantity B, the features of B had better be scrutinized via Tsallis' measure with q = 2.3, which would be the most "informative" one.
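The SDL construction can be sketched in a minimal discrete setting. We assume the simplest SDL form C = H(1 − H), with H = S/S_max the normalized Shannon ignorance; the distributions below are illustrative assumptions. The quantifier vanishes at both "trivial" extremes and is positive in between:

```python
import math

def shannon_discrete(P):
    """Discrete Shannon ignorance, -sum p ln p (0 ln 0 taken as 0)."""
    return -sum(p * math.log(p) for p in P if p > 0.0)

def sdl(P):
    """Simplest SDL-type complexity C = H*(1-H), with H = S/ln(N)."""
    H = shannon_discrete(P) / math.log(len(P))
    return H * (1.0 - H)

N = 8
collapsed = [1.0] + [0.0] * (N - 1)                 # "perfect order": C = 0
uniform = [1.0 / N] * N                             # maximum randomness: C = 0
mixed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01]  # in between: C > 0
print(sdl(collapsed), sdl(uniform), sdl(mixed))
```

Since our coherent-state entropies are α-independent, H is constant in α and only the entropic parameter (q) can make C vary, exactly as discussed above.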

Conclusions
We have in this effort devised a way of classifying the large number of different entropic functionals in vogue nowadays. This should be important in the sense of giving a semblance of order to the pandemonium of entropies galore that are used in a plethora of distinct scientific endeavors. Science always begins with a process of classification [25].
In our classification efforts we were aided by the pure-state entropy S_Q advanced and utilized in References [9][10][11][12][13]. Our pure states are the coherent ones of the HO (CHO); we take advantage of the closed analytical representation of them advanced in References [18,19]. They are unique in the sense of possessing minimum Heisenberg uncertainty. We compute and compare diverse entropic functionals of the CHO probability densities.
Our quantum entropy S_Q represents the information-theoretic ignorance pertaining to the square modulus of ψ(x) when it is regarded as a probability density. As just stated, in this paper ψ_α(x) is an HO coherent state, and for any entropic functional S_Q one encounters a displacement-α-independent, positive real value N(Q). This last fact gives sense to our central proposal, stated above, of associating to any entropic functional a numerical real value. N(Q) is the same for any arbitrary α and thus uniquely characterizes the entropic functional S_Q.
These numbers N(Q) provide a way of listing, and thus classifying, the plethora of entropic functionals extant in the literature. An application to statistical complexity measures (SCM) is made, which uncovers significant differences between two popular SCMs.