Differential entropy and time

We give a detailed analysis of the Gibbs-type entropy notion and its dynamical behavior in the case of time-dependent continuous probability distributions of varied origins, related to classical and quantum systems. The purpose-dependent usage of conditional Kullback-Leibler and Gibbs (Shannon) entropies is explained in the case of non-equilibrium Smoluchowski processes. The markedly different temporal behaviors of the Gibbs and Kullback entropies are confronted. A specific conceptual niche is addressed, where quantum von Neumann, classical Kullback-Leibler and Gibbs entropies can be consistently introduced as information measures for the same physical system. If the dynamics of probability densities is driven by the Schrödinger picture wave-packet evolution, Gibbs-type and related Fisher information functionals appear to quantify nontrivial power transfer processes in the mean. This observation is found to extend to classical dissipative processes and supports the view that the Shannon entropy dynamics provides an insight into physically relevant non-equilibrium phenomena, which are inaccessible in terms of the Kullback-Leibler entropy and typically ignored in the literature.


Introduction
Among numerous manifestations of the concept of entropy in physics and mathematics, the information-theory based entropy methods have been devised to investigate the large time behavior of solutions of various (mostly dissipative) partial differential equations. Shannon, Kullback and von Neumann entropies are typical information theory tools, designed to quantify the information content and possibly the information loss for various classical and quantum systems in a specified state. For quantum systems the von Neumann entropy vanishes on pure states, hence one presumes to have complete information about the state of the system. On the other hand, for pure states the Gibbs-type (Shannon, i.e. differential) entropy gives access to another information-theory level, associated with a probability distribution inferred from an $L^2(\mathbb{R}^n)$ wave packet. Although Shannon or Kullback entropies are interpreted as information measures, it is quite natural to think of entropy as a measure of uncertainty. In view of the profound role played by the Shannon entropy in the formulation of entropic indeterminacy relations [1,2], the term information is used in the present paper mostly in the technical sense, meaning the inverse of uncertainty.
In physics, the notion of entropy is typically regarded as a measure of the degree of randomness and of the tendency of physical systems to become less and less organized. Throughout the paper we shall attribute a more concrete meaning to the term organization, both in the classical and quantum contexts. Namely, we shall pay special attention to quantifying, in terms of suitable entropy functionals, the degree of (de)localization of the probability distribution, and the dynamical behavior of this specific feature (the localization uncertainty) of a physical system.

Notions of entropy
Notions of entropy, information and uncertainty are intertwined and cannot be sharply differentiated. While entropy and uncertainty are, to some extent synonymous, measures of ignorance (lack of information), the complementary notion of information basically quantifies the ability of observers to make reliable predictions about the system, [4,6,7]: the more aware one is about the chances of a concrete outcome, the lower is the uncertainty of this outcome. Normally, the growth of uncertainty is identified with an increase of the entropy, which in turn is interpreted as an information loss; consult e.g. standard formulations of the celebrated Boltzmann H-theorem. Following Ref. [3] let us recall that entropy, be it thermodynamical (Gibbs-Boltzmann), dynamical, von Neumann, Wehrl, Shannon, Renyi, Tsallis or any other conceivable candidate, has an exceptional status among physical quantities. As a derived quantity it does not show up in any fundamental equation of motion. Generically, there is no a priori preferred notion of entropy (perhaps, except for the thermodynamical Clausius case) in physical applications, and its specific choice appears to be purpose-dependent. As an obvious remnant of the standard thermodynamical reasoning, one expects entropy to be a state function of the system (thermodynamical notions of equilibrium or near-equilibrium are implicit). This state connotation is a source of ambiguities, since inequivalent notions of the system state are used in the description of physical systems, be they classical, thermodynamical or quantum, not to mention the rather specialized meaning of state employed in the standard information theory, [4,9,7]. A primitive information-theory system is simply a bit whose two admissible states are the binary digits 1 and 0. Its quantum equivalent is a qubit, whose admissible states are vectors in a two-dimensional Hilbert space, hence an infinity of pure states of a two-level quantum system. The information theory framework, if extended to more complicated systems, employs a plethora of notions of state [4,7]. As very special cases we may mention a phase-space point as the determinative of the state of a classical dynamical system, or the macroscopic notion of a thermodynamical state in its classical and quantum versions, [3]. A symbolic mathematical representation of quantum states in terms of wave vectors and/or density operators is expected to provide an experimentally verifiable "information" about the system. To obtain a catalogue of the corresponding statistical predictions, an a priori choice of suitable observables (and thus measurement procedures) is necessary. Then, a casual interpretation of entropy as a measure of one's uncertainty about measurable properties of a system in a prescribed quantum state may acquire an unambiguous meaning. When adapting the state notion to the Hilbert space language of quantum theory, we realize that normalized wave functions and density operators, which are traditionally supposed to determine the quantum state, allow one to extend the notion of entropy to certain functionals of the state of the quantum system. The von Neumann entropy

$$S(\hat\rho) = -k_B\, \mathrm{Tr}\,(\hat\rho \,\ln \hat\rho) \qquad (1)$$

of a quantum state (i.e. of the density operator $\hat\rho$), though often infinite, is typically related to the degree of departure from purity (i.e. the "mixedness" level) of the state, and is particularly useful while quantifying measurements performed upon finite quantum systems.
For a given density operator $\hat\rho$, the von Neumann entropy is commonly accepted as a reliable measure of the information content (about the departure from purity), to be experimentally extracted from a quantum system in a given state. Only under very specific circumstances, like e.g. in an optimal "quantum experiment" [11,12] which refers to the diagonal density operator (with $p_i$, $1\leq i\leq N$, being its eigenvalues), can the information gain be described in terms of both von Neumann's and the standard Shannon measure of information:

$$S(\hat\rho) = -k_B \sum_{i=1}^N p_i \ln p_i . \qquad (2)$$

Since the von Neumann entropy is invariant under unitary transformations, this result exhibits an invariance under the change of the Hilbert space basis, and the conservation in time for a closed system (when there is no information/energy exchange with the environment). Thus, the Schrödinger dynamics has no impact on the von Neumann encoding of information, see e.g. also [13,14] for a related discussion.
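As a minimal numerical aside (our own sketch, assuming Python with numpy, and setting $k_B = 1$), the coincidence just described can be made concrete: the von Neumann entropy is computed from the spectrum of $\hat\rho$, it vanishes for a one-dimensional projector (pure state), and for a diagonal mixture it reduces to the Shannon sum over the eigenvalues $p_i$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho (k_B = 1)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                 # 0 ln 0 = 0 convention
    return -np.sum(p * np.log(p))

# A pure state (one-dimensional projector) has vanishing entropy ...
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(von_neumann_entropy(np.outer(psi, psi)))      # ~ 0

# ... while a diagonal mixture reproduces the Shannon sum of Eq. (2):
print(von_neumann_entropy(np.diag([0.1, 0.9])))     # ~ 0.325
```

The unitary invariance mentioned above is manifest here: any change of basis leaves the eigenvalues, and hence the computed entropy, intact.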
Pure states have vanishing von Neumann entropy ($S(\hat\rho) = 0$ "for the pure states and only for them", [3]) and are normally considered as irrelevant from the quantum information theory perspective, since "one has complete information" [3] about such states. One may even say that a pure state is an unjustified over-idealization, since otherwise it would constitute e.g. a completely measured state of a system in an infinite Hilbert space, [15]. A colloquial interpretation of this situation is: since the wave function provides a complete description of a quantum system, surely we have no uncertainty about this quantum system, [16]. Note that as a side comment we find in Ref. [15] a minor excuse: this idealization, often employed for position-momentum degrees of freedom, is usually an adequate approximation, to be read as an answer to an objection of Ref. [17]: "although continuous observables such as the position are familiar enough, they are really unphysical idealizations", cf. in this connection [18] for an alternative view.
On the other hand, the classic Shannon entropy is known to be a natural measure of the amount of uncertainty related to measurements of pairs of observables, discrete and continuous on an equal footing, when a quantum system actually is in a pure state. Hence, a properly posed question reveals obvious uncertainties where at first glance we have no uncertainty. The related entropic uncertainty relations for finite and infinite quantum systems have received due attention in the literature, in addition to direct investigations of the configuration space entropic uncertainty/information measure of $L^2(\mathbb{R}^n)$ wave packets, [15], [19]-[42]. The notions of Shannon and von Neumann entropy commonly used in the literature, although coinciding in some cases, definitely refer to different categories of predictions and information measures for physical systems. In contrast to the exclusively quantum concept of von Neumann entropy, the Shannon entropy, quite apart from its purely classical provenance, appears to capture a number of properties of quantum systems which cannot be detected nor described by means of the von Neumann entropy.
Obviously, the Shannon entropy is of no use if one is interested in verifying how much a given mixed quantum state actually is mixed. On the other hand, the von Neumann entropy appears to be useless in the analysis of $L^2(\mathbb{R})$ wave packets and their dynamical manifestations (time-dependent analysis), which are currently within the reach of experimental techniques, [44,43]. It is enough to invoke pure quantum states in $L^2(\mathbb{R}^n)$ and standard position-momentum observables which, quite apart from a hasty criticism [17], still stand for a valid canonical quantization cornerstone of quantum theory, [18]. Those somewhat underestimated facts seem to underlie statements about an inadequacy of the Shannon entropy in the quantum context, [11], while an equally valid statement is that the von Neumann entropy happens to be inadequate. The solution of the dilemma lies in specifying the purpose, see also [12].

Differential entropy
We are primarily interested in the information content of pure quantum states in $L^2(\mathbb{R}^n)$, and thus pursue the following (albeit scope-limited, cf. [44,43] for experimental justifications) view: an isolated system is represented in quantum mechanics by a state vector that conveys statistical predictions for possible measurement outcomes. Consequently, it is the state vector which we regard as an information (alternatively, predictions and uncertainty) resource, and therefore questions like, [45]: how much information is in the state vector, or information about what, may be considered meaningful. Let us emphasize that we do not attempt to define the information content of a physical system as a whole, but rather we wish to set appropriate measures of uncertainty and information for concrete pure states of a quantum system. A particular quantum state should not be misinterpreted to provide a complete description of the corresponding physical system itself, [46]. In fact, when we declare any Schrödinger $\psi$ as the state of a quantum system, we effectively make a statement about the probabilities of obtaining certain results upon measurement of suitable observables, hence we refer to a definite experimental setup. Therefore, to change or influence this state is not quite the same as changing or influencing the system. Our still vague notion of information does not refer to qubits, since we shall basically operate in an infinite dimensional Hilbert space. This does not prohibit a consistent use of information-theory concepts, since the analytic information content of a quantum state vector, at the very least one reduced to a properly handled plane wave, is not merely an abstraction and can be dug out in realistic experiments, including those specific to time-dependent quantum mechanics, [43,44]. On the way one may verify the compliance with quantum theory of a number of well defined properties of the quantum system for which "the only features known before an experiment is performed are probabilities of various events to occur", [11].
In the case of a quantum mechanical position probability density, its analytic form is assumed to arise in conjunction with solutions of the Schrödinger equation. Then, we need to generalize the original Shannon entropy for a discrete set of probabilities to the entropy of a continuous distribution with the density distribution function [4], which is also named the differential entropy, [5,6]. Most of our further discussion will be set in a specific context of quantum position-momentum information/uncertainty measures, where the classical form of the Shannon differential entropy [4] has been used for years in the formulation of entropic versions of Heisenberg-type indeterminacy relations, [19,20,21,25]. The entropic form of indeterminacy relations enters the stage through the Fourier analysis of $L^2(\mathbb{R}^n)$ wave packets, in conjunction with the Born statistical interpretation, hence with $\psi$-induced probability measures in position and momentum space, [19,20]. The experimental connotations pertaining to the notion of uncertainty or indeterminacy are rather obvious, although they do not quite fit the current quantum information idea of a "useful" quantum measurement, [11]. Given the probability density $\rho(x)$ on $\mathbb{R}^n$, we define the differential entropy [6,5,48] as follows:

$$S(\rho) = -\int_{\mathbb{R}^n} \rho(x)\,\ln \rho(x)\; dx . \qquad (3)$$

One may consider a subset $\Gamma\subset\mathbb{R}^n$ to be the support of $\rho$ instead of $\mathbb{R}^n$; this is guaranteed by the convention that the integrand in Eq. (3) vanishes if $\rho$ does. Note a minor but crucial notational difference between $\hat\rho$ and $\rho$. Let us stress that, modulo minor exceptions, throughout the paper we carefully avoid dimensional quantities (all relevant dimensional constants, like the Planck constant, are scaled away), since otherwise the above differential entropy definition may be dimensionally defective and have no physical meaning. An extensive list of differential entropy values for various probability densities can be found in Ref. [5]. Since our paper is supposed to be concerned with physical applications, in Section II we shall analyze the issue of how the differential entropy definition depends on the units used. The related difficulty, often overlooked in the literature, refers to literally taking the logarithm of a dimensional argument, see e.g. [9,10].
In the quantum mechanical context, we shall invoke either position $S(\rho)$ or momentum $S(\tilde\rho)$ information entropies, with no recourse to the classical entropy given in terms of classical phase-space distributions $f(q,p)$ or (Wehrl entropy) their Wigner/Husimi analogues, [3,28]. The notion of entropic uncertainty relations, [2,21,22,25], explicitly relies on the differential entropy input. Namely, the arithmetic sum of the (presumed to be finite) momentum and position information entropies for any normalized $L^2(\mathbb{R}^n)$ wave packet $\psi(x)$ is bounded from below:

$$S(\rho) + S(\tilde\rho) \;\geq\; n\,(1 + \ln\pi) , \qquad (4)$$

where $n$ stands for the configuration space (respectively, momentum space) dimension, [21]. This feature is worth emphasizing, since neither $S(\rho)$ nor $S(\tilde\rho)$ on their own are bounded from below or from above. Nonetheless, both take finite values in physically relevant situations and their sum is always positive.
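The bound (4) can be probed by direct quadrature. The following sketch (hypothetical parameter values; Python with numpy assumed) uses a Gaussian wave packet, for which both densities are Gaussian and the sum of entropies saturates $n(1+\ln\pi)$, while $S(\rho)$ alone turns negative for small $\sigma$:

```python
import numpy as np

# Gaussian packet: position density of variance sigma^2; its Fourier partner
# is Gaussian with variance 1/(4 sigma^2)  (hbar = 1, n = 1).
sigma = 0.1
x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]
rho_x = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
rho_p = sigma * np.sqrt(2.0 / np.pi) * np.exp(-2 * sigma**2 * x**2)

def S(rho):
    """Differential entropy of Eq. (3), with the 0 ln 0 = 0 convention."""
    m = rho > 1e-300
    return -np.sum(rho[m] * np.log(rho[m])) * dx

print(S(rho_x))                                  # negative here: ~ -0.88
print(S(rho_p))                                  # positive:      ~  3.03
print(S(rho_x) + S(rho_p), 1 + np.log(np.pi))    # sum saturates 1 + ln(pi)
```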

Temporal behavior: preliminaries
Since a normalized wave function $\psi$ represents a pure state of a quantum system whose dynamics is governed by the Schrödinger equation, only for stationary states is the differential entropy $S(\rho)$ for sure a conserved quantity. In general, the Schrödinger picture evolution of $\psi(x,t)$, and thus of $|\psi(x,t)|^2 \doteq \rho(x,t)$, may give rise to a nontrivial dynamics of the information entropy associated with the wave packet $\psi(x,t)$. Let us point out that most of the "entropic" research pertains to time-independent situations, like the case of stationary solutions of the Schrödinger equation. Notable exceptions are Refs. [27,33,34]. On general non-quantum grounds an information (differential entropy) dynamics is addressed in Refs. [6,47] and [59]-[66], see also [67,68,69,71,70]. The differential entropy, for a number of reasons [4,6], is said not to quantify the absolute "amount of information carried by the state of the system" (Shannon's uncertainty), unless carefully interpreted. Up to measure preserving coordinate transformations, however, the latter objection loses its force, and this feature gave some impetus to numerically assisted comparative studies of the Shannon information content of different pure states of a given quantum system. Results range from simple atoms to molecules, nuclei, aggregates of particles, many-body Bose and Fermi systems, and Bose-Einstein condensates, see e.g. Refs. [29]-[42]. In these cases, Shannon's differential entropy appears to be a fully adequate measure of the localization degree (which in turn is interpreted as both the uncertainty measure and the information content) of the involved wave packets. The difference of two information entropies (evaluated with respect to the same coordinate system), $S(\rho) - S(\rho')$, is known to quantify an absolute change in the information content when passing from one state of a given system to another. All potential problems with dimensional units disappear in this case, [4,9]. Alternatively, to this end one may invoke the familiar notion of the relative Kullback entropy $-\int_\Gamma \rho\,(\ln\rho - \ln\rho')\; dx$, [4,6], provided $\rho'$ is strictly positive. Cogent recommendations towards the use of the Shannon information measure, plainly against the Kullback option, can be found in Ref. [72]; we shall come to this point later. For arguments just to the opposite see e.g. [73] and also [92,93]. In the present paper, we predominantly invoke the differential entropy. In Section IV we shall describe a number of limitations upon the use of the Kullback entropy. If both entropies can be safely employed for the same physical model (like e.g. for the diffusion type dynamics with asymptotic invariant densities), we establish direct links between the Shannon and Kullback entropy dynamics. In the context of the "information dynamics" $S\rightarrow S(t)$ induced by the time development of probability densities, [6,56], it is the difference $S(t) - S(t')$ between the (presumed to be finite) information entropy values for the time-dependent state of the same physical system, considered at times $t' < t$, which properly captures the net uncertainty/information change in the respective time interval $[t',t]$. Let us mention that the very same strategy underlies the Kolmogorov-Sinai entropy notion for classical dynamical systems, [60,61,48].
In particular, the rate in time of the information entropy, $dS/dt$, is a well defined quantity characterizing the temporal changes (none, gain or loss) in the information content of a given $L^2(\mathbb{R}^n)$ normalized wave packet $\psi(x,t)$ (strictly speaking, of the related probability density). We indicate that, at variance with standard thermodynamical intuitions, the quantum mechanical information (differential) entropy need not be a monotonic function of time. In the course of its evolution it may oscillate, increase or decrease with the flow of time, instead of merely increasing with time or staying constant, as customarily expected. This holds regardless of the intrinsic time reversal property of the quantum dynamics.
To conform with the information theory lore, we need to address an information entropy balance in the course of time, since for an isolated quantum system there is no analog of a thermal reservoir, capable of decreasing (removing) or increasing (injecting) the information entropy of the particular state in which the system actually is. Since there is no quantifiable energy exchange with the environment, the actual purpose and "fate" of the differential entropy need to be investigated. The entropic method we follow in the present paper extends to any formalism operating with general time-dependent spatial probability densities, [47,48,50], even if set out of the explicit thermodynamic context (e.g. the phase space formulation of statistical mechanics). Information entropy and its intrinsic dynamics, like e.g. the information flow and the information entropy production rate, quantify properties of general reversible and/or irreversible dynamical systems. Normally, the microscopic dynamics of such systems is expected to follow well defined trajectories (deterministic paths of a dynamical system or sample paths of a stochastic process), and those may be thought to induce a corresponding dynamics for statistical ensembles of trajectories. It is seldom possible to have sharp knowledge of the initial data $x_0\in X$ for the trajectory dynamics taking place in the phase space $X$ of the system. This imprecision extends to the terminal data ($x_0\rightarrow x_t$ after time $t>0$) as well. Therefore, even if one knows the exact dynamical rules governing the behavior of individual trajectories in time, it is basically impossible to tell more about the system than: if its initial state can be found in a subset $A\subset X$ with a probability prob($x_0\in A$), then after time $t$ one can identify the terminal state of the system $x_t\in X$ in a subset $B\subset X$ with a probability prob($x_t\in B$). An evolution of the derived probability densities eventually may be obtained as a solution of an appropriate partial differential transport equation, [47,48,59,63]. In the present paper we take a more general view and bypass the concept of the underlying trajectory dynamics by emphasizing the role of transport equations and their density solutions. Under such premises, we can safely address the dynamics of uncertainty/information generated by the Schrödinger picture quantum evolution of wave packets in closed (no system-reservoir/environment coupling) quantum mechanical systems. Remark 1: Keeping in touch with the quantum mechanical tradition, let us recall that at least two different "trajectory pictures" can be related to the very same mathematical model based on the Schrödinger wave packet dynamics: deterministic Bohmian paths [49] and random paths of (basically singular) diffusion-type processes, [50,51,52]. Additionally, under suitable restrictions (free motion, harmonic attraction), classical deterministic phase-space paths are supported by the positive Wigner distribution function associated with $\psi(x,t)$ and its spatial marginal distribution. However, none of the above derived trajectory "pictures" deserves the status of an underlying physical "reality" for quantum phenomena, although each of them may serve as an adequate pictorial description of the wave-packet dynamics. Remark 2: In view of Born's statistical interpretation postulate, the Schrödinger picture dynamics sets a well defined transport problem for the probability density $\rho(x,t) \doteq |\psi(x,t)|^2$.
Therefore, one is tempted to resolve such dynamics in terms of (Markovian) diffusion-type processes and their sample paths, see e.g. [50,51,52] and [53,54]. A direct interpretation in terms of random "trajectories" of a Markovian diffusion-type process is here in principle possible under a number of mathematical restrictions, but it is non-unique and not necessarily global in time. Nontrivial boundary data, like the presence of wave function nodes, create additional problems, although the nodes are known to be never reached by the pertinent processes. The main source of difficulty lies in guaranteeing the existence of the process per se, i.e. of a well defined transition probability density function solving a suitable parabolic partial differential equation (Fokker-Planck or Kramers). By adopting milder conditions upon the drift fields (instead of too restrictive growth restrictions, one may simply admit smooth functions) it is possible to construct well defined, albeit non-unique, diffusion-type processes. They are consistent with the time development of a given probability density, see Chap. 3 of Ref. [68] and [52].

Outline of the paper
The paper is structured as follows. We begin by recalling the standard lore of the Shannon information theory in order to attribute an unambiguous meaning to two principal notions, that of information and that of uncertainty. To this end various notions of the state of a model system are invoked and suitable information measures are discussed. Next we turn to the coarse-graining issue and set a connection between the Shannon entropy of a discrete probability measure and the differential entropy of a related (through a suitable limiting procedure) continuous probability density. On the way, we analyze the impact of dimensional units on the entropy definition. We discuss various entropic inequalities for both differential and coarse-grained entropies of quantum mechanical densities. In Section III, the localization degree of probability densities is analyzed by means of so-called entropy powers and of the Fisher information measure. We infer two chain inequalities, Eqs. (51) and (52), which imply that typically the differential entropy is a well behaved quantity, bounded both from below and above. The formalism is general enough to include quantum mechanical densities as merely a special case. In Section IV we set a conceptual framework for time-dependent problems. Since classical dynamical, stochastic and quantum systems (in their pure states) in general give rise to time-dependent probability densities and information entropies, we resolve the exemplary density dynamics in terms of Smoluchowski diffusion processes, albeit with no explicit random path (e.g. random variable) input. The entropy and Fisher information evolution equations are established. Close links between the differential and conditional Kullback entropies are exhibited for Smoluchowski diffusion processes, when asymptotic invariant densities enter the scene. We discuss the compliance of the induced continual power release in the course of the diffusion process with the mean energy conservation law, Eqs. (104) and (108). In Section V we analyze the differential entropy dynamics and the time evolution of the Fisher localization measure in quantum theory, and next exemplify the general formalism for simple analytically tractable cases. The emergent continual power transfer effect is analyzed in connection with the finite energy constraint for the mean energy of quantum motion, Eqs. (114) and (117). Although the uncertainty dynamics scenarios of Sections IV and V are fundamentally different, the respective methodologies nonetheless appear to overlap when restricted to steady states, which support invariant densities for (reversible) stationary diffusion-type processes.
2 Differential entropy: uncertainty versus information

Prerequisites
The original definition of Shannon entropy conveys a dual meaning of both an uncertainty and an information measure. It is useful to interpret those features in a complementary (albeit colloquial) way: the lower the uncertainty of the system or its state, the larger (and more valuable) is the information we acquire as a result of a measurement (observation) upon the system, and vice versa. We know that the result of an observation of any random phenomenon cannot be predicted a priori (i.e. before the observation), hence it is natural to quantify the uncertainty of this phenomenon. Let us consider $\mu = (\mu_1,\ldots,\mu_N)$ as a probability measure on $N$ distinct (discrete) events $A_j$, $1\leq j\leq N$, pertaining to a model system. Assume that $\sum_{j=1}^N \mu_j = 1$ and that $\mu_j = \mathrm{prob}(A_j)$ stands for the probability for an event $A_j$ to occur in a game of chance with $N$ possible outcomes. Let us call $-\log\mu_j$ an uncertainty function of the event $A_j$. Interestingly, we can coin here the name of the ("missing") information function, if we wish to interpret what can be learned via direct observation of the event $A_j$: the less probable is that event, the more valuable (larger) is the information we would retrieve through its registration. Then, the expression

$$S(\mu) = -\sum_{j=1}^N \mu_j\,\log \mu_j \qquad (5)$$

stands for the measure of the mean uncertainty of the possible outcome of the game, [6], and at the same time quantifies the mean information which is accessible from an experiment (i.e. actually playing the game). The base of the logarithm is for a while taken equal to 2, but we recall that $\log_2 b\cdot\ln 2 = \ln b$, with $\ln 2 \simeq 0.6931$ and the base $e\simeq 2.71828$. Thus, if we identify the event values $A_1,\ldots,A_N$ with labels for particular discrete "states" of the system, we may interpret Eq. (5) as a measure of uncertainty of the "state" of the system, before this particular "state" is chosen out of the set of all admissible ones. This well conforms with the standard meaning attributed to the Shannon entropy: it is a measure of the degree of ignorance concerning which possibility (event $A_j$) may hold true in the set $\{A_1, A_2,\ldots,A_N\}$ with a given a priori probability distribution $\{\mu_1,\ldots,\mu_N\}$.
Notice that

$$0 \;\leq\; S(\mu) \;\leq\; \log N \qquad (6)$$

ranges from certainty (one entry whose probability equals 1 and thus no information is missing) to maximum uncertainty, when the uniform distribution $\mu_j = 1/N$ for all $1\leq j\leq N$ occurs. In the latter situation, all events (or measurement outcomes) are equiprobable and $\log N$ sets the maximum for a measure of the "missing information". By looking at all intermediate levels of randomness allowed by the inequalities Eq. (6), we realize that the lower the Shannon entropy, the less information about the "states" of the system we are missing, i.e. the more information about the system we have. If the Shannon entropy increases, we actually lose information available about the system. Consequently, the difference between two uncertainty measures can be interpreted as an information gain or loss.

Events, states, microstates and macrostates
The Boltzmann formula $S = k_B\ln W = -k_B\ln P$ links the entropy of the (thermodynamical) system with the probability $P = 1/W$ that an appropriate "statistical microstate" can occur. Here, $W$ stands for the number of all possible (equiprobable) microstates that imply the prescribed macroscopic (e.g. thermodynamical) behavior corresponding to a fixed value of $S$. It is instructive to recall that if $P$ is the probability of an event, i.e. of a particular microstate, then $-\ln P$ (actually, with $\log_2$ instead of $\ln$) may be interpreted [8] as "a measure of information produced when one message is chosen from the set, all choices being equally likely" ("message" to be identified with a "microstate"). Another interpretation of $-\ln P$ is that of a degree of uncertainty in the trial experiment, [7]. As a pedestrian illustration let us invoke the classic example of a molecular gas in a box which is divided into two halves denoted "1" and "2". We allow the molecules to be in one of two elementary states: $A_1$ if a molecule can be found in the half-box "1", and $A_2$ if it is placed in the other half "2". Let us consider a particular $n$-th macrostate of a molecular gas comprising a total of $G$ molecules in a box, with $n$ molecules in the state $A_1$ and $G-n$ molecules in the state $A_2$. The total number of ways in which $G$ molecules can be distributed between the two halves of the box in this prescribed macrostate, i.e. the number of the corresponding microstates, reads

$$W(n) = \frac{G!}{n!\,(G-n)!} . \qquad (7)$$

Here, $P(n) = 1/W(n)$ is the probability with which any of the microstates may occur in a system bound to "live" in the given macrostate. The maximum of $W(n)$, and thus of $k_B\ln W(n)$, corresponds to $n = G - n = G/2$, i.e. to equal populations of the two halves.
To get a better insight into the information-uncertainty intertwine, let us consider an ensemble of finite systems which are allowed to appear in any of $N>0$ distinct elementary states. The meaning of "state" is left unspecified, although an "alphabet" letter may be invoked for convenience. Let us pick up randomly a large sample composed of $G\gg 1$ single systems, each one in a certain (randomly assigned) state. We record the frequencies $n_1/G \doteq p_1,\ldots,n_N/G \doteq p_N$ with which the elementary states of the type $1,\ldots,N$ actually occur. This sample is a substitute for a "message" or a "statistical microstate" in the previous discussion. Next, we identify the number of all possible samples of that fixed size $G$ which would show up the very same statistics $p_1,\ldots,p_N$ of elementary states. We interpret those samples to display the same "macroscopic behavior". It was the major discovery due to Boltzmann, see e.g. [4], that the number $W$ of relevant "microscopic states" can be approximately read out from each single sample and is directly related to the introduced a priori probability measure $\mu_1,\ldots,\mu_N$, with the identification $p_i \doteq \mu_i$ for all $1\leq i\leq N$, by the Shannon formula:

$$\frac{1}{G}\,\ln W \;\simeq\; -\sum_{i=1}^N \mu_i\,\ln\mu_i \;=\; S(\mu) . \qquad (8)$$

On the basis of this formula, we can consistently introduce $S(\mu)$ as the mean information per ($i$-th) elementary state of the $N$-state system, as encoded in a given sample whose size $G\gg 1$ is sufficiently large, [9].
To exemplify the previous considerations, let us consider $N = 2$. It is instructive to compare the uncertainty level (alternatively, the information content) $S(\mu)$ for the two-state system, if we take 2 as the logarithm base instead of $e$. Then, Eq. (8) would refer to the binary encoding of a message (string) with $G$ entries. We find that $\mu_1 = 0.1$ and $\mu_2 = 0.9$ yield $S(\mu) = 0.469$. Analogously, 0.2 and 0.8 imply 0.7219, while 0.3 and 0.7 give 0.8813. Next, 0.4 and 0.6 imply 0.971, and we reach the obvious maximum $S = 1$ for $\mu_1 = \mu_2 = 0.5$. An instructive example of the "dog-flea" model workings, with $G = 50$ fleas jumping back and forth between their "states of residence" on dog "1" or dog "2", can be found in Ref. [55]. However, in a number of specific cases, the evolution of the Gibbs entropy may spring some surprises if the "entropy growth dogma" is uncritically accepted, see e.g. the examples in [55,56] and the discussion of Refs. [57,58]. By pursuing Shannon's communication theory track, [4], we can as well identify states of the model system with "messages" (strings) of an arbitrary length $G>0$ which are entirely composed by means of the prescribed $N$ "alphabet" entries (e.g. events or alphabet letters $A_j$ with the previous probability measure $\mu$). Then, Eq. (8) may be interpreted as a measure of information per alphabet letter, obtained after a particular message (string $\equiv$ state of the model system) has been received or measured, cf. our discussion preceding Eq. (8). In this case, the Shannon entropy interpolates between a maximal information (one certain event) and a minimal information (uniform distribution), cf. Eq. (6).
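The listed two-state values follow from a one-line computation; the sketch below (Python, numpy assumed) reproduces them with the base-2 logarithm, i.e. in bits:

```python
import numpy as np

def shannon_entropy_bits(mu):
    """S(mu) = -sum_j mu_j log2(mu_j): mean information per two-state event."""
    mu = np.asarray(mu, dtype=float)
    mu = mu[mu > 0]                   # 0 log 0 = 0 convention
    return -np.sum(mu * np.log2(mu))

for p in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(p, round(shannon_entropy_bits([p, 1.0 - p]), 4))
# 0.1 -> 0.469, 0.2 -> 0.7219, 0.3 -> 0.8813, 0.4 -> 0.971, 0.5 -> 1.0
```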

Bernoulli scheme and normal distribution
Let us consider again a two-state system where $A_1$ appears with probability $\mu_1 = p$ while $A_2$ appears with probability $\mu_2 = 1-p$. The probability with which $A_1$ would have appeared exactly $n$ times, in a series of $G$ repetitions of the two-state experiment, is given by the Bernoulli formula:

$$P_n = \binom{G}{n}\; p^n\,(1-p)^{G-n} , \qquad (9)$$

where, in view of the Newton formula for the binomial $(p+q)^G$, after setting $q = 1-p$ we arrive at $\sum_{n=0}^G P_n = 1$. Since the number $n$ of successes in the Bernoulli scheme is restricted only by the inequalities $0\leq n\leq G$, what we have actually defined is a probability measure $\mu = \{P_0, P_1,\ldots,P_G\}$ for $G+1$ distinct random events denoted $B_0, B_1,\ldots,B_G$. Accordingly, we can introduce a random variable $B$ and say that it has the Bernoulli distribution, if $B$ takes the values $n = 0,1,\ldots,G$ with the Bernoulli probabilities $P_n$ of Eq. (9) for all $n$. A random event $B_n$ is interpreted as "taking the value $n$ in the Bernoulli scheme". Let us denote $P(B=k) \doteq P(B_k) = P_k$. We know that $P(B<n) = \sum_{k<n} P_k$. The mean value is $\langle B\rangle = Gp$ and the variance equals $Gp(1-p)$. The local de Moivre-Laplace theorem tells us that for large values of $G$ the binomial distribution can be approximated by the normal (Gauss) one:

$$P_n \;\simeq\; \frac{1}{\sqrt{2\pi G p(1-p)}}\; \exp\left[-\,\frac{(n-Gp)^2}{2Gp(1-p)}\right] . \qquad (10)$$

At this point we shall take an inspiration from Ref. [74] and relate the Bernoulli "success" probabilities with probabilities of locating a particle in an arbitrary interval on a line $\mathbb{R}$. Namely, let us first consider an interval of length $L$: $[0,L]\subset\mathbb{R}$. With $G\gg 1$, we define an interval grating unit to be $r = L/G$ and next rewrite Eq. (10) as a probability per bin of length $r\ll 1$:

$$\rho(x_n) \;\doteq\; \frac{P_n}{r} \;=\; \frac{1}{\sqrt{2\pi\sigma^2}}\; \exp\left[-\,\frac{(x_n - x_0)^2}{2\sigma^2}\right] , \qquad (11)$$

with $x_n = nr$, $x_0 = Gpr$ and $\sigma^2 = Gr^2 p(1-p)$. Obviously, $\rho(x_n)$ is not a probability on its own, while $r\cdot\rho(x_n) = P_n$ is the probability to find the particle in the $n$-th interval of length $r$, out of the admitted number $G = L/r$ of bins.
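The quality of the de Moivre-Laplace approximation (10) is easy to inspect numerically; a small sketch (Python, with scipy assumed available, and illustrative values of $G$ and $p$) compares the exact Bernoulli weights of Eq. (9) with their Gaussian surrogate:

```python
import numpy as np
from scipy.stats import binom

G, p = 1000, 0.3
n = np.arange(0, G + 1)
P_exact = binom.pmf(n, G, p)                       # Bernoulli weights, Eq. (9)
mean, var = G * p, G * p * (1 - p)
P_gauss = np.exp(-(n - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print(np.max(np.abs(P_exact - P_gauss)))           # uniformly small for large G
```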

Coarse-graining
For a given probability density function on $\mathbb{R}$ we can adopt the coarse-graining procedure, [47], giving account of the imprecision with which a spatial position $x$ can be measured or estimated. Thus, if compared with the previous Bernoulli $\rightarrow$ Gauss argument, we shall proceed in reverse, from density functions to approximating them by piece-wise constant, histogram-type discontinuous functions. We need to partition the configuration space $\mathbb{R}$ into a family of disjoint subsets (intervals) $\{B_k\}$, denoting $\mu(B_k) \doteq \mu_k$ the length of the $k$-th interval, where $\mu$ stands for the Lebesgue measure on $\mathbb{R}$. The probability that a Gaussian random variable with the density $\rho$ takes its value $x$ in an interval $B_k$ equals $\mathrm{prob}(B_k) = \int_{B_k}\rho(x)\,dx$. The probability density $\rho$ coarse-grained with respect to the partition $\{B_k\}$ reads:

$$\rho_B(x) = \sum_k \frac{\mathrm{prob}(B_k)}{\mu_k}\; 1_k(x) , \qquad (12)$$

where $1_k(x)$ is the indicator (characteristic) function of the set $B_k$, which equals 1 for $x\in B_k$ and vanishes otherwise.
By construction, $\int_{\mathbb{R}}\rho_B\,dx = \sum_k \mathrm{prob}(B_k) = 1$, where an interchange of the summation with integration is presumed to be allowed. The coarse grained entropy is the Shannon entropy of the discrete probability measure $\{\mathrm{prob}(B_k)\}$:

$$S(\rho_B) \;\doteq\; -\sum_k \mathrm{prob}(B_k)\;\ln\,\mathrm{prob}(B_k) . \qquad (13)$$

By invoking the arguments of the previous subsection, we choose a grating unit $\mu_k = r\ll 1$ for all $k$ and notice that $\langle\rho\rangle_k \doteq \mathrm{prob}(B_k)/r$ is the mean value of $\rho$ over $B_k$. In view of the twice "triple half-width" spatial localization property of the Gauss function, we can safely assert that an interval of length $L\sim 6\sigma$ about $x_0$ may be used in the coarse-graining procedure, instead of the full configuration space $\mathbb{R}$. Effectively, we arrive at a finite partition on $L$ with the resolution $L/G = r$, and then we can safely invoke the definition $p_k \doteq P_k = r\cdot\rho(x_k)$, in conformity with Eq. (11). For the coarse grained probability density the coarse grained Shannon entropy, whose relationship to the original differential entropy is of major interest, becomes

$$S(\rho_B) = -\sum_k p_k \ln p_k = -\sum_k \left[r\,\rho(x_k)\right]\,\ln\left[r\,\rho(x_k)\right] , \qquad (14)$$

with the standard interpretation of the mean information per bin of length $r$. Here, if the partition (grating) unit $r$ is small, one arrives at the approximate formula (we admit $|\ln r|\gg 1$):

$$S(\rho_B) \;\simeq\; -\ln r \,-\, \int_{\mathbb{R}}\rho\,\ln\rho\; dx \;=\; -\ln r + S(\rho) , \qquad (15)$$

with the obvious proviso that $S(\rho_B)\geq 0$, and hence, in view of $S(\rho)\geq\ln r$, we need to maintain a proper balance between $\sigma$ and the chosen grating level $r$. Remark 5: It is instructive to validate the above approximation for the choice of $r = 10^{-6}$, hence $\ln r = -6\ln 10 \simeq -13.82$. We have $S(\rho) = (1/2)\ln(2\pi e\sigma^2) \simeq 0.92 + \ln\sigma$. By setting $\sigma = (1/2)\cdot 10^{-3}$ we realize that $S(\rho) \simeq 0.92 - \ln 2 - 3\ln 10 \simeq -6.7$, hence about half of the allowed lower bound $\ln r$ in absolute value, i.e. safely above it. The approximate value of the coarse grained entropy is here $S(\rho_B) \simeq 7.1$ and stands for the mean information per partition bin in the "string" composed of $G$ bins.
In view of Eq. (15), as long as we keep in memory the strictly positive grating unit $r$, there is a well defined "regularization" procedure (add $-\ln r$ to $S(\rho)$) which relates the coarse grained entropy with a given differential entropy. In a number of cases it is computationally simpler to evaluate the differential entropy, and then to extract the (otherwise computationally intractable) coarse grained entropy. Notice that one cannot take a naive zero grating limit in Eq. (15), although $r$ may be arbitrarily small. Indeed, for e.g. the Gaussian densities, the differential entropy takes finite values and this would suggest that the coarse grained entropy might be arbitrarily large. This obstacle does not arise if one proceeds with some care. One should remember that we infer the coarse grained entropy from the differential entropy exactly at the price of introducing the resolution unit $r$. The smaller $r$ is, the better is the approximation of the differential entropy by the second term on the right-hand-side of Eq. (15), but $-\ln r$ needs to remain a finite entry in Eq. (15).
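The numbers quoted in Remark 5 can be recovered by an explicit histogram computation; in the sketch below (Python, numpy; the truncation to the $6\sigma$ interval and the bin placement are our own choices) the directly summed coarse grained entropy agrees with the regularized value $S(\rho) - \ln r$:

```python
import numpy as np

r, sigma = 1e-6, 0.5e-3            # grating unit and standard deviation of Remark 5
edges = np.arange(-3 * sigma, 3 * sigma, r)        # bins covering the 6 sigma interval
centers = 0.5 * (edges[:-1] + edges[1:])
rho = np.exp(-centers**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
p_k = rho * r                      # probability per bin, Eq. (11)
p_k = p_k[p_k > 0] / np.sum(p_k)   # renormalize the truncated tails away
S_B = -np.sum(p_k * np.log(p_k))   # coarse grained entropy, Eq. (13)
S_diff = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(S_B, S_diff - np.log(r))     # both ~ 7.1, cf. Eq. (15)
```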
The elementary bounds $0\leq S(\rho_B)\leq\ln G$, valid for any discrete probability measure on $G$ bins, extend to all approximately equal entries in Eq. (15). Since $-\ln r = -\ln L + \ln G$, we arrive at new inequalities:

$$\ln r \;\leq\; S(\rho) \;\leq\; \ln L , \qquad (16)$$

where $-\sum_k [r\rho(x_k)]\,\ln[\rho(x_k)] \Rightarrow -\int\rho\ln\rho\,dx$ as $r\rightarrow 0$, and possibly $L\rightarrow\infty$. A conclusion is that the differential entropy is unbounded both from below and from above. In particular, $S(\rho)$ may take arbitrarily low negative values, in plain contrast to its coarse grained version $S(\rho_B)$, which is always nonnegative. We can be more detailed in connection with the approximations employed in Eqs. (16) and (15). Actually, the right-hand-side of Eq. (15) sets a lower bound for the coarse-grained entropy $S(\rho_B)$. Let us recall that the value of the convex function $x\ln x$ at the mean value $\langle x\rangle$ of its argument does not exceed the mean value $\langle x\ln x\rangle$ of the function itself. Then, in the notation following Eq. (12), we can directly employ an averaging over $B_k$:

$$\langle\rho\rangle_k\,\ln\langle\rho\rangle_k \;\leq\; \langle\rho\,\ln\rho\rangle_k . \qquad (17)$$

Taking the minus sign, executing the summations with respect to $k$ (convergence of the series being presumed) and using Eqs. (15) and (16), we get:

$$S(\rho_B) \;\geq\; -\ln r + S(\rho) , \qquad (18)$$

as a complement to Eq. (17), see e.g. also [27,22]. Equations (16) and (18) allow, with suitable reservations, to extend the standard information/uncertainty measure meaning from coarse-grained entropies to differential entropies per se. Namely, the difference of two coarse grained entropies, corresponding to the same partition but to different (coarse grained) densities, may be adequately approximated by the difference of the corresponding differential entropies:

$$S(\rho_B) - S(\rho'_B) \;\simeq\; S(\rho) - S(\rho') . \qquad (19)$$

An exponential density on the positive half-line $\mathbb{R}^+$ is known to maximize the differential entropy among all $\mathbb{R}^+$ density functions with the first moment fixed at $1/\lambda > 0$. The density has the form $\rho(x) = \lambda\exp(-\lambda x)$ for $x\geq 0$, and it vanishes for $x<0$. Its variance is $1/\lambda^2$. The differential entropy of the exponential density reads $S(\rho) = 1 - \ln\lambda$.
In the notation of the previous subsection, let us coarse-grain this density at the particular value $\lambda = 1$ and then evaluate the corresponding entropy as follows. We choose $r\ll 1$, $p_k \simeq \rho(x_k)\cdot r$ with $x_k = kr$, where $k$ is a natural number. One can directly verify that for small $r$ we can write $r\simeq 1-\exp(-r)$, and hence consider $p_k \simeq [1-\exp(-r)]\exp(-kr)$, such that $\sum_{k=0}^\infty p_k = 1$, with a well known quantum mechanical connotation. Namely, let us set $r = h\nu/k_B T$; notice that, for the first time in the present paper, we explicitly invoke dimensional units, in terms of which the dimensionless constant $r$ is defined. We readily arrive at the probability of the $k\nu$-th oscillator mode in a thermal bath at the temperature $T$. Our assumption $r\ll 1$ corresponds to the low frequency oscillator problem with $h\nu\ll k_B T$, see e.g. [9]. Clearly, we have for the mean number of modes

$$\langle k\rangle = \sum_{k=0}^\infty k\,p_k = \frac{1}{\exp(r) - 1} ,$$

which implies the familiar Planck formula

$$\langle E\rangle = h\nu\,\langle k\rangle = \frac{h\nu}{\exp(h\nu/k_B T) - 1} .$$

For the variance we get

$$\langle k^2\rangle - \langle k\rangle^2 = \frac{\exp(r)}{[\exp(r) - 1]^2} .$$

The standard Shannon entropy of the discrete probability distribution $p_k$, $k\in\mathbb{N}$, reads

$$-\sum_{k=0}^\infty p_k\,\ln p_k \;=\; r\,\langle k\rangle - \ln[1-\exp(-r)] \;\simeq\; 1 - \ln r \quad \mathrm{for}\ r\ll 1 ,$$

so that, in view of $S(\rho_B)\simeq -\ln r + 1$ and $S(\rho) = 1$ for $\rho(x) = \exp(-x)$, we clearly have $S(\rho_B) \simeq -\ln r + S(\rho)$, as expected. Let us point out our quite redundant sophistication: in fact, we could have skipped all the above reasoning and taken Eq. (15) as the starting point to evaluate $S(\rho_B)$ for the coarse grained exponential density, at the assumed resolution $r = h\nu/k_B T\ll 1$, with the obvious result $S(\rho_B)\simeq 1 - \ln(h\nu/k_B T)$. Remark 6: An analogous procedure can be readily adopted to analyze phenomenological histograms of energy spectra for complex systems, with an obvious extension of its range of validity to spectral properties of the classically chaotic case. In particular, semiclassical spectral series of various quantum systems admit an approximation of spacing histograms by continuous distributions on the positive half-line $\mathbb{R}^+$. Although a full fledged analysis is quite complicated, one may invoke quite useful, albeit approximate, formulas for adjacent level spacing distributions. The previously mentioned exponential density corresponds to the so-called Poisson spectral series. In the family of radial densities of the form $\rho_N(x) = \frac{2}{\Gamma(N/2)}\,x^{N-1}\exp(-x^2)$, where $N>1$ and $\Gamma$ is the Euler gamma function, [79], the particular cases $N = 2, 3, 5$ correspond to the generic level spacing distributions, based on the exploitation of the Wigner surmise. The respective histograms plus their continuous density interpolations are often reproduced in "quantum chaos" papers, see for example [80].
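The oscillator-mode arithmetic above is that of a geometric distribution and is directly checkable; a sketch (Python, numpy, an illustrative value of $r = h\nu/k_B T$) confirms both the mean occupation and the entropy estimate $S(\rho_B)\simeq 1 - \ln r$:

```python
import numpy as np

r = 1e-3                                 # r = h nu / (k_B T) << 1, illustrative
k = np.arange(0, 200000)
p_k = (1 - np.exp(-r)) * np.exp(-k * r)  # geometric (Planck-type) weights
print(np.sum(k * p_k), 1 / (np.exp(r) - 1))      # mean mode number
S = -np.sum(p_k * np.log(p_k))
print(S, 1 - np.log(r))                  # Shannon entropy ~ 1 - ln r
```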

Spatial coarse graining in quantum mechanics
The coarse grained entropy attributes the "mean information per bin of length $r$" to systems described by continuous probability densities and their differential entropies. Effectively, one has a tool which allows to accompany the coarse grained density histogram (of $p_k$ in the $k$-th bin on $\mathbb{R}$) by the related histogram of uncertainties $-\ln p_k$, cf. Section II.A where the uncertainty function has been introduced. The archetypal example of a position measurement in quantum mechanics presumes that position is measured in bins corresponding to the resolution of the measurement apparatus. This means that the continuous spectrum of the position observable is partitioned into a countable set of intervals (bins) whose maximum length we regard as a "resolution unit". For an interval $B_k\subset\mathbb{R}$ we may denote $p_k$ the probability of finding the outcome of a position measurement to have a value in $B_k$. We are free to set the bin size arbitrarily, especially if computer assisted procedures are employed, [78].
Following [17] one may take the view that the most natural measure of the uncertainty in the result of a position measurement, or of the preparation of specific localization features of a state vector (amenable to an analysis via spectral properties of the position operator), should be the information entropy coming from Eqs. (12) and (13):

$$S(\rho_B) = -\sum_k p_k\,\ln p_k , \qquad p_k = \int_{B_k}\rho(x)\,dx .$$

This viewpoint is validated by current experimental techniques in the domain of matter wave interferometry, [44,43], and the associated numerical experimentation where various histograms are generated, [78]. The formula Eq. (15) gives meaning to the intertwine of the differential and coarse grained entropies in the quantum mechanical context. When an analytic form of the entropy is within reach, the coarse graining is straightforward. One should realize that most of the results known to date have been obtained numerically, hence with an implicit coarse-graining, although they were interpreted in terms of the differential entropy, see e.g. [28]-[37]. In connection with the entropic inequality Eq. (4), let us point out [2] that it is a generic property of normalized $L^2(\mathbb{R}^n)$ wave functions that, by means of the Fourier transformation, they give rise to two interrelated densities (presently we refer to $L^2(\mathbb{R})$): $\rho = |\psi|^2$ and $\tilde\rho = |\tilde\psi|^2$, where

$$\tilde\psi(p) = (2\pi)^{-1/2}\int_{\mathbb{R}}\psi(x)\,\exp(-ipx)\; dx$$

is the Fourier transform of $\psi(x)$. The inequality (4) for the corresponding (finite) differential entropies follows, here with $n = 1$. By choosing resolutions $r\ll 1$ and $\tilde r\ll 1$ we can introduce the respective coarse grained entropies, each fulfilling an inequality of the form Eq. (18). Combining these inequalities with Eq. (4), we get the prototype entropic inequality for coarse grained entropies:

$$S(\rho_B) + S(\tilde\rho_B) \;\geq\; 1 + \ln\pi - \ln(r\,\tilde r) ,$$

with the corresponding resolutions $r$ and $\tilde r$. By referring to Eq. (15) we realize that, at the presumed resolution levels, the knowledge of $S(\rho_B)$ together with the differential entropies completely determines $S(\tilde\rho_B)$, and in reverse. This in turn implies that in all computer generated position-momentum differential entropy inequalities, where the coarse graining is implicit, the knowledge of the position entropy and of the resolution levels provides sufficient data to deduce the combined position-momentum outcomes, see also [14]-[37].
In standard units (with $\hbar$ reintroduced), the previous discussion pertains to quantum mechanical position-momentum entropic uncertainty relations. In the notation of Refs. [26] and [22] we have:

$$S^{\delta x} + S^{\delta p} \;\geq\; 1 + \ln\pi - \ln\frac{\delta x\,\delta p}{\hbar} \qquad (28)$$

for measurement entropies with position and momentum "abstract measuring device" resolutions $\delta x$ and $\delta p$ respectively, such that $\delta x\cdot\delta p \ll h$. Note that to reintroduce $\hbar$ we must explain how the differential entropy definition is modified if we pass to dimensional units, see e.g. the next subsection. Let us also point out that one should not confuse the above resolution units $r,\tilde r$ and $\delta x,\delta p$ with the standard mean square deviation values $\Delta X$ and $\Delta P$ which are present in the canonical indeterminacy relations: $\Delta X\cdot\Delta P \geq \hbar/2$. Let us set $\hbar\equiv 1$ (natural units system). If, following the conventions, we define the squared standard deviation (i.e. the variance) of an observable $A$ in a pure state $\psi$ as $(\Delta A)^2 = (\psi,[A - \langle A\rangle]^2\psi)$ with $\langle A\rangle = (\psi, A\psi)$, then for the position $X$ and momentum $P$ operators we have the following version of the entropic uncertainty relation (here expressed through so-called entropy powers, see e.g. [2]):

$$\Delta X\cdot\Delta P \;\geq\; \frac{1}{2\pi e}\,\exp\left[S(\rho) + S(\tilde\rho)\right] \;\geq\; \frac{1}{2} , \qquad (29)$$

which is an alternative for Eq. (4); $n = 1$ being implicit.
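Assuming the entropy-power form reconstructed in Eq. (29), the Gaussian packet saturates both inequalities at once; the following check (Python, numpy) makes the double saturation explicit:

```python
import numpy as np

sigma = 0.8                                   # any sigma > 0 will do
dX, dP = sigma, 1 / (2 * sigma)               # canonical saturation: dX * dP = 1/2
S_sum = 1 + np.log(np.pi)                     # S(rho) + S(rho~) for the Gaussian
print(dX * dP, np.exp(S_sum) / (2 * np.pi * np.e), 0.5)   # all three equal 1/2
```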

Impact of dimensional units
Let us come back to the issue of reintroducing physical units in Eq. (28). In fact, if $x$ and $p$ stand for one-dimensional phase space labels and $f(x,p)$ is a normalized phase-space density, $\int f(x,p)\,dx\,dp = 1$, then the related dimensionless differential entropy is defined as follows, [10,40]:

$$S_h = -\int f(x,p)\,\ln\left[h\,f(x,p)\right]\; dx\,dp , \qquad (30)$$

where $h = 2\pi\hbar$ is the Planck constant. If $\rho(x) = |\psi|^2(x)$, where $\psi\in L^2(\mathbb{R})$, then $\tilde\rho_h(p) = |F_h(\psi)|^2(p)$ is defined in terms of the dimensional Fourier transform:

$$(F_h\psi)(p) = (2\pi\hbar)^{-1/2}\int_{\mathbb{R}}\psi(x)\,\exp(-ipx/\hbar)\; dx . \qquad (31)$$

Let us consider the joint density $f(x,p) = \rho(x)\,\tilde\rho_h(p)$ and evaluate the differential entropy $S_h$ for this density. Remembering that $\int\rho(x)\,dx = 1 = \int\tilde\rho_h(p)\,dp$, we have formally, [40] Eq. (9):

$$S_h = -\int\rho\,\ln\rho\; dx \,-\, \int\tilde\rho_h\,\ln\tilde\rho_h\; dp \,-\, \ln h \;\doteq\; S_x + S_p - \ln h , \qquad (33)$$

to be compared with Eq. (28). The formal use of the logarithm properties before executing the integrations in $\int\tilde\rho_h\,\ln(h\tilde\rho_h)\,dp$ has left us with the previously mentioned issue (Section 1.2) of literally taking the logarithm of a dimensional argument, i.e. that of $\ln h$. We recall that $S_h$ is a dimensionless quantity, while if $x$ has dimensions of length, then the probability density has dimensions of inverse length, and analogously in connection with momentum dimensions. Let us denote $x \doteq r\,\delta x$ and $p \doteq \tilde r\,\delta p$, where the labels $r$ and $\tilde r$ are dimensionless, while $\delta x$ and $\delta p$ stand for the respective position and momentum dimensional (hitherto, resolution) units. Then:

$$S_x - \ln\delta x \;=\; -\int \rho(x)\,\ln\left[\delta x\,\rho(x)\right]\; dx \qquad (34)$$

is a dimensionless quantity. Analogously,

$$S_p - \ln\delta p \;=\; -\int \tilde\rho_h(p)\,\ln\left[\delta p\,\tilde\rho_h(p)\right]\; dp \qquad (35)$$

is dimensionless. The first left-hand-side terms in the two above equations we recognize as $S_x$ and $S_p$ respectively. Hence, formally we get a manifestly dimensionless decomposition

$$S_h \;=\; \left(S_x - \ln\delta x\right) + \left(S_p - \ln\delta p\right) + \ln\frac{\delta x\,\delta p}{h} , \qquad (36)$$

instead of the previous one, Eq. (33). The last identity, Eq. (36), gives an unambiguous meaning to the preceding formal manipulations with dimensional quantities.
As a byproduct of our discussion, we have resolved the case of the spatially interpreted real axis, when $x$ has dimensions of length, cf. also [10]: $S_x - \ln\delta x$ is the pertinent dimensionless differential entropy definition for spatial probability densities. Example 1: Let us go through an explicit example involving the Gauss density

$$\rho(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left[-\,\frac{(x-x_0)^2}{2\sigma^2}\right] , \qquad (37)$$

where $\sigma$ is the standard deviation (its square stands for the variance). There holds $S(\rho) = \frac12\ln(2\pi e\sigma^2)$, which is a dimensionless outcome. If we pass to $x$ with dimensions of length, then inevitably $\sigma$ must have dimensions of length. It is instructive to check that in this dimensional case we have a correct dimensionless result:

$$S_x - \ln\delta x = \ln\left[\frac{\sigma}{\delta x}\,\sqrt{2\pi e}\right] ,$$

to be compared with Eq. (34). Then we realize that $S_x - \ln\delta x$ vanishes if $\sigma/\delta x = (2\pi e)^{-1/2}$, hence at the dimensional value $\sigma = (2\pi e)^{-1/2}\,\delta x$ of the standard deviation, compare e.g. [10]. Example 2: Let us invoke the simplest (naive) text-book version of the Boltzmann H-theorem, valid in the case of a rarefied gas (of particles of mass $m$), without external forces, close to its thermal equilibrium, under the assumption of space homogeneity, [57,58]. If the probability density function $f(v)$ is a solution of the corresponding Boltzmann kinetic equation, then the Boltzmann H-functional (which is simply the negative of the differential entropy)

$$H(t) = \int f(v)\,\ln f(v)\; dv$$

does not increase in time. In the present case we know that there exists an invariant (asymptotic) density $f_*$, which in the one-dimensional case is the Maxwell density $f_*(v) = (m/2\pi k_B T)^{1/2}\exp(-mv^2/2k_B T)$. We can straightforwardly, albeit formally, evaluate $H_* = \int f_*\ln f_*\; dv = -(1/2)\ln(2\pi e k_B T/m)$ and become faced with an apparent dimensional difficulty, [9], since the argument of the logarithm is not dimensionless. For sure, a consistent integration outcome for $H(t)$ should involve a dimensionless argument $k_B T/m[v]^2$ instead of $k_B T/m$, provided $[v]$ stands for any unit of velocity; examples are $[v] = 1$ m/s (here m stands for the SI length unit, and not for a mass parameter) or $10^{-5}$ m/s. To this end, in conformity with our previous discussion, it suffices to redefine $H_*$ as follows, [9,10]:

$$H_*^{[v]} = \int f_*\,\ln\left([v]\,f_*\right)\; dv = -\,\frac12\,\ln\frac{2\pi e k_B T}{m\,[v]^2} .$$

Multiplying $f_*$ by $[v]$ we arrive at a dimensionless argument of the logarithm in the above, and cure the dimensional obstacle. Remark 7: Let us also mention the effect of a scaling transformation upon the differential entropy, [4]. We denote

$$\rho_{\alpha,\beta}(x) \;\doteq\; \beta\,\rho\big(\beta\,(x-\alpha)\big) ,$$

where $\alpha>0$, $\beta>0$. The respective Shannon entropy reads:

$$S(\rho_{\alpha,\beta}) = S(\rho) - \ln\beta .$$

In the case of the Gaussian $\rho$, Eq. (37), we get $S(\rho_{\alpha,\beta}) = \ln[(\sigma/\beta)\sqrt{2\pi e}]$. An obvious interpretation is that the $\beta$-scaling transformation of $\rho(x-\alpha)$ would broaden this density if $\beta<1$ and would shrink it when $\beta>1$. Clearly, $S(\rho_{\alpha,\beta})$ takes the value 0 at $\sigma = (2\pi e)^{-1/2}\beta$, in analogy with our previous dimensional considerations. If the argument of $\rho$ is assumed to have dimensions, then the scaling transformation with a dimensional $\beta$ may be interpreted as a method to restore a dimensionless differential entropy value.
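Remark 7 lends itself to a direct numerical check; the sketch below (Python, numpy; the parameter values are arbitrary) verifies $S(\rho_{\alpha,\beta}) = S(\rho) - \ln\beta$ for a Gaussian, using the scaling convention written out above:

```python
import numpy as np

sigma, alpha, beta = 1.7, 0.4, 2.5
x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
rho = lambda y: np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
rho_ab = beta * rho(beta * (x - alpha))       # shifted and rescaled density

def S(f):
    m = f > 1e-300
    return -np.sum(f[m] * np.log(f[m])) * dx

print(S(rho(x)), 0.5 * np.log(2 * np.pi * np.e * sigma**2))          # agree
print(S(rho_ab), np.log((sigma / beta) * np.sqrt(2 * np.pi * np.e))) # agree
```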

Localization: differential entropy and Fisher information
We recall that among all one-dimensional distribution functions $\rho(x)$ with a finite mean, subject to the constraint that the standard deviation is fixed at $\sigma$, it is the Gauss function with half-width $\sigma$ which sets a maximum of the differential entropy, [4]. For the record, let us add that if only the mean is given for probability density functions on $\mathbb{R}$, then there is no maximum entropy distribution in their set. Let us consider the Gaussian probability density on the real line $\mathbb{R}$ as a reference density function:

$$\rho(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\left[-\,\frac{(x-x_0)^2}{2\sigma^2}\right] ,$$

in conformity with Eq. (11), but without any restriction on the value of $x\in\mathbb{R}$. The differential entropy of the Gauss density has a simple analytic form, independent of the mean value $x_0$, and saturates the inequality:

$$S(\rho) \;\leq\; \frac12\,\ln\left(2\pi e\,\sigma^2\right) . \qquad (42)$$

This imposes a useful bound upon the entropy power:

$$\frac{1}{2\pi e}\,\exp\left[2\,S(\rho)\right] \;\leq\; \sigma^2 , \qquad (43)$$

with an obvious bearing on the spatial localization of the density $\rho$, hence on the spatial (un)certainty of position measurements. We can say that almost surely, with probability $\simeq 0.997$, there is nothing to be found (measured) beyond the interval of length $6\sigma$ centered about the mean value $x_0$ of the Gaussian density $\rho$.
Knowing that for arbitrary density functions the differential entropy Eq. (3) is unbounded from below and from above, we realize that in the subset of all densities with a finite mean and a fixed variance $\sigma^2$ we actually have an upper bound set by Eq. (42). However, in contrast to coarse grained entropies, which are always nonnegative, already for moderate mean square deviations $\sigma < 1/\sqrt{2\pi e} \simeq 0.24$ the differential entropy $S(\rho)$ is negative. Therefore, quite apart from the previously discussed direct information theory links, cf. Eqs. (15), (18) and (19), the major role of the differential entropy is to be a measure of localization in the "state space" (actually, the configuration space) of the system, [75,76,77]. Let us consider a one-parameter family of probability densities $\rho_\alpha(x)$ on $\mathbb{R}$ whose first (mean) and second moments (effectively, the variance) are finite. The parameter-dependence is here not completely arbitrary and we assume standard regularity properties that allow to differentiate various functions of $\rho_\alpha$ with respect to the parameter $\alpha$ under the sign of an (improper) integral, [82].
Namely, let us denote $\int x\,\rho_\alpha(x)\,dx = f(\alpha)$ and assume $\int x^2\rho_\alpha\,dx < \infty$. We demand that, as a function of $x\in\mathbb{R}$, the modulus of the partial derivative $\partial\rho_\alpha/\partial\alpha$ is bounded by a function $G(x)$ which together with $xG(x)$ is integrable on $\mathbb{R}$. This implies, [82], the existence of $\partial f/\partial\alpha$ and an important inequality:

$$\left(\frac{\partial f}{\partial\alpha}\right)^2 \;\leq\; \int \frac{1}{\rho_\alpha}\left(\frac{\partial\rho_\alpha}{\partial\alpha}\right)^2 dx \;\cdot\; \int \left[x - f(\alpha)\right]^2 \rho_\alpha\; dx , \qquad (44)$$

via the standard Schwarz inequality. Equality appears in Eq. (44) if $\rho_\alpha(x)$ is the Gauss function with mean value $\alpha$. At this point let us assume that the mean value of $\rho_\alpha$ actually equals $\alpha$, and let us fix at $\sigma^2$ the value $\langle(x-\alpha)^2\rangle = \langle x^2\rangle - \alpha^2$ of the variance (in fact, of the standard deviation from the mean value) of the probability density $\rho_\alpha$. The previous inequality Eq. (44) now takes the familiar form:

$$F_\alpha \;\doteq\; \int \frac{1}{\rho_\alpha}\left(\frac{\partial\rho_\alpha}{\partial\alpha}\right)^2 dx \;\geq\; \frac{1}{\sigma^2} , \qquad (45)$$

where the integral on the left-hand-side is the so-called Fisher information of $\rho_\alpha$, known to appear in various problems of statistical estimation theory, as well as an ingredient of a number of information-theoretic inequalities, [82,23,24,75,83]. In view of $F_\alpha \geq 1/\sigma^2$, we realize that the Fisher information is a more sensitive indicator of the wave packet localization than the entropy power, Eq. (43). Let us define $\rho_\alpha(x) \doteq \rho(x-\alpha)$. Then the Fisher information $F_\alpha \doteq F$ is no longer dependent on the mean value $\alpha$ and can be readily transformed to a conspicuously quantum mechanical form (up to a factor $D^2$ with $D = \hbar/2m$):

$$F = \int \frac{(\nabla\rho)^2}{\rho}\; dx = \langle u^2\rangle = -\,2\,\langle Q\rangle , \qquad (46)$$

where $u \doteq \nabla\ln\rho$ is named an osmotic velocity field, [50,53], and the average $\langle Q\rangle = \int \rho\cdot Q\; dx$ is carried out with respect to the function

$$Q = 2\,\frac{\Delta\rho^{1/2}}{\rho^{1/2}} . \qquad (47)$$

As a consequence of Eq. (46), we have

$$-\,\langle Q\rangle \;\geq\; \frac{1}{2\sigma^2} \qquad (48)$$

for all relevant probability densities with any finite mean and the variance fixed at $\sigma^2$. When multiplied by $D^2$, the above expression for $Q(x)$ routinely appears in the hydrodynamical formalism of quantum mechanics as the so-called de Broglie-Bohm quantum potential ($D = \hbar/2m$). It appears as well in the corresponding formalism for diffusion-type processes, including the standard Brownian motion (then $D = k_B T/m\beta$), see e.g. [53,54,84]. An important inequality, valid under the assumption $\rho_\alpha(x) = \rho(x-\alpha)$, has been proved in [23], see also [24,85]:

$$F \;\geq\; 2\pi e\,\exp\left[-2\,S(\rho)\right] . \qquad (49)$$

It tells us that the lower bound for the Fisher information is in fact given a sharper form by means of the (squared) inverse entropy power. Our two information measures appear to be correlated.
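The saturation of the bound (45) by the Gaussian, and its strictness otherwise, can be seen numerically; in this sketch (Python, numpy; the bimodal test density is our own choice) the Gaussian yields $F\sigma^2 = 1$ while a two-bump density of equal total weight gives $F\sigma^2 \gg 1$:

```python
import numpy as np

def fisher(rho, dx):
    """F = int (rho')^2 / rho dx, the shift-parameter Fisher information."""
    drho = np.gradient(rho, dx)
    m = rho > 1e-12
    return np.sum(drho[m]**2 / rho[m]) * dx

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
sigma = 1.3
gauss = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(fisher(gauss, dx) * sigma**2)        # ~ 1: the Gaussian saturates Eq. (45)

bim = 0.5 * (np.exp(-(x - 2)**2 / 2) + np.exp(-(x + 2)**2 / 2)) / np.sqrt(2 * np.pi)
var = np.sum(x**2 * bim) * dx - (np.sum(x * bim) * dx)**2
print(fisher(bim, dx) * var)               # > 1: localization finer than the variance
```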

Remark 8:
Let us point out that the Fisher information F(ρ) may blow up to infinity under a number of circumstances, [83]: when ρ approaches the Dirac delta behavior, or when ρ vanishes over some interval in R, or is discontinuous. We observe that F > 0, because F may vanish only if ρ were constant everywhere on R, hence not a probability density. Under an additional decomposition/factorization ansatz (of the quantum mechanical L²(Rⁿ) provenance) that ρ(x) := |ψ|²(x), where a real or complex function ψ = √ρ exp(iφ) is a normalized element of L²(R), another important inequality holds true, [23,2]:

F ≤ 16π² σ̃²,   (50)

provided the Fisher information takes finite values. Here, σ̃² is the variance of the "quantum mechanical momentum canonically conjugate to the position observable", up to (skipped) dimensional factors. In the above, we have exploited the Fourier transform ψ̃ := (Fψ), in the convention ψ̃(p) = ∫ exp(−2πipx) ψ(x) dx, to arrive at ρ̃ := |ψ̃|² of Eq. (4), whose variance the above σ̃² actually is. In view of the two previous inequalities (49) and (50), we find that not only the Fisher information, but also the entropy power, is bounded both from below and from above. Namely, we have:

1/σ² ≤ 2πe · exp[−2S(ρ)] ≤ F ≤ 16π²σ̃²,   (51)

which implies 1/2σ² ≤ −⟨Q⟩ ≤ 8π²σ̃², and furthermore

e/(8πσ̃²) ≤ exp[2S(ρ)] ≤ 2πeσ²,   (52)

as a complement to Eq. (43). The most important outcome of Eq. (52) is that the differential entropy S(ρ) may typically be expected to be a well behaved quantity, with finite lower and upper bounds. We find it rather interesting that the Heisenberg indeterminacy relationship Eq. (29) (in the adopted Fourier convention, σ·σ̃ ≥ 1/4π), which is normally interpreted to set a lower bound on the experimentally accessible phase-space data (e.g. volume), according to Eq. (52) ultimately gives rise to lower and upper bounds upon the configurational (spatial) information measure, and thence upon the uncertainty (information) measure.
To our knowledge, the inequalities Eqs. (51) and (52), although implicit in various information theory papers, see especially [23] and [2], have hitherto never been explicitly spelled out.
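The chain (51) is easy to probe numerically. The sketch below is our construction: the bimodal test density and the Fourier convention ψ̃(p) = ∫ e^{−2πipx} ψ(x) dx are assumptions made for the illustration. It evaluates all four members of (51) for ρ = |ψ|² with a real ψ, in which case the rightmost bound is saturated.

```python
# Probing 1/sigma^2 <= 2 pi e exp(-2S) <= F <= 16 pi^2 sigma_tilde^2, Eq. (51),
# for a bimodal density rho = |psi|^2 with real psi (constant phase).
import numpy as np

x = np.linspace(-12.0, 12.0, 240_001)
dx = x[1] - x[0]

rho = 0.5 * np.exp(-(x - 2.0)**2 / 2) + 0.5 * np.exp(-(x + 2.0)**2 / 2)
rho /= np.sum(rho) * dx                             # normalization

var = np.sum(rho * x**2) * dx                       # the mean vanishes by symmetry
S = -np.sum(rho * np.log(rho)) * dx                 # differential entropy
F = np.sum(np.gradient(rho, dx)**2 / rho) * dx      # Fisher information, Eq. (46)

psi = np.sqrt(rho)                                  # real wave function
var_tilde = np.sum(np.gradient(psi, dx)**2) * dx / (4 * np.pi**2)  # momentum variance

print(1/var, 2*np.pi*np.e*np.exp(-2*S), F, 16*np.pi**2*var_tilde)
# an increasing sequence; the last two coincide because psi is real
```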

Random walk
Let us consider a classic example of a one-dimensional random walk, where a particle is displaced along R¹ with probability 1/2 forth and back, each step being of unit length, [74]. If one begins from the origin 0, after G steps a particle can be found at any of the points −G, −G + 1, ..., −1, 0, 1, ..., G. The probability that after G displacements a particle can be found at the point g ∈ [−G, G] (accessible are the points whose parity equals that of G) is given by the Bernoulli distribution:

P_g = 2^{-G} G!/{[(G + g)/2]! [(G − g)/2]!},   (53)

where Σ_{g=−G}^{G} P_g = 1. We are interested in the asymptotic formula, valid for large G and g ≪ G. (Note that even for the relatively small value G = 20, and |g| ≤ 16, the accuracy level is satisfactory.) There holds:

ln P_g ≃ (1/2) ln(2/πG) − g²/2G,   (54)

and, accordingly,

P_g ≃ (2/πG)^{1/2} exp(−g²/2G).   (55)

We take r ∼ 10⁻⁶ m to be a grating unit (i.e. the minimal step length of the walk). Let r ≪ ∆x ≪ Gr (a size ∆x ∼ 10⁻⁴ m is quite satisfactory). For large G and |g| ≪ G, we denote x := g·r and ask for the probability ρ_G(x)∆x that a particle can be found in the interval [x, x + ∆x] after G displacements. The result is [74]:

ρ_G(x) = (2πGr²)^{-1/2} exp(−x²/2Gr²),   (56)

and, by assuming that a particle suffers k displacements per unit time (G = kt), we can give Eq. (56) the familiar form of the heat kernel:

ρ(x, t) = (4πDt)^{-1/2} exp(−x²/4Dt),   (57)

with the diffusion coefficient D = kr²/2. It is the fundamental solution of the heat equation ∂_t ρ = D∆ρ, which is the Fokker-Planck equation for the Wiener process. The differential entropy of the above time-dependent density reads:

S(t) = (1/2) ln(4πDt) + 1/2 = (1/2) ln(4πeDt),   (58)

and its time evolution clearly displays the localization uncertainty growth. By means of the formula Eq. (19) we can quantify the differential entropy dynamics for all solutions of the heat equation.
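The passage from the Bernoulli distribution to the heat kernel is easily visualized by simulation. The following sketch (ours; the sample size, binning and parameter values are arbitrary choices) draws the endpoint statistics of the walk and compares the histogram entropy with Eq. (58).

```python
# Random walk endpoints vs the heat kernel entropy S(t) = (1/2) ln(4 pi e D t).
import numpy as np

rng = np.random.default_rng(0)
r, k = 1e-6, 1e6                 # grating unit [m], displacements per unit time
D = k * r**2 / 2                 # diffusion coefficient of Eq. (57)
t = 2.0                          # elapsed time
G = int(k * t)                   # number of displacements
heads = rng.binomial(G, 0.5, size=20_000)
endpoints = r * (2 * heads - G)  # x = g*r after G unit steps

hist, edges = np.histogram(endpoints, bins=100, density=True)
dx = edges[1] - edges[0]
S_empirical = -np.sum(hist[hist > 0] * np.log(hist[hist > 0])) * dx
print(S_empirical, 0.5 * np.log(4 * np.pi * np.e * D * t))  # approximate agreement
```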
Since the heat kernel determines the transition probability density for the Wiener process (free Brownian motion in R), by setting x → x − x′ and t → t − t′ ≥ 0 we can replace the previous ρ(x, t) by the transition density

p(x − x′, t − t′) = [4πD(t − t′)]^{-1/2} exp[−(x − x′)²/4D(t − t′)].

This transition density allows one to deduce any given solution ρ(x, t) of the heat equation from its past data, according to:

ρ(x, t) = ∫ p(x − x′, t − t′) ρ(x′, t′) dx′.

In particular, we can consider the process starting at t′ = 0 with an arbitrary initial density ρ₀(x). Let ρ_υ denote the convolution of a probability density ρ with a Gaussian probability density of variance υ. The transition density of the Wiener process generates such a convolution for ρ₀, with υ = σ² := 2Dt. Then the de Bruijn identity, [23,75], dS(ρ_υ)/dυ = (1/2)F(ρ_υ), directly yields the information entropy time rate for S(ρ) = S(t):

dS/dt = D·F(ρ) > 0.   (59)

Here F(ρ) is the translation-family (α = 0) Fisher information, Eq. (46), of the general definition Eq. (45). (The derivation of Eq. (59) amounts to differentiating an υ-dependent integrand under the sign of an improper integral, [82,83].) The monotonic growth of S(t) is paralleled by the linear in time growth of σ²(t) and the decay of F; hence it quantifies the uncertainty (disorder) increase related to the "flattening" down of ρ, see also [83,86].
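The de Bruijn identity invoked above can be tested by a finite difference in the smoothing parameter υ. In the sketch below (our construction) the initial density is a two-Gaussian mixture, for which the convolution ρ_υ is available in closed form.

```python
# Finite-difference check of the de Bruijn identity dS(rho_v)/dv = F(rho_v)/2.
import numpy as np

x = np.linspace(-25.0, 25.0, 500_001)
dx = x[1] - x[0]

def rho_v(v):
    """Two-Gaussian mixture smoothed by a Gaussian of variance v."""
    s2 = 1.0 + v
    g = lambda m: np.exp(-(x - m)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return 0.5 * g(-3.0) + 0.5 * g(3.0)

S = lambda rho: -np.sum(rho * np.log(rho)) * dx
F = lambda rho: np.sum(np.gradient(rho, dx)**2 / rho) * dx

v, dv = 0.5, 1e-4
dS_dv = (S(rho_v(v + dv)) - S(rho_v(v - dv))) / (2 * dv)
print(dS_dv, 0.5 * F(rho_v(v)))   # the two numbers agree closely
```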

Kullback entropy versus differential entropy
We emphasize that, thus far, we have deliberately avoided the use of the relative Kullback-Leibler entropy, [81,6,47]. This entropy notion is often invoked to tell "how far from each other" two probability densities are. The Kullback entropy is particularly useful if one investigates an approach of the system toward (or its deviation from) equilibrium, normally represented by a stationary density function, [48,88]. In this context, it is employed to investigate a major issue of the dynamical origins of the increasing entropy, see [47,48,92]; consult also both the standard motivations and the apparent problems encountered in connection with the celebrated Boltzmann H-theorem, [57,58] and [67]. However, the reliability of the Kullback entropy may be questioned in the case of general parameter-dependent densities. In particular, this entropy fails to quantify properly certain features of a non-stationary dynamics of probability densities, specifically if we wish to "compare" a given density function with itself at different stages (instants) of its time evolution. Let us consider a one-parameter family of Gaussian densities ρ_α = ρ(x − α), with mean α ∈ R and standard deviation fixed at σ. These densities are not differentiated by the information (differential) entropy: they share the very same value S_σ = (1/2) ln(2πeσ²), independent of α. If we admit σ as another free parameter, a two-parameter family of Gaussian densities ρ_α → ρ_{α,σ}(x) appears. Such densities, corresponding to different values σ and σ′, do admit an "absolute comparison" in terms of the Shannon entropy, in accordance with Eq. (19):

S_{σ′} − S_σ = ln(σ′/σ).   (60)

By setting σ := σ(t) = √(2Dt) and σ′ := σ(t′), we make the non-stationary (heat kernel) density amenable to the "absolute comparison" formula at different time instants t′ > t > 0: σ′/σ = √(t′/t).
In the above, we have "compared" the differential entropies of quite akin, albeit different, probability densities. Among many inequivalent ways to evaluate the "divergence" between probability distributions, the relative (Kullback) entropy is typically used to quantify the divergence from an a priori prescribed reference density, [88,48]. We define the Kullback entropy K(θ, θ′) for a one-parameter family of probability densities ρ_θ, so that the "distance" between any two densities in this family can be directly evaluated. Let ρ_{θ′} stand for the prescribed (reference) probability density. We have, [81,88,6]:

K(θ, θ′) := ∫ ρ_θ ln(ρ_θ/ρ_{θ′}) dx,   (61)

which, in view of the concavity of the function f(w) = −w ln w, is nonnegative.
Let us indicate that the negative of K, H_c := −K, named the conditional entropy, [6], is predominantly used in the literature [6,92,48] because of its affinity (regarded as a formal generalization) to the differential entropy. Then, e.g., one investigates an approach of −K towards its maximum (usually achieved at the value zero), when a running density is bound to have a unique stationary asymptotic, [92]. If we take θ′ := θ + ∆θ with ∆θ ≪ 1, the following approximate formula holds true under a number of standard assumptions, [81]:

K(θ, θ + ∆θ) ≃ (1/2) F_θ (∆θ)²,   (62)

where F_θ denotes the Fisher information measure, previously defined in Eq. (45). With this proviso, we can evaluate the Kullback distance within the two-parameter (α, σ) family of Gaussian densities, by taking θ → α.
Passing to α′ = α + ∆α at fixed σ, we arrive at:

K(α, α + ∆α) ≃ (∆α)²/2σ².   (63)

For the record, we note that the respective Shannon entropies coincide: S_α = S_{α+∆α}. Analogously, we can proceed with respect to the label σ at α fixed:

K(σ, σ + ∆σ) ≃ (∆σ)²/σ²,   (64)

when, irrespective of α:

S_{σ+∆σ} − S_σ ≃ ∆σ/σ.   (65)

By choosing θ → σ² at α fixed, we get (now the variance σ² is modified by its increment ∆(σ²)):

K(σ², σ² + ∆(σ²)) ≃ [∆(σ²)]²/4σ⁴,   (66)

while

S_{σ²+∆(σ²)} − S_{σ²} ≃ ∆(σ²)/2σ²,   (67)

which, upon the identifications σ² = 2Dt and ∆(σ²) = 2D∆t, sets an obvious connection with the differential (∆S)(t) ≃ ∆t/2t, and thence with the time derivative Ṡ = 1/2t of the heat kernel differential entropy, Eq. (58), and the de Bruijn identity. Our previous observations are a special case of a more general reasoning. Namely, if we consider a two-parameter, θ := (θ₁, θ₂), family of densities, then instead of Eq. (62) we would have arrived at

K(θ, θ + ∆θ) ≃ (1/2) Σ_{i,j} F_{ij} ∆θ_i ∆θ_j,   (68)

where i, j = 1, 2 and the Fisher information matrix F_{ij} has the form

F_{ij} = ∫ ρ_θ (∂ ln ρ_θ/∂θ_i)(∂ ln ρ_θ/∂θ_j) dx.   (69)

In the case of Gaussian densities, labelled by the independent parameters θ₁ = α and θ₂ = σ (alternatively, θ₂ = σ²), the Fisher matrix is diagonal and defined in terms of the previous entries F_α and F_σ (or F_{σ²}). It is useful to note (c.f. also [92]) that, in self-explanatory notation, for two, θ and θ′, Gaussian densities there holds:

K(θ, θ′) = ln(σ′/σ) + [σ² + (α − α′)²]/2σ′² − 1/2.   (70)

The first entry in Eq. (70) coincides with the "absolute comparison formula" for the Shannon entropies, Eq. (60). However, for |θ′ − θ| ≪ 1, hence in the regime of interest for us, the two entries cancel each other at first order in the increments, so that K is quadratic in them while the Shannon entropy difference stays linear. Indeed, let us set α′ = α and consider σ² = 2Dt, ∆(σ²) = 2D∆t. Then S(σ′) − S(σ) ≃ ∆t/2t, while K(θ, θ′) ≃ (∆t)²/4t². Although both expressions vanish with ∆t → 0, we have K/∆t → 0: the time derivative notion Ṡ can be defined exclusively for the differential entropy, and is meaningless in terms of the Kullback "distance". Let us mention that no such obstacles arise in the standard cautious use of the relative Kullback entropy H_c: indeed, normally one of the involved densities stands for the stationary reference one, H_c(t) := −K(ρ_t|ρ_*), see e.g. [48,92].
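The quadratic Kullback-Fisher link of Eqs. (62)-(67) is immediate to verify against the closed Gaussian formula (70). A small sketch (ours; the parameter values are arbitrary):

```python
# Exact Gaussian Kullback entropy, Eq. (70), vs its Fisher approximations.
import numpy as np

def K(a, s, a2, s2):
    """K(N(a, s^2) || N(a2, s2^2)), the Gaussian Kullback entropy of Eq. (70)."""
    return np.log(s2 / s) + (s**2 + (a - a2)**2) / (2 * s2**2) - 0.5

a, s, d = 0.7, 1.3, 1e-3
print(K(a, s, a + d, s), d**2 / (2 * s**2))   # Eq. (63): F_alpha = 1/sigma^2
print(K(a, s, a, s + d), d**2 / s**2)         # Eq. (64): F_sigma = 2/sigma^2
v, dv = s**2, 1e-3                            # shift the variance instead
print(K(a, s, a, np.sqrt(v + dv)), dv**2 / (4 * v**2))  # Eq. (66): F_{sigma^2} = 1/(2 sigma^4)
```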

Entropy dynamics in the Smoluchowski process
We consider spatial Markov diffusion processes in R with a diffusion coefficient (constant or time-dependent) D, and admit them to drive space-time inhomogeneous probability densities ρ = ρ(x, t). In the previous section we have addressed the special case of the free Brownian motion, characterized by the current velocity (field)

v := −u = −D∇ ln ρ   (71)

and the diffusion current j = v·ρ, which obeys the continuity equation ∂_t ρ = −∇j, this in turn being equivalent to the heat equation.
It is instructive to notice that the gradient of a potential-type function Q = Q(x, t), c.f. Eq. (48), entirely composed in terms of u := D∇ ln ρ:

Q = 2D² ∆ρ^{1/2}/ρ^{1/2} = (1/2)u² + D∇·u,   (72)

almost trivially appears (i.e. merely as a consequence of the heat equation, [53,94]) in the hydrodynamical (momentum) conservation law appropriate for the free Brownian motion:

∂_t v + (v·∇)v = −∇Q.   (73)

A straightforward generalization refers to the diffusive dynamics of a particle of mass m in an external field of force, here taken to be conservative: F = F(x) = −∇V. The associated Smoluchowski diffusion process with the forward drift b(x) = F/mβ is analyzed in terms of the Fokker-Planck equation for the spatial probability density ρ(x, t), [88,89,90,91]:

∂_t ρ = D∆ρ − ∇(b·ρ),   (74)

with the initial data ρ₀(x) = ρ(x, 0). Note that if things are specialized to the standard Brownian motion in an external force field, we know a priori (due to the Einstein fluctuation-dissipation relationship, [74]) that D = k_B T/mβ, where β is interpreted as the friction (damping) parameter, T is the temperature of the bath, and k_B is the Boltzmann constant. We assume, modulo restrictions upon the drift function [52,68], to resolve the Smoluchowski dynamics in terms of (possibly non-unique) Markovian diffusion-type processes. Then, the following compatibility equations follow in the form of hydrodynamical conservation laws for the diffusion process, [53,94]:

∂_t ρ + ∇(v·ρ) = 0,   (75)

∂_t v + (v·∇)v = ∇(Ω − Q),   (76)

where, not to confuse this notion with the previous force field potential V, we denote by Ω(x) the so-called volume potential for the process:

Ω := (1/2)(F/mβ)² + D ∇·(F/mβ),   (77)

while the functional form of Q is given by Eq. (72). Obviously the free Brownian law, Eq. (73), comes out as the special case b = 0 (then Ω = 0 and v = −u).
In the above (we use the short-hand notation v := v(x, t)):

v := b − u = b − D∇ ln ρ   (78)

defines the current velocity of Brownian particles in the external force field. This formula allows us to transform the continuity equation into the Fokker-Planck equation and back.
With a solution ρ(x, t) of the Fokker-Planck equation we associate the differential (information) entropy S(t) = −∫ρ ln ρ dx, which is typically not a conserved quantity, [59,28]. The rate of change in time of S(t) readily follows. Boundary restrictions, demanding that ρ, vρ and bρ vanish at spatial infinities (or at finite spatial volume boundaries), yield the rate equation:

D·(dS/dt) = ⟨v²⟩ − ⟨b·v⟩,   (79)

to be compared with the previous, b = 0 case, Eq. (59). Anticipating further discussion, let us stress that even in case of a plainly irreversible diffusive dynamics it is by no means obvious whether the differential entropy should grow, decay (diminish), or show up a mixed behavior. It is often tacitly assumed that one should "typically" have Ṡ > 0, which is not true, [67,87]. We can rewrite Eq. (79) in a number of equivalent forms, like e.g. (note that ⟨u²⟩ = −D⟨∇·u⟩):

D·Ṡ = −⟨u·v⟩ = ⟨u²⟩ − ⟨b·u⟩,   (80)

and specifically, [67,68], as the major entropy balance equation:

Ṡ = (1/D)⟨v²⟩ − (1/D)⟨b·v⟩,   (81)

where ⟨·⟩ denotes the mean value with respect to ρ. This balance equation is extremely persuasive, since b = F/(mβ) and j = vρ combine into a characteristic "power release" expression:

Q̇ := (1/D)⟨b·v⟩ = (1/k_B T) ∫ F·j dx.   (82)

Like Ṡ, which is not necessarily positive, the "power release" expression Q̇ may be positive, which represents the power removal to the environment, as well as negative, which corresponds to the power absorption from the environment. In the formal thermodynamical lore, we deal here with the time rate at which the mechanical work per unit of mass may possibly be dissipated (removed to the reservoir) in the form of heat in the course of the Smoluchowski diffusion process: k_B T·Q̇ = ∫ F·j dx, with T the temperature of the bath. When there are no external forces, we have b = 0, and the differential entropy time rate formula for the free Brownian motion, Eq. (59), reappears.
On the other hand, the positive terms in Eqs. (81) and (59) represent the rate at which information entropy is put (pumped) into the diffusing system by the thermally active environment, thus causing the disorder/uncertainty growth. This particular "entropy production" rate may possibly be counterbalanced (to this end we need external forces) by the heat removal due to dissipation, according to:

Ṡ = (Ṡ)_in − Q̇,   (83)

where Q̇ is defined in Eq. (82), while (Ṡ)_in := (1/D)⟨v²⟩ ≥ 0.
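Before turning to interpretation issues, it is worth checking the rate equation (79) on an exactly solvable case. The sketch below (ours) uses the Gaussian solution of the Ornstein-Uhlenbeck process discussed in detail later on, with b(x) = −γx, and compares a finite-difference Ṡ with ⟨v²⟩ − ⟨b·v⟩; all parameter values are arbitrary.

```python
# Check of D*dS/dt = <v^2> - <b v>, Eq. (79), on the OU Gaussian solution.
import numpy as np

gamma, D, a0, s0 = 0.8, 0.5, 1.0, 0.3   # drift, diffusion, initial mean/variance

def moments(t):
    E = np.exp(-2 * gamma * t)
    return a0 * np.exp(-gamma * t), D / gamma + (s0 - D / gamma) * E

t, h = 0.7, 1e-6
a, var = moments(t)
(ap, varp), (am, varm) = moments(t + h), moments(t - h)
S_dot = (np.log(varp) - np.log(varm)) / (4 * h)   # from S = (1/2) ln(2 pi e var)
a_dot, var_dot = (ap - am) / (2 * h), (varp - varm) / (2 * h)
w = var_dot / (2 * var)                           # v(x,t) = a_dot + w*(x - a)
v2 = a_dot**2 + w**2 * var                        # <v^2>
bv = -gamma * (a * a_dot + w * var)               # <b v> with b = -gamma*x
print(D * S_dot, v2 - bv)                         # the two sides coincide
```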
Remark 10: In Refs. [68,67,69] a measure-theoretic and probabilistic justification was given to the interpretation of (1/D)⟨v²⟩ as the entropy production rate of the (originally stationary) diffusion process with the current velocity v. We would like to point out that, traditionally, [70,71,61], the statistical-mechanical notion of entropy production refers to the excess entropy that is pumped out of the system; an alternative formulation speaks of the entropy produced by the physical system and released into the thermostat. In the present discussion, an increase of the information entropy of the Smoluchowski process definitely occurs due to the thermal environment: the differential entropy is being generated (produced) in the physical system by its environment.
Of particular interest is the case of constant information entropy, Ṡ = 0, which amounts to the existence of steady states. In the simplest case, when the diffusion current vanishes, we encounter the primitive realization of the state of equilibrium with an invariant density ρ_*. Then b = u = D∇ ln ρ_*, and we readily arrive at the classic equilibrium identity for the Smoluchowski process:

F = −∇V = k_B T ∇ ln ρ_*,   (84)

which determines the functional form of the invariant density in case of a given conservative force field, [68,88]. There is an ample discussion in Ref. [68] of how these properties match with the time reversal of the stationary diffusion process and the vanishing of the entropy production rate (Ṡ)_in (in our lore; see e.g. [61,60]). Coming back to the general discussion, let us define the so-called thermodynamic force:

F_th := F − k_B T ∇ ln ρ = mβ·v,   (85)

whose potential function reads V + k_B T ln ρ. The mean value of this potential of the thermodynamic force associates with the diffusion process an obvious analogue of the Helmholtz free energy:

Ψ := ⟨V + k_B T ln ρ⟩ = ⟨V⟩ − T·S_G,   (86)

where the dimensional version S_G := k_B S of the information entropy has been introduced (actually, it is a direct configuration-space analog of the Gibbs entropy). The expectation value of the mechanical force potential V plays here the role of the (mean) internal energy, [71,67]. By assuming that ρVv vanishes at the integration volume boundaries (or at infinity), we easily get

d⟨V⟩/dt = −∫ F·j dx = −k_B T·Q̇,   (87)

and hence the time rate of the Helmholtz free energy at constant temperature T:

Ψ̇ = −k_B T (Ṡ + Q̇).   (88)

By employing Eq. (83), we readily arrive at

Ψ̇ = −k_B T (Ṡ)_in ≤ 0,   (89)

which either identically vanishes (equilibrium) or remains negative. Thus, the Helmholtz free energy either remains constant in time or decreases as a function of time, at the rate set by the information entropy "production" (Ṡ)_in. One may expect that Ψ(t) actually drops down to a finite minimum as t → ∞. However, this feature is a little bit deceiving: a finite minimum may well not exist, which is the case e.g. for the free Brownian motion, and, for the expectation of a unique asymptotic state, multiple minima need to be excluded as well.

Kullback entropy versus Shannon entropy in the Smoluchowski process
In the presence of external forces, the property Eq. (89) may consistently quantify an asymptotic approach towards a minimum, corresponding to an invariant (presumed to be unique) probability density of the process. Indeed, by invoking Eq. (84) we realize that

ρ_*(x) = (1/Z) exp[−V(x)/k_B T],  Z = ∫ exp(−V/k_B T) dx.   (90)

Let us take the above ρ_*(x) as the reference density with respect to which the divergence of ρ(x, t) is evaluated in the course of the pertinent Smoluchowski process. This divergence is well quantified by the conditional Kullback entropy H_c(t) := −K(ρ_t|ρ_*). Let us notice that

k_B T·H_c(t) = −Ψ(t) − k_B T ln Z.   (91)

Consequently, in view of Eqs. (88) and (83), we get

Ḣ_c = −Ψ̇/k_B T = (Ṡ)_in ≥ 0.   (92)

The approach of Ψ(t) towards its minimum proceeds at the very same rate as that of H_c(t) towards its maximum. In contrast to Ḣ_c, which is non-negative, we have no growth guarantee for the differential entropy: the sign of Ṡ is unspecified. Nonetheless, the balance between the time rate of entropy production/removal and the power release into or out of the environment is definitely correct: we have Ṡ ≥ −Q̇. The relationship between the two different forms of entropy, differential (Shannon) and conditional (Kullback), and their dynamics, is thereby settled. An exhaustive discussion of the temporal approach to equilibrium of the Gibbs and conditional entropies can be found in Refs. [92,93], where both invertible deterministic and non-invertible stochastic systems were addressed.
Typically, the conditional entropy either remains constant or monotonically increases towards its maximum at the value zero. This stays in conformity with expectations motivated by the Boltzmann H-theorem and the second law of thermodynamics. To the contrary, the Gibbs entropy displays a different behavior, even under the very same approach-to-equilibrium circumstances: it may monotonically increase or decrease, and may as well display an oscillatory pattern. Below we shall demonstrate that this potentially strange behavior of the differential entropy gives an insight into nontrivial power transfer processes in the mean, whose assessment would not be possible in terms of the conditional entropy.

One-dimensional Ornstein-Uhlenbeck process
It is quite illuminating to exemplify the previous considerations by a detailed presentation of the standard one-dimensional Ornstein-Uhlenbeck process. We denote b(x) = −γx, with γ > 0. If the initial density is chosen in the Gaussian form, with mean value α₀ and variance σ₀², the Fokker-Planck evolution Eq. (74) preserves the Gaussian form of ρ(x, t), while modifying the mean value and variance according to

α(t) = α₀ exp(−γt),  σ²(t) = D/γ + (σ₀² − D/γ) exp(−2γt).   (93)

Accordingly, since the unique invariant density has the form ρ_* = (γ/2πD)^{1/2} exp(−γx²/2D), we obtain, [92]:

H_c(t) = (1/2) ln[γσ²(t)/D] − (γ/2D)[σ²(t) + α²(t)] + 1/2,   (94)

while, in view of our previous considerations, we have S(t) = (1/2) ln[2πeσ²(t)] and F = 1/σ²(t). Therefore

Ṡ = γ(D/γ − σ₀²) exp(−2γt)/σ²(t).   (95)

We observe that if σ₀² > D/γ then Ṡ < 0, while σ₀² < D/γ implies Ṡ > 0. In both cases the behavior of the differential entropy is monotonic, though its growth or decay critically relies on the choice of σ₀². Irrespective of σ₀², the asymptotic value of S(t) as t → ∞ reads (1/2) ln[2πe(D/γ)]. The differential entropy evolution is anti-correlated with that of the localization, since

Ḟ = −σ̇²(t)/σ⁴(t) = −2F(t)·Ṡ(t).   (96)

For all σ₀² the asymptotic value of F reads γ/D. We have here a direct control of the behavior of the "power release" expression Q̇ = Ḣ_c − Ṡ. Since

Q̇(t) = (γ²/D) exp(−2γt) [α₀² + σ₀² − D/γ],   (97)

in case of Ṡ < 0 (i.e. σ₀² > D/γ) we encounter a continual positive power release, Q̇ > 0, i.e. the heat removal to the thermal environment.
In case of Ṡ > 0 the situation is more complicated. For example, if α₀ = 0 (and thus σ₀² < D/γ), Eq. (97) tells us that Q̇ < 0, i.e. we have the power drainage from the environment for all t ∈ R₊. More generally, in view of Eq. (97) the sign of Q̇ is constant in time: it is negative precisely when α₀² < (D − γσ₀²)/γ, while, if the latter inequality is reversed, Q̇(t) stays positive for all times, the entire time dependence residing in the overall damping factor exp(−2γt).
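These statements are easy to confirm numerically. A minimal sketch (ours; the parameter values are arbitrary) tabulates Ṡ, Ḣ_c = ⟨v²⟩/D and Q̇ = Ḣ_c − Ṡ along the exact Gaussian solution (93); the sign of Q̇ indeed follows that of α₀² + σ₀² − D/γ at all times.

```python
# OU entropy balance: Sdot, Hdot_c = <v^2>/D and Qdot = Hdot_c - Sdot, cf. Eq. (97).
import numpy as np

gamma, D = 1.0, 1.0
a0, s0 = 0.4, 0.6          # a0^2 + s0 < D/gamma here, so Qdot < 0 throughout

for t in np.linspace(0.0, 3.0, 7):
    E = np.exp(-2 * gamma * t)
    var = D / gamma + (s0 - D / gamma) * E
    S_dot = gamma * (D / gamma - s0) * E / var                    # Eq. (95)
    v2 = gamma**2 * E * (a0**2 + (s0 - D / gamma)**2 * E / var)   # <v^2>
    print(f"t={t:.1f}  Sdot={S_dot:+.4f}  Hdot={v2/D:+.4f}  Qdot={v2/D - S_dot:+.4f}")
# Qdot equals (gamma^2/D) exp(-2 gamma t) (a0^2 + s0 - D/gamma) at every printed time
```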

Mean energy and the dynamics of Fisher information
By considering (−ρ)(x, t) and s(x, t), such that v = ∇s, as canonically conjugate fields, we can invoke the variational calculus. Namely, one may derive the continuity (and thus Fokker-Planck) equation together with the Hamilton-Jacobi type equation (whose gradient implies the hydrodynamical conservation law Eq. (76)):

∂_t s + (1/2)(∇s)² = Ω − Q,   (98)

by means of the extremal (least, with fixed end-point variations) action principle involving the mean Lagrangian:

L = ⟨∂_t s + (1/2)(∇s)² − (1/2)u² − Ω⟩.   (99)

The related Hamiltonian (which is the mean energy of the diffusion process per unit of mass) reads

H := ⟨(1/2)v² + Q − Ω⟩ = ⟨(1/2)v² − (1/2)u² − Ω⟩.   (100)

Evaluating the expectation value of Eq. (98), we infer the identity H = −⟨∂_t s⟩. By invoking Eq. (86), with time-independent V, we arrive at the space-time dependent potential function

Ψ(x, t) := V(x) + k_B T ln ρ(x, t),   (101)

whose time derivative has the vanishing expectation value ⟨∂_t Ψ⟩ = 0, in view of vρ = 0 at the integration volume boundaries. Since v = −(1/mβ)∇Ψ, we define

s(x, t) := −(1/mβ) Ψ(x, t),   (102)

so that H ≡ 0 identically. We have thus arrived at the following interplay between the mean energy and the information entropy "production" rate:

(D/2)(Ṡ)_in = (1/2)⟨v²⟩ = ⟨Ω − Q⟩,   (103)

generally valid for Smoluchowski processes with non-vanishing diffusion currents. By recalling the notion of the Fisher information, Eqs. (45)-(46), and setting F := D²F = ⟨u²⟩, we can rewrite the above formula as follows:

(D/2)(Ṡ)_in = ⟨Ω⟩ + (1/2)F,   (104)

where (1/2)F = −⟨Q⟩ > 0 holds true for probability densities with finite mean and variance. We may evaluate directly the uncertainty dynamics of the Smoluchowski process, by recalling that the Fisher information F/2 is a localization measure, which for probability densities with finite mean value and variance σ² is bounded from below by D²/2σ², see e.g. Section III.
Namely, by exploiting the hydrodynamical conservation laws Eqs. (75)-(76) for the Smoluchowski process, we get:

(d/dt)⟨v²⟩ = ∫ [∂_t ρ·v² + 2ρ v·∂_t v] dx.   (105)

We assume to have secured the conditions allowing to take derivatives under the (improper) integrals, and take for granted that ρv³ vanishes at the integration volume boundaries. This implies the following expression for the time derivative of ⟨v²⟩:

(d/dt)⟨v²⟩ = 2⟨v·∇(Ω − Q)⟩.   (106)

Proceeding in the same vein, in view of the time-independence of Ω, we find that

(d/dt)⟨Ω⟩ = ⟨v·∇Ω⟩,   (107)

and so, since H ≡ 0 entails F = ⟨v²⟩ − 2⟨Ω⟩, the equation of motion for F follows:

Ḟ = −2⟨v·∇Q⟩.   (108)

Since in one spatial dimension we have ∇Q = ∇P/ρ, where P := D²ρ ∆ ln ρ, the previous equation takes the form Ḟ = −2∫ρ v·∇Q dx = −2∫ v·∇P dx, which is an analog of the familiar expression for the power release (dE/dt = F·v, with F = −∇V) in classical mechanics; this is to be compared with our previous discussion of the "heat dissipation" term, Eq. (82). Remark 11: As our previous example of the one-dimensional Ornstein-Uhlenbeck process indicates, there is nothing obvious to say about the growth or decay of the various quantities involved. In that case, choosing for simplicity σ₀² = D/γ, we have e.g. ⟨v²⟩(t) = D·Ḣ_c(t) = γ²α₀² exp(−2γt), hence the asymptotic value 0, while ⟨u²⟩(t) = F(t) = D²/σ²(t) → γD. Accordingly, we have ⟨Ω⟩(t) → −γD/2.

Balance equations
In the discussion of Smoluchowski diffusions, our major reference point was the conventional Fokker-Planck equation (74) for a probability density supporting a Markovian diffusion process. The (time-independent) drift function b was assumed to be known a priori (e.g. the conservative external forces were established on phenomenological or model construction grounds), while the initial and/or boundary data for the probability density of the process could be chosen (to a high degree) arbitrarily. Under such "normal" circumstances, the hydrodynamical conservation laws (75)-(76) come out as a direct consequence of the Fokker-Planck equation. Also, the functional expression for Ω of the form (77) is basically known to arise if one attempts to replace an elliptic diffusion operator by a Hermitian (and possibly self-adjoint) one, [88,68,53]. We shall now depart from the standard Brownian motion setting to more general Markovian diffusion-type processes which, while still respecting the Fokker-Planck equation, admit general time-dependent forward drifts. In fact, we invoke at this point a well defined stochastic counterpart of the Schrödinger picture quantum dynamics of wave packets, [50,51,53,54,52,68,85], where the notion of differential entropy and its dynamics finds a proper place. The dynamics of quantal probability densities is here resolved in terms of diffusion-type processes. Let us assume to have chosen an arbitrary continuous function V = V(x, t) (it is useful if it is bounded from below) with dimensions of energy. We consider the Schrödinger equation (we set D = ℏ/2m) in the form

i∂_t ψ = −D∆ψ + (V/2mD)ψ.   (109)

The Madelung decomposition ψ = ρ^{1/2} exp(is/2D), with the phase function s = s(x, t) defining v = ∇s, is known to imply two coupled equations: the standard continuity equation ∂_t ρ = −∇(vρ), and the Hamilton-Jacobi-type equation

∂_t s + (1/2)(∇s)² = Q − Ω,   (110)

where Ω := V/m, while the functional form of Q coincides with that introduced previously in Eq. (72). Notice the "minor" sign change in Eq. (110) in comparison with Eq. (98). Those two equations form a coupled system, whose solutions describe a Markovian diffusion-type process: the probability density is propagated by a Fokker-Planck dynamics of the form Eq. (74), with the drift b = v + u, where u = D∇ ln ρ is the osmotic velocity field. We can mimic the calculus-of-variations steps of the previous section, so arriving at the Hamiltonian (actually, the mean energy of the quantum motion per unit of mass):

H = ⟨(1/2)v² + (1/2)u² + Ω⟩,   (111)

to be compared with Eq. (100). There holds

H = −⟨∂_t s⟩.   (112)

Of particular interest (due to its relative simplicity) is the case of time-independent V, when

E := ⟨(1/2)v² + (1/2)u² + Ω⟩   (113)

is known to be a conserved finite quantity, which is not necessarily positive. Since generally H = E ≠ 0, we deal here with so-called finite energy diffusion-type processes, [51,52]. The corresponding Fokker-Planck equation propagates a probability density |ψ|² = ρ, whose differential entropy S may quite nontrivially evolve in time.
Keeping intact the previous derivation procedures for (Ṡ)_in (while assuming the validity of the mathematical restrictions upon the behavior of integrands), we encounter the information entropy balance equations in their general form, disclosed in Eqs. (81)-(83). The related differential entropy "production" rate reads:

D(Ṡ)_in = ⟨v²⟩ = 2E − F − 2⟨Ω⟩.   (114)

We recall that (1/2)F = −⟨Q⟩ > 0, which implies E − ⟨Ω⟩ ≥ (1/2)F > 0. Therefore the localization measure F has a definite upper bound, F ≤ 2(E − ⟨Ω⟩): the pertinent wave packet cannot be localized too sharply. We notice that the localization (Fisher) measure in general evolves in time; here E is a constant and ∂_t Ω = 0. By invoking the hydrodynamical conservation laws, we find that the dynamics of the Fisher information follows the equation:

Ḟ = −(d/dt)⟨v²⟩ − 2(d/dt)⟨Ω⟩,   (115)

and that there holds

(d/dt)⟨v²⟩ = 2⟨v·∇(Q − Ω)⟩,   (116)

which is to be compared (notice the opposite sign of the right-hand expression) with the result, Eq. (106), we have obtained for Smoluchowski processes.
Combining Eqs. (115)-(116) with (d/dt)⟨Ω⟩ = ⟨v·∇Ω⟩, we obtain again Ḟ = −2⟨v·∇Q⟩ = −2∫ v·∇P dx, with the same functional form of P as before; the sign reversal, as compared with the Smoluchowski case, now resides in the relation of Ḟ to the kinetic term, Eq. (115). We interpret Ḟ as a measure of the power transfer in the course of which the (de)localization "feeds" the diffusion current, and in reverse. Here, we encounter a negative feedback between the localization and the proper energy of motion, which keeps intact the overall mean energy H = E of the quantum motion. See e.g. also [53].
In case of v = 0, we have E = (1/2)F + ⟨Ω⟩, and there is neither entropy "production" nor any dynamics of uncertainty. There holds Ṡ = 0, and we deal with time-reversible stationary diffusion processes and their invariant probability densities ρ(x), [68,52]. Remark 12: Let us indicate that the phase function s(x, t) shows up certain (remnant) features of the Helmholtz potential Ψ(x, t) of Eq. (101) and of its mean Ψ(t). This behavior is not unexpected, since e.g. the ground state densities (and other invariant densities of stationary states) are directly related to the time-reversible stationary diffusion-type processes of Refs. [52,68]. We have −⟨∂_t s⟩ = E. In view of v = ∇s and the assumed vanishing of sρv at the integration volume boundaries, we get:

(d/dt)⟨s⟩ = ⟨v²⟩ − E.   (117)

The previously mentioned case of no entropy "production" refers to v = 0, and thus s = s₀ − E·t. We recall that the corresponding derivation of Eq. (89) has been carried out for v = −(1/mβ)∇Ψ, with ⟨∂_t Ψ⟩ = 0. Hence, as close as possible a link with the present discussion is obtained if we re-define s into s_Ψ := −s. Then we have

(d/dt)⟨s_Ψ⟩ = E − ⟨v²⟩.   (118)

For stationary quantum states, when v = 0 identically, we get (d/dt)⟨s_Ψ⟩ = E, in contrast to the standard Fokker-Planck case of (d/dt)Ψ = 0. Interestingly enough, we can write the generalized Hamilton-Jacobi equation, when specified to the v = 0 regime, with respect to s_Ψ. Indeed, there holds ∂_t s_Ψ = Ω − Q, in close affinity with Eq. (98) in the same regime.

Free evolution
Let us consider the probability density in one space dimension:

ρ(x, t) = [2πX²(t)]^{-1/2} exp[−x²/2X²(t)],  X²(t) := (α⁴ + 4D²t²)/2α²,   (119)

and the phase function

s(x, t) = 2D²t x²/(α⁴ + 4D²t²) − D·arctan(2Dt/α²),   (120)

which together determine the free wave packet solution of equations (109) and (110), i.e. the one obtained for V ≡ 0 with the initial data ψ(x, 0) = (πα²)^{-1/4} exp(−x²/2α²). The corresponding forward drift reads b(x, t) = v + u = 2D(2Dt − α²)x/(α⁴ + 4D²t²), and the Fokker-Planck equation with this forward drift is solved by the above ρ.
In the present case, the differential entropy reads:

S(t) = (1/2) ln[2πe X²(t)],   (121)

where X² := ∫ x²ρ dx = (α⁴ + 4D²t²)/2α². Its time rate, D·Ṡ = ⟨v²⟩ − ⟨b·v⟩, equals:

D·Ṡ = 4D³t/(α⁴ + 4D²t²)   (122)

for t ≥ 0, with the large time asymptotic D/t. Furthermore, we have

D(Ṡ)_in = ⟨v²⟩ = 8D⁴t²/[α²(α⁴ + 4D²t²)],   (123)

with the obvious large time asymptotic value 2D²/α²: the differential entropy "production" remains untamed for all times. Due to ⟨u²⟩ = 2D²α²/(α⁴ + 4D²t²), there holds

D·Q̇ = ⟨b·v⟩ = ⟨v²⟩ + ⟨u·v⟩ = 4D³t(2Dt − α²)/[α²(α⁴ + 4D²t²)].   (124)

Accordingly, the quantum mechanical analog of the entropy (rather than heat) "dissipation" term, −D·Q̇, while taking positive values for t < α²/2D, turns out to be negative for larger times. Formally speaking, after a short entropy "absorption" period we pass to the entropy "dissipation" regime, whose large time asymptotic −2D²/α² almost counterbalances D(Ṡ)_in ∼ 2D²/α², leaving the residual net rate D·Ṡ ∼ D/t.
Those differential entropy balance features do parallel the continual growth of the mean kinetic energy, (1/2)⟨v²⟩, from the initial value 0 towards its asymptotic value D²/α² = E. Note that the negative feedback is here displayed by the behavior of ⟨u²⟩, which drops down from the initial value 2D²/α² towards 0. It is also instructive to notice that in the present case F(t) = ⟨u²⟩ = D²/X²(t). We can readily check that Ḟ = d⟨u²⟩/dt = −d⟨v²⟩/dt.
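The closed formulas above invite a direct check of the energy bookkeeping. The sketch below (ours; the values of D and α are arbitrary) confirms that (1/2)(⟨v²⟩ + ⟨u²⟩) stays equal to E = D²/α² while the two contributions are exchanged.

```python
# Free packet bookkeeping: E = (<v^2> + <u^2>)/2 is conserved, Fisher F decays.
import numpy as np

D, alpha = 0.5, 1.0
t = np.linspace(0.0, 10.0, 6)
den = alpha**4 + 4 * D**2 * t**2
v2 = 8 * D**4 * t**2 / (alpha**2 * den)   # kinetic (current) contribution, Eq. (123)
u2 = 2 * D**2 * alpha**2 / den            # osmotic contribution = curly F(t)
print(0.5 * (v2 + u2))                    # constant array: E = D^2/alpha^2
print(4 * D**3 * t / den)                 # D*Sdot of Eq. (122): positive, ~ D/t
```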
In the case of the harmonic attraction, V(x) = (1/2)mω²x², an analogous role is played by the coherent (displaced ground state) wave packet, with the Gaussian density ρ(x, t) = (ω/2πD)^{1/2} exp[−ω(x − ξ(t))²/2D], whose constant variance σ² = D/ω is carried along the classical trajectory ξ(t). The forward drift takes the form:

b(x, t) = ξ̇(t) − ω[x − ξ(t)],   (125)

and the above ρ solves the corresponding Fokker-Planck equation.
For example, we have v = ∇s = ξ̇(t) = p(t)/m, with p(t) := mξ̇(t), and therefore an oscillating entropy "production" term, D(Ṡ)_in = ⟨v²⟩ = p²(t)/m², which is exactly balanced by an oscillating "dissipative" counter-term, D·Q̇ = ⟨b·v⟩ = p²(t)/m², to yield Ṡ = 0. Suitable expressions for s and ∂_t s easily follow.
Concerning the Fisher measure, we obviously have F = 1/σ² = ω/D, which is a constant of motion.

Stationary states
In contrast to generic applications of the standard Fokker-Planck equation, where one takes for granted that there is a unique positive stationary probability density, the situation looks otherwise if we admit the Schrödinger equation as the primary dynamical rule for the evolution of (inferred) probability densities. For a chosen potential, all available stationary quantum states may serve the purpose, since then we have nonnegative (zeroes are now admitted) ρ_*(x), and v(x) = 0 identically (we stay in one spatial dimension). The standard harmonic oscillator may serve as an instructive example. One may e.g. consult Fig. 3 in [30] to check the behavior of both the position and momentum differential entropies, and of their sum, in dependence on the energy eigenvalue. All these stationary state values grow monotonically with n = 1, 2, ..., 60, [30], and the pattern persists in the asymptotic regime of n ∼ 500, [33]. For convenience we shall refer to the Schrödinger eigenvalue problem with the physical units scaled away. We consider (compare e.g. Eq. (109) with D → 1/2):

−(1/2)∆ψ_n + (1/2)x²ψ_n = (n + 1/2)ψ_n,  ρ_* := ψ_n².   (126)

In terms of a suitable Hamilton-Jacobi type equation, we can address the same problem by seeking solutions of the equation n + 1/2 = Ω − Q with respect to √ρ_*, provided we set Ω = x²/2, define u = ∇ ln √ρ_*, and demand that Q = (1/2)u² + (1/2)∇·u.
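The eigenvalue identity n + 1/2 = Ω − Q can be verified pointwise. A short sketch (ours; it relies on numpy's "physicists" Hermite polynomials and a finite-difference second derivative) checks it for the first few Hermite functions, away from their nodes, where Q is singular. With u = (ln √ρ_n)′ = ψ_n′/ψ_n one has Q = u²/2 + u′/2 = ψ_n″/2ψ_n, so the claim reduces to x²/2 − ψ_n″/2ψ_n = n + 1/2.

```python
# Pointwise check of n + 1/2 = Omega - Q = x^2/2 - psi_n''/(2 psi_n).
import numpy as np
from numpy.polynomial.hermite import hermval

def psi(n, x):
    """Unnormalized oscillator eigenfunction H_n(x) exp(-x^2/2)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2)

x = np.linspace(0.05, 3.0, 8)     # grid avoiding the nodes of the chosen psi_n
h = 1e-5
for n in (0, 1, 2, 3):
    d2 = (psi(n, x + h) - 2 * psi(n, x) + psi(n, x - h)) / h**2
    print(n, np.allclose(x**2 / 2 - d2 / (2 * psi(n, x)), n + 0.5, atol=1e-4))
```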
Obviously, except for the ground state, which is strictly positive, all the remaining stationary densities are merely nonnegative (they vanish at the nodes). An open problem, first generally addressed in [95], see also [96], is to implement a continuous dynamical process for which any of the induced stationary densities may serve as an invariant asymptotic one. An obvious (Ornstein-Uhlenbeck) solution is known for the ground state density; its ample discussion (albeit without mentioning the quantum connotation) has been given in Section IV.

Outlook
One may raise the issue of what the entropy functionals are good for. In the non-equilibrium statistical mechanics of gases one invokes them to solve concrete physical problems: for example, to address the second law of thermodynamics and the related Boltzmann H-theorem from a probabilistic point of view. We are strongly motivated by a number of reported problems with the rigorous formulation of the effects of noise on the entropy evolution, and with the justification of the entropy growth paradigm for model systems, [58,57,89,92,13,97], as well as by a long list of mathematically oriented papers on the large time asymptotics of stochastic systems, [98]-[102]. Therefore, the major issue addressed in the present paper is that of quantifying the dynamics of probability densities in terms of proper entropy functionals. The formalism was designed to encompass the standard diffusion processes of non-equilibrium statistical physics and the Schrödinger picture implemented dynamics of probability densities related to pure quantum states in L²(R). In the latter case an approach towards equilibrium has not been expected to occur at all. To this end, the behavior in time of the Shannon and Kullback-Leibler entropies has been investigated in classical and quantum mechanical contexts. The utility of a particular form of entropy for a given dynamical model appears to be basically purpose-dependent. The use of the Kullback-Leibler entropy encounters limitations, not shared by the differential entropy, when the dynamical process is rapid, or if one is interested in its short-time features. On the contrary, it is the conditional Kullback-Leibler entropy which is often invoked in rigorous formulations of the Boltzmann H-theorem under almost-equilibrium conditions, and of its analogues for stochastic systems. The large time asymptotics of solutions of Fokker-Planck equations, if analyzed in terms of this entropy, gives reliable results. However, our analysis of the Smoluchowski diffusion processes, and of the exemplary Ornstein-Uhlenbeck process, demonstrates that a deeper insight into the underlying non-equilibrium physical phenomena (the inherent power transfer processes) is available only in terms of the Shannon entropy and its time rate of change. This insight is inaccessible in terms of the Kullback-Leibler entropy. The differential entropy need not increase, even in case of a plainly irreversible dynamics. The monotonic growth in time of the conditional Kullback entropy (when applicable) should not necessarily be related to the "dynamical origins of the increasing entropy", [47,93]. We would rather say that the conditional entropy is well suited to stay in correspondence with the lore of the second law of thermodynamics, since, by construction, its time behavior is monotonic if one quantifies an asymptotic approach towards a stationary density. In the case of Smoluchowski processes, the time rate of the conditional Kullback entropy was found to coincide with the corresponding differential (Shannon) entropy "production" rate. The differential entropy itself need not grow, and may as well change its dynamical regime from growth to decay and in reverse, even with the entropy "production" involved. The balance equations for the differential entropy and the Fisher information measure involve a nontrivial power transfer. In the case of Smoluchowski processes this power release can be easily attributed to the entropy removal from the system, or to the entropy absorption from (the drainage of) the thermostat.
In the quantum mechanical regime, the inherent power transfer is related to metamorphoses of the various forms of mean energy among themselves, and does not require the notion of a thermostat external to the system. Apart from the above observations, we have provided a comprehensive review of the varied appearances of the differential entropy in the existing literature on both classical and quantum dynamical systems. As a byproduct of the general discussion, we have described its specific quantum manifestations, in the particular (pure quantum states) regime where the traditional von Neumann entropy is not of much use.