Abstract
We find that the standard relative entropy and the Umegaki entropy are designed for the purpose of inferentially updating probabilities and density matrices, respectively. Both entropies are derived in parallel from the same set of inferentially guided design criteria. This formulates a quantum maximum entropy method for the purpose of inferring density matrices in the absence of complete information.
1. Introduction
We design an inferential updating procedure for probability distributions and density matrices such that inductive inferences may be made. The inferential updating tools found in this derivation take the form of the standard and quantum relative entropy functionals, and thus we find that these functionals are designed for the purpose of updating probability distributions and density matrices, respectively. Previously formulated design derivations that found the entropy to be a tool for inference originally required five design criteria (DC) [1,2,3]; this was reduced to four in [4,5,6], and then down to three in [7]. We reduce the number of required DC to two while also providing the first design derivation of the quantum relative entropy—using the same design criteria and inferential principles in both instances.
The designed quantum relative entropy takes the form of Umegaki’s quantum relative entropy, and thus it has the “proper asymptotic form of the relative entropy in quantum (mechanics)” [8,9,10]. Recently, Wilming et al. [11] gave an axiomatic characterization of the quantum relative entropy that “uniquely determines the quantum relative entropy”. Our derivation differs from theirs, again in that we design the quantum relative entropy for a purpose, but also in that our DCs are imposed on what turns out to be the functional derivative of the quantum relative entropy rather than on the quantum relative entropy itself. The use of a quantum entropy for the purpose of inference has a long history: Jaynes [12,13] invented the notion of the quantum maximum entropy method [14], while it was perpetuated by [15,16,17,18,19,20,21,22] and many others. However, we find the quantum relative entropy to be the suitable entropy for updating density matrices, rather than the von Neumann entropy [23], as is suggested in [24]. I believe the present article provides the desired motivation for why the appropriate quantum relative entropy for updating density matrices, from prior to posterior, should be logarithmic in form while also providing a solution for updating non-uniform prior density matrices [24]. The relevant results of these papers may be found using the quantum relative entropy with suitably chosen prior density matrices.
It should be noted that because the relative entropies were reached by design, they may be interpreted as such, “the relative entropies are tools for updating”, which means we no longer need to attach an interpretation ex post facto—as a measure of disorder or amount of missing information. In this sense, the relative entropies were built for the purpose of saturating their own interpretation [4,7], and, therefore, the quantum relative entropy is the tool designed for updating density matrices.
This article takes an inferential approach to probabilities and density matrices that is expected to be notionally consistent with the Bayesian derivations of Quantum Mechanics, such as Entropic Dynamics [7,25,26,27], as well as Bayesian interpretations of Quantum Mechanics, such as QBism [28]. The quantum maximum entropy method is, however, expected to be useful independent of one’s interpretation of Quantum Mechanics because the entropy is designed at the level of density matrices rather than being formulated from arguments about the “inner workings” of Quantum Mechanics. This inferential approach is, at the very least, verbally convenient so we will continue writing in this language.
A few applications of the quantum maximum entropy method are given in another article [29]. By maximizing the quantum relative entropy with respect to a “data constraint” and the appropriate prior density matrix, the Quantum Bayes Rule [30,31,32,33,34] (a positive-operator valued measure (POVM) measurement and collapse) is derived. The quantum maximum entropy method can reproduce the density matrices in [35,36] that are cited as “Quantum Bayes Rules”, but the required constraints are difficult to motivate; however, it is expected that the results of this paper may be useful for further understanding Machine Learning techniques that involve the quantum relative entropy [37]. The Quantum Bayes Rule derivation in [29] is analogous to the standard Bayes Rule derivation from the relative entropy given in [38], as was suggested to be possible in [24]. This article provides the foundation for [29], and thus, the quantum maximum entropy method unifies a few topics in Quantum Information and Quantum Measurement through entropic inference.
As is described in this article and in [29], the quantum maximum entropy method is able to provide solutions even if the constraints and prior density matrix in question do not all mutually commute. This might be useful for subjects as far reaching as [39], which seeks to use Quantum Theory as a basis for building models for cognition. The immediate correspondence is that the quantum maximum entropy method might provide a solution toward addressing the empirical evidence for noncommutative cognition, which is how one’s cognition changes when addressing questions in permuted order [39]. A simpler model for noncommutative cognition may also be possible by applying sequential updates via the standard maximum entropy method with their order permuted. Sequential updating does not, in general, give the same resultant probability distribution when the updating order is permuted—this is argued to be a feature of the standard maximum entropy method [40]. Similarly, sequential updating in the quantum maximum entropy method also has this feature, but it should be noted that the noncommutativity of sequential updating is different in principle than simultaneously updating with respect to expectation values of noncommuting operators.
The remainder of the paper is organized as follows: first, we will discuss some universally applicable principles of inference and motivate the design of an entropy function able to rank probability distributions. This entropy function is designed to be consistent with inference by applying a few reasonable design criteria, which are guided by the aforementioned principles of inference. Using the same principles of inference and design criteria, we find the form of the quantum relative entropy suitable for inference. The solution to an example in which a 2 × 2 prior density matrix is updated with respect to expectation values of spin matrices that do not commute with the prior is given in Appendix B via the quantum maximum entropy method. We end with concluding remarks (I thank the reviewers for providing several useful references in this section).
2. The Design of Entropic Inference
Inference is the appropriate updating of probability distributions when new information is received. Bayes rule and Jeffrey’s rule are both equipped to handle information in the form of data; however, the updating of a probability distribution due to the knowledge of an expectation value was realized by Jaynes [12,13,14] through the method of maximum entropy. The two methods for inference were thought to be independent of one another until the work of [38,40], which showed Bayes Rule and Jeffrey’s Rule to be consistent with the method of maximum entropy when the expectation values are in the form of data. In the spirit of this derivation, we will carry on as if the maximum entropy method were not known and show how it may be derived as an application of inference.
Given a probability distribution $\varphi(x)$ over a general set of propositions $x \in X$, it is self-evident that if new information is learned, we are entitled to assign a new probability distribution $P(x)$ that somehow reflects this new information while also respecting our prior probability distribution $\varphi(x)$. The main question we must address is: “Given some information, to what posterior probability distribution $P(x)$ should we update our prior probability distribution $\varphi(x)$?”, that is,
This specifies the problem of inductive inference. Since “information” has many colloquial, yet potentially conflicting, definitions, we remove potential confusion by defining information operationally as the rationale that causes a probability distribution to change (inspired by and adapted from [7]). Directly from [7]:
Our goal is to design a method that allows a systematic search for the preferred posterior distribution. The central idea, first proposed in [4], is disarmingly simple: to select the posterior, first rank all candidate distributions in increasing order of preference and then pick the distribution that ranks the highest. Irrespective of what it is that makes one distribution preferable over another (we will get to that soon enough), it is clear that any ranking according to preference must be transitive: if distribution $\rho_1$ is preferred over distribution $\rho_2$, and $\rho_2$ is preferred over $\rho_3$, then $\rho_1$ is preferred over $\rho_3$. Such transitive rankings are implemented by assigning to each $\rho$ a real number $S[\rho]$, which is called the entropy of $\rho$, in such a way that if $\rho_1$ is preferred over $\rho_2$, then $S[\rho_1] > S[\rho_2]$. The selected distribution (one or possibly many, for there may be several equally preferred distributions) is that which maximizes the entropy functional.
Because we wish to update from prior distributions $\varphi$ to posterior distributions $\rho$ by ranking, the entropy functional $S[\rho, \varphi]$ is a real function of both $\rho$ and $\varphi$. In the absence of new information, there is no available rationale to prefer any $\rho$ to the original $\varphi$, and thereby the relative entropy should be designed such that the selected posterior is equal to the prior (in the absence of new information). The prior information encoded in $\varphi$ is valuable and we should not change it unless we are informed otherwise. Due to our definition of information, and our desire for objectivity, we state the predominant guiding principle for inductive inference:
The Principle of Minimal Updating (PMU):A probability distribution should only be updated to the extent required by the new information.
This simple statement provides the foundation for inference [7]. If the updating of probability distributions is to be done objectively, then possibilities should not be needlessly ruled out or suppressed. Being informationally stingy, that we should only update probability distributions when the information requires it, pushes inductive inference toward objectivity. Thus, using the PMU helps formulate a pragmatic (and objective) procedure for making inferences using (informationally) subjective probability distributions [41].
This method of inference is only as universal and general as its ability to apply equally well to any specific inference problem. The notion of “specificity” is the notion of statistical independence; a special case is only special in that it is separable from other special cases. The notion that systems may be “sufficiently independent” plays a central and deep-seated role in science, and the idea that some things can be neglected and that not everything matters is implemented by imposing criteria that tell us how to handle independent systems [7]. Ironically, the property universally shared by all specific inference problems is their ability to be independent of one another—they share independence. Thus, a universal inference scheme based on the PMU permits:
Properties of Independence (PI):
Subdomain Independence: When information is received about one set of propositions, it should not affect or change the state of knowledge (probability distribution) of the other propositions (else information was also received about them);
And,
Subsystem Independence: When two systems are a priori believed to be independent and we only receive information about one, then the state of knowledge of the other system remains unchanged.
The PIs are special cases of the PMU that ultimately take the form of design criteria in this design derivation. The process of constraining the form of $S[\rho, \varphi]$ by imposing design criteria may be viewed as the process of eliminative induction, and, after sufficient constraining, a single form for the entropy remains. Thus, the justification behind the surviving entropy is not that it leads to demonstrably correct inferences, but, rather, that all other candidate entropies demonstrably fail to perform as desired [7]. Rather than the design criteria instructing one how to update, they instruct in what instances one should not update. That is, rather than justifying one way to skin a cat over another, we tell you when not to skin it, which is operationally unique—namely, you don’t do it—luckily enough for the cat.
The Design Criteria and the Standard Relative Entropy
The following design criteria (DC), guided by the PMU, are imposed and formulate the standard relative entropy as a tool for inference. The form of this presentation is inspired by [7].
DC1: Subdomain Independence
We keep DC1 from [7] and review it below. DC1 imposes the first instance of when one should not update—the Subdomain PI. Suppose the information to be processed does not refer to a particular subdomain $D$ of the space $X$ of $x$s. In the absence of new information about $D$, the PMU insists we do not change our minds about probabilities that are conditional on $D$. Thus, we design the inference method so that $\varphi(x|D)$, the prior probability of $x$ conditional on $D$, is not updated and therefore the selected conditional posterior is

$P(x|D) = \varphi(x|D)$ (1)
(The notation will be as follows: we denote priors by $\varphi$, candidate posteriors by lower case $\rho$, and the selected posterior by upper case $P$.) We emphasize the point is not that we make the unwarranted assumption that keeping $\varphi(x|D)$ unchanged is guaranteed to lead to correct inferences. It need not; induction is risky. The point is, rather, that, in the absence of any evidence to the contrary, there is no reason to change our minds and the prior information takes priority.
DC1 Implementation
Consider the set of microstates $x_i \in X$ belonging to either of two non-overlapping domains $D$ or its complement $D'$, such that $X = D \cup D'$ and $D \cap D' = \emptyset$. For convenience, let $\rho(x_i) \equiv \rho_i$. Consider the following constraints:
such that , and the following “local” expectation value constraints over and ,
where is a scalar function of x and . As we are searching for the candidate distribution which maximizes S while obeying (2) and (3), we maximize the entropy with respect to these expectation value constraints using the Lagrange multiplier method,
and, thus, the entropy is maximized when the following differential relationships hold:
Equations (2)–(5) are the equations we must solve to find the four Lagrange multipliers and the $n$ probability values $\rho_i$ associated to the $n$ microstates $x_i$. If the subdomain constraint DC1 is imposed in the most restrictive case, then it will hold in general. The most restrictive case requires splitting $X$ into a set of domains $\{D_i\}$ such that each $D_i$ singularly includes one microstate $x_i$. This gives,
Because the entropy is a functional over the probability of each microstate’s posterior and prior distribution, its variational derivative is also a function of said probabilities in general,
DC1 is imposed by constraining the form of to ensure that changes in have no influence over the value of in domain , through , for . If there is no new information about propositions in , its distribution should remain equal to by the PMU. We further restrict such that an arbitrary variation of (a change in the prior state of knowledge of the microstate j) has no effect on for and therefore DC1 imposes , as is guided by the PMU. At this point, it is easy to generalize the analysis to continuous microstates such that the indices become continuous , sums become integrals, and discrete probabilities become probability densities .
Remark
We are designing the entropy for the purpose of ranking posterior probability distributions (for the purpose of inference); however, the highest ranked distribution is found by setting the variational derivative of equal to the variations of the expectation value constraints by the Lagrange multiplier method,
Therefore, the real quantity of interest is $\frac{\delta S}{\delta \rho(x)}$ rather than the specific form of $S$. All forms of $S$ that give the correct form of $\frac{\delta S}{\delta \rho(x)}$ are equally valid for the purpose of inference. Thus, every design criterion may be imposed on the variational derivative of the entropy rather than the entropy itself, which we do. When maximizing the entropy, for convenience, we will let,
and further use the shorthand , in all cases.
DC1’: In the absence of new information, our new state of knowledge $\rho(x)$ is equal to the old state of knowledge $\varphi(x)$.
This is a special case of DC1, and it is implemented differently than in [7]. The PMU is in principle a statement about informational honesty—that is, one should not “jump to conclusions” in light of new information, and, in the absence of new information, one should not change one’s state of knowledge. If no new information is given, the prior probability distribution does not change; that is, the posterior probability distribution is equal to the prior probability distribution. If we maximize the entropy without applying constraints,
then DC1’ imposes the following condition:
for all x in this case. This special case of DC1 and the PMU turns out to be incredibly constraining, as we will see over the course of DC2.
Comment
If the variable x is continuous, DC1 requires that information referring to points infinitely close to but just outside the domain $D$ has no influence on probabilities conditional on $D$ [7]. This may seem surprising as it may lead to updated probability distributions that are discontinuous. Is this a problem? No.
In certain situations (e.g., physics) we might have explicit reasons to believe that conditions of continuity or differentiability should be imposed and this information might be given to us in a variety of ways. The crucial point, however—and this is a point that we keep and will keep reiterating—is that unless such information is explicitly given, we should not assume it. If the new information leads to discontinuities, so be it.
DC2: Subsystem Independence
DC2 imposes the second instance of when one should not update—the Subsystem PI. We emphasize that DC2 is not a consistency requirement. The argument we deploy is not that both the prior and the new information tells us the systems are independent, in which case consistency requires that it should not matter whether the systems are treated jointly or separately. Rather, DC2 refers to a situation where the new information does not say whether the systems are independent or not, but information is given about each subsystem. The updating is being designed so that the independence reflected in the prior is maintained in the posterior by default via the PMU and the second clause of the PIs [7].
The point is not that when we have no evidence for correlations we draw the firm conclusion that the systems must necessarily be independent. They could indeed have turned out to be correlated and then our inferences would be wrong. Again, induction involves risk. The point is rather that if the joint prior reflects independence and the new evidence is silent on the matter of correlations, then the prior independence takes precedence. As before, in this case subdomain independence, the probability distribution should not be updated unless the information requires it [7].
DC2 Implementation
Consider a composite system, $x = (x_1, x_2) \in X = X_1 \times X_2$. Assume that all prior evidence led us to believe the subsystems are independent. This belief is reflected in the prior distribution: if the individual system priors are $\varphi_1(x_1)$ and $\varphi_2(x_2)$, then the prior for the whole system is their product $\varphi_1(x_1)\varphi_2(x_2)$. Further suppose that new information is acquired such that $\varphi_1(x_1)$ would by itself be updated to $P_1(x_1)$ and that $\varphi_2(x_2)$ would itself be updated to $P_2(x_2)$. By design, the implementation of DC2 constrains the entropy functional such that, in this case, the joint product prior updates to the selected product posterior $P_1(x_1)P_2(x_2)$ [7].
The argument below is considerably simplified if we expand the space of probabilities to include distributions that are not necessarily normalized. This does not represent any limitation because a normalization constraint may always be applied. We consider a few special cases below:
Case 1: We receive the extremely constraining information that the posterior distribution for system 1 is completely specified to be $P_1(x_1)$ while we receive no information at all about system 2. We treat the two systems jointly. Maximize the joint entropy subject to the following constraints on the $\rho(x_1, x_2)$:
Notice that the probability of each $x_1$ within $\rho(x_1, x_2)$ is being constrained to $P_1(x_1)$ in the marginal. We therefore need one Lagrange multiplier $\lambda(x_1)$ for each $x_1$ to tie each value of $\int dx_2\,\rho(x_1, x_2)$ to $P_1(x_1)$. Maximizing the entropy with respect to this constraint is,
which requires that
for arbitrary variations of . By design, DC2 is implemented by requiring in this case, therefore,
This equation must hold for all choices of and all choices of the prior as is independent of . Suppose we had chosen a different prior that disagrees with . For all and , the multiplier remains unchanged as it constrains the independent . This means that any dependence that the right-hand side might potentially have had on and on the prior must cancel out. This means that
Since is arbitrary in f, suppose further that we choose a constant prior set equal to one, , therefore
in general. This gives
The left-hand side does not depend on , and therefore neither does the right-hand side. An argument exchanging systems 1 and 2 gives a similar result.
Case 1—Conclusion: When system 2 is not updated, the dependence on and drops out,
and vice-versa when system 1 is not updated,
As we seek the general functional form of , and because the dependence drops out of (19) and the dependence drops out of (20) for arbitrary and , the explicit coordinate dependence in consequently drops out of both such that,
as must only depend on coordinates through the probability distributions themselves. (As a double check, explicit coordinate dependence was included in the following computations but inevitably dropped out due to the form of the functional equations and DC1’. By the argument above, and for simplicity, we drop the explicit coordinate dependence here.)
Case 2: Now consider a different special case in which the marginal posterior distributions for systems 1 and 2 are both completely specified to be and respectively. Maximize the joint entropy subject to the following constraints on the ,
Again, this is one constraint for each value of and one constraint for each value of , which, therefore, require the separate multipliers and . Maximizing S with respect to these constraints is then,
leading to
The updating is being designed so that , as the independent subsystems are being updated based on expectation values which are silent about correlations. DC2 thus imposes,
Write (25) as,
The left-hand side is independent of so we can perform a trick similar to the one we used before. Suppose we had chosen a different constraint that differs from and a new prior that differs from except at the value . At that value, the multiplier remains unchanged for all , , and thus . This means that any dependence that the right-hand side might potentially have had on and on the choice of must cancel out, leaving unchanged. That is, the Lagrange multiplier “pushes out” these dependences such that
Because is independent of arbitrary variations of and on the left hand side (LHS) above—it is satisfied equally well for all choices. The form of is apparent if as similar to Case 1 as well as DC1’. Therefore, the Lagrange multiplier is
A similar analysis carried out for leads to
Case 2—Conclusion: Substituting back into (25) gives us a functional equation for ,
The general solution for this functional equation is derived in the Appendix A.3, and is
where are constants. The constants are fixed by using DC1’. Letting gives by DC1’, and, therefore,
so we are forced to conclude for arbitrary . Letting such that we are really maximizing the entropy (although this is purely aesthetic) gives the general form of to be
As long as , the value of A is arbitrary as it always can be absorbed into the Lagrange multipliers. The general form of the entropy designed for the purpose of inference of is found by integrating , and, therefore,
The constant in , , will always drop out when varying . The apparent extra term () from integration cannot be dropped while simultaneously satisfying DC1’, which requires in the absence of constraints or when there is no change to one’s information. In previous versions where the integration term () is dropped, one obtains solutions like (independent of whether was previously normalized or not) in the absence of new information. Obviously, this factor can be taken care of by normalization, and, in this way, both forms of the entropy are equally valid; however, this form of the entropy better adheres to the PMU through DC1’. Given that we may regularly impose normalization, we may drop the extra term and . For convenience then, (34) becomes
which is a special case when the normalization constraint is being applied. Given normalization is applied, the same selected posterior maximizes both and , and the star notation may be dropped.
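For concreteness, the following numerical sketch (an illustration, not part of the derivation) evaluates the designed entropy in its discrete form, $S[\rho,\varphi] = -\sum_i \rho_i \log(\rho_i/\varphi_i)$, and checks DC1': maximizing it subject only to normalization returns the prior. The particular prior and the SciPy optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def relative_entropy(rho, phi):
    """Discrete form of the designed entropy: S[rho, phi] = -sum_i rho_i log(rho_i / phi_i)."""
    rho = np.asarray(rho, dtype=float)
    return -np.sum(rho * np.log(rho / phi))

phi = np.array([0.1, 0.2, 0.3, 0.4])          # an arbitrary normalized prior over 4 microstates

# DC1' check: maximize S (i.e., minimize -S) subject only to normalization.
normalization = {'type': 'eq', 'fun': lambda rho: np.sum(rho) - 1.0}
result = minimize(lambda rho: -relative_entropy(rho, phi),
                  x0=np.full(4, 0.25),         # start from a uniform guess
                  bounds=[(1e-9, 1.0)] * 4,
                  constraints=[normalization])

print(result.x)   # ~ [0.1, 0.2, 0.3, 0.4]: with no new information, the posterior is the prior
```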
Remarks
It can be seen that the relative entropy is invariant under coordinate transformations. This implies that a system of coordinates carries no information and that it is the “character” of the probability distributions that is being ranked, one against another, rather than the specific set of propositions or microstates they describe.
The general solution to the maximum entropy procedure with respect to $N$ linear constraints in $\rho$, $\langle f_i(x)\rangle$, and normalization gives a canonical-like selected posterior probability distribution,

$P(x) = \frac{\varphi(x)\, \exp\!\big(\sum_{i=1}^{N} \lambda_i f_i(x)\big)}{Z}$ (36)
The positive constant may always be absorbed into the Lagrange multipliers, so we may let it equal unity without loss of generality. DC1’ is fully realized when we maximize with respect to a constraint on $\rho(x)$ that is already satisfied by $\varphi(x)$, such as an expectation value $\langle f(x)\rangle$ which happens to have the same value as in the prior; then, its Lagrange multiplier is forcibly zero (as can be seen in (36) using (34)), in agreement with Jaynes. This gives the expected result, as there is no new information. Our design has arrived at a refined maximum entropy method [12] as a universal probability updating procedure [38].
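A short numerical sketch of this update rule follows: the selected posterior has the canonical-like form above, and the multipliers are fixed by the imposed expectation values. The toy prior, the constraint function, and the use of a bracketing root finder are assumptions made for the example, not prescriptions of the text.

```python
import numpy as np
from scipy.optimize import brentq

phi = np.array([0.1, 0.2, 0.3, 0.4])   # prior over 4 microstates (illustrative)
f   = np.array([1.0, 2.0, 3.0, 4.0])   # constraint function f(x)
f_bar = 3.2                            # imposed expectation value <f>

def posterior(lam):
    """Canonical-like posterior P(x) = phi(x) exp(lam * f(x)) / Z."""
    w = phi * np.exp(lam * f)
    return w / w.sum()

# The multiplier is fixed by demanding that the posterior reproduce <f> = f_bar;
# monotonicity of <f> in lam makes the root unique.
lam = brentq(lambda l: posterior(l) @ f - f_bar, -50.0, 50.0)
P = posterior(lam)
print(lam, P @ f)   # P @ f ~ 3.2; if f_bar were phi @ f, then lam ~ 0 and P = phi (DC1')
```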
3. The Design of the Quantum Relative Entropy
In the last section, we assumed that the universe of discourse (the set of relevant propositions or microstates) was known. In quantum physics, things are a bit more ambiguous because many probability distributions, or many experiments, can be associated with a given density matrix. In this sense, it is helpful to think of density matrices as “placeholders” for probability distributions rather than as probability distributions themselves. As any probability distribution obtained from a given density matrix $\hat{\rho}$ may be ranked using the standard relative entropy, it is unclear why we would choose one universe of discourse over another. In light of this, such that one universe of discourse is not given preferential treatment, we consider ranking entire density matrices against one another. Probability distributions of interest may be found from the selected posterior density matrix. This moves our universe of discourse from sets of propositions to Hilbert space(s).
When the objects of study are quantum systems, we desire an objective procedure to update from a prior density matrix $\hat{\varphi}$ to a posterior density matrix $\hat{\rho}$. We will apply the same intuition used for ranking probability distributions (Section 2) and implement the PMU, PI, and design criteria in the ranking of density matrices. We therefore find the quantum relative entropy to be designed for the purpose of inferentially updating density matrices.
3.1. Designing the Quantum Relative Entropy
In this section, we design the quantum relative entropy using the same inferentially guided design criteria as were used in the standard relative entropy.
DC1: Subdomain Independence
The goal is to design a function $S(\hat{\rho}, \hat{\varphi})$ that is able to rank density matrices. This insists that $S(\hat{\rho}, \hat{\varphi})$ be a real scalar-valued function of the posterior $\hat{\rho}$ and prior $\hat{\varphi}$ density matrices, which we will call the quantum relative entropy or simply the entropy. An arbitrary variation of the entropy with respect to $\hat{\rho}$ is,
where $\mathrm{Tr}(\cdot)$ is the trace. We wish to maximize this entropy with respect to expectation value constraints, such as $\langle \hat{A}\rangle = \mathrm{Tr}(\hat{A}\hat{\rho})$. Using the Lagrange multiplier method to maximize the entropy with respect to this expectation value constraint and normalization, and setting the variation equal to zero,
where and are the Lagrange multipliers for the respective constraints. Because is a real number, we inevitably require to be real, but without imposing this directly, we find that requiring to be real requires to be Hermitian. At this point, it is simpler to allow for arbitrary variations of such that,
For these arbitrary variations, the variational derivative of S must satisfy,
at the maximum. As in the remark earlier, all forms of S that give the correct form of under variation are equally valid for the purpose of inference. For notational convenience, we let
which is a matrix valued function of the posterior and prior density matrices. The form of the variational derivative is already “local” in $\hat{\rho}$ (the variational derivative is with respect to the whole density matrix), so we don’t need to constrain it further as we did in the original DC1.
DC1’: In the absence of new information, the new state is equal to the old state
Applied to the ranking of density matrices, in the absence of new information, the density matrix should not change, that is, the posterior density matrix is equal to the prior density matrix. Maximizing the entropy without applying any constraints gives,
and, therefore, DC1’ imposes the following condition in this case:
As in the original DC1’, if $\hat{\varphi}$ is known to obey some expectation value $\langle\hat{A}\rangle$, and one then goes out of their way to constrain $\hat{\rho}$ to that expectation value and nothing else, it follows from the PMU that $\hat{\rho} = \hat{\varphi}$, as no information has been gained. This is not imposed directly but can be verified later.
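This can be checked numerically. The sketch below assumes the posterior form $\hat{\rho} \propto \exp(\log\hat{\varphi} + \lambda\hat{A})$ that is derived later in Section 3.2; the particular prior and operator are arbitrary test choices.

```python
import numpy as np
from scipy.linalg import expm, logm
from scipy.optimize import brentq

phi = np.diag([0.7, 0.3])                  # prior density matrix (a spin-z mixture)
A = np.array([[0.0, 1.0], [1.0, 0.0]])     # Hermitian operator (sigma_x); [A, phi] != 0

def rho(lam):
    """Candidate posterior rho ~ exp(log(phi) + lam * A), normalized."""
    R = expm(logm(phi) + lam * A)
    return R / np.trace(R).real

c = np.trace(A @ phi).real                 # constrain <A> to the value the prior already holds
lam = brentq(lambda l: np.trace(A @ rho(l)).real - c, -20.0, 20.0)

print(lam)                                 # ~ 0: the Lagrange multiplier vanishes
print(np.allclose(rho(lam), phi))          # True: no information gained, so rho = phi
```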
DC2: Subsystem Independence
The discussion of DC2 is the same as the standard relative entropy DC2—it is not a consistency requirement, and the updating is designed so that the independence reflected in the prior is maintained in the posterior by default via the PMU when the information provided is silent about correlations.
DC2 Implementation
Consider a composite system living in the Hilbert space $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$. Assume that all prior evidence led us to believe the systems were independent. This is reflected in the prior density matrix: if the individual system priors are $\hat{\varphi}_1$ and $\hat{\varphi}_2$, then the joint prior for the whole system is $\hat{\varphi}_1 \otimes \hat{\varphi}_2$. Further suppose that new information is acquired such that $\hat{\varphi}_1$ would itself be updated to $\hat{\rho}_1$ and that $\hat{\varphi}_2$ would itself be updated to $\hat{\rho}_2$. By design, the implementation of DC2 constrains the entropy functional such that, in this case, the joint product prior density matrix updates to the product posterior $\hat{\rho}_1 \otimes \hat{\rho}_2$ so that inferences about one do not affect inferences about the other.
The argument below is considerably simplified if we expand the space of density matrices to include density matrices that are not necessarily normalized. This does not represent any limitation because normalization can always be easily achieved as one additional constraint. We consider a few special cases below:
Case 1: We receive the extremely constraining information that the posterior distribution for system 1 is completely specified to be $\hat{\rho}_1$ while we receive no information about system 2 at all. We treat the two systems jointly. Maximize the joint entropy, subject to the following constraints on the joint density matrix $\hat{\rho}_{12}$,
Notice that all of the elements of the constrained marginal are being fixed. We therefore need a Lagrange multiplier which spans the Hilbert space of system 1, and therefore it is a square matrix. This is readily seen by observing the component form expressions of the Lagrange multipliers. Maximizing the entropy with respect to this independent constraint is
but reexpressing this with its transpose , gives
where we have relabeled , for convenience, as the name of the Lagrange multipliers are arbitrary. For arbitrary variations of , we therefore have
DC2 is implemented by requiring , such that the function is designed to reflect subsystem independence in this case; therefore, we have
Had we chosen a different prior , for all the LHS remains unchanged given that is independent of scalar functions (I would like to thank M. Krumm for pointing this out.) of , as those could be lumped into while keeping fixed. The potential dependence on scalar functions of can be removed by imposing DC2 in a subsystem independent situation where in need not be fixed under variations of . The resulting equation in such a situation, for instance maximizing the entropy of an independent joint prior with respect to , facilitated by a scalar Lagrange multiplier , and after imposing DC2,
For subsystem independence to be imposed here, must be independent of variations in , and, therefore, in a general subsystem independent case, is independent of scalar functions of . This means that any dependence that the right-hand side of (48) might potentially have had on must drop out, meaning,
Since is arbitrary, suppose further that we choose a unit prior, , and note that and are block diagonal in . Because the LHS is block diagonal in ,
The RHS is block diagonal in and, because the function is understood to be a power series expansion in its arguments,
This gives
and, therefore, the factors out and . A similar argument exchanging systems 1 and 2 shows .
Case 1—Conclusion: The analysis leads us to conclude that when system 2 is not updated, the dependence on drops out,
and, similarly,
Case 2: Now consider a different special case in which the marginal posterior distributions for systems 1 and 2 are both completely specified to be and , respectively. Maximize the joint entropy, , subject to the following constraints on the ,
where $\mathrm{Tr}_i$ is the partial trace function, which is a trace over the vectors in $\mathcal{H}_i$. Here, each expectation value constrains the entire space $\mathcal{H}_1 \otimes \mathcal{H}_2$, where $\hat{\rho}_{12}$ lives.
For arbitrary variations of , we have
By design, DC2 is implemented by requiring in this case; therefore, we have
Write (59) as
The LHS is independent of changes that might occur in on the RHS of (60). This means that any variation of and must be “pushed out” by —it removes the dependence of and in . Any dependence that the RHS might potentially have had on , must cancel out in a general subsystem independent case, leaving unchanged. Consequently,
Because is independent of arbitrary variations of and on the LHS above—it is satisfied equally well for all choices. The form of reduces to the form of from Case 1 when and, similarly, DC1’ gives . Therefore, the Lagrange multiplier is
A similar analysis is carried out for leading to
Case 2—Conclusion: Substituting back into (59) gives us a functional equation for ,
which is
The general solution to this matrix valued functional equation is derived in Appendix A.5 and is
where tilde is a “super-operator” having constant coefficients and twice the number of indices as and as discussed in the Appendix (i.e., and similarly for ). DC1’ imposes
which is satisfied in general when , and, now,
We may fix the constant by substituting our solution into the RHS of Equation (64), which is equal to the RHS of Equation (65),
where acts on the joint space of 1 and 2 and , acts on single subspaces 1 or 2, respectively. Using the well known log tensor product identity in this case (The proof is demonstrated by taking the log of and substituting .), , the RHS of Equation (69) becomes
Note that arbitrarily letting gives
or arbitrarily letting gives
As , , and are constant tensors, inspecting the above equalities determines the form of the tensor to be where A is a scalar constant and is the super-operator identity over the appropriate (joint) Hilbert space.
Because our goal is to maximize the entropy function, we let the arbitrary constant and distribute identically, which gives the final functional form,
“Integrating” gives a general form for the quantum relative entropy,
where $-\mathrm{Tr}(\hat{\rho}\log\hat{\rho} - \hat{\rho}\log\hat{\varphi})$ is Umegaki’s form of the relative entropy [42,43,44], the extra term from integration is an artifact present for the preservation of DC1’, and it is a constant in the sense that it drops out under arbitrary variations of $\hat{\rho}$. This entropy leads to the same inferences as Umegaki’s form of the entropy with an added bonus that $\hat{\rho} = \hat{\varphi}$ in the absence of constraints or changes in information—rather than the result which would be given by maximizing Umegaki’s form of the entropy alone. In this sense, the extra term only improves the inference process as it more readily adheres to the PMU through DC1’; however, now, because , we have , which provides little nuisance. In the spirit of this derivation, we will keep the term there, but, for all practical purposes of inference, as long as there is a normalization constraint, it plays no role, and we find (setting the remaining constants appropriately),

$S^*(\hat{\rho}, \hat{\varphi}) = -\mathrm{Tr}\big(\hat{\rho}\log\hat{\rho} - \hat{\rho}\log\hat{\varphi}\big),$
Umegaki’s form of the relative entropy. $S^*$ is an equally valid entropy because, given normalization is applied, the same selected posterior maximizes both $S^*$ and $S$.
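As a numerical companion (again an illustration, not part of the derivation), the sketch below evaluates $S^*(\hat{\rho},\hat{\varphi}) = -\mathrm{Tr}(\hat{\rho}\log\hat{\rho} - \hat{\rho}\log\hat{\varphi})$ for test density matrices, following the sign convention of the text (an entropy to be maximized), and checks the log tensor product identity used in the DC2 argument. The random test states are assumptions of the example.

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho, phi):
    """S*(rho, phi) = -Tr(rho log rho - rho log phi); <= 0, and = 0 iff rho == phi."""
    return -np.trace(rho @ logm(rho) - rho @ logm(phi)).real

def random_state(d, seed):
    """A full-rank d x d density matrix for testing."""
    rng = np.random.default_rng(seed)
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = M @ M.conj().T + 1e-3 * np.eye(d)        # Hermitian and positive definite
    return R / np.trace(R).real

rho, phi = random_state(2, 0), random_state(2, 1)
print(relative_entropy(rho, phi))   # negative
print(relative_entropy(phi, phi))   # ~ 0: no update, no change (DC1')

# Log tensor product identity used in the DC2 argument:
# log(r1 (x) r2) = log(r1) (x) 1 + 1 (x) log(r2)
r1, r2 = random_state(2, 2), random_state(2, 3)
lhs = logm(np.kron(r1, r2))
rhs = np.kron(logm(r1), np.eye(2)) + np.kron(np.eye(2), logm(r2))
print(np.allclose(lhs, rhs))        # True
```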
3.2. Remarks
Due to the universality and the equal application of the PMU by using the same design criteria for both the standard and quantum case, the quantum relative entropy reduces to the standard relative entropy when $[\hat{\rho}, \hat{\varphi}] = 0$ or when the experiment being performed is known. The quantum relative entropy we derive has the correct asymptotic form of the standard relative entropy in the sense of [8,9,10]. Further connections will be illustrated in a follow-up article that is concerned with direct applications of the quantum relative entropy. Because the two entropies are derived in parallel, we expect the well-known inferential results and consequences of the relative entropy to have a quantum relative entropy representation.
Maximizing the quantum relative entropy with respect to some constraints $\langle\hat{A}_i\rangle = \mathrm{Tr}(\hat{A}_i\hat{\rho})$, where the $\hat{A}_i$ are a set of arbitrary Hermitian operators, and normalization $\mathrm{Tr}(\hat{\rho}) = 1$, gives the following general solution for the posterior density matrix:

$\hat{\rho} = \frac{1}{Z}\exp\!\Big(\sum_i \lambda_i \hat{A}_i + \log\hat{\varphi}\Big),$
where the $\lambda_i$ are the Lagrange multipliers of the respective constraints, and normalization may be factored out of the exponential in general because the identity commutes universally. If $\hat{\varphi} \propto \hat{1}$, it is well known that the analysis arrives at the same expression for $\hat{\rho}$ after normalization as it would if the von Neumann entropy were used, and thus one can find expressions for thermalized quantum states $\hat{\rho} = e^{-\beta\hat{H}}/Z$. The remaining problem is to solve for the N Lagrange multipliers using their N associated expectation value constraints. In principle, their solution is found by computing Z and using standard methods from Statistical Mechanics,
and inverting to find , which has a unique solution due to the joint concavity (convexity depending on the sign convention) of the quantum relative entropy [8,9] when the constraints are linear in . The simple proof that (77) is monotonic in , and therefore invertible, is that its derivative . Between the Zassenhaus formula [45]
and Horn’s inequality [46,47,48], the solutions to (77) lack a certain calculational elegance because it is difficult to express the eigenvalues of the matrix in the exponential in simple terms of the eigenvalues of the constraint operators and the prior, in general, when the matrices do not commute. The solution requires solving the eigenvalue problem for the matrix in the exponential, such that the exponential may be taken and evaluated in terms of the eigenvalues of the constraint operators and the prior density matrix. A pedagogical exercise is starting with a prior that is a mixture of spin-z up and down, and maximizing the quantum relative entropy with respect to an expectation value of a general Hermitian operator with which the prior density matrix does not commute. This example for spin is given in Appendix B.
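A sketch of the numerical procedure just described, for the case in which the constraint operators do not commute with the prior or with each other: the matrix $\log\hat{\varphi} + \sum_i\lambda_i\hat{A}_i$ is exponentiated (here via scipy.linalg.expm, which handles the diagonalization) and the multipliers are found by root finding on the expectation value constraints. The prior, operators, and target expectation values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, logm
from scipy.optimize import root

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

phi = np.diag([0.8, 0.2])           # prior: a spin-z mixture
ops = [sx, sz]                      # constraint operators; they do not commute with each other
targets = [0.3, 0.4]                # imposed expectation values <sigma_x>, <sigma_z>

log_phi = logm(phi)

def rho(lams):
    """rho = exp(log(phi) + sum_i lam_i A_i) / Z, evaluated by matrix exponentiation."""
    R = expm(log_phi + sum(l * A for l, A in zip(lams, ops)))
    return R / np.trace(R).real

def residuals(lams):
    r = rho(lams)
    return [np.trace(A @ r).real - t for A, t in zip(ops, targets)]

solution = root(residuals, x0=np.zeros(len(ops)))
lams = solution.x
print(lams, residuals(lams))        # residuals ~ 0: both constraints are satisfied
print(rho(lams))                    # the selected posterior density matrix
```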
4. Conclusions
This approach emphasizes the notion that entropy is a tool for performing inference and downplays counter-notional issues that arise if one interprets entropy as a measure of disorder, a measure of distinguishability, or an amount of missing information [7]. Because the same design criteria, guided by the PMU, are applied equally well to the design of a relative and quantum relative entropy, we find that both the relative and quantum relative entropy are designed for the purpose of inference. Because the quantum relative entropy is the functional that fits the requirements of a tool designed for the inference of density matrices, we now know what it is and how to use it—formulating an inferential quantum maximum entropy method. This article provides the foundation for [29], which, in particular, derives the Quantum Bayes Rule and collapse as special cases of the quantum maximum entropy method, as was craved in [24], analogous to [38,40]’s treatment for deriving Bayes Rule using the standard maximum entropy method. The quantum maximum entropy method thereby unifies a few topics in Quantum Information and Quantum Measurement through entropic inference.
Acknowledgments
I must give ample acknowledgment to Ariel Caticha who suggested the problem of justifying the form of the quantum relative entropy as a criterion for ranking of density matrices. He cleared up several difficulties by suggesting that design constraints be applied to the variational derivative of the entropy rather than the entropy itself. In addition, he provided substantial improvements to the method for imposing DC2 that led to the functional equations for the variational derivatives ()—with more rigor than in earlier versions of this article. His time and guidance are all greatly appreciated—thanks, Ariel. I would also like to thank M. Krumm, the reviewers, as well as our information physics group at UAlbany for our many intriguing discussions about probability, inference, and quantum mechanics.
Conflicts of Interest
The author declares no conflict of interest.
Appendix A
The Appendix loosely follows the relevant sections in [49], and then uses the methods reviewed to solve the relevant functional equations for . The last section is an example of the quantum maximum entropy method applied to a mixed spin state.
Appendix A.1. Simple Functional Equations
From [49] pages 31–44.
Theorem A1.
If Cauchy’s functional equation

$f(x + y) = f(x) + f(y)$ (A1)

is satisfied for all real x, y, and if the function $f$ is (a) continuous at a point, (b) nonnegative for small positive x’s, or (c) bounded in an interval, then,

$f(x) = cx$ (A2)

is the solution to (A1) for all real x. If (A1) is assumed only over all positive x, y, then under the same conditions, (A2) holds for all positive x.
Proof.
The most natural assumption for our purposes is that $f$ is continuous at a point (which later extends to continuity at all points as given by Darboux [50]). Cauchy solved the functional equation by induction. In particular, Equation (A1) implies,
and if we let each as a special case to determine f, we find
We may let such that
Letting gives
and, because for , above, we have
which is the general solution of the linear functional equation. In principle, c can be complex. The importance of Cauchy’s solution is that it can be used to give general solutions to the following Cauchy equations:
by performing consistent substitution until they are of the same form as (A1), as given by Cauchy. We will briefly discuss the first two. ☐
Theorem A2.
The general solution of $f(x+y) = f(x)f(y)$ is $f(x) = e^{cx}$ for all real $x, y$, or for all positive $x, y$, that are continuous at one point; in addition to the exponential solution, the trivial solution $f(x) = 0$ (and $f(x) = 1$ for $c = 0$) are in these classes of functions.
The first functional equation is solved by first noting that $f$ is strictly positive for real $x, y$ (excluding $f \equiv 0$), which can be shown by considering

$f(x) = f\!\Big(\frac{x}{2} + \frac{x}{2}\Big) = f\!\Big(\frac{x}{2}\Big)^2 \geq 0.$

If there exists an $x_0$ such that $f(x_0) = 0$, then it follows that $f \equiv 0$, a trivial solution, hence the reason why the possibility of $f$ being equal to zero is excluded above. Given $f$ is nowhere zero, we are justified in taking the natural logarithm $\ln f(x)$, due to its positivity. This gives,

$\ln f(x + y) = \ln f(x) + \ln f(y),$

and letting $g(x) = \ln f(x)$ gives,

$g(x + y) = g(x) + g(y),$

which is Cauchy’s linear equation, and thus has the solution $g(x) = cx$. Because $f(x) = e^{g(x)}$, one finds in general that $f(x) = e^{cx}$.
Theorem A3.
If the functional equation $f(xy) = f(x) + f(y)$ is valid for all positive $x, y$, then its general solution is $f(x) = c\ln x$, given it is continuous at a point. If all real $x, y$ (including 0) are valid, then the general solution is $f(x) = 0$. If all real $x, y$ are valid except 0, then the general solution is $f(x) = c\ln|x|$.

In particular, we are interested in the functional equation when $x, y$ are positive. In this case, we can again follow Cauchy and substitute $x = e^{u}$ and $y = e^{v}$ to get,

$f(e^{u+v}) = f(e^{u}) + f(e^{v}),$

and letting $g(u) = f(e^{u})$ gives $g(u+v) = g(u) + g(v)$. Again, the solution is $g(u) = cu$ and, therefore, the general solution is $f(x) = c\ln x$ when we substitute $u = \ln x$. If $x$ could equal 0, then $f(0) = f(x) + f(0)$, which has the trivial solution $f(x) = 0$. The general solution for $x$ and $y$ positive is therefore $f(x) = c\ln x$.
Appendix A.2. Functional Equations with Multiple Arguments
From [49] pages 213–217. Consider the functional equation,
which is a generalization of Cauchy’s linear functional Equation (A1) to several arguments. Letting gives
which is the Cauchy linear functional equation having solution , where is assumed to be continuous or at least measurable majorant. Similarly,
and if you consider
and, as is arbitrary, we could have let such that in general
formulating the general solution.
Appendix A.3. Relative Entropy
We are interested in the following functional equation:
This is an equation of the form,
where , , , and . First, assume all q and p are greater than zero. Then, substitute: and and let and so on such that
which is of the form of (A15). The general solution for F is therefore
which means the general solution for is
In such a case, when for some value we may let , where is as close to zero as we could possibly want—the trivial general solution is saturated by the special case when from DC1’. Here, we return to the text.
Appendix A.4. Matrix Functional Equations
(This derivation is implied in [49] pages 347–349). First, consider a Cauchy matrix functional equation,
where and are square matrices. Rewriting the matrix functional equation in terms of its components gives
and is now in the form of (A15), and, therefore, the solution is
for . We find it convenient to introduce super indices, and such that the component equation becomes
and resembles the solution for the linear transformation of a vector from [49]. In general, we will be discussing matrices which stem from tensor products of density matrices. In this situation, can be thought of as index tensor or a matrix where is the product of the ranks of the matrices in the tensor product or even as a vector of length . In such a case, we may abuse the super index notation where A and B lump together the appropriate number of indices such that (A28) is the form of the solution for the components in general. The matrix form of the general solution is
where is a constant super-operator having components .
Appendix A.5. Quantum Relative Entropy
The functional equation of interest is
These density matrices are Hermitian, positive semi-definite, have positive eigenvalues, and are not equal to . Because every invertible matrix can be expressed as the exponential of some other matrix, we can substitute , and so on for all four density matrices giving,
Now, we use the following identities for Hermitian matrices:
and
to recast the functional equation as,
Letting , and the like, gives
This functional equation is of the form
which has the general solution
analogous to (A19), and finally, in general,
where are super-operators having constant coefficients. Here, we return to the text.
Appendix B. Spin Example
Consider an arbitrarily mixed prior (in the spin-z basis for convenience) with ,
and a general Hermitian matrix in the spin-1/2 Hilbert space,
having a known expectation value,
Maximizing the entropy with respect to this general expectation value and normalization is:
which after varying gives the solution,
Letting
gives
where is the diagonalized matrix of having real eigenvalues. They are
due to the quadratic formula, where explicitly:
and
Because and are real, is real and . The normalization constraint specifies the Lagrange multiplier Z,
so . The expectation value constraint specifies the Lagrange multiplier ,
which becomes
or
This equation is monotonic in the Lagrange multiplier, and therefore the multiplier is uniquely specified by the value of c. Ultimately, this is a consequence of the concavity of the entropy. The specific proof of (A52)’s monotonicity is below:
Proof.
For to be Hermitian, is Hermitian and is real—furthermore, because is real and thus . Because is quadratic in and positive, it may be written in vertex form,
where , , and are the coordinates of the minimum of . Notice that the form of (A52) is
Making the change of variables centers the function such that is symmetric about . We can then write
where the derivative has been computed. Because is positive, symmetric, and monotonically increasing on the (symmetric) half-plane (for greater than or less than zero), is also positive and symmetric, but it is unclear whether is strictly monotonic in the half-plane or not. We may restate
We are now in a convenient position to perform the derivative test for monotonic functions:
because , and therefore are all . The function of interest is therefore monotonic for all , and therefore it is monotonic for all , completing the proof that there exists a unique real Lagrange multiplier in (A52).
Although (A52) is monotonic in , it is seemingly a transcendental equation. This can be solved graphically for the given values , i.e., given the Hermitian matrix and its expectation value are specified. Equation (A52) and the eigenvalues take a simpler form when because, in this instance, and commutes universally so it may be factored out of the exponential in (A44). ☐
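For completeness, Equation (A52) can also be solved numerically rather than graphically. The following sketch assumes a concrete prior weight $p$ and takes the Hermitian operator to be $\hat{\sigma}_x$ with imposed expectation value $c$; the monotonicity proved above guarantees a unique root for the bracketing solver.

```python
import numpy as np
from scipy.linalg import expm, logm
from scipy.optimize import brentq

p = 0.8                                    # prior mixing weight (illustrative)
phi = np.diag([p, 1.0 - p])                # prior: p |z+><z+| + (1 - p) |z-><z-|
A = np.array([[0.0, 1.0], [1.0, 0.0]])     # Hermitian operator: sigma_x (does not commute with phi)
c = 0.4                                    # imposed expectation value <A> = c

def expectation(lam):
    """<A> evaluated in the posterior rho(lam) = exp(log(phi) + lam * A) / Z."""
    R = expm(logm(phi) + lam * A)
    return (np.trace(A @ R) / np.trace(R)).real

# Monotonicity of <A> in lambda (proved above) guarantees a unique root.
lam = brentq(lambda l: expectation(l) - c, -25.0, 25.0)
R = expm(logm(phi) + lam * A)
rho = R / np.trace(R).real

print(lam)                                 # the unique real Lagrange multiplier
print(np.trace(A @ rho).real)              # ~ 0.4, the imposed expectation value
print(rho)                                 # posterior; no longer diagonal in the spin-z basis
```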
References
- Shore, J.E.; Johnson, R.W. Axiomatic derivation of the Principle of Maximum Entropy and the Principle of Minimum Cross-Entropy. IEEE Trans. Inf. Theory 1980, 26, 26–37.
- Shore, J.E.; Johnson, R.W. Properties of Cross-Entropy Minimization. IEEE Trans. Inf. Theory 1981, 27, 472–482.
- Csiszár, I. Why least squares and maximum entropy: An axiomatic approach to inference for linear inverse problems. Ann. Stat. 1991, 19, 2032.
- Skilling, J. The Axioms of Maximum Entropy. In Maximum-Entropy and Bayesian Methods in Science and Engineering; Erickson, G.J., Smith, C.R., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988.
- Skilling, J. Classic Maximum Entropy. In Maximum-Entropy and Bayesian Methods in Science and Engineering; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988.
- Skilling, J. Quantified Maximum Entropy. In Maximum-Entropy and Bayesian Methods in Science and Engineering; Fougére, P.F., Ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1990.
- Caticha, A. Entropic Inference and the Foundations of Physics (Monograph Commissioned by the 11th Brazilian Meeting on Bayesian Statistics—EBEB-2012). Available online: http://www.albany.edu/physics/ACaticha-EIFP-book.pdf (accessed on 30 November 2017).
- Hiai, F.; Petz, D. The Proper Formula for Relative Entropy and its Asymptotics in Quantum Probability. Commun. Math. Phys. 1991, 143, 99–114.
- Petz, D. Characterization of the Relative Entropy of States of Matrix Algebras. Acta Math. Hung. 1992, 59, 449–455.
- Ohya, M.; Petz, D. Quantum Entropy and Its Use; Springer: New York, NY, USA, 1993; ISBN 0-387-54881-5.
- Wilming, H.; Gallego, R.; Eisert, J. Axiomatic Characterization of the Quantum Relative Entropy and Free Energy. Entropy 2017, 19, 241.
- Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630.
- Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003.
- Jaynes, E.T. Information Theory and Statistical Mechanics II. Phys. Rev. 1957, 108, 171–190.
- Balian, R.; Vénéroni, M. Incomplete descriptions, relevant information, and entropy production in collision processes. Ann. Phys. 1987, 174, 229–244.
- Balian, R.; Balazs, N.L. Equiprobability, inference and entropy in quantum theory. Ann. Phys. 1987, 179, 97–144.
- Balian, R. Justification of the Maximum Entropy Criterion in Quantum Mechanics. In Maximum Entropy and Bayesian Methods; Skilling, J., Ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1989; pp. 123–129.
- Balian, R. On the principles of quantum mechanics. Am. J. Phys. 1989, 57, 1019–1027.
- Balian, R. Gain of information in a quantum measurement. Eur. J. Phys. 1989, 10, 208–213.
- Balian, R. Incomplete descriptions and relevant entropies. Am. J. Phys. 1999, 67, 1078–1090.
- Blankenbecler, R.; Partovi, H. Uncertainty, Entropy, and the Statistical Mechanics of Microscopic Systems. Phys. Rev. Lett. 1985, 54, 373–376.
- Blankenbecler, R.; Partovi, H. Quantum Density Matrix and Entropic Uncertainty. In Proceedings of the Fifth Workshop on Maximum Entropy and Bayesian Methods in Applied Statistics, Laramie, WY, USA, 5–8 August 1985.
- Von Neumann, J. Mathematische Grundlagen der Quantenmechanik; Springer: Berlin, Germany, 1932; English Translation: Mathematical Foundations of Quantum Mechanics; Princeton University Press: Princeton, NJ, USA, 1983.
- Ali, S.A.; Cafaro, C.; Giffin, A.; Lupo, C.; Mancini, S. On a Differential Geometric Viewpoint of Jaynes’ Maxent Method and its Quantum Extension. AIP Conf. Proc. 2012, 1443, 120–128.
- Caticha, A. Entropic Dynamics: Quantum Mechanics from Entropy and Information Geometry. Available online: https://arxiv.org/abs/1711.02538 (accessed on 30 November 2017).
- Reginatto, M.; Hall, M.J.W. Quantum-classical interactions and measurement: A consistent description using statistical ensembles on configuration space. J. Phys. Conf. Ser. 2009, 174, 012038.
- Reginatto, M.; Hall, M.J.W. Information geometry, dynamics and discrete quantum mechanics. AIP Conf. Proc. 2013, 1553, 246–253.
- Caves, C.; Fuchs, C.; Schack, R. Quantum probabilities as Bayesian probabilities. Phys. Rev. A 2002, 65, 022305.
- Vanslette, K. The Quantum Bayes Rule and Generalizations from the Quantum Maximum Entropy Method. Available online: https://arxiv.org/abs/1710.10949 (accessed on 30 November 2017).
- Schack, R.; Brun, T.; Caves, C. Quantum Bayes rule. Phys. Rev. A 2001, 64, 014305.
- Korotkov, A. Continuous quantum measurement of a double dot. Phys. Rev. B 1999, 60, 5737–5742.
- Korotkov, A. Selective quantum evolution of a qubit state due to continuous measurement. Phys. Rev. B 2000, 63, 115403.
- Jordan, A.; Korotkov, A. Qubit feedback and control with kicked quantum nondemolition measurements: A quantum Bayesian analysis. Phys. Rev. B 2006, 74, 085307.
- Hellmann, F.; Kamiński, W.; Kostecki, P. Quantum collapse rules from the maximum relative entropy principle. New J. Phys. 2016, 18, 013022.
- Warmuth, M. A Bayes Rule for Density Matrices. In Advances in Neural Information Processing Systems 18, Proceedings of the Neural Information Processing Systems Conference, Montréal, QC, Canada, 7–12 December 2005; Neural Information Processing Systems Foundation, Inc.: La Jolla, CA, USA, 2015.
- Warmuth, M.; Kuzmin, D. A Bayesian Probability Calculus for Density Matrices. Mach. Learn. 2010, 78, 63–101.
- Tsuda, K. Machine learning with quantum relative entropy. J. Phys. Conf. Ser. 2009, 143, 012021.
- Giffin, A.; Caticha, A. Updating Probabilities. Presented at the 26th International Workshop on Bayesian Inference and Maximum Entropy Methods (MaxEnt 2006), Paris, France, 8–13 July 2006.
- Wang, Z.; Busemeyer, J.; Atmanspacher, H.; Pothos, E. The Potential of Using Quantum Theory to Build Models of Cognition. Top. Cogn. Sci. 2013, 5, 672–688.
- Giffin, A. Maximum Entropy: The Universal Method for Inference. Ph.D. Thesis, University at Albany (SUNY), Albany, NY, USA, 2008.
- Caticha, A. Toward an Informational Pragmatic Realism. Minds Mach. 2014, 24, 37–70.
- Umegaki, H. Conditional expectation in an operator algebra, IV (entropy and information). Kōdai Math. Sem. Rep. 1962, 14, 59–85.
- Uhlmann, A. Relative entropy and the Wigner-Yanase-Dyson-Lieb concavity in an interpolation theory. Commun. Math. Phys. 1977, 54, 21–32.
- Schumacher, B.; Westmoreland, M. Relative entropy in quantum information theory. In Proceedings of the AMS Special Session on Quantum Information and Computation, Washington, DC, USA, 19–21 January 2000.
- Suzuki, M. On the Convergence of Exponential Operators—The Zassenhaus Formula, BCH Formula and Systematic Approximants. Commun. Math. Phys. 1977, 57, 193–200.
- Horn, A. Eigenvalues of sums of Hermitian matrices. Pac. J. Math. 1962, 12, 225–241.
- Bhatia, R. Linear Algebra to Quantum Cohomology: The Story of Alfred Horn’s Inequalities. Am. Math. Mon. 2001, 108, 289–318.
- Knutson, A.; Tao, T. Honeycombs and Sums of Hermitian Matrices. Not. AMS 2001, 48, 175–186.
- Aczél, J. Lectures on Functional Equations and Their Applications; Academic Press Inc.: New York, NY, USA, 1966; Volume 19, pp. 31–44, 141–145, 213–217, 301–302, 347–349.
- Darboux, G. Sur le théorème fondamental de la géométrie projective. Math. Ann. 1880, 17, 55–61.