Entropic Updating of Probability and Density Matrices

We find that the standard relative entropy and the Umegaki entropy are designed for the purpose of inferentially updating probability distributions and density matrices, respectively. Both entropies are derived in parallel from the same set of inferentially guided design criteria. This formulates a quantum maximum entropy method for the purpose of inferring density matrices in the absence of complete information in Quantum Mechanics.


Introduction
We design an inferential updating procedure for probability distributions and density matrices such that inductive inferences may be made. The inferential updating tools found in this derivation take the form of the standard and quantum relative entropy functionals, and thus we find these functionals are designed for the purpose of updating probability distributions and density matrices, respectively. Design derivations which found the entropy to be a tool for inference originally required five design criteria (DC) [1,2,3]; this was reduced to four in [4,5,6], and then to three in [7]. We reduce the number of required DC to two while also providing the first design derivation of the quantum relative entropy, using the same design criteria and inferential principles in both instances.
The designed quantum relative entropy takes the form of Umegaki's quantum relative entropy, and thus it has the "proper asymptotic form of the relative entropy in quantum (mechanics)" [8,9,10]. Recently, [11] gave an axiomatic characterization of the quantum relative entropy that "uniquely determines the quantum relative entropy". Our derivation differs from theirs, again in that we design the quantum relative entropy for a purpose, but also in that our DCs are imposed on what turns out to be the functional derivative of the quantum relative entropy rather than on the quantum relative entropy itself. The use of a quantum entropy for the purpose of inference has a long history: Jaynes [12,13] invented the notion of the quantum maximum entropy method [14], which was further developed by [15,16,17,18,19,20,21,22] and many others. However, we find the quantum relative entropy, rather than the von Neumann entropy, to be the suitable entropy for updating density matrices. The relevant results of those papers may be recovered using our quantum relative entropy with a suitable uniform prior density matrix.
It should be noted that because the relative entropies were reached by design, they may be interpreted as such: "the relative entropies are tools for updating", which means we no longer need to attach an interpretation ex post facto, such as a measure of disorder or an amount of missing information. In this sense, the relative entropies were built for the purpose of saturating their own interpretation [4,7].
The remainder of the paper is organized as follows: First we discuss some universally applicable principles of inference and motivate the design of an entropy functional able to rank probability distributions. This entropy functional is designed such that it is consistent with inference by applying a few reasonable design criteria, which are guided by the aforementioned principles of inference. Using the same principles of inference and design criteria, we then find the form of the quantum relative entropy suitable for inference. We end with concluding remarks.
Solutions for ρ obtained by maximizing the quantum relative entropy give insight into the Quantum Bayes' Rule in the sense of [23,24,25,26]. This, and a few other applications of the quantum maximum entropy method, will be discussed in a future article.

The Design of Entropic Inference
Inference is the appropriate updating of probability distributions when new information is received. Bayes' rule and Jeffrey's rule are both equipped to handle information in the form of data; however, updating a probability distribution due to the knowledge of an expectation value was realized by Jaynes [12,13,14] through the method of maximum entropy. The two methods of inference were thought to be distinct from one another until the work of [27], which showed Bayes' and Jeffrey's rules to be consistent with the method of maximum entropy when the expectation values are in the form of data [27]. In the spirit of the derivation we will carry on as if the maximum entropy method were not known and show how it may be derived as an application of inference.
Given a probability distribution ϕ(x) over a general set of propositions x ∈ X, it is self-evident that if new information is learned, we are entitled to assign a new probability distribution ρ(x) that somehow reflects this new information while also respecting our prior probability distribution ϕ(x). The main question we must address is: "Given some information, to what posterior probability distribution ρ(x) should we update our prior probability distribution ϕ(x)?", that is, ϕ(x) → ρ(x). (∗) This specifies the problem of inductive inference. Since "information" has many colloquial, yet potentially conflicting, definitions, we remove potential confusion by defining information operationally (∗) as the rationale that causes a probability distribution to change (inspired by and adapted from [7]). Directly from [7]: "Our goal is to design a method that allows a systematic search for the preferred posterior distribution. The central idea, first proposed in [4] is disarmingly simple: to select the posterior first rank all candidate distributions in increasing order of preference and then pick the distribution that ranks the highest. Irrespective of what it is that makes one distribution preferable over another (we will get to that soon enough) it is clear that any ranking according to preference must be transitive: if distribution ρ1 is preferred over distribution ρ2, and ρ2 is preferred over ρ3, then ρ1 is preferred over ρ3. Such transitive rankings are implemented by assigning to each ρ(x) a real number S[ρ], which is called the entropy of ρ, in such a way that if ρ1 is preferred over ρ2, then S[ρ1] > S[ρ2]. The selected distribution (one or possibly many, for there may be several equally preferred distributions) is that which maximizes the entropy functional." Because we wish to update from prior distributions ϕ to posterior distributions ρ by ranking, the entropy functional S[ρ, ϕ] is a real function of both ϕ and ρ. In the absence of new information, there is no available rationale to prefer any ρ to the original ϕ, and therefore the relative entropy should be designed such that the selected posterior is equal to the prior ϕ (in the absence of new information). The prior information encoded in ϕ(x) is valuable and we should not change it unless we are informed otherwise. Due to our definition of information, and our desire for objectivity, we state the predominant guiding principle for inductive inference:

The Principle of Minimal Updating (PMU): A probability distribution should only be updated to the extent required by the new information.
This simple statement provides the foundation for inference [7]. If the updating of probability distributions is to be done objectively, then possibilities should not be needlessly ruled out or suppressed. Being informationally stingy, that we should only update probability distributions when the information requires it, pushes inductive inference toward objectivity. Thus using the PMU helps formulate a pragmatic (and objective) procedure for making inferences using (informationally) subjective probability distributions [28].
This method of inference is only as universal and general as its ability to apply equally well to any specific inference problem. The notion of "specificity" is the notion of statistical independence; a special case is only special in that it is separable from other special cases. The notion that systems may be "sufficiently independent" plays a central and deep-seated role in science, and the idea that some things can be neglected and that not everything matters is implemented by imposing criteria that tell us how to handle independent systems [7]. Ironically, the property universally shared by all specific inference problems is their ability to be independent of one another. Thus, a universal inference scheme based on the PMU permits the Properties of Independence (PI):

Subdomain Independence: When information is received about one set of propositions, it should not affect or change the state of knowledge (probability distribution) of the other propositions (else information was also received about them too); and,

Subsystem Independence: When two systems are a priori believed to be independent and we only receive information about one, then the state of knowledge of the other system remains unchanged.
The PIs are special cases of the PMU that ultimately take the form of design criteria in the design derivation. The process of constraining the form of S[ρ, ϕ] by imposing design criteria may be viewed as a process of eliminative induction: after sufficient constraining, a single form for the entropy remains. Thus, the justification behind the surviving entropy is not that it leads to demonstrably correct inferences, but rather, that all other candidate entropies demonstrably fail to perform as desired [7]. Rather than the design criteria instructing one how to update, they instruct in what instances one should not update. That is, rather than justifying one way to skin a cat over another, we tell you when not to skin it, which is operationally unique (namely, you don't do it), luckily enough for the cat.

The Design Criteria and the Standard Relative Entropy
The following design criteria (DC), guided by the PMU, are imposed and formulate the standard relative entropy as a tool for inference. The form of this presentation is inspired by [7].

DC1: Subdomain Independence
We keep the DC1 from [7] and review it below. DC1 imposes the first instance of when one should not update: the Subdomain PI. Suppose the information to be processed does not refer to a particular subdomain D of the space X of x's. In the absence of new information about D, the PMU insists we do not change our minds about probabilities that are conditional on D. Thus, we design the inference method so that ϕ(x|D), the prior probability of x conditional on x ∈ D, is not updated, and therefore the selected conditional posterior is

P(x|D) = ϕ(x|D). (1)

(The notation will be as follows: we denote priors by ϕ, candidate posteriors by lower case ρ, and the selected posterior by upper case P.) We emphasize that the point is not that we make the unwarranted assumption that keeping ϕ(x|D) unchanged is guaranteed to lead to correct inferences. It need not; induction is risky. The point is, rather, that in the absence of any evidence to the contrary there is no reason to change our minds and the prior information takes priority.

DC1 Implementation:
Consider the set of microstates xi ∈ X belonging to either of two non-overlapping domains, D or its complement D′, such that X = D ∪ D′ and ∅ = D ∩ D′. For convenience let ρ(xi) = ρi. Consider the following constraints,

ρ(D) = Σ_{i∈D} ρi and ρ(D′) = Σ_{i∈D′} ρi, (2)

such that ρ(D) + ρ(D′) = 1, and the following "local" expectation value constraints over D and D′, respectively,

⟨A⟩ = Σ_{i∈D} ρi Ai and ⟨A′⟩ = Σ_{i∈D′} ρi A′i. (3)

As we are searching for the candidate distribution which maximizes S while obeying (2) and (3), we maximize the entropy S ≡ S[ρ, ϕ] with respect to these expectation value constraints using the Lagrange multiplier method,

0 = δ( S − λ[Σ_{i∈D} ρi − ρ(D)] − μ[Σ_{i∈D} ρi Ai − ⟨A⟩] − λ′[Σ_{i∈D′} ρi − ρ(D′)] − μ′[Σ_{i∈D′} ρi A′i − ⟨A′⟩] ),

and thus the entropy is maximized when the following differential relationships hold:

δS/δρi = λ + μAi for i ∈ D, (4)
δS/δρi = λ′ + μ′A′i for i ∈ D′. (5)

Equations (2)-(5) are n + 4 equations we must solve to find the four Lagrange multipliers {λ, λ′, μ, μ′} and the n probability values {ρi}.
If the subdomain constraint DC1 is imposed in the most restrictive case, then it will hold in general. The most restrictive case requires splitting X into a set of {Di} domains such that each Di singularly includes one microstate xi. This gives,

δS/δρi = λi + μiAi for each i.

Because the entropy S = S[ρ1, ρ2, ...; ϕ1, ϕ2, ...] is a function of the posterior and prior probability of each microstate, its variational derivative is also a function of said probabilities in general,

δS/δρi ≡ φi(ρ1, ρ2, ...; ϕ1, ϕ2, ...).

DC1 is imposed by constraining the form φi(ρ1, ρ2, ...; ϕ1, ϕ2, ...) = φi(ρi; ϕ1, ϕ2, ...) to ensure that changes in Ai → Ai + δAi have no influence over the value of ρj in domain Dj for i ≠ j. If there is no new information about propositions in Dj, its distribution should remain equal to ϕj by the PMU. We further restrict φi such that an arbitrary variation of ϕj → ϕj + δϕj (a change in the prior state of knowledge of microstate j) has no effect on ρi for i ≠ j, and therefore DC1 imposes φi = φi(ρi, ϕi), as guided by the PMU. At this point it is easy to generalize the analysis to continuous microstates such that the indices become continuous, i → x, sums become integrals, and discrete probabilities become probability densities, ρi → ρ(x).

Remark:
We are designing the entropy for the purpose of ranking posterior probability distributions (for the purpose of inference); however, the highest ranked distribution is found by setting the variational derivative of S[ρ, ϕ] equal to the variations of the expectation value constraints through the Lagrange multiplier method,

δS/δρ(x) = Σi αi Ai(x),

where the αi are the Lagrange multipliers of the respective expectation value constraints. Therefore, the real quantity of interest is δS/δρ(x) rather than the specific form of S[ρ, ϕ]. All forms of S[ρ, ϕ] that give the correct form of δS/δρ(x) are equally valid for the purpose of inference. Thus, every design criterion may be imposed on the variational derivative of the entropy rather than on the entropy itself, which is what we do. When maximizing the entropy, for convenience, we will let

δS/δρ(x) ≡ φx(ρ(x), ϕ(x)),

and further use the shorthand φx(ρ, ϕ) ≡ φx(ρ(x), ϕ(x)) in all cases.

DC1':
In the absence of new information, our new state of knowledge ρ(x) is equal to the old state of knowledge ϕ(x).
This is a special case of DC1, and is implemented differently than in [7]. The PMU is in principle a statement about informational honesty: one should not "jump to conclusions" in light of new information, and in the absence of new information, one should not change their state of knowledge. If no new information is given, the prior probability distribution ϕ(x) does not change; that is, the posterior probability distribution ρ(x) = ϕ(x) is equal to the prior probability. If we maximize the entropy without applying constraints, then DC1' imposes the following condition in this case,

δS/δρ(x) = φx(ρ(x), ϕ(x)) = 0 when ρ(x) = ϕ(x), for all x.

This special case of DC1 and the PMU turns out to be incredibly constraining, as we will see over the course of DC2. Comment: From [7]: If the variable x is continuous, DC1 requires that information referring to points infinitely close but just outside the domain D will have no influence on probabilities conditional on D. This may seem surprising as it may lead to updated probability distributions that are discontinuous. Is this a problem? No.
In certain situations (e.g., physics) we might have explicit reasons to believe that conditions of continuity or differentiability should be imposed and this information might be given to us in a variety of ways. The crucial point, however -and this is a point that we keep and will keep reiterating -is that unless such information is explicitly given we should not assume it. If the new information leads to discontinuities, so be it.

DC2: Subsystem Independence
DC2 imposes the second instance of when one should not update: the Subsystem PI. We emphasize that DC2 is not a consistency requirement. The argument we deploy is not that both the prior and the new information tell us the systems are independent, in which case consistency requires that it should not matter whether the systems are treated jointly or separately. Rather, DC2 refers to a situation where the new information does not say whether the systems are independent or not, but information is given about each subsystem. The updating is being designed so that the independence reflected in the prior is maintained in the posterior by default via the PMU and the second clause of the PI's. [7]

The point is not that when we have no evidence for correlations we draw the firm conclusion that the systems must necessarily be independent. They could indeed have turned out to be correlated and then our inferences would be wrong. Again, induction involves risk. The point is rather that if the joint prior reflected independence and the new evidence is silent on the matter of correlations, then the prior takes precedence. As before, in the case of subdomain independence, the probability distribution should not be updated unless the information requires it. [7]

DC2 Implementation:

Consider a composite system, x = (x1, x2) ∈ X = X1 × X2. Assume that all prior evidence led us to believe the subsystems are independent. This belief is reflected in the prior distribution: if the individual system priors are ϕ1(x1) and ϕ2(x2), then the prior for the whole system is their product ϕ1(x1)ϕ2(x2). Further suppose that new information is acquired such that ϕ1(x1) would by itself be updated to P1(x1) and that ϕ2(x2) would itself be updated to P2(x2). By design, the implementation of DC2 constrains the entropy functional such that in this case, the joint product prior ϕ1(x1)ϕ2(x2) updates to the selected product posterior P1(x1)P2(x2). [7]

The argument below is considerably simplified if we expand the space of probabilities to include distributions that are not necessarily normalized. This does not represent any limitation because a normalization constraint may always be applied. We consider a few special cases below:

Case 1: We receive the extremely constraining information that the posterior distribution for system 1 is completely specified to be P1(x1) while we receive no information at all about system 2. We treat the two systems jointly. Maximize the joint entropy S[ρ(x1, x2), ϕ1(x1)ϕ2(x2)] subject to the following constraint on ρ(x1, x2),

∫dx2 ρ(x1, x2) = P1(x1). (12)

Notice that the probability of each x1 ∈ X1 within ρ(x1, x2) is being constrained to P1(x1) in the marginal. We therefore need one Lagrange multiplier λ1(x1) for each x1 ∈ X1 to tie each value of ∫dx2 ρ(x1, x2) to P1(x1). Maximizing the entropy with respect to this constraint,

0 = δ( S − ∫dx1 λ1(x1)[ ∫dx2 ρ(x1, x2) − P1(x1) ] ),

requires that

λ1(x1) = φx1x2(ρ(x1, x2), ϕ1(x1)ϕ2(x2))

for arbitrary variations of ρ(x1, x2). By design, DC2 is implemented by requiring ϕ1ϕ2 → P1ϕ2 in this case; therefore,

λ1(x1) = φx1x2(P1(x1)ϕ2(x2), ϕ1(x1)ϕ2(x2)).

This equation must hold for all choices of x2 and all choices of the prior ϕ2(x2), as λ1(x1) is independent of x2. Suppose we had chosen a different prior ϕ′2(x2) = ϕ2(x2) + δϕ2(x2) that disagrees with ϕ2(x2). For all x2 and δϕ2(x2), the multiplier λ1(x1) remains unchanged as it constrains the independent ρ(x1) → P1(x1). This means that any dependence that the right hand side might potentially have had on x2 and on the prior ϕ2(x2) must cancel out.
This means that

λ1(x1) = φx1x2(P1(x1)ϕ2(x2), ϕ1(x1)ϕ2(x2)) = f(P1(x1), ϕ1(x1), x1).

Since ϕ2 is arbitrary in f, suppose further that we choose a constant prior set equal to one, ϕ2(x2) = 1; therefore f(P1(x1)ϕ2(x2), ϕ1(x1)ϕ2(x2), x1) = f(P1(x1), ϕ1(x1), x1) in general. This gives,

λ1(x1) = φx1x2(P1(x1), ϕ1(x1)).

The left hand side does not depend on x2, and therefore neither does the right hand side. An argument exchanging systems 1 and 2 gives a similar result.

Case 1 - Conclusion: When system 2 is not updated, the dependence on ϕ2 and x2 drops out,

φx1x2(P1(x1)ϕ2(x2), ϕ1(x1)ϕ2(x2)) = φx1(P1(x1), ϕ1(x1)), (19)

and vice-versa when system 1 is not updated,

φx1x2(ϕ1(x1)P2(x2), ϕ1(x1)ϕ2(x2)) = φx2(P2(x2), ϕ2(x2)). (20)

As we seek the general functional form of φx1x2, and because the x2 dependence drops out of (19) and the x1 dependence drops out of (20) for arbitrary ϕ1, ϕ2 and ϕ12 = ϕ1ϕ2, the explicit coordinate dependence in φ consequently drops out of both such that φx1x2 → φ, as φ = φ(ρ(x), ϕ(x)) must only depend on the coordinates through the probability distributions themselves.
(As a double check, explicit coordinate dependence was included in the following computations but inevitably dropped out due to the form of the functional equations and DC1'. By the argument above, and for simplicity, we drop the explicit coordinate dependence in φ here.)

Case 2: Now consider a different special case in which the marginal posterior distributions for systems 1 and 2 are both completely specified to be P1(x1) and P2(x2), respectively. Maximize the joint entropy S[ρ(x1, x2), ϕ1(x1)ϕ2(x2)] subject to the following constraints on ρ(x1, x2),

∫dx2 ρ(x1, x2) = P1(x1) and ∫dx1 ρ(x1, x2) = P2(x2).
Again, this is one constraint for each value of x1 and one constraint for each value of x2, which therefore require the separate multipliers μ1(x1) and μ2(x2). Maximizing S with respect to these constraints,

0 = δ( S − ∫dx1 μ1(x1)[ ∫dx2 ρ(x1, x2) − P1(x1) ] − ∫dx2 μ2(x2)[ ∫dx1 ρ(x1, x2) − P2(x2) ] ),

leads to

μ1(x1) + μ2(x2) = φ(ρ(x1, x2), ϕ1(x1)ϕ2(x2)).

The updating is being designed so that ϕ1ϕ2 → P1P2, as the independent subsystems are being updated based on expectation values which are silent about correlations. DC2 thus imposes,

μ1(x1) + μ2(x2) = φ(P1(x1)P2(x2), ϕ1(x1)ϕ2(x2)). (25)

Write (25) as,

μ1(x1) = φ(P1(x1)P2(x2), ϕ1(x1)ϕ2(x2)) − μ2(x2).

The left hand side is independent of x2, so we can perform a trick similar to the one we used before. Suppose we had chosen a different constraint P′2(x2) that differs from P2(x2) and a new prior ϕ′2(x2) that differs from ϕ2(x2), except at the value x̄2. At the value x̄2, the multiplier μ1(x1) remains unchanged for all P′2(x2), ϕ′2(x2), and thus for all x2. This means that any dependence that the right hand side might potentially have had on x2 and on the choice of P2(x2), ϕ2(x2) must cancel out, leaving μ1(x1) unchanged. That is, the Lagrange multiplier μ2(x2) "pushes out" these dependences such that

μ1(x1) = φ(P1(x1)P2(x2), ϕ1(x1)ϕ2(x2)) − μ2(x2) = g(P1(x1), ϕ1(x1)).

Because g(P1(x1), ϕ1(x1)) is independent of arbitrary variations of P2(x2) and ϕ2(x2) on the LHS above, it is satisfied equally well for all choices. The form of g = φ(P1(x1), ϕ1(x1)) is apparent if P2(x2) = ϕ2(x2) = 1, as then μ2(x2) = 0, similar to Case 1 as well as DC1'. Therefore, the Lagrange multiplier is

μ1(x1) = φ(P1(x1), ϕ1(x1)).

A similar analysis carried out for μ2(x2) leads to μ2(x2) = φ(P2(x2), ϕ2(x2)).
Case 2 - Conclusion: Substituting back into (25) gives us a functional equation for φ,

φ(P1(x1)P2(x2), ϕ1(x1)ϕ2(x2)) = φ(P1(x1), ϕ1(x1)) + φ(P2(x2), ϕ2(x2)).

The general solution to this functional equation is derived in the Appendix, Section 6.3, and is

φ(ρ(x), ϕ(x)) = a1 ln(ρ(x)) + a2 ln(ϕ(x)),

where a1, a2 are constants. The constants are fixed by using DC1'. Letting ρ1(x1) = ϕ1(x1) = ϕ1 gives φ(ϕ, ϕ) = 0 by DC1', and therefore

a1 ln(ϕ) + a2 ln(ϕ) = (a1 + a2) ln(ϕ) = 0,

so we are forced to conclude a1 = −a2 for arbitrary ϕ. Letting a1 ≡ A = −|A|, such that we are really maximizing the entropy (although this is purely aesthetic), gives the general form of φ to be

φ(ρ(x), ϕ(x)) = −|A| ln( ρ(x)/ϕ(x) ).

As long as A ≠ 0, the value of A is arbitrary as it can always be absorbed into the Lagrange multipliers.
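A quick numerical sanity check (our own, with the arbitrary illustrative choice |A| = 2) that the designed form φ(ρ, ϕ) = −|A| ln(ρ/ϕ) satisfies both the Case 2 functional equation and the DC1' condition φ(ϕ, ϕ) = 0:

import numpy as np

absA = 2.0
def phi_func(r, q):                        # phi(rho, varphi) = -|A| ln(rho/varphi)
    return -absA * np.log(r / q)

rng = np.random.default_rng(1)
r1, r2, q1, q2 = rng.uniform(0.1, 1.0, size=4)
print(np.isclose(phi_func(r1 * r2, q1 * q2), phi_func(r1, q1) + phi_func(r2, q2)))  # True
print(np.isclose(phi_func(q1, q1), 0.0))                                            # True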
The general form of the entropy designed for the purpose of inference of ρ is found by integrating φ, and therefore,

S(ρ(x), ϕ(x)) = −|A| ∫dx ρ(x) ln( ρ(x)/ϕ(x) ) + |A| ∫dx ρ(x) + C[ϕ]. (34)

The constant in ρ, C[ϕ], will always drop out when varying ρ. The apparent extra term, |A|∫ρ(x)dx, from the integration cannot be dropped while simultaneously satisfying DC1', which requires ρ(x) = ϕ(x) in the absence of constraints or when there is no change to one's information. In previous versions where the integration term |A|∫ρ(x)dx is dropped, one obtains solutions like ρ(x) = e^(−1) ϕ(x) (independent of whether ϕ(x) was previously normalized or not) in the absence of new information. Obviously this factor can be taken care of by normalization, and in this way both forms of the entropy are equally valid; however, this form of the entropy better adheres to the PMU through DC1'. Given that we may regularly impose normalization, we may drop the extra |A|∫ρ(x)dx term and C[ϕ]. For convenience then, (34) becomes

S*(ρ(x), ϕ(x)) = −|A| ∫dx ρ(x) ln( ρ(x)/ϕ(x) ),

which is a special case applicable when the normalization constraint is being applied. Given normalization is applied, the same selected posterior ρ(x) maximizes both S(ρ(x), ϕ(x)) and S*(ρ(x), ϕ(x)), and the star notation may be dropped.
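As a simple numerical check of the DC1' discussion above (our own sketch, not from the paper, with |A| = 1 and C[ϕ] = 0), the following Python snippet maximizes a discretized version of (34) with no constraints at all and recovers ρ = ϕ, whereas dropping the extra |A|∫ρ(x)dx term returns ρ = e^(−1) ϕ, exactly as described.

import numpy as np
from scipy.optimize import minimize

phi = np.array([0.1, 0.2, 0.3, 0.4])       # an arbitrary normalized prior

def S_full(rho):                           # -sum rho ln(rho/phi) + sum rho
    return -np.sum(rho * np.log(rho / phi)) + np.sum(rho)

def S_dropped(rho):                        # extra integration term dropped
    return -np.sum(rho * np.log(rho / phi))

x0 = np.full_like(phi, 0.25)
bounds = [(1e-12, None)] * len(phi)        # rho_i > 0; normalization NOT imposed

rho_full = minimize(lambda r: -S_full(r), x0, bounds=bounds).x
rho_drop = minimize(lambda r: -S_dropped(r), x0, bounds=bounds).x

print(rho_full / phi)                      # ~1      -> rho = phi (DC1' satisfied)
print(rho_drop / phi)                      # ~0.368  -> rho = exp(-1) phi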
Remarks: It can be seen that the relative entropy is invariant under coordinate transformations. This implies that a system of coordinates carries no information, and that it is the "character" of the probability distributions that is being ranked against one another rather than the specific set of propositions or microstates they describe.
The general solution of the maximum entropy procedure with respect to N linear constraints in ρ, ⟨Ai(x)⟩, and normalization gives a canonical-like selected posterior probability distribution,

ρ(x) = ϕ(x) exp( Σi αi Ai(x) ) / Z, (36)

where Z is the normalization factor and the αi are the Lagrange multipliers of the respective constraints. The positive constant |A| may always be absorbed into the Lagrange multipliers, so we may let it equal unity without loss of generality. DC1' is fully realized when we maximize with respect to a constraint on ρ(x) that is already satisfied by ϕ(x): for example, if the constrained value of ⟨x²⟩ = ∫dx x² ρ(x) happens to equal ∫dx x² ϕ(x), then its Lagrange multiplier is forcibly zero, α1 = 0 (as can be seen in (36) using (34)), in agreement with Jaynes. This gives the expected result ρ(x) = ϕ(x), as there is no new information. Our design has arrived at a refined maximum entropy method [12] as a universal probability updating procedure [27].
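A minimal numerical sketch (ours; the prior, the single constrained quantity A(x), and the target value are illustrative assumptions) of the canonical update (36): the posterior is ϕ(x) exp(αA(x))/Z, and the multiplier α is tuned by a root search so that the posterior reproduces the required expectation value. The final line also checks the DC1' behavior: constraining ⟨A⟩ to its prior value returns α ≈ 0 and ρ = ϕ.

import numpy as np
from scipy.optimize import brentq

phi = np.array([0.1, 0.2, 0.3, 0.4])       # prior distribution
A = np.array([0.0, 1.0, 2.0, 3.0])         # constrained quantity A(x)
A_target = 1.5                             # required expectation value <A>

def posterior(alpha):
    w = phi * np.exp(alpha * A)
    return w / w.sum()                     # normalization constraint applied

def gap(alpha):
    return posterior(alpha) @ A - A_target

alpha = brentq(gap, -50, 50)               # <A> is monotonic in alpha -> unique root
rho = posterior(alpha)
print(alpha, rho, rho @ A)                 # rho @ A ~ 1.5

print(brentq(lambda a: posterior(a) @ A - phi @ A, -50, 50))   # ~0, so rho = phi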

The Design of the Quantum Relative Entropy
In the last section we assumed that the universe of discourse (the set of relevant propositions or microstates) X = A × B × ... was known. In quantum physics things are a bit more ambiguous because many probability distributions, or many experiments, can be associated to a given density matrix. In this sense it is helpful to think of density matrices as "placeholders" for probability distributions rather than as probability distributions themselves. As any probability distribution obtained from a given density matrix, ρ(·) = Tr(|·⟩⟨·| ρ), may be ranked using the standard relative entropy, it is unclear why we would choose one universe of discourse over another. In light of this, and so that no single universe of discourse is given preferential treatment, we consider ranking entire density matrices against one another. Probability distributions of interest may then be found from the selected posterior density matrix. This moves our universe of discourse from sets of propositions, X, to Hilbert space(s), X → H. When the objects of study are quantum systems, we desire an objective procedure to update from a prior density matrix φ to a posterior density matrix ρ. We will apply the same intuition used for ranking probability distributions (Section 2) and implement the PMU, PI, and design criteria for the ranking of density matrices. We therefore find the quantum relative entropy S(ρ, φ) to be designed for the purpose of inferentially updating density matrices.

Designing the Quantum Relative Entropy
In this section we design the quantum relative entropy using the same inferentially guided design criteria as were used in the standard relative entropy.

DC1: Subdomain Independence
The goal is to design a function S(ρ, φ) which is able to rank density matrices. This insists that S(ρ, φ) be a real scalar-valued function of the posterior ρ and prior φ density matrices, which we will call the quantum relative entropy or simply the entropy. An arbitrary variation of the entropy with respect to ρ may be written as

δS = Tr( (δS/δρ^T) δρ ).

We wish to maximize this entropy with respect to expectation value constraints on ρ, such as ⟨Â⟩ = Tr(Âρ). Using the Lagrange multiplier method to maximize the entropy with respect to ⟨Â⟩ and normalization means setting the variation equal to zero,

0 = δ( S(ρ, φ) − λ[Tr(Âρ) − ⟨Â⟩] − α[Tr(ρ) − 1] ),

where λ and α are the Lagrange multipliers for the respective constraints. Because S(ρ, φ) is a real number, we inevitably require δS to be real; without imposing this directly, we find that requiring δS to be real requires ρ and Â to be Hermitian. At this point, it is simpler to allow for arbitrary variations of ρ. For these arbitrary variations, the variational derivative of S must satisfy

δS/δρ^T = λÂ + α1

at the maximum. As in the earlier remark, all forms of S which give the correct form of δS(ρ, φ)/δρ^T under variation are equally valid for the purpose of inference. For notational convenience we let

δS(ρ, φ)/δρ^T ≡ φ(ρ, φ),

which is a matrix valued function of the posterior and prior density matrices. The form of φ(ρ, φ) is already "local" in ρ, so we do not need to constrain it further as we did in the original DC1.

DC1':
In the absence of new information, the new state ρ is equal to the old state φ.
Applied to the ranking of density matrices, in the absence of new information, the density matrix φ should not change; that is, the posterior density matrix ρ = φ is equal to the prior density matrix. Maximizing the entropy without applying any constraints gives

δS/δρ^T = φ(ρ, φ) = 0,

and therefore DC1' imposes the following condition in this case,

φ(φ, φ) = 0.

As in the original DC1', if φ is known to obey some expectation value constraint ⟨Â⟩, and one goes out of their way to constrain ρ to that same expectation value and nothing else, then it follows from the PMU that ρ = φ, as no information has been gained. This is not imposed directly, but can be verified later.

DC2: Subsystem Independence
The discussion of DC2 is the same as for the standard relative entropy DC2: it is not a consistency requirement, and the updating is designed so that the independence reflected in the prior is maintained in the posterior by default via the PMU, when the information provided is silent about correlations.

DC2 Implementation:
Consider a composite system living in the Hilbert space H = H1 ⊗ H2. Assume that all prior evidence led us to believe the systems were independent. This is reflected in the prior density matrix: if the individual system priors are φ1 and φ2, then the joint prior for the whole system is φ1 ⊗ φ2. Further suppose that new information is acquired such that φ1 would by itself be updated to ρ1 and that φ2 would itself be updated to ρ2. By design, the implementation of DC2 constrains the entropy functional such that in this case, the joint product prior density matrix φ1 ⊗ φ2 updates to the product posterior ρ1 ⊗ ρ2, so that inferences about one subsystem do not affect inferences about the other.
The argument below is considerably simplified if we expand the space of density matrices to include density matrices that are not necessarily normalized. This does not represent any limitation because normalization can always be achieved as one additional constraint. We consider a few special cases below:

Case 1: We receive the extremely constraining information that the posterior for system 1 is completely specified to be ρ1 while we receive no information about system 2 at all. We treat the two systems jointly. Maximize the joint entropy S[ρ12, φ1 ⊗ φ2] subject to the following constraint on ρ12,

Tr2(ρ12) = ρ1.
Notice that all of the N² elements in H1 of Tr2(ρ12) are being constrained. We therefore need a Lagrange multiplier which spans H1, and therefore it is a square matrix λ1. This is readily seen by observing the component form expressions of the Lagrange multipliers, (λ1)ij = λij. Maximizing the entropy with respect to this H2-independent constraint,

0 = δ( S − Σij (λ1)ij [Tr2(ρ12) − ρ1]ij ) = δ( S − Tr1( λ1^T [Tr2(ρ12) − ρ1] ) ),

and reexpressing this with the transpose, (λ1)ij = (λ1^T)ji, gives

0 = δ( S − Tr1( λ1 [Tr2(ρ12) − ρ1] ) ),

where we have relabeled λ1^T → λ1, for convenience, as the names of the Lagrange multipliers are arbitrary. For arbitrary variations of ρ12, we therefore have,

λ1 ⊗ 1₂ = φ(ρ12, φ1 ⊗ φ2).

DC2 is implemented by requiring φ1 ⊗ φ2 → ρ1 ⊗ φ2, such that the function φ is designed to reflect subsystem independence in this case; therefore, we have

λ1 ⊗ 1₂ = φ(ρ1 ⊗ φ2, φ1 ⊗ φ2).

This equation must hold for all choices of the independent prior φ2 in H2. Suppose we had chosen a different prior φ′2 = φ2 + δφ2. For all δφ2 the LHS λ1 ⊗ 1₂ remains unchanged. This means that any dependence that the right hand side might potentially have had on φ2 must cancel out, meaning,

λ1 ⊗ 1₂ = φ(ρ1 ⊗ φ2, φ1 ⊗ φ2) = f(ρ1, φ1) ⊗ 1₂.

Since φ2 is arbitrary, suppose further that we choose a unit prior, φ2 = 1₂, and note that ρ1 ⊗ 1₂ and φ1 ⊗ 1₂ are block diagonal in H2. Because the LHS is block diagonal in H2, the RHS is block diagonal in H2, and because the function φ is understood to be a power series expansion in its arguments,

φ(ρ1 ⊗ 1₂, φ1 ⊗ 1₂) = φ(ρ1, φ1) ⊗ 1₂.

This gives λ1 ⊗ 1₂ = φ(ρ1, φ1) ⊗ 1₂, and therefore the 1₂ factors out and λ1 = φ(ρ1, φ1). A similar argument exchanging systems 1 and 2 shows λ2 = φ(ρ2, φ2) in this case.

Case 1 - Conclusion:
The analysis leads us to conclude that when system 2 is not updated, the dependence on φ2 drops out,

φ(ρ1 ⊗ φ2, φ1 ⊗ φ2) = φ(ρ1, φ1) ⊗ 1₂,

and similarly,

φ(φ1 ⊗ ρ2, φ1 ⊗ φ2) = 1₁ ⊗ φ(ρ2, φ2).

Case 2: Now consider a different special case in which the marginal posteriors for systems 1 and 2 are both completely specified to be ρ1 and ρ2, respectively. Maximize the joint entropy, S[ρ12, φ1 ⊗ φ2], subject to the following constraints on ρ12,

Tr2(ρ12) = ρ1 and Tr1(ρ12) = ρ2.
As in Case 1, each of these constraints requires a matrix valued Lagrange multiplier, μ1 spanning H1 and μ2 spanning H2, and maximizing with respect to both constraints gives, for arbitrary variations of ρ12,

μ1 ⊗ 1₂ + 1₁ ⊗ μ2 = φ(ρ12, φ1 ⊗ φ2).

The updating is designed so that φ1 ⊗ φ2 → ρ1 ⊗ ρ2, as the independent subsystems are updated on information that is silent about correlations; DC2 thus imposes,

μ1 ⊗ 1₂ + 1₁ ⊗ μ2 = φ(ρ1 ⊗ ρ2, φ1 ⊗ φ2). (58)

As in the classical Case 2, the multiplier 1₁ ⊗ μ2 "pushes out" the dependence on ρ2 and φ2 such that

μ1 ⊗ 1₂ = φ(ρ1 ⊗ ρ2, φ1 ⊗ φ2) − 1₁ ⊗ μ2 = g(ρ1, φ1) ⊗ 1₂.

Because g(ρ1, φ1) is independent of arbitrary variations of ρ2 and φ2 on the LHS above, it is satisfied equally well for all choices. The form of g(ρ1, φ1) reduces to the form of f(ρ1, φ1) from Case 1 when ρ2 = φ2 = 1₂, and similarly DC1' gives μ2 = 0. Therefore, the Lagrange multiplier is

μ1 = φ(ρ1, φ1).

A similar analysis carried out for μ2 leads to

μ2 = φ(ρ2, φ2).

Case 2 - Conclusion: Substituting back into (58) gives us a functional equation for φ,

φ(ρ1 ⊗ ρ2, φ1 ⊗ φ2) = φ(ρ1, φ1) ⊗ 1₂ + 1₁ ⊗ φ(ρ2, φ2).

The general solution to this matrix valued functional equation is derived in Appendix 6.5, and is

φ(ρ, φ) = Ã ln(ρ) + B̃ ln(φ),

where Ã and B̃ are "super-operators" having constant coefficients and twice the number of indices as ρ and φ, as discussed in the Appendix (i.e., (Ã ln(ρ))ij = Σℓk Aijℓk (ln(ρ))ℓk, and similarly for B̃ ln(φ)).

DC1' imposes,

φ(φ, φ) = Ã ln(φ) + B̃ ln(φ) = 0,

which is satisfied in general when Ã = −B̃, and now,

φ(ρ, φ) = Ã ( ln(ρ) − ln(φ) ).

We may fix the constant Ã by substituting this solution back into the Case 1 conclusion, φ(ρ1 ⊗ φ2, φ1 ⊗ φ2) = φ(ρ1, φ1) ⊗ 1₂, where Ã12 acts on the joint space of 1 and 2 and Ã1, Ã2 act on the single subspaces 1 or 2, respectively. Using the log tensor product identity, ln(ρ1 ⊗ 1₂) = ln(ρ1) ⊗ 1₂, on both sides of the Case 1 conclusion (and noting that one may arbitrarily let ρ2 = φ2, or arbitrarily let ρ1 = φ1, to obtain the corresponding statement for either subsystem) forces Ã12, Ã1, and Ã2 to act as one and the same multiplicative constant A. "Integrating" φ then gives a general form for the quantum relative entropy,

S(ρ, φ) = −|A| Tr( ρ ln(ρ) − ρ ln(φ) ) + |A| Tr(ρ) + C[φ] = −|A| SU(ρ, φ) + |A| Tr(ρ) + C[φ],

where SU(ρ, φ) = Tr( ρ ln(ρ) − ρ ln(φ) ) is Umegaki's form of the relative entropy, the extra |A|Tr(ρ) from the integration is an artifact present for the preservation of DC1', and C[φ] is a constant in the sense that it drops out under arbitrary variations of ρ. This entropy leads to the same inferences as Umegaki's form of the entropy, with the added bonus that ρ = φ in the absence of constraints or changes in information, rather than ρ = e^(−1) φ, which would be given by maximizing Umegaki's form of the entropy. In this sense the extra |A|Tr(ρ) only improves the inference process as it more readily adheres to the PMU through DC1'; however, because SU ≥ 0, we now have S(ρ, φ) ≤ Tr(ρ) + C[φ], which provides little nuisance. In the spirit of this derivation we will keep the Tr(ρ) term, but for all practical purposes of inference, as long as there is a normalization constraint, it plays no role, and we find (letting |A| = 1 and C[φ] = 0),

S*(ρ, φ) = −Tr( ρ ln(ρ) − ρ ln(φ) ),

Umegaki's form of the relative entropy. S*(ρ, φ) is an equally valid entropy because, given normalization is applied, the same selected posterior ρ maximizes both S(ρ, φ) and S*(ρ, φ).
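As a concrete numerical illustration (our own sketch, not from the paper), the following Python snippet evaluates S*(ρ, φ) = −Tr(ρ ln ρ − ρ ln φ) with matrix logarithms, checks that it reduces to the classical −Σ p ln(p/q) for commuting (diagonal) arguments, and checks the DC1' behavior S*(φ, φ) = 0.

import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(rho, phi):
    # S*(rho, phi) = -Tr(rho (ln rho - ln phi)); both arguments positive definite
    return -np.real(np.trace(rho @ (logm(rho) - logm(phi))))

# Commuting example: diagonal rho and phi reduce to the classical expression.
p = np.array([0.7, 0.3]); q = np.array([0.5, 0.5])
print(quantum_relative_entropy(np.diag(p), np.diag(q)), -np.sum(p * np.log(p / q)))  # equal

# Non-commuting example: S* <= 0, with equality when rho = phi (DC1').
phi = np.array([[0.6, 0.2], [0.2, 0.4]])
rho = np.array([[0.5, -0.1], [-0.1, 0.5]])
print(quantum_relative_entropy(rho, phi))   # negative
print(quantum_relative_entropy(phi, phi))   # ~0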

Remarks
Due to the universality and equal application of the PMU, via the same design criteria in both the standard and quantum cases, the quantum relative entropy reduces to the standard relative entropy when [ρ, φ] = 0, or when the experiment being performed, ρ → ρ(a) = Tr(|a⟩⟨a| ρ), is known. The quantum relative entropy we derive has the correct asymptotic form of the standard relative entropy in the sense of [8,9,10]. Further connections will be illustrated in a follow-up article that is concerned with direct applications of the quantum relative entropy. Because the two entropies are derived in parallel, we expect the well known inferential results and consequences of the relative entropy to have a quantum relative entropy representation.
Maximizing the quantum relative entropy with respect to some constraints ⟨Âi⟩, where {Âi} is a set of arbitrary Hermitian operators, and normalization, ⟨1⟩ = 1, gives the following general solution for the posterior density matrix:

ρ = (1/Z) exp( Σi αi Âi + ln(φ) ), (76)

where the αi are the Lagrange multipliers of the respective constraints and normalization may be factored out of the exponential in general because the identity commutes universally. If φ ∝ 1, it is well known that the analysis arrives at the same expression for ρ after normalization as it would if the von Neumann entropy were used, and thus one can find expressions for thermalized quantum states, ρ = (1/Z) e^(−βĤ). The remaining problem is to solve for the N Lagrange multipliers using their N associated expectation value constraints. In principle their solution is found by computing Z, using standard methods from Statistical Mechanics, and inverting to find αi = αi(⟨Âi⟩), which has a unique solution due to the joint concavity (convexity, depending on the sign convention) of the quantum relative entropy [8,9] and Horn's inequality. However, the solutions to (76) lack a certain calculational elegance because it is difficult to express the eigenvalues of Ĉ = ln(φ) + Σi αi Âi (in the exponential) in simple terms of the eigenvalues of the Âi's and φ, in general, when the matrices do not commute. The solution requires solving the eigenvalue problem for Ĉ, such that the exponential of Ĉ may be taken and evaluated in terms of the eigenvalues of the αiÂi's and the prior density matrix φ. A pedagogical exercise is to start with a prior which is a mixture of spin-z up and down, φ = a|+⟩⟨+| + b|−⟩⟨−| (a, b ≠ 0), and maximize the quantum relative entropy with respect to the expectation value of a general Hermitian operator. This example is given in Appendix 6.6.
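A hedged numerical sketch (our own construction; the prior, constraint operator, and target value are illustrative) of the update (76) for a single qubit: the posterior is exp(ln(φ) + αÂ)/Z, and α is found by a root search so that Tr(ρÂ) matches the prescribed value. The final line checks the DC1'-type behavior noted earlier: constraining ⟨Â⟩ to its prior value Tr(φÂ) returns ρ = φ.

import numpy as np
from scipy.linalg import expm, logm
from scipy.optimize import brentq

phi = np.diag([0.8, 0.2])                  # prior: mixture of spin-z up/down
A = np.array([[0.0, 1.0], [1.0, 0.0]])     # constraint operator: sigma_x
A_target = 0.3                             # required <A> = Tr(rho A)

def posterior(alpha):
    w = expm(logm(phi) + alpha * A)        # unnormalized posterior
    return w / np.trace(w)                 # normalization factored out as 1/Z

def gap(alpha):
    return np.real(np.trace(posterior(alpha) @ A)) - A_target

alpha = brentq(gap, -20, 20)               # <A> is monotonic in alpha
rho = posterior(alpha)
print(alpha, np.real(np.trace(rho @ A)))   # ~0.3

print(posterior(brentq(lambda a: np.real(np.trace(posterior(a) @ A)), -20, 20)))  # ~phi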

Conclusions:
This approach emphasizes the notion that entropy is a tool for performing inference and downplays the counterintuitive issues which arise if one interprets entropy as a measure of disorder, a measure of distinguishability, or an amount of missing information [7]. Because the same design criteria, guided by the PMU, apply equally well to the design of the relative and quantum relative entropy, we find that both the relative and quantum relative entropy are designed for the purpose of inference. Because the quantum relative entropy is the function which fits the requirements of a tool designed for inference, we now know what it is and how to use it, formulating an inferential quantum maximum entropy method. A follow-up article is concerned with a few interesting applications of the quantum maximum entropy method, and in particular it derives the Quantum Bayes Rule.
I must give ample acknowledgment to Ariel Caticha, who suggested the problem of justifying the form of the quantum relative entropy as a criterion for ranking density matrices. He cleared up several difficulties by suggesting that the design constraints be applied to the variational derivative of the entropy rather than the entropy itself. As well, he provided substantial improvements to the method for imposing DC2 that led to the functional equations for the variational derivatives (φ12 = φ1 + φ2), with more rigor than in earlier versions of this article. His time and guidance are greatly appreciated. Thanks, Ariel.

Relative entropy:
We are interested in the following functional equation,

φ(ρ(x1)ρ(x2), ϕ(x1)ϕ(x2)) = φ(ρ(x1), ϕ(x1)) + φ(ρ(x2), ϕ(x2)).

This is an equation of the form

F(x1y1, x2y2) = F(x1, x2) + F(y1, y2),

where x1 = ρ(x1), y1 = ρ(x2), x2 = ϕ(x1), and y2 = ϕ(x2). First assume all of the arguments (the ρ's and ϕ's) are greater than zero. Then substitute xi = e^(x′i) and yi = e^(y′i), and let F′(x′1, x′2) = F(e^(x′1), e^(x′2)) and so on, such that

F′(x′1 + y′1, x′2 + y′2) = F′(x′1, x′2) + F′(y′1, y′2),

which is of the form of (92). The general solution for F is therefore

F′(x′1 + y′1, x′2 + y′2) = a1(x′1 + y′1) + a2(x′2 + y′2) = a1 ln(x1y1) + a2 ln(x2y2) = F(x1y1, x2y2), (100)

which means the general solution for φ is

φ(ρ(x), ϕ(x)) = a1 ln(ρ(x)) + a2 ln(ϕ(x)).

In the case when ϕ(x0) = 0 for some value x0 ∈ X, we may let ϕ(x0) = ε, where ε is as close to zero as we could possibly want; the trivial general solution φ = 0 is saturated by the special case when ρ = ϕ from DC1'. Here we return to the text.

Matrix functional equations
(This derivation is implied in [33], pages 347-349.) First consider a Cauchy matrix functional equation,

f(X + Y) = f(X) + f(Y),

where X and Y are n × n square matrices. Rewriting the matrix functional equation in terms of its components gives

fij(x11 + y11, x12 + y12, ..., xnn + ynn) = fij(x11, x12, ..., xnn) + fij(y11, y12, ..., ynn),

which is now in the form of (92), and therefore the solution is

fij(x11, x12, ..., xnn) = Σℓk cijℓk xℓk,

for i, j = 1, ..., n. We find it convenient to introduce super indices, A = (i, j) and B = (ℓ, k), such that the component equation becomes

fA = ΣB cAB xB, (105)

which resembles the solution for a linear transformation of a vector from [33]. In general we will be discussing matrices X = X1 ⊗ X2 ⊗ ... ⊗ XN which stem from tensor products of density matrices. In this situation X can be thought of as a 2N-index tensor, or as a z × z matrix where z = Πi ni is the product of the ranks of the matrices in the tensor product, or even as a vector of length z². In such a case we may abuse the super index notation, letting A and B lump together the appropriate number of indices, such that (105) is the form of the solution for the components in general. The matrix form of the general solution is

f(X) = C̃ X,

where C̃ is a constant super-operator having components cAB.
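A small sketch (ours) of the super-operator notation above: a constant 4-index tensor cijℓk acting linearly on an n × n matrix X is the same as an n² × n² matrix acting on the vectorized X.

import numpy as np

n = 2
rng = np.random.default_rng(0)
C = rng.normal(size=(n, n, n, n))            # components c_{ij,lk}
X = rng.normal(size=(n, n))

fX_tensor = np.einsum("ijlk,lk->ij", C, X)   # f(X)_{ij} = sum_{lk} c_{ijlk} X_{lk}

# Equivalent "super index" view: A = (i, j), B = (l, k), f_A = sum_B c_{AB} x_B.
C_super = C.reshape(n * n, n * n)
fX_vector = C_super @ X.reshape(n * n)

print(np.allclose(fX_tensor.reshape(-1), fX_vector))   # True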
Applying this to the matrix valued functional equation φ(ρ1 ⊗ ρ2, φ1 ⊗ φ2) = φ(ρ1, φ1) ⊗ 1₂ + 1₁ ⊗ φ(ρ2, φ2), in analogy with the scalar case, gives the general solution

φ(ρ, φ) = Ã ln(ρ) + B̃ ln(φ),

where Ã, B̃ are super-operators having constant coefficients.
Maximizing the entropy with respect to this general expectation value and normalization,

0 = δ( S(ρ, φ) − α[Tr(ρ cµσ^µ) − ⟨cµσ^µ⟩] − λ[Tr(ρ) − 1] ),

after varying gives

ρ = (1/Z) exp( α cµσ^µ + ln(φ) ).
Although the resulting constraint equation for α is monotonic in α, it is seemingly a transcendental equation. It can be solved graphically (or numerically) for the given values c, c1, cx, cy, cz, i.e., given that the Hermitian matrix and its expectation value are specified. The constraint equation and the eigenvalues take a simpler form when a = b = 1/2, because in this instance φ ∝ 1 commutes universally and so it may be factored out of the exponential above.
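A hedged numerical sketch (our own; the coefficients c1, cx, cy, cz, the prior weights a, b, and the target expectation value are illustrative assumptions) of this appendix example: the multiplier α is found from the monotonic constraint by a root search, evaluating exp(ln(φ) + αÂ) through the eigenvalues of Ĉ = ln(φ) + αÂ, as discussed in the main text.

import numpy as np
from scipy.optimize import brentq

a, b = 0.7, 0.3
phi = np.diag([a, b])                                    # prior: a|+><+| + b|-><-|
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1, -1])
c1, cx, cy, cz = 0.0, 0.4, 0.2, 0.5
A = c1 * np.eye(2) + cx * sx + cy * sy + cz * sz         # general Hermitian operator
A_target = 0.3                                           # prescribed <A>

def posterior(alpha):
    C = np.diag(np.log([a, b])) + alpha * A              # C = ln(phi) + alpha*A
    vals, vecs = np.linalg.eigh(C)                       # C is Hermitian
    w = (vecs * np.exp(vals)) @ vecs.conj().T            # exp(C) via its eigenvalues
    return w / np.trace(w).real

def gap(alpha):
    return np.trace(posterior(alpha) @ A).real - A_target

alpha = brentq(gap, -20, 20)                             # constraint is monotonic in alpha
rho = posterior(alpha)
print(alpha, np.trace(rho @ A).real)                     # ~0.3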