Article

On the Monotonicity of Relative Entropy: A Comparative Study of Petz’s and Uhlmann’s Approaches

by Santiago Matheus 1, Francesco Bottacin 1 and Edoardo Provenzi 2,*
1 Dipartimento di Matematica, Università degli Studi di Padova, Via Trieste 63, 35121 Padova, Italy
2 Institute of Mathematics, Université de Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, 351 Cours de la Libération, F-33400 Talence, France
* Author to whom correspondence should be addressed.
Entropy 2025, 27(9), 954; https://doi.org/10.3390/e27090954
Submission received: 19 August 2025 / Revised: 10 September 2025 / Accepted: 12 September 2025 / Published: 14 September 2025
(This article belongs to the Section Quantum Information)

Abstract

We revisit the monotonicity of relative entropy under the action of quantum channels, a foundational result in quantum information theory. Among the several available proofs, we focus on those by Petz and Uhlmann, which we reformulate within a unified, finite-dimensional operator-theoretic framework. In the first part, we examine Petz’s strategy, identify a subtle flaw in his original use of Jensen’s contractive operator inequality, and point out how it was corrected to restore the validity of his line of reasoning. In the second part, we develop Uhlmann’s approach, which is based on interpolations of positive sesquilinear forms and applies automatically to non-invertible density operators. By comparing these two approaches, we highlight their complementary strengths: Petz’s method is more direct and clear; Uhlmann’s method is more abstract and general. Our treatment aims to clarify the mathematical structure underlying the monotonicity of relative entropy and to make these proofs more accessible to a broader audience interested in both the foundations and applications of quantum information theory.

1. Introduction

The (quantum) relative entropy is a central concept in quantum information theory, quantifying the distinguishability between quantum states and serving as a key tool in the analysis of information processing tasks.
One of its most important properties is the monotonicity under quantum channels (a quantum channel is a completely positive trace-preserving linear map; see later for a rigorous definition), which expresses the idea that state distinguishability cannot increase during the dynamical evolution of a quantum system that interacts with an environment. This property is also known as the data processing inequality (DPI).
The concept of relative entropy was first introduced by Umegaki in 1962 [1] in the setting of σ -finite von Neumann algebras, and it was later extended to arbitrary von Neumann algebras by Araki in 1976 [2] by means of Tomita–Takesaki modular theory.
The proof of the monotonicity of relative entropy is closely related to the proof of the strong subadditivity (SSA) of the von Neumann entropy. The latter serves as a measure of the mixedness of quantum states, and the validity of the SSA ensures that quantum uncertainty behaves consistently across composite systems, placing fundamental constraints on how information and correlations can be distributed among subsystems.
The first step toward proving SSA was made in 1968 by Lanford and Robinson [3], who established the subadditivity of the von Neumann entropy and conjectured its stronger form. In 1973, Lieb [4], building on earlier work by Wigner, Yanase, and Dyson, proved several key properties concerning the convexity and concavity of operator functions and trace functionals.
These results enabled Lieb and Ruskai to establish the full proof of the strong subadditivity of the von Neumann entropy for both finite and infinite-dimensional Hilbert spaces later that same year in their landmark paper [5]. In that work, they also derived, though without emphasizing it as such, the monotonicity of relative entropy under the partial trace operation, which constitutes a special case of quantum channels.
The first explicit and general proof of the monotonicity of relative entropy under the action of quantum channels was provided two years later by Lindblad in his seminal 1975 paper [6], thanks to the results established by Lieb and Ruskai in [5].
A further breakthrough was achieved by Uhlmann in 1977, who extended the property of monotonicity to a broader class of transformations: the adjoints of unital Schwarz maps [7].
The equivalence between SSA and the monotonicity of relative entropy was rigorously established by Petz in the 1980s [8,9]. Later, in [10], Petz proposed a new proof of monotonicity and posed the question of whether this property also holds for positive (but not necessarily completely positive) trace-preserving linear maps. This question remained open for several years until it was affirmatively resolved in 2023 by Müller-Hermes and Reeb in [11]. Other pertinent references related to the monotonicity of relative entropy are [12,13,14,15].
In this paper, we focus specifically on the demonstration strategies proposed by Petz and Uhlmann in [10] and [7], respectively, which we believe offer particularly interesting and useful complementary perspectives. Petz’s proof relies on operator-algebraic methods and an explicit representation of the relative entropy in terms of a suitable inner product, an idea inspired by the previously quoted work of Araki.
However, as we show, his original use of Jensen’s operator inequality contains a subtle flaw when applied in its contractive form. We point out how Petz himself and Nielsen corrected the problem in order to restore the validity of Petz’s original approach.
Uhlmann’s proof, on the other hand, is formulated in terms of interpolations of positive sesquilinear forms, a technique that naturally extends to non-invertible states and arbitrary quantum channels.
Despite their foundational importance, both Petz’s and Uhlmann’s proofs are often considered technically demanding and conceptually opaque. Petz’s approach, while elegant, involves intricate manipulations of operator inequalities that can obscure the overall structure of the argument. Uhlmann’s method introduces a formalism that is unfamiliar to many working in quantum information theory and is rarely presented in full detail in the literature.
We aim to clarify and systematize both strategies by reformulating them in a unified, finite-dimensional, operator-theoretic framework, thus making both proofs more accessible to a wider audience.
This paper is structured as follows: In Section 2, we recall the necessary preliminaries on operator convexity, partial trace, and quantum channels. In Section 3, we analyze Petz’s proof, highlight its limitations, and discuss its correction. In Section 4, we develop Uhlmann’s approach in detail and derive the monotonicity of relative entropy in the general setting.

2. Mathematical Preliminaries

In this section, we start by recalling the basic definitions and results needed for the rest of the paper.
Given a finite-dimensional Hilbert state space $(\mathcal{H}, \langle\cdot,\cdot\rangle_{\mathcal{H}})$ over the field $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$, $\mathcal{B}(\mathcal{H})$ indicates the $\mathbb{F}$-algebra of linear (bounded) operators $A: \mathcal{H} \to \mathcal{H}$.
We recall that $A \in \mathcal{B}(\mathcal{H})$ is as follows:
  • Positive semi-definite, written as $A \geq 0$, if $\langle x, Ax\rangle_{\mathcal{H}} \geq 0$ for all $x \in \mathcal{H}$;
  • Positive definite, written as $A > 0$, if $\langle x, Ax\rangle_{\mathcal{H}} > 0$ for all $x \neq 0_{\mathcal{H}}$, $x \in \mathcal{H}$;
  • Hermitian if $A = A^\dagger$, where $A^\dagger$ is the adjoint operator of $A$, defined by the formula $\langle A^\dagger x, y\rangle_{\mathcal{H}} = \langle x, Ay\rangle_{\mathcal{H}}$ for all $x, y \in \mathcal{H}$.
If $\mathbb{F} = \mathbb{C}$, then a positive semi-definite, or positive definite, operator is automatically Hermitian, but this is not the case if $\mathbb{F} = \mathbb{R}$.
Suppose now that $\mathcal{H}$ is the state space of a quantum system. The density operator (also called the density matrix) $\rho \in \mathcal{B}(\mathcal{H})$ associated with a given state $s$ of the system is positive semi-definite, Hermitian, and such that $\mathrm{Tr}(\rho) = 1$. Hence, $\rho$ has eigenvalues $0 \leq \lambda_j \leq 1$, which sum up to 1.
As is well-known, if $s$ is a pure state, then $\rho$ is a rank-one orthogonal projector and thus not invertible.
$\mathcal{B}(\mathcal{H})$ itself becomes an $\mathbb{F}$-Hilbert space when it is endowed with the Hilbert–Schmidt operator inner product
$$\langle A, B\rangle_{\mathcal{B}(\mathcal{H})} := \mathrm{Tr}(A^\dagger B), \qquad A, B \in \mathcal{B}(\mathcal{H}).$$
The subset of $\mathcal{B}(\mathcal{H})$ given by Hermitian operators on $\mathcal{H}$, indicated with $\mathcal{B}_H(\mathcal{H})$, is a real Hilbert space with respect to the inner product inherited from $\mathcal{B}(\mathcal{H})$, and it is also a partially ordered set with respect to the Löwner ordering, defined as follows: for all $A, B \in \mathcal{B}_H(\mathcal{H})$, $A \leq B \iff B - A \geq 0$.
Given a function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ and $A \in \mathcal{B}_H(\mathcal{H})$ with spectral decomposition $A = U D U^\dagger$, with $U$ being unitary and $D$ being diagonal, with entries given by the eigenvalues $\lambda_j$ of $A$, all supposed to belong to $I$, we write as usual
$$f(A) = U f(D) U^\dagger,$$
where $f(D)$ is diagonal, with non-trivial entries given by $f(\lambda_j)$.
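To make the functional calculus concrete, here is a minimal numerical sketch (ours, not part of the original paper) that evaluates $f(A) = U f(D) U^\dagger$ for a Hermitian matrix via its spectral decomposition; the helper name `operator_function` and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def operator_function(A, f):
    """Evaluate f(A) = U f(D) U^dagger for a Hermitian matrix A,
    applying f entrywise to the eigenvalues (functional calculus)."""
    eigvals, U = np.linalg.eigh(A)              # A = U diag(eigvals) U^dagger
    return U @ np.diag(f(eigvals)) @ U.conj().T

# Example: the matrix logarithm of a positive definite density matrix.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
log_rho = operator_function(rho, np.log)
# Consistency check: exp(log(rho)) recovers rho.
assert np.allclose(operator_function(log_rho, np.exp), rho)
```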
If, for every finite-dimensional $\mathcal{H}$ and every couple of operators $A, B \in \mathcal{B}_H(\mathcal{H})$, we have
$$A \leq B \;\Longrightarrow\; f(A) \leq f(B),$$
then $f$ is said to be operator monotone on $I$. Instead, if we have
$$f\!\left(\frac{A+B}{2}\right) \leq \frac{f(A)+f(B)}{2},$$
then $f$ is said to be operator convex on $I$. If the last inequality is reversed, $f$ is said to be operator concave on $I$.
By Löwner’s theorem (see, e.g., [16], chapter V.4), $f: I \to \mathbb{R}$ is operator monotone if and only if it has an analytic continuation that maps the upper half-plane $\mathbb{H}_+$ into itself. Notable examples of operator monotone functions on $(0,+\infty)$ are $x \mapsto \log(x)$ and $x \mapsto -1/x$.
If $f: I \to \mathbb{R}$ is a continuous and operator monotone function on $I$, then, for every $a \in I$, the function $F: I \to \mathbb{R}$ given by
$$F(x) = \int_a^x f(t)\, dt$$
is operator convex on $I$; see again [16]. So, since $t \mapsto -1/t$ is operator monotone on $(0,+\infty)$,
$$x \mapsto \int_1^x \left(-\frac{1}{t}\right) dt = -\log(x)$$
is operator convex on $(0,+\infty)$, which implies that $x \mapsto \log(x)$ is operator concave on $(0,+\infty)$.
As is well-known, a convex function $f: I \to \mathbb{R}$ satisfies the Jensen inequality
$$f\!\left(\sum_{i=1}^n \lambda_i x_i\right) \leq \sum_{i=1}^n \lambda_i f(x_i),$$
for all $x_1, \ldots, x_n \in I$ and non-negative weights $\lambda_1, \ldots, \lambda_n$ such that $\sum_{i=1}^n \lambda_i = 1$.
The Jensen inequality can be generalized to operator convex functions, as stated in the following celebrated theorem, proven by Hansen and Pedersen [17].
Theorem 1
(Jensen’s operator inequality). For a continuous function $f: I \to \mathbb{R}$, the following conditions are equivalent:
(i) 
$f$ is operator convex on $I$.
(ii) 
For each natural number $n \geq 1$, the following inequality holds:
$$f\!\left(\sum_{i=1}^n A_i^\dagger X_i A_i\right) \leq \sum_{i=1}^n A_i^\dagger f(X_i) A_i,$$
for every $n$-tuple of bounded Hermitian operators $X_1, \ldots, X_n$ on an arbitrary Hilbert space $\mathcal{H}$, with spectra contained in $I$, and every $n$-tuple of operators $(A_1, \ldots, A_n)$ on $\mathcal{H}$ satisfying $\sum_{i=1}^n A_i^\dagger A_i = id_{\mathcal{H}}$.
(iii) 
$f(V^\dagger X V) \leq V^\dagger f(X)\, V$ for every isometry $V: \mathcal{K} \to \mathcal{H}$ from an arbitrary Hilbert space $\mathcal{K}$ into an arbitrary Hilbert space $\mathcal{H}$ and every Hermitian operator $X$ on $\mathcal{H}$ with spectrum contained in $I$.
An immediate consequence of this theorem is the following corollary, also known as Jensen’s contractive operator inequality.
Corollary 1
(Contractive version). Let $f: I \to \mathbb{R}$ be a continuous function, and suppose that $0 \in I$. Then, $f$ is operator convex on $I$ and $f(0) \leq 0$ if and only if, for some, hence for every, $n \geq 1$, inequality (8) is valid for every $n$-tuple of bounded Hermitian operators $X_1, \ldots, X_n$ on an arbitrary Hilbert space $\mathcal{H}$, with spectra contained in $I$, and every $n$-tuple of operators $(A_1, \ldots, A_n)$ on $\mathcal{H}$ satisfying $\sum_{i=1}^n A_i^\dagger A_i \leq id_{\mathcal{H}}$.
The reason for the adjective ‘contractive’ can be easily understood by setting $n = 1$; in this case, it follows that $f$ is operator convex on an interval $I$ containing 0 with $f(0) \leq 0$ if and only if
$$f(A^\dagger X A) \leq A^\dagger f(X)\, A,$$
for every bounded Hermitian operator $X$ with spectrum in $I$ and every operator $A$ such that $A^\dagger A \leq id$, which implies that $A$ is a contraction, i.e., $\|Ax\| \leq \|x\|$ for all $x \in \mathcal{H}$ or, equivalently, $\|A\| \leq 1$.
Let us now consider two interacting quantum systems $a$ and $b$ with Hilbert state spaces $\mathcal{H}_a$ and $\mathcal{H}_b$, respectively. The associated composite quantum system has Hilbert state space $\mathcal{H}_{ab} = \mathcal{H}_a \otimes \mathcal{H}_b$.
In the following, we indicate with $X_a, X_b, X_{ab}$ generic operators of $\mathcal{B}_H(\mathcal{H}_a)$, $\mathcal{B}_H(\mathcal{H}_b)$, and $\mathcal{B}_H(\mathcal{H}_{ab})$, respectively.
The partial trace $\mathrm{Tr}_b$ over subsystem $b$ is a ‘superoperator’, i.e., a linear map $\mathrm{Tr}_b \in \mathcal{B}(\mathcal{B}_H(\mathcal{H}_{ab}), \mathcal{B}_H(\mathcal{H}_a))$, defined as the linear extension to the whole $\mathcal{B}_H(\mathcal{H}_{ab})$ of the map
$$\mathrm{Tr}_b(X_a \otimes X_b) = \mathrm{Tr}(X_b)\, X_a.$$
$\mathrm{Tr}_b$ satisfies the following property:
$$\mathrm{Tr}(\mathrm{Tr}_b(X_{ab})\, X_a) = \mathrm{Tr}(X_{ab}(X_a \otimes id_b)).$$
See, e.g., [18], page 100.
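As an illustration of Definition (10) and Property (11), the following sketch (ours, not from the paper) implements the partial trace over subsystem $b$ by reshaping a matrix on $\mathcal{H}_a \otimes \mathcal{H}_b$ into a four-index tensor; the helper name `partial_trace_b` is an assumption.

```python
import numpy as np

def partial_trace_b(X_ab, da, db):
    """Partial trace over subsystem b of a (da*db) x (da*db) matrix."""
    T = X_ab.reshape(da, db, da, db)       # indices: (a, b, a', b')
    return np.trace(T, axis1=1, axis2=3)   # contract b with b'

da, db = 2, 3
rng = np.random.default_rng(0)
Xa = rng.standard_normal((da, da)); Xa = Xa + Xa.T
Xb = rng.standard_normal((db, db)); Xb = Xb + Xb.T
X_ab = np.kron(Xa, Xb)

# Definition (10): Tr_b(Xa ⊗ Xb) = Tr(Xb) Xa
assert np.allclose(partial_trace_b(X_ab, da, db), np.trace(Xb) * Xa)

# Property (11): Tr(Tr_b(X_ab) Ya) = Tr(X_ab (Ya ⊗ id_b))
Ya = rng.standard_normal((da, da)); Ya = Ya + Ya.T
lhs = np.trace(partial_trace_b(X_ab, da, db) @ Ya)
rhs = np.trace(X_ab @ np.kron(Ya, np.eye(db)))
assert np.isclose(lhs, rhs)
```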
The partial trace $\mathrm{Tr}_b$ is as follows:
  • Trace-preserving: $\mathrm{Tr}(X_{ab}) = \mathrm{Tr}(\mathrm{Tr}_b(X_{ab}))$;
  • Positive: if $X_{ab} \geq 0$, then $\mathrm{Tr}_b(X_{ab}) \geq 0$.
As a consequence, $\mathrm{Tr}_b$ maps states of $\mathcal{H}_{ab}$ into states of $\mathcal{H}_a$.
Moreover, $\mathrm{Tr}_b$ is completely positive; i.e., $\mathrm{Tr}_b \otimes id_{\mathcal{H}}$ is a positive linear map for all Hilbert spaces $\mathcal{H}$. A trace-preserving and completely positive (or ‘CPTP’) linear map is called a channel.
The partial trace is actually one of the three main elements of every channel. In fact, thanks to the Stinespring theorem (see [19]), given any channel $\mathcal{C} \in \mathcal{B}(\mathcal{B}_H(\mathcal{H}_a), \mathcal{B}_H(\mathcal{H}_a))$, there exist an operator $Y \in \mathcal{B}_H(\mathcal{H}_b)$ and a unitary operator $U$ on $\mathcal{H}_a \otimes \mathcal{H}_b$ such that
$$\mathcal{C}(X) := \mathrm{Tr}_b(U(X \otimes Y)\,U^\dagger), \qquad X \in \mathcal{B}_H(\mathcal{H}_a).$$
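The Stinespring form (12) can also be explored numerically: the sketch below (ours; the random unitary construction via a QR decomposition is an assumption) builds a channel $\mathcal{C}(X) = \mathrm{Tr}_b(U(X\otimes Y)U^\dagger)$ with $Y$ a fixed density operator and checks that it is trace-preserving and maps states to states.

```python
import numpy as np

def partial_trace_b(X_ab, da, db):
    T = X_ab.reshape(da, db, da, db)
    return np.trace(T, axis1=1, axis2=3)

def random_density(d, rng):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T                   # positive semi-definite
    return rho / np.trace(rho)             # unit trace

rng = np.random.default_rng(1)
da, db = 2, 2
Y = random_density(db, rng)                # fixed ancilla state
Q, _ = np.linalg.qr(rng.standard_normal((da*db, da*db))
                    + 1j * rng.standard_normal((da*db, da*db)))
U = Q                                      # random unitary

def channel(X):
    """C(X) = Tr_b( U (X ⊗ Y) U† ): a CPTP map on B(H_a)."""
    return partial_trace_b(U @ np.kron(X, Y) @ U.conj().T, da, db)

rho_a = random_density(da, rng)
out = channel(rho_a)
assert np.isclose(np.trace(out), 1.0)              # trace-preserving
assert np.all(np.linalg.eigvalsh(out) >= -1e-12)   # positive output
```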
By the Riesz representation theorem, we can define the adjoint of the partial trace $\mathrm{Tr}_b$ as the only operator $\mathrm{Tr}_b^\dagger: \mathcal{B}_H(\mathcal{H}_a) \to \mathcal{B}_H(\mathcal{H}_{ab})$ satisfying
$$\langle X_{ab}, \mathrm{Tr}_b^\dagger(X_a)\rangle_{\mathcal{B}_H(\mathcal{H}_{ab})} = \langle \mathrm{Tr}_b(X_{ab}), X_a\rangle_{\mathcal{B}_H(\mathcal{H}_a)}.$$
Writing the inner products explicitly and using property (11), we find
$$\mathrm{Tr}(X_{ab}\,\mathrm{Tr}_b^\dagger(X_a)) = \mathrm{Tr}(\mathrm{Tr}_b(X_{ab})\, X_a) = \mathrm{Tr}(X_{ab}(X_a \otimes id_b)),$$
which allows us to write the explicit action of $\mathrm{Tr}_b^\dagger$ as
$$\mathrm{Tr}_b^\dagger(X_a) = X_a \otimes id_b.$$
This formula implies that $(\mathrm{Tr}_b^\dagger(X_a))^\dagger = \mathrm{Tr}_b^\dagger(X_a)$ and that $\mathrm{Tr}_b^\dagger$ is unital; i.e., it maps the identity of its domain to the identity of its range:
$$\mathrm{Tr}_b^\dagger(id_a) = id_{ab}.$$
Being a completely positive unital transformation, $\mathrm{Tr}_b^\dagger$ is a so-called Schwarz map (see [20], Corollary 2.8); i.e., it satisfies the following inequality:
$$\mathrm{Tr}_b^\dagger(X_a)^\dagger\, \mathrm{Tr}_b^\dagger(X_a) \leq \mathrm{Tr}_b^\dagger(X_a^\dagger X_a).$$
This property is shared by the adjoint of any channel C .

Relative Entropy in Quantum Information Theory

We devote a separate subsection to relative entropy, as its proper treatment entails the detailed development of several technical aspects.
This is essential for addressing certain subtleties that are often overlooked in the literature yet play a crucial role in the rigorous analysis of relative entropy.
Given any finite-dimensional Hilbert space $\mathcal{H}$ and any density operator $\rho \in \mathcal{B}_H(\mathcal{H})$ with eigenvalues $\lambda_j \geq 0$, we indicate with $\mathrm{Sp}(\rho)$ its spectrum and with $\mathrm{Sp}_+(\rho)$ the subset of $\mathrm{Sp}(\rho)$ composed only of strictly positive eigenvalues.
It will be convenient to have the following index sets at hand: $I(\rho) = \{j : \lambda_j \in \mathrm{Sp}(\rho)\}$ and $I_+(\rho) = \{j : \lambda_j \in \mathrm{Sp}_+(\rho)\}$.
The support of $\rho$, denoted with $\mathrm{supp}(\rho)$, is the subset of $\mathcal{H}$ defined as follows:
$$\mathrm{supp}(\rho) := \mathrm{span}\{x_j \in \mathcal{H} : x_j \text{ is an eigenvector of } \rho \text{ with eigenvalue } \lambda_j \in \mathrm{Sp}_+(\rho)\} = \bigoplus_{j \in I_+(\rho)} E_{\lambda_j},$$
where $E_{\lambda_j}$ is the eigenspace relative to the eigenvalue $\lambda_j$.
From this definition and the spectral theorem for Hermitian operators, it immediately follows that
$$\mathrm{supp}(\rho) = \ker(\rho)^\perp = \mathrm{Im}(\rho).$$
This implies that $\mathcal{H} = \ker(\rho) \oplus \mathrm{supp}(\rho)$, so the positive definite operator $\rho|_{\mathrm{supp}(\rho)}$ has a trivial kernel; hence, it is invertible.
In particular, if ρ is positive definite, then ρ is invertible everywhere, and its image and support coincide with the entire H . On the other hand, non-invertible density operators, for instance, those corresponding to pure states, have support strictly included in H .
The deviation from purity, or mixedness, of $\rho$ can be measured by its von Neumann entropy, defined as follows:
$$S(\rho) := -\mathrm{Tr}(\rho \log \rho).$$
The condition $S(\rho) = 0$ is satisfied if and only if $\rho$ is pure. In the literature, the precise definition of the logarithm of a matrix may vary according to the specific aims and context in which the matrix logarithm is employed.
For the purposes of this paper, the key property that the logarithm of a density operator $\rho$, or, more generally, of a Hermitian operator, must satisfy is $\exp(\log \rho) = \rho$. For this reason, we adopt a definition of $\log$ via functional calculus in the extended real numbers with the following conventions:
$$\log(0) = -\infty, \qquad \exp(-\infty) = 0, \qquad 0 \log 0 = 0,$$
of course justified by the limits $\log \varepsilon \to -\infty$ as $\varepsilon \to 0^+$, $\exp(-M) \to 0^+$ as $M \to +\infty$, and $\varepsilon \log \varepsilon \to 0$ as $\varepsilon \to 0^+$.
Using the spectral theorem, if $\rho$ is decomposed as
$$\rho = \sum_{j \in I_+(\rho)} \lambda_j P_j + 0 \cdot P_0,$$
where $P_j$ denotes the orthogonal projector onto the eigenspace $E_{\lambda_j}$, $j \in I_+(\rho)$, and $P_0$ is the orthogonal projector on $\ker(\rho)$, then, using the previous conventions, we have
$$\log \rho := \sum_{j \in I_+(\rho)} \log(\lambda_j) P_j + (-\infty)\, P_0,$$
and so
$$\exp(\log \rho) = \sum_{j \in I_+(\rho)} \exp(\log \lambda_j) P_j + \exp(-\infty)\, P_0 = \sum_{j \in I_+(\rho)} \lambda_j P_j + 0 \cdot P_0 = \rho.$$
Since the projectors satisfy the orthogonality relation $P_i P_j = \delta_{ij} P_j$ and by the convention $0 \log 0 = 0$, which accounts for the zero eigenvalue, we obtain
$$S(\rho) = -\mathrm{Tr}(\rho \log \rho) = -\sum_{j \in I_+(\rho)} \lambda_j \log(\lambda_j)\, \mathrm{Tr}(P_j),$$
where $\mathrm{Tr}(P_j)$ is the multiplicity of the eigenvalue $\lambda_j$. In the literature, it is actually more common to write just
$$S(\rho) = -\sum_{j \in I_+(\rho)} \lambda_j \log(\lambda_j),$$
with the understanding that each eigenvalue is repeated according to its multiplicity.
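The convention $0\log 0 = 0$ translates directly into code: the sketch below (ours) computes $S(\rho) = -\sum_j \lambda_j\log\lambda_j$ while discarding (numerically) zero eigenvalues, so that pure states correctly give $S(\rho) = 0$; the tolerance is an arbitrary assumption.

```python
import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S(rho) = -sum_j lambda_j log(lambda_j), with the 0*log0 = 0 convention."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > tol]       # drop zero eigenvalues
    return float(-np.sum(eigvals * np.log(eigvals)))

# A pure state has zero entropy; the maximally mixed qubit has entropy log(2).
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2
assert np.isclose(von_neumann_entropy(pure), 0.0)
assert np.isclose(von_neumann_entropy(mixed), np.log(2))
```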
The von Neumann entropy does not, by itself, tell us how one state differs from another. To capture the distinguishability of two states, the concept of relative entropy (sometimes called the quantum Kullback–Leibler divergence) must be introduced.
The definition of the relative entropy between two density operators $\rho$ and $\sigma$ is subject to a condition on the compatibility of their supports, precisely the following:
$$S(\rho\|\sigma) := \begin{cases} \mathrm{Tr}(\rho \log \rho - \rho \log \sigma) & \text{if } \mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma), \\ +\infty & \text{if } \mathrm{supp}(\rho) \not\subseteq \mathrm{supp}(\sigma), \end{cases}$$
which is known as the ‘support-based definition’ of relative entropy.
In order to understand why the condition $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$ is both necessary and sufficient for $S(\rho\|\sigma)$ to be well-defined, we first note that
$$\mathrm{Tr}(\rho \log \rho - \rho \log \sigma) = \mathrm{Tr}(\rho \log \rho) - \mathrm{Tr}(\rho \log \sigma),$$
so the first term of Formula (28) is minus the von Neumann entropy, which is always finite. In order to analyze the second term, let us consider, alongside Equation (22), the analogous spectral decomposition of $\sigma$,
$$\sigma = \sum_{k \in I_+(\sigma)} \mu_k \Pi_k + 0 \cdot \Pi_0,$$
so that
$$\log \sigma = \sum_{k \in I_+(\sigma)} \log(\mu_k) \Pi_k + \log(0)\, \Pi_0,$$
where $\Pi_k$ is the orthogonal projector on the eigenspace relative to the positive eigenvalue $\mu_k$ of $\sigma$, and $\Pi_0$ is the projector on $\ker(\sigma)$. Then,
$$\mathrm{Tr}(\rho \log \sigma) = \sum_{j \in I_+(\rho)} \sum_{k \in I_+(\sigma)} \lambda_j \log(\mu_k)\, \mathrm{Tr}(P_j \Pi_k) + \sum_{j \in I_+(\rho)} \lambda_j \log(0)\, \mathrm{Tr}(P_j \Pi_0),$$
where the contribution of P 0 does not appear due to the convention 0 log ( 0 ) = 0 .
The first term is always finite, since only strictly positive eigenvalues appear. Instead, the behavior of the second term depends on the value of the traces $\mathrm{Tr}(P_j \Pi_0)$. To compute them, let us consider an eigenbasis $(x_j)_{j \in I_+(\rho)}$ of $\mathrm{supp}(\rho)$, so that $P_j(x_j) = x_j$ for all $j \in I_+(\rho)$, and use the fact that orthogonal projectors are Hermitian and idempotent to write
$$\sum_{j \in I_+(\rho)} \mathrm{Tr}(P_j \Pi_0) = \sum_{j \in I_+(\rho)} \langle x_j, P_j \Pi_0 x_j\rangle = \sum_{j \in I_+(\rho)} \langle P_j x_j, \Pi_0 x_j\rangle = \sum_{j \in I_+(\rho)} \langle x_j, \Pi_0 \Pi_0 x_j\rangle = \sum_{j \in I_+(\rho)} \langle \Pi_0 x_j, \Pi_0 x_j\rangle = \sum_{j \in I_+(\rho)} \|\Pi_0 x_j\|^2 \geq 0.$$
The second term in Equation (31) diverges to $-\infty$ if and only if $\sum_{j \in I_+(\rho)} \mathrm{Tr}(P_j \Pi_0) > 0$, i.e., when there exists at least one $j \in I_+(\rho)$ such that $x_j \in \mathrm{supp}(\rho)$ has a non-trivial projection onto $\ker(\sigma)$, i.e., $\Pi_0 x_j$ is non-null, which is equivalent to saying that $x_j \notin \ker(\sigma)^\perp = \mathrm{supp}(\sigma)$. Therefore,
$$\mathrm{Tr}(\rho \log \sigma) = -\infty \iff \sum_{j \in I_+(\rho)} \mathrm{Tr}(P_j \Pi_0) > 0 \iff \mathrm{supp}(\rho) \not\subseteq \mathrm{supp}(\sigma),$$
which justifies the definition given in (27).
Instead, if $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$, then $\mathrm{Tr}(P_j \Pi_0) = 0$ for all $j \in I_+(\rho)$; additionally, using again the convention $0 \log(0) = 0$, the second term in Equation (31) vanishes, and we remain just with the first term, which can be written in an alternative form. Consider now the orthonormal eigenbases $(x_j)_{j \in I_+(\rho)}$, $(y_k)_{k \in I_+(\sigma)}$ of $\mathrm{supp}(\rho)$ and $\mathrm{supp}(\sigma)$, respectively; then, we have the following well-known formula (see, e.g., [21]):
$$\mathrm{Tr}(P_j \Pi_k) = |\langle x_j, y_k\rangle|^2,$$
which shows that the factor $\mathrm{Tr}(P_j \Pi_k)$ codifies the transition probability between the pure states represented by the unit vectors $x_j$ and $y_k$.
In summary, when $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$, the relative entropy between $\rho$ and $\sigma$ can be written explicitly as follows:
$$S(\rho\|\sigma) = -S(\rho) - \sum_{j \in I_+(\rho)} \sum_{k \in I_+(\sigma)} \lambda_j \log(\mu_k)\, \mathrm{Tr}(P_j \Pi_k) = \sum_{j \in I_+(\rho)} \lambda_j \left(\log(\lambda_j) - \sum_{k \in I_+(\sigma)} \log(\mu_k)\, |\langle x_j, y_k\rangle|^2\right).$$
Two cases are particularly relevant in practical contexts:
  • If ρ , σ > 0 , then their supports coincide with the entire Hilbert space H , and so their relative entropy is the finite real number given by Equation (35);
  • Instead, if ρ > 0 but σ is not, then their relative entropy is infinite. This happens, for instance, when ρ is a full-rank mixed state and σ is a pure state.
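A direct numerical transcription of the support-based definition (27) is given below; it is our own sketch (the support comparison via eigenvalue thresholds and the tolerances are assumptions, not part of the paper) and returns `inf` exactly when $\mathrm{supp}(\rho)\not\subseteq\mathrm{supp}(\sigma)$.

```python
import numpy as np

def relative_entropy(rho, sigma, tol=1e-12):
    """Support-based S(rho || sigma) = Tr(rho log rho - rho log sigma)."""
    lam, P = np.linalg.eigh(rho)
    mu, Q = np.linalg.eigh(sigma)
    # supp(rho) ⊆ supp(sigma) iff every eigenvector of rho with positive
    # eigenvalue has no component in ker(sigma).
    ker_sigma = Q[:, mu <= tol]
    supp_rho = P[:, lam > tol]
    if ker_sigma.size and np.linalg.norm(ker_sigma.conj().T @ supp_rho) > 1e-8:
        return np.inf
    log_rho = P @ np.diag(np.log(np.clip(lam, tol, None))) @ P.conj().T
    log_sigma = Q @ np.diag(np.log(np.clip(mu, tol, None))) @ Q.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sigma))))

rho = np.diag([0.5, 0.5, 0.0])      # supported on the first two basis vectors
sigma = np.diag([0.25, 0.25, 0.5])  # full support
print(relative_entropy(rho, sigma))   # finite value: log 2
print(relative_entropy(sigma, rho))   # +inf: supp(sigma) is not inside supp(rho)
```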
An equivalent and useful definition of the von Neumann and relative entropy appears in the literature (see, e.g., [22]). Rather than relying on the support of the density operators, this alternative definition is based on a regularization procedure.
Specifically, given a state $\rho$ on $\mathcal{H}$, one defines the regularized operator
$$\rho_\varepsilon := \rho + \varepsilon\, id_{\mathcal{H}},$$
with $\varepsilon > 0$. $\rho_\varepsilon$ is a positive definite Hermitian operator, and the spectral decompositions of $\rho_\varepsilon$ and $\log \rho_\varepsilon$ are
$$\rho_\varepsilon = \sum_{j \in I(\rho)} (\lambda_j + \varepsilon) P_j, \qquad \log \rho_\varepsilon = \sum_{j \in I(\rho)} \log(\lambda_j + \varepsilon) P_j.$$
We have
$$(\lambda_j + \varepsilon)\log(\lambda_j + \varepsilon) \xrightarrow[\varepsilon \to 0^+]{} \begin{cases} \lambda_j \log(\lambda_j) & \text{if } j \in I_+(\rho), \\ 0 & \text{if } \lambda_j = 0, \end{cases}$$
so the von Neumann entropy of $\rho$ is well-defined via the limit
$$S(\rho) := -\lim_{\varepsilon \to 0^+} \mathrm{Tr}(\rho_\varepsilon \log \rho_\varepsilon).$$
Similarly, setting $\sigma_\varepsilon := \sigma + \varepsilon\, id_{\mathcal{H}}$, with $\varepsilon > 0$, the relative entropy between $\rho$ and $\sigma$ can be defined as
$$S(\rho\|\sigma) := \lim_{\varepsilon \to 0^+} \mathrm{Tr}(\rho_\varepsilon \log \rho_\varepsilon - \rho_\varepsilon \log \sigma_\varepsilon),$$
known as the ‘regularized definition’ of relative entropy.
Let us verify that the regularized and support-based definitions of relative entropy coincide. Equation (40) splits into two terms; the first equals minus the regularized definition of the von Neumann entropy, which is finite by Equation (39), so the only issue to address concerns the second term of Equation (40).
For that, we write the spectral decomposition of $\sigma_\varepsilon$ as follows:
$$\sigma_\varepsilon = \sum_{k \in I_+(\sigma)} (\mu_k + \varepsilon) \Pi_k + \varepsilon\, \Pi_0.$$
Repeating the same computations performed in the case of the support-based definition of $S(\rho\|\sigma)$, we obtain
$$\mathrm{Tr}(\rho_\varepsilon \log \sigma_\varepsilon) = \sum_{k \in I_+(\sigma)} \log(\mu_k + \varepsilon)\, \mathrm{Tr}(\rho_\varepsilon \Pi_k) + \log(\varepsilon)\, \mathrm{Tr}(\rho_\varepsilon \Pi_0).$$
If $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$, then $\mathrm{Tr}(\rho_\varepsilon \Pi_0) \to 0$ as $\varepsilon \to 0^+$ (in fact, it is proportional to $\varepsilon$), so the last term in Equation (42) vanishes in the limit and the limit converges to the correct value.
Instead, if $\mathrm{supp}(\rho) \not\subseteq \mathrm{supp}(\sigma)$, then $\mathrm{Tr}(\rho_\varepsilon \Pi_0) \to \alpha > 0$ as $\varepsilon \to 0^+$; consequently, the second term in Equation (42) diverges to $-\infty$, thus matching the behavior of the support-based definition.
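The convergence of the regularized definition (40) to the support-based one can be observed numerically; the sketch below is ours (the step sizes and example states are assumptions) and shows the regularized value approaching $\log 2$ when the supports are compatible, while it diverges like $-\log\varepsilon$ when they are not.

```python
import numpy as np
from scipy.linalg import logm

def regularized_rel_entropy(rho, sigma, eps):
    """Tr(rho_eps log rho_eps - rho_eps log sigma_eps) with rho_eps = rho + eps*I."""
    d = rho.shape[0]
    rho_e = rho + eps * np.eye(d)
    sig_e = sigma + eps * np.eye(d)
    return float(np.real(np.trace(rho_e @ (logm(rho_e) - logm(sig_e)))))

rho   = np.diag([0.5, 0.5, 0.0])
sigma = np.diag([0.25, 0.25, 0.5])

for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    compatible   = regularized_rel_entropy(rho, sigma, eps)   # -> log 2
    incompatible = regularized_rel_entropy(sigma, rho, eps)   # -> +infinity
    print(f"eps={eps:.0e}  compatible={compatible:.6f}  incompatible={incompatible:.2f}")
```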
The relative entropy has several important properties (see, e.g., [23]):
  • Klein’s inequality: $S(\rho\|\sigma) \geq 0$ for all $\rho, \sigma$, and $S(\rho\|\sigma) = 0$ if and only if $\rho = \sigma$. This property motivates the reason why the relative entropy, despite lacking symmetry in its arguments, is taken to be a measure of the distinguishability of states in quantum theories.
  • Invariance under unitary conjugation: $S(U\rho U^\dagger \,\|\, U\sigma U^\dagger) = S(\rho\|\sigma)$ for all unitary operators $U$ acting on the same Hilbert space as $\rho$ and $\sigma$.
  • Additivity w.r.t. tensor product: $S(\rho_1 \otimes \rho_2 \,\|\, \sigma_1 \otimes \sigma_2) = S(\rho_1\|\sigma_1) + S(\rho_2\|\sigma_2)$ for all density operators $\sigma_j, \rho_j$, $j = 1, 2$.
The monotonicity of $S$ under partial trace is represented by the inequality
$$S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) \leq S(\rho\|\sigma),$$
which, together with Stinespring’s theorem and the three previously mentioned properties of $S$, makes it possible to prove that quantum distinguishability does not increase under the action of a generic channel $\mathcal{C}$:
$$S(\mathcal{C}(\rho)\,\|\,\mathcal{C}(\sigma)) \leq S(\rho\|\sigma),$$
a formula also known as the data processing inequality (DPI). In fact,
$$S(\mathcal{C}(\rho)\|\mathcal{C}(\sigma)) = S(\mathrm{Tr}_b(U(\rho \otimes Y)U^\dagger)\,\|\,\mathrm{Tr}_b(U(\sigma \otimes Y)U^\dagger)) \leq S(U(\rho \otimes Y)U^\dagger\,\|\,U(\sigma \otimes Y)U^\dagger) = S(\rho \otimes Y\,\|\,\sigma \otimes Y) = S(\rho\|\sigma) + S(Y\|Y) = S(\rho\|\sigma).$$
Inequality (44) explains why, in the quantum information literature, a channel is often referred to as a coarse-graining procedure, a term borrowed from statistical mechanics.
This terminology reflects the idea that information about different quantum states is lost through the action of the channel, as previously distinguishable states may become indistinguishable after the transformation.
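Before turning to the proofs, the monotonicity inequality (43) itself is easy to test numerically on random states: the sketch below is ours (helper names and the random-state construction are assumptions) and checks $S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) \leq S(\rho\|\sigma)$ on random full-rank density operators.

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    """S(rho||sigma) for (numerically) full-rank density matrices."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def partial_trace_b(X, da, db):
    return np.trace(X.reshape(da, db, da, db), axis1=1, axis2=3)

def random_density(d, rng):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(7)
da, db = 2, 3
for _ in range(100):
    rho, sigma = random_density(da * db, rng), random_density(da * db, rng)
    S_full = rel_entropy(rho, sigma)
    S_red = rel_entropy(partial_trace_b(rho, da, db), partial_trace_b(sigma, da, db))
    assert S_red <= S_full + 1e-9      # monotonicity under the partial trace
```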
While the first three properties of S mentioned above are relatively straightforward to prove, its monotonicity under partial trace is considerably more subtle. In the next two sections, we provide a detailed analysis of the proofs originally proposed by Petz and by Uhlmann.

3. Petz’s Proof of the Monotonicity of the Relative Entropy Under Partial Trace

In this section, we examine the strategy proposed by Petz in [10] for proving the monotonicity of relative entropy under the partial trace operation, which is based on a clever reformulation of the expression of the relative entropy as a suitable inner product inspired by an analogous construction by Araki [2].
We will show that Petz’s proof is flawed due to an incorrect application of the contractive version of Jensen’s operator inequality. We will explain how this issue can be circumvented, thus restoring the validity of Petz’s overall approach. Furthermore, we will show how to extend it to also incorporate non-invertible density operators.
The notation that will be used throughout this section is detailed below:
  • Given the finite-dimensional Hilbert spaces $(\mathcal{H}_a, \langle\cdot,\cdot\rangle_a)$ and $(\mathcal{H}_b, \langle\cdot,\cdot\rangle_b)$, we define $\mathcal{H}_{ab} := \mathcal{H}_a \otimes \mathcal{H}_b$, with inner product $\langle\cdot,\cdot\rangle_{ab}$ induced by those of $\mathcal{H}_a$ and $\mathcal{H}_b$;
  • Operators of $\mathcal{B}_H(\mathcal{H}_a)$ will be denoted as $X, Y, Z$, and operators of $\mathcal{B}_H(\mathcal{H}_{ab})$ will be denoted by $A, B, C$;
  • $\rho, \sigma \in \mathcal{B}_H(\mathcal{H}_{ab})$ are two positive definite (invertible) density operators (actually, for the following analysis, only $\rho$ has to be invertible; however, as we noted with the support-based definition of the relative entropy, if $\rho > 0$, then its support is the entire $\mathcal{H}_{ab}$, and so, for our analysis to be meaningful, we also have to demand $\sigma > 0$): $\rho, \sigma > 0$.
In order to rewrite the relative entropy as an inner product, we must introduce some suitable superoperators.
Precisely, for fixed $B, C \in \mathcal{B}_H(\mathcal{H}_{ab})$, $B > 0$, consider the Hermitian superoperators $R_B, L_C, \Delta_{B,C} \in \mathcal{B}_H(\mathcal{B}_H(\mathcal{H}_{ab}))$ given by the right and left operator multiplications and the relative modular operator, respectively, i.e.,
$$R_B(A) := AB, \qquad L_C(A) := CA, \qquad \Delta_{B,C}(A) := L_C R_B^{-1}(A) = C A B^{-1}.$$
Clearly, $R_B^{-1} = R_{B^{-1}}$, $L_C^{-1} = L_{C^{-1}}$, and
$$[L_C, R_B] = [L_C, R_B^{-1}] = 0.$$
All these superoperators are Hermitian; in fact,
$$\langle R_B(A), C\rangle_{ab} = \mathrm{Tr}((AB)^\dagger C) = \mathrm{Tr}(B A^\dagger C) = \mathrm{Tr}(A^\dagger C B) = \langle A, R_B(C)\rangle_{ab},$$
and similarly for $L_C$. Regarding $\Delta_{B,C}$, using Equation (47), we have
$$\Delta_{B,C}^\dagger = (L_C R_B^{-1})^\dagger = (R_B^{-1})^\dagger L_C^\dagger = (R_B^\dagger)^{-1} L_C^\dagger = R_B^{-1} L_C = L_C R_B^{-1} = \Delta_{B,C}.$$
If we now consider the specific case in which $B = \rho > 0$ and $C = \sigma > 0$, then we obtain
$$\log(\Delta_{\rho,\sigma}) = \log(L_\sigma R_\rho^{-1}) = \log\!\big(\exp(\log(L_\sigma))\exp(-\log(R_\rho))\big) = \log\!\big(\exp(\log(L_\sigma) - \log(R_\rho))\big) = \log(L_\sigma) - \log(R_\rho),$$
where we used the fact that $L_\sigma$ and $R_\rho$ commute. Since $\log(L_\sigma) = L_{\log(\sigma)}$ and $\log(R_\rho) = R_{\log(\rho)}$, applying Formula (50) to $\rho^{1/2}$ gives
$$\log(\Delta_{\rho,\sigma})(\rho^{1/2}) = \log(\sigma)\,\rho^{1/2} - \rho^{1/2}\log(\rho).$$
This last identity, the cyclic property of the trace, and the property $[\rho^{1/2}, \log(\rho)] = 0$ imply that
$$S(\rho\|\sigma) = \mathrm{Tr}[\rho\log(\rho) - \rho\log(\sigma)] = \mathrm{Tr}[\rho^{1/2}\log(\rho)\,\rho^{1/2} - \rho^{1/2}\log(\sigma)\,\rho^{1/2}] = -\mathrm{Tr}\big[\rho^{1/2}\big(\log(\sigma)\,\rho^{1/2} - \rho^{1/2}\log(\rho)\big)\big] = -\langle\rho^{1/2}, \log(\Delta_{\rho,\sigma})(\rho^{1/2})\rangle_{ab},$$
which is the ‘inner product reformulation’ of the relative entropy between the positive definite states ρ and σ that we were searching for.
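The reformulation (52) can be verified numerically by representing the superoperator $\Delta_{\rho,\sigma}(A) = \sigma A \rho^{-1}$ as a matrix acting on vectorized operators. The sketch below is our own; it uses the column-stacking identity $\mathrm{vec}(CAB) = (B^{T}\otimes C)\,\mathrm{vec}(A)$, so that $\Delta_{\rho,\sigma}$ corresponds to the Kronecker matrix $(\rho^{-1})^{T}\otimes\sigma$ (this vectorized representation is our implementation choice, not the paper's).

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def random_density(d, rng):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(3)
d = 4
rho, sigma = random_density(d, rng), random_density(d, rng)

# Relative modular superoperator Delta(A) = sigma A rho^{-1}, as a d^2 x d^2 matrix
# acting on column-stacked operators vec(A).
Delta = np.kron(np.linalg.inv(rho).T, sigma)
log_Delta = logm(Delta)

vec = lambda A: A.flatten(order="F")
sqrt_rho = sqrtm(rho)

# -<rho^{1/2}, log(Delta)(rho^{1/2})>_HS versus Tr(rho log rho - rho log sigma)
lhs = -np.real(np.vdot(vec(sqrt_rho), log_Delta @ vec(sqrt_rho)))
rhs = np.real(np.trace(rho @ (logm(rho) - logm(sigma))))
assert np.isclose(lhs, rhs)
```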
We can obtain an analogous formula for the relative entropy of the partial traces of $\rho$ and $\sigma$. To this end, if we fix $Y, Z \in \mathcal{B}_H(\mathcal{H}_a)$, $Y > 0$, then we can define the Hermitian superoperators $R_Y^a, L_Z^a, \Delta_{Y,Z}^a \in \mathcal{B}_H(\mathcal{B}_H(\mathcal{H}_a))$ as follows:
$$R_Y^a(X) := XY, \qquad L_Z^a(X) := ZX, \qquad \Delta_{Y,Z}^a(X) := L_Z^a (R_Y^a)^{-1}(X) = Z X Y^{-1}.$$
By carrying out computations analogous to those in Equation (52), but this time using the superoperators $R^a_{\mathrm{Tr}_b(\rho)}$, $L^a_{\mathrm{Tr}_b(\sigma)}$, $\Delta^a_{\rho,\sigma} := L^a_{\mathrm{Tr}_b(\sigma)} (R^a_{\mathrm{Tr}_b(\rho)})^{-1}$, we obtain
$$S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) = -\langle\mathrm{Tr}_b(\rho)^{1/2}, \log(\Delta^a_{\rho,\sigma})(\mathrm{Tr}_b(\rho)^{1/2})\rangle_a.$$
Due to the minus sign in front of the inner products appearing in Equations (52) and (54), we have that the monotonicity of the relative entropy under partial trace is equivalent to
$$\langle\rho^{1/2}, \log(\Delta_{\rho,\sigma})(\rho^{1/2})\rangle_{ab} \leq \langle\mathrm{Tr}_b(\rho)^{1/2}, \log(\Delta^a_{\rho,\sigma})(\mathrm{Tr}_b(\rho)^{1/2})\rangle_a.$$
These inner products are defined on two different Hilbert spaces, $\mathcal{B}_H(\mathcal{H}_{ab})$ and $\mathcal{B}_H(\mathcal{H}_a)$; in order to perform a meaningful comparison and prove the inequality, Petz introduced a superoperator $V_\rho: \mathcal{B}_H(\mathcal{H}_a) \to \mathcal{B}_H(\mathcal{H}_{ab})$ through the explicit formula
$$V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2}) := \mathrm{Tr}_b^\dagger(X)\,\rho^{1/2},$$
which serves as a bridge between the reduced state $\mathrm{Tr}_b(\rho)$ and the full state $\rho$. While, as we are going to show, this definition is computationally effective, it may at first appear somewhat ad hoc. Actually, a seemingly more natural choice for $V_\rho$ would have been $\mathrm{Tr}_b^\dagger$, for two reasons: first, as for $V_\rho$, $\mathrm{Tr}_b^\dagger$ is a map between $\mathcal{B}_H(\mathcal{H}_a)$ and $\mathcal{B}_H(\mathcal{H}_{ab})$, and, second, it satisfies the Schwarz inequality (17), which, in the following, will play a crucial role in the proof of inequality (55).
It turns out that $V_\rho$ is tightly related to the adjoint of the partial trace $\mathrm{Tr}_b$, not w.r.t. the original inner products of $\mathcal{B}_H(\mathcal{H}_a)$ and $\mathcal{B}_H(\mathcal{H}_{ab})$, but w.r.t. suitably weighted inner products that naturally emerge from the previous structural analysis of the relative entropy in terms of superoperators.
In fact, the reformulations of the relative entropy obtained in Equations (52) and (54) involve inner products weighted by powers of $\rho$ and $\mathrm{Tr}_b(\rho)$, respectively. This observation suggests that the correct notion of adjoint to consider for $\mathrm{Tr}_b$ is the one defined relative to the inner products (the positive definiteness of these inner products is guaranteed by the fact that $\rho, \mathrm{Tr}_b(\rho) > 0$)
$$\langle A, B\rangle_{ab,\rho} := \langle R_{\rho^{-1/2}}(A), B\rangle_{ab}, \qquad \langle X, Y\rangle_{a,\rho} := \langle R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}(X), Y\rangle_a.$$
We have
$$\langle X, \mathrm{Tr}_b(A)\rangle_{a,\rho} = \langle R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}(X), \mathrm{Tr}_b(A)\rangle_a = \langle \mathrm{Tr}_b^\dagger R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}(X), A\rangle_{ab} = \langle R_{\rho^{-1/2}} R_{\rho^{1/2}}\,\mathrm{Tr}_b^\dagger R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}(X), A\rangle_{ab} = \langle R_{\rho^{1/2}}\,\mathrm{Tr}_b^\dagger R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}(X), A\rangle_{ab,\rho}.$$
So, the adjoint of $\mathrm{Tr}_b$ w.r.t. the weighted inner products defined above is the operator $\mathrm{Tr}_b^{\dagger,\rho} := R_{\rho^{1/2}} \circ \mathrm{Tr}_b^\dagger \circ R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}$, which, for all $X \in \mathcal{B}_H(\mathcal{H}_a)$, satisfies
$$\big(R_{\rho^{1/2}}\,\mathrm{Tr}_b^\dagger\, R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}\big)(X\,\mathrm{Tr}_b(\rho)^{1/2}) = (R_{\rho^{1/2}}\,\mathrm{Tr}_b^\dagger)(X) = \mathrm{Tr}_b^\dagger(X)\,\rho^{1/2},$$
and, therefore, $V_\rho = \mathrm{Tr}_b^{\dagger,\rho}$; i.e.,
$$V_\rho = R_{\rho^{1/2}} \circ \mathrm{Tr}_b^\dagger \circ R^a_{\mathrm{Tr}_b(\rho)^{-1/2}}.$$
From this fact, we obtain that the explicit action of $V_\rho$ on any $X \in \mathcal{B}_H(\mathcal{H}_a)$ is
$$V_\rho(X) = \big(X\,\mathrm{Tr}_b(\rho)^{-1/2} \otimes id_b\big)\,\rho^{1/2},$$
and this implies immediately that $V_\rho$ transforms $\mathrm{Tr}_b(\rho)^{1/2}$ into $\rho^{1/2}$:
$$V_\rho(\mathrm{Tr}_b(\rho)^{1/2}) = \rho^{1/2}.$$
Repeatedly using the cyclic property of the trace and Schwarz’s inequality (17) satisfied by $\mathrm{Tr}_b^\dagger$, we can prove that $V_\rho$ is a contraction:
$$\|V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2})\|^2 = \langle \mathrm{Tr}_b^\dagger(X)\rho^{1/2}, \mathrm{Tr}_b^\dagger(X)\rho^{1/2}\rangle_{ab} = \mathrm{Tr}[\rho^{1/2}\,\mathrm{Tr}_b^\dagger(X)^\dagger\,\mathrm{Tr}_b^\dagger(X)\,\rho^{1/2}] = \mathrm{Tr}[\mathrm{Tr}_b^\dagger(X)^\dagger\,\mathrm{Tr}_b^\dagger(X)\,\rho] \leq \mathrm{Tr}[\mathrm{Tr}_b^\dagger(X^\dagger X)\,\rho] = \langle \mathrm{Tr}_b^\dagger(X^\dagger X), \rho\rangle_{ab} = \langle X^\dagger X, \mathrm{Tr}_b(\rho)\rangle_a = \mathrm{Tr}[\mathrm{Tr}_b(\rho)^{1/2}\,X^\dagger X\,\mathrm{Tr}_b(\rho)^{1/2}] = \langle X\,\mathrm{Tr}_b(\rho)^{1/2}, X\,\mathrm{Tr}_b(\rho)^{1/2}\rangle_a = \|X\,\mathrm{Tr}_b(\rho)^{1/2}\|^2.$$
Note now that, since $X$ is Hermitian,
$$\langle \Delta^a_{\rho,\sigma}(X\,\mathrm{Tr}_b(\rho)^{1/2}), X\,\mathrm{Tr}_b(\rho)^{1/2}\rangle_a = \langle \mathrm{Tr}_b(\sigma)\,X\,\mathrm{Tr}_b(\rho)^{-1/2}, X\,\mathrm{Tr}_b(\rho)^{1/2}\rangle_a = \mathrm{Tr}[\mathrm{Tr}_b(\rho)^{-1/2}\,X\,\mathrm{Tr}_b(\sigma)\,X\,\mathrm{Tr}_b(\rho)^{1/2}] = \mathrm{Tr}[X\,\mathrm{Tr}_b(\sigma)\,X] = \mathrm{Tr}[X\,X\,\mathrm{Tr}_b(\sigma)].$$
Using the equality just proven, we can show that $V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho \leq \Delta^a_{\rho,\sigma}$; in fact,
$$\langle V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2}), X\,\mathrm{Tr}_b(\rho)^{1/2}\rangle_a = \langle \Delta_{\rho,\sigma}\,V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2}), V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2})\rangle_{ab} = \langle \Delta_{\rho,\sigma}(\mathrm{Tr}_b^\dagger(X)\,\rho^{1/2}), \mathrm{Tr}_b^\dagger(X)\,\rho^{1/2}\rangle_{ab} = \langle \sigma\,\mathrm{Tr}_b^\dagger(X)\,\rho^{-1/2}, \mathrm{Tr}_b^\dagger(X)\,\rho^{1/2}\rangle_{ab} = \mathrm{Tr}[\rho^{-1/2}\,\mathrm{Tr}_b^\dagger(X)\,\sigma\,\mathrm{Tr}_b^\dagger(X)\,\rho^{1/2}] = \mathrm{Tr}[\mathrm{Tr}_b^\dagger(X)\,\mathrm{Tr}_b^\dagger(X)\,\sigma] \leq \mathrm{Tr}[\mathrm{Tr}_b^\dagger(X\,X)\,\sigma] = \langle \mathrm{Tr}_b^\dagger(X\,X), \sigma\rangle_{ab} = \langle X\,X, \mathrm{Tr}_b(\sigma)\rangle_a = \mathrm{Tr}[X\,X\,\mathrm{Tr}_b(\sigma)] = \langle \Delta^a_{\rho,\sigma}(X\,\mathrm{Tr}_b(\rho)^{1/2}), X\,\mathrm{Tr}_b(\rho)^{1/2}\rangle_a,$$
where we again used Schwarz’s inequality, and, to write the last equality, we applied Equation (64).
Since $\log(x)$ is an operator monotone function, the inequality $V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho \leq \Delta^a_{\rho,\sigma}$ implies
$$\log(V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho) \leq \log(\Delta^a_{\rho,\sigma}),$$
and so
$$S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) = -\langle\mathrm{Tr}_b(\rho)^{1/2}, \log(\Delta^a_{\rho,\sigma})(\mathrm{Tr}_b(\rho)^{1/2})\rangle_a \leq -\langle\mathrm{Tr}_b(\rho)^{1/2}, \log(V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho)(\mathrm{Tr}_b(\rho)^{1/2})\rangle_a.$$
Petz’s strategy to make the relative entropy $S(\rho\|\sigma)$ appear on the right-hand side of the previous inequality consists of using the fact that $V_\rho$ is a contraction to apply the contractive Jensen operator inequality (9) with $f(x) = -\log(x)$. In this way, due to (62), we would get
$$S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) \leq -\langle\mathrm{Tr}_b(\rho)^{1/2}, \log(V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho)(\mathrm{Tr}_b(\rho)^{1/2})\rangle_a \leq -\langle\mathrm{Tr}_b(\rho)^{1/2}, V_\rho^\dagger\,\log(\Delta_{\rho,\sigma})\,V_\rho(\mathrm{Tr}_b(\rho)^{1/2})\rangle_a = -\langle V_\rho\,\mathrm{Tr}_b(\rho)^{1/2}, \log(\Delta_{\rho,\sigma})\,V_\rho(\mathrm{Tr}_b(\rho)^{1/2})\rangle_{ab} = -\langle\rho^{1/2}, \log(\Delta_{\rho,\sigma})(\rho^{1/2})\rangle_{ab} = S(\rho\|\sigma),$$
and so the proof of the monotonicity of $S$ w.r.t. the partial trace $\mathrm{Tr}_b$ would be achieved.
For this argument to be valid, $-\log(x)$ is required to be operator convex (which is true), to be well-defined at $x = 0$, and to satisfy $-\log(0) \leq 0$.
Petz circumvented the lack of definition of $-\log(x)$ at $x = 0$ by using the following integral identity:
$$\int_0^{+\infty}\left(\frac{1}{x+\xi} - \frac{1}{1+\xi}\right)d\xi = \lim_{M\to+\infty}\Big[\log(x+\xi) - \log(1+\xi)\Big]_0^M = \lim_{M\to+\infty}\log\left(\frac{x+M}{1+M}\right) - \log(x) = -\log(x).$$
Since the integral over $[0,+\infty)$ coincides with that over $(0,+\infty)$, we may restrict our attention to strictly positive values of $\xi$. If we denote by $id_{ab}$ the identity operator on $\mathcal{B}_H(\mathcal{H}_{ab})$, we have
$$\Delta_{\rho,\sigma,\xi} := (\Delta_{\rho,\sigma} + \xi\, id_{ab})^{-1} - (id_{ab} + \xi\, id_{ab})^{-1} = (\Delta_{\rho,\sigma} + \xi\, id_{ab})^{-1} - (1+\xi)^{-1} id_{ab};$$
moreover,
$$\langle\rho^{1/2}, (1+\xi)^{-1} id_{ab}(\rho^{1/2})\rangle_{ab} = (1+\xi)^{-1}\,\mathrm{Tr}(\rho) = (1+\xi)^{-1};$$
thus,
$$\langle\rho^{1/2}, \Delta_{\rho,\sigma,\xi}(\rho^{1/2})\rangle_{ab} = \langle\rho^{1/2}, (\Delta_{\rho,\sigma} + \xi\, id_{ab})^{-1}(\rho^{1/2})\rangle_{ab} - (1+\xi)^{-1}.$$
It follows that
$$S(\rho\|\sigma) = \int_0^{\infty}\Big(\langle\rho^{1/2}, (\Delta_{\rho,\sigma} + \xi\, id_{ab})^{-1}(\rho^{1/2})\rangle_{ab} - (1+\xi)^{-1}\Big)\,d\xi,$$
and, analogously,
$$S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) = \int_0^{\infty}\Big(\langle\mathrm{Tr}_b(\rho)^{1/2}, (\Delta^a_{\rho,\sigma} + \xi\, id_{a})^{-1}(\mathrm{Tr}_b(\rho)^{1/2})\rangle_{a} - (1+\xi)^{-1}\Big)\,d\xi.$$
For all $\xi \in (0,+\infty)$, the function $g_\xi$ given by $x \mapsto (x+\xi)^{-1} - (1+\xi)^{-1}$ is well-defined for $x = 0$, and it is operator convex and operator monotone decreasing; see [16], chapter V.1. Thanks to this last property, applied to the inequality $V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho \leq \Delta^a_{\rho,\sigma}$, we have
$$(\Delta^a_{\rho,\sigma} + \xi)^{-1} \leq (V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho + \xi)^{-1}.$$
However, $g_\xi(0) = \xi^{-1} - (1+\xi)^{-1} > 0$ for all $\xi \in (0,+\infty)$, so the contractive Jensen operator inequality cannot be used to write
$$(V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho + \xi)^{-1} \leq V_\rho^\dagger\,(\Delta_{\rho,\sigma} + \xi)^{-1}\,V_\rho,$$
which would lead to the proof of the monotonicity of the relative entropy with computations analogous to those shown in Formula (68).
A simple counterexample illustrating the failure of inequality (76) arises in the scalar case, where a contraction reduces to a multiplication by a real coefficient $\alpha \in (0,1]$ that satisfies $\alpha^\dagger = \alpha$. The inequality
$$(\alpha x \alpha + \xi)^{-1} \leq \alpha\,(x+\xi)^{-1}\,\alpha$$
is false for $\alpha < 1$ and all $\xi \in (0,+\infty)$, as shown in Figure 1.
Note that the inequality $\log(V^\dagger X V) \geq V^\dagger\log(X)\,V$ is also false for a generic contraction $V$, as we show in Figure 2 with a counterexample in the scalar case.
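The scalar counterexamples illustrated in Figures 1 and 2 can be reproduced with a few lines; the code below is our own sketch (the specific values of $\alpha$, $\xi$, and $x$ are arbitrary assumptions) and simply evaluates both sides of inequality (76) and of the log inequality for a scalar contraction $\alpha < 1$.

```python
import numpy as np

alpha = 0.5          # a scalar contraction: alpha * alpha = 0.25 <= 1
xi = 1.0
x = np.array([0.1, 1.0, 10.0])

# Inequality (76) in the scalar case: (a x a + xi)^(-1) <= a (x + xi)^(-1) a
lhs = 1.0 / (alpha * x * alpha + xi)
rhs = alpha * (1.0 / (x + xi)) * alpha
print(lhs <= rhs)       # [False False False]: the inequality fails at every sample

# log(a x a) >= a log(x) a also fails, e.g. for small x:
print(np.log(alpha * x * alpha) >= alpha * np.log(x) * alpha)   # [False False  True]
```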
To summarize, showing that V ρ is a contraction does not permit the use of the contractive version of Jensen’s operator inequality to prove the monotonicity of the relative entropy.

3.1. Correction of Petz’s Strategy to Prove the Relative Entropy Monotonicity

There is a simple correction of Petz’s strategy that restores the validity of his approach to prove the monotonicity of the relative entropy in a rigorous way. The correction was provided by Petz himself, together with Nielsen, in [24]. The same line of reasoning can also be found in [25,26].
The key idea of the correction lies in recognizing that the operator $V_\rho$ is not merely a contraction but an isometry; i.e., $V_\rho^\dagger V_\rho$ is the identity operator on $\mathcal{B}_H(\mathcal{H}_a)$. In fact, using Equations (11) and (56), we have
$$\langle Y\,\mathrm{Tr}_b(\rho)^{1/2}, V_\rho^\dagger V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2})\rangle_a = \langle V_\rho(Y\,\mathrm{Tr}_b(\rho)^{1/2}), V_\rho(X\,\mathrm{Tr}_b(\rho)^{1/2})\rangle_{ab} = \langle \mathrm{Tr}_b^\dagger(Y)\,\rho^{1/2}, \mathrm{Tr}_b^\dagger(X)\,\rho^{1/2}\rangle_{ab} = \mathrm{Tr}[(Y \otimes id_b)(X \otimes id_b)\,\rho] = \mathrm{Tr}[(Y X \otimes id_b)\,\rho] = \mathrm{Tr}[Y X\,\mathrm{Tr}_b(\rho)] = \mathrm{Tr}[\mathrm{Tr}_b(\rho)^{1/2}\,Y\,X\,\mathrm{Tr}_b(\rho)^{1/2}] = \langle Y\,\mathrm{Tr}_b(\rho)^{1/2}, X\,\mathrm{Tr}_b(\rho)^{1/2}\rangle_a.$$
This allows us to apply point (iii) of Theorem 1 to the operator convex function $f(x) = -\log(x)$, with $\mathcal{H} = \mathcal{B}_H(\mathcal{H}_{ab})$, $\mathcal{K} = \mathcal{B}_H(\mathcal{H}_a)$, $V = V_\rho$, and $X = \Delta_{\rho,\sigma}$, which ensures that the inequality
$$\log(V_\rho^\dagger\,\Delta_{\rho,\sigma}\,V_\rho) \geq V_\rho^\dagger\,\log(\Delta_{\rho,\sigma})\,V_\rho$$
holds true, confirming the validity of the computations in (68), thereby establishing the monotonicity of the relative entropy.
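The key fact that $V_\rho$ is an isometry can also be checked numerically from its explicit action $V_\rho(X) = (X\,\mathrm{Tr}_b(\rho)^{-1/2}\otimes id_b)\rho^{1/2}$; the sketch below is ours (helper names are assumptions) and verifies $\langle V_\rho(X), V_\rho(Y)\rangle_{ab} = \langle X, Y\rangle_a$ on random Hermitian operators.

```python
import numpy as np
from scipy.linalg import sqrtm

def partial_trace_b(X, da, db):
    return np.trace(X.reshape(da, db, da, db), axis1=1, axis2=3)

def random_density(d, rng):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

def random_hermitian(d, rng):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (M + M.conj().T) / 2

rng = np.random.default_rng(5)
da, db = 2, 3
rho = random_density(da * db, rng)                      # invertible with probability 1
sqrt_rho = sqrtm(rho)
inv_sqrt_rho_a = np.linalg.inv(sqrtm(partial_trace_b(rho, da, db)))

def V_rho(X):
    """V_rho(X) = (X Tr_b(rho)^(-1/2) ⊗ id_b) rho^(1/2)."""
    return np.kron(X @ inv_sqrt_rho_a, np.eye(db)) @ sqrt_rho

hs = lambda A, B: np.trace(A.conj().T @ B)              # Hilbert-Schmidt inner product

X, Y = random_hermitian(da, rng), random_hermitian(da, rng)
assert np.isclose(hs(V_rho(X), V_rho(Y)), hs(X, Y))     # V_rho preserves inner products
```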

3.2. Extension of Petz’s Proof to Non-Invertible Density Operators

The corrected version of Petz’s strategy for proving the monotonicity of relative entropy can be extended to include non-invertible density operators. To this end, the interplay between the support-based and the regularized definition of relative entropy given in (27) and (40), respectively, will prove to be useful.
First of all, note that, if $\mathrm{supp}(\rho) \not\subseteq \mathrm{supp}(\sigma)$, then $S(\rho\|\sigma) = +\infty$, and the monotonicity statement is trivially true.
We therefore restrict our attention to the case $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$, which ensures that the relative entropy is finite. Within this setting, we assume that $\rho$ is non-invertible, while $\sigma$ may or may not be invertible. This covers all cases not addressed by Petz’s original analysis in [10].
To establish the monotonicity of relative entropy in this broader context, we must first ensure that $S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) < +\infty$, i.e., that $\mathrm{supp}(\mathrm{Tr}_b(\rho)) \subseteq \mathrm{supp}(\mathrm{Tr}_b(\sigma))$. Observe that it is not necessary to check this condition when $\rho, \sigma > 0$, since, in that case, $\mathrm{Tr}_b(\rho)$ and $\mathrm{Tr}_b(\sigma)$ are also positive definite, and, thus, their supports coincide with $\mathcal{H}_a$.
We need a preliminary lemma regarding the kernel of positive semi-definite operators.
Lemma 1.
Let $T_1$ and $T_2$ be Hermitian positive semi-definite operators on a finite-dimensional Hilbert space $\mathcal{H}$. Then,
$$\ker(T_1 + T_2) = \ker(T_1) \cap \ker(T_2).$$
Proof. 
Proving the inclusion $\ker(T_1) \cap \ker(T_2) \subseteq \ker(T_1 + T_2)$ is trivial: if we have $x \in \ker(T_1) \cap \ker(T_2)$, then $0_{\mathcal{H}} = T_1 x + T_2 x = (T_1 + T_2)x$, so $x \in \ker(T_1 + T_2)$.
To show the inclusion $\ker(T_1 + T_2) \subseteq \ker(T_1) \cap \ker(T_2)$, we first note that, thanks to the positive semi-definiteness of $T_1$ and $T_2$, for all $x \in \mathcal{H}$, we have
$$\langle (T_1 + T_2)x, x\rangle = \langle T_1 x, x\rangle + \langle T_2 x, x\rangle \geq 0,$$
with equality to 0 if and only if $\langle T_1 x, x\rangle = \langle T_2 x, x\rangle = 0$, i.e., if and only if $T_1 x = T_2 x = 0$, because the eigenvalues of both $T_1$ and $T_2$ are non-negative.
Hence, if $x \in \ker(T_1 + T_2)$, then $\langle (T_1 + T_2)x, x\rangle = 0$, so, due to the previous considerations, $T_1 x = T_2 x = 0$, and, therefore, $x \in \ker(T_1) \cap \ker(T_2)$. □
Proposition 1.
For all density operators $\rho, \sigma$ such that $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$, it holds that
$$\mathrm{supp}(\mathrm{Tr}_b(\rho)) \subseteq \mathrm{supp}(\mathrm{Tr}_b(\sigma)).$$
Proof. 
Using the notation of Section 2, consider again the spectral decompositions of $\rho$ and $\sigma$:
$$\rho = \sum_{j \in I_+(\rho)} \lambda_j P_j + 0 \cdot P_0, \qquad \sigma = \sum_{k \in I_+(\sigma)} \mu_k \Pi_k + 0 \cdot \Pi_0,$$
where $P_j$ and $\Pi_k$ are the orthogonal projectors onto the eigenspaces corresponding to the positive eigenvalues $\lambda_j$ of $\rho$ and $\mu_k$ of $\sigma$, respectively. Then, the following operator sums yield the orthogonal projectors onto $\mathrm{supp}(\rho)$ and $\mathrm{supp}(\sigma)$, respectively:
$$P = \sum_{j \in I_+(\rho)} P_j, \qquad \Pi = \sum_{k \in I_+(\sigma)} \Pi_k.$$
Using the linearity and positivity of the partial trace and the fact that multiplying a positive semi-definite operator by a strictly positive scalar does not change its support, we have
$$\mathrm{supp}(\mathrm{Tr}_b(\rho)) = \mathrm{supp}\Big(\sum_{j \in I_+(\rho)} \lambda_j\, \mathrm{Tr}_b(P_j)\Big) = \mathrm{supp}\Big(\sum_{j \in I_+(\rho)} \mathrm{Tr}_b(P_j)\Big) = \mathrm{supp}\Big(\mathrm{Tr}_b\Big(\sum_{j \in I_+(\rho)} P_j\Big)\Big) = \mathrm{supp}(\mathrm{Tr}_b(P)).$$
The same argument applies to $\sigma$ and $\Pi$, so that
$$\mathrm{supp}(\mathrm{Tr}_b(\sigma)) = \mathrm{supp}(\mathrm{Tr}_b(\Pi)).$$
Moreover, according to a standard result about orthogonal projectors, the inclusion $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$ is equivalent to the fact that the operator $Q := \Pi - P$ is an orthogonal projector on $\mathrm{supp}(\sigma) \cap \mathrm{supp}(\rho)^\perp = \mathrm{supp}(\sigma) \cap \ker(\rho)$. By the linearity of the partial trace, we can write
$$\mathrm{Tr}_b(\Pi) = \mathrm{Tr}_b(P) + \mathrm{Tr}_b(Q),$$
and, moreover, using the fact that $\mathrm{Tr}_b$ is a positive map and Lemma 1, we have
$$\mathrm{supp}(\mathrm{Tr}_b(\Pi)) = \mathrm{supp}(\mathrm{Tr}_b(P) + \mathrm{Tr}_b(Q)) = \ker(\mathrm{Tr}_b(P) + \mathrm{Tr}_b(Q))^\perp = \big(\ker(\mathrm{Tr}_b(P)) \cap \ker(\mathrm{Tr}_b(Q))\big)^\perp = \mathrm{span}\big(\ker(\mathrm{Tr}_b(P))^\perp \cup \ker(\mathrm{Tr}_b(Q))^\perp\big) = \mathrm{supp}(\mathrm{Tr}_b(P)) + \mathrm{supp}(\mathrm{Tr}_b(Q)) \supseteq \mathrm{supp}(\mathrm{Tr}_b(P)),$$
having used a standard property of the orthogonal complement of the intersection of two vector subspaces. Therefore,
$$\mathrm{supp}(\mathrm{Tr}_b(\rho)) = \mathrm{supp}(\mathrm{Tr}_b(P)) \subseteq \mathrm{supp}(\mathrm{Tr}_b(\Pi)) = \mathrm{supp}(\mathrm{Tr}_b(\sigma)),$$
as claimed. □
While the support-based definition of relative entropy is useful to prove Equation (82), in order to prove the monotonicity of relative entropy for a non-invertible density operator ρ , the regularized definition (40) turns out to be more adequate.
Since both ρ ε > 0 and σ ε > 0 for any arbitrarily small ε > 0 , the modular operator Δ ρ ε , σ ε is well-defined. Similarly, the modular operator Δ ρ ε , σ ε a is also well-defined.
By applying the same steps used in the corrected version of Petz’s proof and using Equation (52), we obtain that, for any $\varepsilon > 0$,
$$\mathrm{Tr}[\mathrm{Tr}_b(\rho_\varepsilon)\log\mathrm{Tr}_b(\rho_\varepsilon) - \mathrm{Tr}_b(\rho_\varepsilon)\log\mathrm{Tr}_b(\sigma_\varepsilon)] = -\langle\mathrm{Tr}_b(\rho_\varepsilon)^{1/2}, \log(\Delta^a_{\rho_\varepsilon,\sigma_\varepsilon})(\mathrm{Tr}_b(\rho_\varepsilon)^{1/2})\rangle_a \leq -\langle\rho_\varepsilon^{1/2}, \log(\Delta_{\rho_\varepsilon,\sigma_\varepsilon})(\rho_\varepsilon^{1/2})\rangle_{ab} = \mathrm{Tr}[\rho_\varepsilon\log\rho_\varepsilon - \rho_\varepsilon\log\sigma_\varepsilon].$$
Notice that we can deal with $\rho_\varepsilon, \sigma_\varepsilon$ instead of $\rho, \sigma$ even if they do not have unit trace, because this property is used in Petz’s original (and flawed) argument only in Equation (71), which does not play any role in the corrected version outlined in Section 3.1.
By continuity, taking the limit $\varepsilon \to 0^+$ on both sides of the previous inequality leads to
$$S(\mathrm{Tr}_b(\rho)\,\|\,\mathrm{Tr}_b(\sigma)) = \lim_{\varepsilon\to 0^+}\mathrm{Tr}[\mathrm{Tr}_b(\rho_\varepsilon)\log\mathrm{Tr}_b(\rho_\varepsilon) - \mathrm{Tr}_b(\rho_\varepsilon)\log\mathrm{Tr}_b(\sigma_\varepsilon)] \leq \lim_{\varepsilon\to 0^+}\mathrm{Tr}[\rho_\varepsilon\log\rho_\varepsilon - \rho_\varepsilon\log\sigma_\varepsilon] = S(\rho\|\sigma),$$
which completes the proof for non-invertible $\rho$ and general $\sigma$.
As a final remark, we note that, in [10], Petz claimed that his proof applies not only to quantum channels but also to adjoints of unital Schwarz maps. However, that claim relies on the flawed argument that we have pointed out. The validity of Petz’s proof can be restored by proving that V ρ is an isometry, a property that holds if we are dealing with the partial trace but not with the adjoint of a general unital Schwarz map.

4. Uhlmann’s Proof of the Monotonicity of the Relative Entropy Under Partial Trace

The proof offered by Uhlmann in [7] (see also [27] for a review) was more general than that offered by Petz because it also naturally encompassed the case of non-invertible density operators. However, thanks to the correction and generalization of Petz’s proof outlined in the previous section, now, we can state that the two procedures have the same generality.
Uhlmann’s proof is based on the concept of interpolations of positive sesquilinear forms, which we recall in the following subsection. Consistently with the analysis developed so far, we will consider only the case of finite-dimensional vector spaces.

4.1. Interpolations of Positive Sesquilinear Forms

Let $V$ be a vector space of finite dimension $d$ over the field $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$, and let $F(V)$ be the set of sesquilinear forms over $V$, assumed to be linear in the second variable and conjugate-linear in the first. The results of this subsection also encompass the case of bilinear forms.
We say that $\alpha \in F(V)$ is positive if $\alpha(v,v) \geq 0$ for all $v \in V$, and we denote the space of positive sesquilinear forms over $V$ by $F_+(V)$.
We can endow $F_+(V)$ with a Löwner-like partial ordering: given $\alpha, \beta \in F_+(V)$, we say that $\beta \leq \alpha$ if $\alpha - \beta \geq 0$.
Fixing a basis of $V$, for any form $\alpha \in F(V)$, there exists a unique operator $T \in \mathrm{End}_{\mathbb{F}}(V)$ such that, for all $v, w \in V$, written in coordinates as $x, y \in \mathbb{F}^d$, one has
$$\alpha(v,w) = x^\dagger T y.$$
It can be easily proven that, for a positive form $\alpha \in F_+(V)$, the kernel of $\alpha$ coincides with the isotropic cone
$$\ker(\alpha) = \{v \in V : \alpha(v,v) = 0\},$$
and they are both equal to the kernel of the positive semi-definite operator $T$.
Let $\mathcal{H}$ now be a Hilbert space with inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$, $h: V \to \mathcal{H}$ be a linear surjective map, and $A \in \mathcal{B}_H(\mathcal{H})$. Then, $\alpha \in F_+(V)$ is said to be represented by $(\mathcal{H}, h, A)$, indicated with $\alpha \sim (\mathcal{H}, h, A)$, if
$$\alpha(v,w) = \langle h(v), A\, h(w)\rangle_{\mathcal{H}}, \qquad \forall v, w \in V.$$
Two representations of positive sesquilinear forms $\alpha \sim (\mathcal{H}, h, A)$ and $\beta \sim (\mathcal{H}, h, B)$ are said to be compatible if $[A, B] = 0$. As we shall see shortly, compatibility is a key concept in constructing a functional calculus for sesquilinear forms.
The following theorem shows that compatible representations exist, and its constructive proof provides a (non-unique) way to build them.
Theorem 2.
Let $\alpha, \beta \in F_+(V)$. Then, there exist representations $\alpha \sim (\mathcal{H}, h, A)$ and $\beta \sim (\mathcal{H}, h, B)$ (with the same mapping $h$) such that $[A, B] = 0$.
Proof. 
Let $N \subseteq V$ be the kernel of the form $\alpha + \beta$,
$$N = \{v \in V : \alpha(v,v) + \beta(v,v) = 0\}.$$
By fixing a basis of $V$, we can associate $\alpha$ and $\beta$ with two Hermitian and positive semi-definite operators $T_1, T_2 \in \mathrm{End}_{\mathbb{F}}(V)$ via Equation (92). It follows that
$$N = \ker(\alpha + \beta) = \ker(T_1 + T_2) = \ker(T_1) \cap \ker(T_2),$$
where the last equality is provided by Lemma 1.
Now, by setting $\mathcal{H} := V/N$, we can define the following inner product:
$$\langle\cdot,\cdot\rangle: \mathcal{H} \times \mathcal{H} \to \mathbb{F}, \qquad (v + N, w + N) \mapsto \langle v + N, w + N\rangle := \alpha(v,w) + \beta(v,w).$$
Note that this inner product is well-defined thanks to the equalities (96).
By taking $h$ as the (surjective) quotient map $h: V \to \mathcal{H}$, $v \mapsto h(v) = v + N$, we can define the following positive sesquilinear forms on $\mathcal{H}$:
$$\tilde\alpha(h(v), h(w)) = \tilde\alpha(v + N, w + N) := \alpha(v,w),$$
$$\tilde\beta(h(v), h(w)) = \tilde\beta(v + N, w + N) := \beta(v,w),$$
for all $v, w \in V$.
Thanks to the Riesz representation theorem, there exists a unique couple of positive Hermitian operators $A, B \in \mathcal{B}_H(\mathcal{H})$ such that
$$\tilde\alpha(h(v), h(w)) = \langle h(v), A\, h(w)\rangle = \alpha(v,w),$$
$$\tilde\beta(h(v), h(w)) = \langle h(v), B\, h(w)\rangle = \beta(v,w).$$
It follows that, for all $v, w \in V$,
$$\langle h(v), h(w)\rangle = \alpha(v,w) + \beta(v,w) = \tilde\alpha(h(v), h(w)) + \tilde\beta(h(v), h(w)) = \langle h(v), (A + B)\, h(w)\rangle,$$
and, hence, $A + B = id_{\mathcal{H}}$, which implies that $A$ and $B$ commute. □
Consider now $\mathbb{R}_+^2 = \{(x,y) \in \mathbb{R}^2 : x \geq 0,\ y \geq 0\}$, and let $J$ be the set of homogeneous (of degree 1), measurable, and locally bounded functions $f: \mathbb{R}_+^2 \to \mathbb{R}$. It is possible to develop the concept of function of positive sesquilinear forms thanks to the following theorem, whose quite lengthy proof can be consulted in [28].
Theorem 3.
Let $V$ be a vector space, $\alpha, \beta \in F_+(V)$, and let $\alpha \sim (\mathcal{H}, h, A)$ and $\beta \sim (\mathcal{H}, h, B)$ be two compatible representations of α and β. Then, for any $f \in J$, the function $\gamma: V \times V \to \mathbb{F}$,
$$\gamma(v,w) := \langle h(v), f(A,B)\, h(w)\rangle, \qquad v, w \in V,$$
is a well-defined sesquilinear form on V; i.e., γ is independent of the choice of representations, and, for any given f, γ depends only on α , β F + ( V ) .
Note that, on the right-hand side of (103), f ( A , B ) is intended as an operator function, which is well-defined since A and B commute, and so they can be simultaneously diagonalized.
Combining Theorems 2 and 3, every $f \in J$ can be extended to a function of positive sesquilinear forms. Keeping the same symbol for simplicity, we can define
$$f: F_+(V) \times F_+(V) \to F(V)$$
as follows: given $\alpha, \beta \in F_+(V)$ and any compatible representations $\alpha \sim (\mathcal{H}, h, A)$ and $\beta \sim (\mathcal{H}, h, B)$, the sesquilinear form $f(\alpha, \beta)$ is represented as
$$f(\alpha, \beta) \sim (\mathcal{H}, h, f(A,B)).$$
The elements of $J$ used to define the concept of the interpolation of positive sesquilinear forms are the positive functions $f_t(x,y) = x^{1-t} y^t$, where $t \in [0,1]$. Given $\alpha, \beta \in F_+(V)$, we call interpolation from α to β the positive sesquilinear form
$$\gamma_{\alpha\beta}^t := f_t(\alpha, \beta): V \times V \to \mathbb{F}, \qquad (v,w) \mapsto f_t(\alpha, \beta)(v,w),$$
which means that, for any compatible representations $\alpha \sim (\mathcal{H}, h, A)$ and $\beta \sim (\mathcal{H}, h, B)$,
$$\gamma_{\alpha\beta}^t(v,w) = \langle h(v), A^{1-t} B^t\, h(w)\rangle, \qquad v, w \in V.$$
The interpolation is said to go from α to β because, clearly, $\gamma_{\alpha\beta}^0 = \alpha$ and $\gamma_{\alpha\beta}^1 = \beta$. Moreover, the interpolation of two interpolations is another interpolation. In fact, given $t_1, t_2 \in [0,1]$, we have
$$\gamma^t_{\gamma^{t_1}_{\alpha\beta}\,\gamma^{t_2}_{\alpha\beta}}(v,w) = \langle h(v), (A^{1-t_1}B^{t_1})^{1-t}(A^{1-t_2}B^{t_2})^{t}\, h(w)\rangle = \langle h(v), A^{(1-t_1)(1-t)+(1-t_2)t}\, B^{t_1(1-t)+t_2 t}\, h(w)\rangle = \langle h(v), A^{1-(t_1(1-t)+t_2 t)}\, B^{t_1(1-t)+t_2 t}\, h(w)\rangle = \gamma^{t'}_{\alpha\beta}(v,w),$$
with $t' = t_1(1-t) + t_2 t$.
The interpolation of α, β computed at the value $t = 1/2$ corresponds to a particularly important positive sesquilinear form, indicated with
$$\alpha \,\#\, \beta := \gamma^{1/2}_{\alpha\beta}$$
and called the geometric mean of α, β. Clearly, we have
$$(\alpha \,\#\, \beta)(v,w) = \gamma^{1/2}_{\alpha\beta}(v,w) = \langle h(v), A^{1/2} B^{1/2}\, h(w)\rangle, \qquad v, w \in V.$$
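For commuting positive matrices $A$ and $B$, the geometric mean form is represented by $A^{1/2}B^{1/2}$; the short sketch below (ours; the random construction of a common eigenbasis is an assumption) checks numerically the domination property $|\langle v, A^{1/2}B^{1/2} w\rangle|^2 \leq \langle v, A v\rangle\,\langle w, B w\rangle$ that Theorem 4 below formalizes.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# Two commuting positive semi-definite matrices: diagonal in the same basis.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag(rng.uniform(0.0, 2.0, d)) @ U.T
B = U @ np.diag(rng.uniform(0.0, 2.0, d)) @ U.T

def psd_power(M, p):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None) ** p) @ V.T

G = psd_power(A, 0.5) @ psd_power(B, 0.5)    # represents the geometric mean form

for _ in range(1000):
    v, w = rng.standard_normal(d), rng.standard_normal(d)
    assert abs(v @ G @ w) ** 2 <= (v @ A @ v) * (w @ B @ w) + 1e-9
```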
An important property of the geometric mean of α and β will emerge in connection with the following concept: a positive sesquilinear form $r \in F_+(V)$ is said to be dominated by $\alpha, \beta \in F_+(V)$ if
$$|r(v,w)|^2 \leq \alpha(v,v)\,\beta(w,w), \qquad \forall v, w \in V.$$
The following theorem establishes that $\alpha \# \beta$ is the ‘maximal’ positive sesquilinear form dominated by α and β.
Theorem 4.
Let $V$ be a vector space, and let $\alpha, \beta \in F_+(V)$; then, $\alpha \# \beta$ is dominated by α, β. Moreover, interpreting $F_+(V)$ as a partially ordered set, let $S \subseteq F_+(V)$ be the subset of positive sesquilinear forms dominated by α, β; then, $\alpha \# \beta = \sup S$, i.e., $r \leq \alpha \# \beta$ for all $r \in S$.
Proof. 
The first statement is simply an application of the Cauchy–Schwarz inequality. For all $v, w \in V$ and for any compatible representations of α and β, we have
$$|\langle h(v), A^{1/2} B^{1/2} h(w)\rangle|^2 = |\langle A^{1/2} h(v), B^{1/2} h(w)\rangle|^2 \leq \langle A^{1/2} h(v), A^{1/2} h(v)\rangle\,\langle B^{1/2} h(w), B^{1/2} h(w)\rangle = \langle h(v), A\, h(v)\rangle\,\langle h(w), B\, h(w)\rangle,$$
having used the fact that $A$ and $B$, and so their square roots, are Hermitian operators. Thus,
$$|(\alpha \# \beta)(v,w)|^2 \leq \alpha(v,v)\,\beta(w,w), \qquad \forall v, w \in V,$$
so $\alpha \# \beta$ is dominated by α and β.
To prove the second statement, consider a form $r \in F_+(V)$ dominated by α, β, and let $\alpha \sim (\mathcal{H}, h, A)$ and $\beta \sim (\mathcal{H}, h, B)$ be compatible representations. By applying the constructive proof of Theorem 2 to $r$, we can find a positive operator $C \in \mathcal{B}_H(\mathcal{H})$ such that $r(v,w) = \langle h(v), C\, h(w)\rangle$; however, the representation $r \sim (\mathcal{H}, h, C)$ will not, in general, be compatible with that of α and β. Since $r$ is dominated by α and β, we have
$$|\langle h(v), C\, h(w)\rangle|^2 \leq \langle h(v), A\, h(v)\rangle\,\langle h(w), B\, h(w)\rangle, \qquad \forall v, w \in V,$$
or, thanks to the fact that $h$ is surjective on $\mathcal{H}$,
$$|\langle x, C\, y\rangle|^2 \leq \langle x, A\, x\rangle\,\langle y, B\, y\rangle, \qquad \forall x, y \in \mathcal{H}.$$
Now, the second statement of the theorem, i.e., $r \leq \alpha \# \beta$, means that $(\alpha \# \beta - r)(v,v) \geq 0$ for all $v \in V$ and all $r \in S$, which is equivalent to $\langle u, (A^{1/2} B^{1/2} - C)\, u\rangle \geq 0$ for all $u \in \mathcal{H}$ and all $C$ satisfying inequality (115).
It follows that the second statement of the theorem will be proven if we manage to show that $C \leq A^{1/2} B^{1/2}$. In order to obtain this result, a regularization procedure applied to the operators $A, B$ will be helpful: since $A \geq 0$ and $B \geq 0$, for all $\varepsilon > 0$, $A_\varepsilon := A + \varepsilon\, id_{\mathcal{H}}$, $B_\varepsilon := B + \varepsilon\, id_{\mathcal{H}}$, and their square roots $A_\varepsilon^{1/2}$, $B_\varepsilon^{1/2}$ are Hermitian, positive, and invertible. Moreover, it is clear that $A \leq A_\varepsilon$, $B \leq B_\varepsilon$; hence,
$$A_\varepsilon^{-1/2} A\, A_\varepsilon^{-1/2} \leq id_{\mathcal{H}}, \qquad B_\varepsilon^{-1/2} B\, B_\varepsilon^{-1/2} \leq id_{\mathcal{H}}.$$
Finally, $A_\varepsilon$ and $B_\varepsilon$ are monotonically decreasing in ε w.r.t. the Löwner ordering, and $A_\varepsilon \to A$ and $B_\varepsilon \to B$ as $\varepsilon \to 0$. For all $x, y \in \mathcal{H}$, there exist unique vectors $u, v \in \mathcal{H}$ such that
$$x := A_\varepsilon^{-1/2} u, \qquad y := B_\varepsilon^{-1/2} v;$$
then inequality (115) becomes
$$|\langle A_\varepsilon^{-1/2} u, C\, B_\varepsilon^{-1/2} v\rangle|^2 \leq \langle A_\varepsilon^{-1/2} u, A\, A_\varepsilon^{-1/2} u\rangle\,\langle B_\varepsilon^{-1/2} v, B\, B_\varepsilon^{-1/2} v\rangle,$$
i.e.,
$$|\langle u, A_\varepsilon^{-1/2} C\, B_\varepsilon^{-1/2} v\rangle|^2 \leq \langle u, A_\varepsilon^{-1/2} A\, A_\varepsilon^{-1/2} u\rangle\,\langle v, B_\varepsilon^{-1/2} B\, B_\varepsilon^{-1/2} v\rangle \leq \langle u, u\rangle\,\langle v, v\rangle,$$
having used the inequalities written in (116).
By considering $u = v$ and taking into account that $A_\varepsilon^{-1/2} C\, B_\varepsilon^{-1/2}$ is a positive operator, we can write
$$\langle u, A_\varepsilon^{-1/2} C\, B_\varepsilon^{-1/2} u\rangle \leq \langle u, u\rangle, \qquad \forall u \in \mathcal{H},$$
which implies that $A_\varepsilon^{-1/2} C\, B_\varepsilon^{-1/2} \leq id_{\mathcal{H}}$, thus $C \leq A_\varepsilon^{1/2} B_\varepsilon^{1/2}$. By taking the limit $\varepsilon \to 0$, we get $C \leq A^{1/2} B^{1/2}$, and so the second statement of the theorem is also proven. □
The property of the geometric mean just proven allows us to extend the ordering relation between two positive sesquilinear forms to their interpolations, in the sense specified by the following theorem.
Theorem 5.
Let $V$ be a vector space, and let $\alpha, \alpha', \beta, \beta' \in F_+(V)$ such that $\alpha \leq \alpha'$ and $\beta \leq \beta'$; then,
$$\gamma^t_{\alpha\beta} \leq \gamma^t_{\alpha'\beta'}, \qquad \forall t \in [0,1].$$
Proof. 
The statement is clearly satisfied for $t = 0, 1$. Setting $t = 1/2$, thanks to the first part of Theorem 4, we get
$$|\gamma^{1/2}_{\alpha\beta}(v,w)|^2 \leq \alpha(v,v)\,\beta(w,w) \leq \alpha'(v,v)\,\beta'(w,w), \qquad \forall v, w \in V,$$
where the second inequality follows from the hypotheses of this theorem. This means that $\gamma^{1/2}_{\alpha\beta}$ is dominated by $\alpha', \beta'$, and so the extremality of the geometric mean implies that
$$\gamma^{1/2}_{\alpha\beta} \leq \gamma^{1/2}_{\alpha'\beta'}.$$
If we now use Equation (108) with $t = t_2 = 1/2$ and $t_1 = 0$, we get
$$\gamma^{1/4}_{\alpha\beta} = \gamma^{1/2}_{\gamma^{0}_{\alpha\beta}\,\gamma^{1/2}_{\alpha\beta}}, \qquad \gamma^{1/4}_{\alpha'\beta'} = \gamma^{1/2}_{\gamma^{0}_{\alpha'\beta'}\,\gamma^{1/2}_{\alpha'\beta'}}.$$
By repeating the previous argument, this time using Equation (123), we show that $\gamma^{1/4}_{\alpha\beta} \leq \gamma^{1/4}_{\alpha'\beta'}$.
By iterating this procedure, we can prove the statement of the theorem for any $t \in [0,1]$ of the type $t_{k,n} = k/2^n$, with $n, k \in \mathbb{N}$, $k \leq 2^n$, which is a dense subset of $[0,1]$.
Finally, the functions $t \mapsto \gamma^{t}_{\alpha\beta}(v,w)$ and $t \mapsto \gamma^{t}_{\alpha'\beta'}(v,w)$ are continuous for every fixed $v, w \in V$; hence, the theorem holds for all $t \in [0,1]$. □
In a similar way, we can prove another important result. Let $\psi: U \to V$ be a linear map between vector spaces, and let $\alpha, \beta \in F_+(V)$. Then, ψ allows us to pull back these sesquilinear forms on $U$ as follows:
$$\psi^*\alpha: U \times U \to \mathbb{F}, \quad (v,w) \mapsto \alpha(\psi(v), \psi(w)), \qquad \psi^*\beta: U \times U \to \mathbb{F}, \quad (v,w) \mapsto \beta(\psi(v), \psi(w)).$$
The following theorem shows that the pull-back of an interpolation of the forms in $F_+(V)$ is always ‘smaller’ than the interpolation of their pull-backs, with respect to the partial ordering of $F_+(U)$.
Theorem 6.
Let $U, V$ be vector spaces, $\psi: U \to V$ be a linear map, and $\alpha, \beta \in F_+(V)$; then,
$$\psi^*\gamma^t_{\alpha\beta} \leq \gamma^t_{\psi^*\alpha\,\psi^*\beta}, \qquad \forall t \in [0,1].$$
Proof. 
The argument that we use is quite similar to the one appearing in the previous proof. The statement is true for $t = 0$ and $t = 1$ because, in these cases, we have $\psi^*\alpha \leq \psi^*\alpha$ and $\psi^*\beta \leq \psi^*\beta$, respectively.
Let us now consider $t = 1/2$, which gives rise to the geometric mean, so
$$|\psi^*(\alpha \# \beta)(v,w)|^2 = |(\alpha \# \beta)(\psi(v), \psi(w))|^2, \qquad \forall v, w \in U.$$
Since $\alpha \# \beta$ is dominated by α, β, we can write
$$|\psi^*(\alpha \# \beta)(v,w)|^2 \leq \alpha(\psi(v), \psi(v))\,\beta(\psi(w), \psi(w)) = \psi^*\alpha(v,v)\,\psi^*\beta(w,w),$$
which shows that $\psi^*(\alpha \# \beta)$ is dominated by $\psi^*\alpha, \psi^*\beta$. Thus, by the extremal property of the geometric mean, the statement of the theorem holds for $t = 1/2$.
By iterating this reasoning as done in the proof of the previous theorem, the validity of inequality (126) can be generalized to all $t \in [0,1]$. □

4.2. Definition of the Relative Entropy in Terms of Interpolations of Forms

In this subsection, we apply the results previously established to reformulate the relative entropy in a manner that will facilitate the proof of its monotonicity under partial trace, which is to be presented in the next subsection.
Adopting notations analogous to those introduced at the beginning of Section 3, we identify the vector space V, on which the forms of interest for us will be defined, with B ( H a b ) . As we know, this is a Hilbert space w.r.t. the Hilbert–Schmidt inner product A , B a b = Tr ( A B ) , A , B B ( H a b ) , which is a positive definite sesquilinear form.
Given two density operators $\rho, \sigma \in B_H(H_{ab})$, we can define the following positive sesquilinear forms $\rho_L, \sigma_R : B(H_{ab}) \times B(H_{ab}) \to F$:
$\rho_L(A,B) := \mathrm{Tr}(\rho B A^\dagger) = \mathrm{Tr}(A^\dagger \rho B) = \langle A, L_\rho B\rangle,$
$\sigma_R(A,B) := \mathrm{Tr}(\sigma A^\dagger B) = \mathrm{Tr}(A^\dagger B \sigma) = \langle A, R_\sigma B\rangle,$
where the operators $L_\rho$ and $R_\sigma$ are defined as in Equation (46). We immediately recognize the two representations
$\rho_L \sim (B(H_{ab}), \mathrm{id}_{ab}, L_\rho), \qquad \sigma_R \sim (B(H_{ab}), \mathrm{id}_{ab}, R_\sigma),$
where $\mathrm{id}_{ab}$ is the identity map on $B(H_{ab})$. These representations are compatible because, thanks to Equation (47), $[L_\rho, R_\sigma] = 0$.
We can now define the relative entropy positive sesquilinear form between the two density operators $\rho, \sigma$, indicated with $S_{\rho\sigma} : B(H_{ab}) \times B(H_{ab}) \to F$, as the negative of the rate of change of the interpolation $\gamma^t_{\rho_L\sigma_R}$ with respect to $\rho_L$ at $t = 0^+$:
$S_{\rho\sigma}(A,B) = -\liminf_{t\to 0^+} \frac{\gamma^t_{\rho_L\sigma_R}(A,B) - \rho_L(A,B)}{t}, \quad \forall A, B \in B(H_{ab}).$
Remark 1.
The use of lim inf instead of an ordinary limit is motivated by the following two arguments:
1. 
When $\rho$ and $\sigma$ are not invertible, the interpolation function $t \mapsto \gamma^t_{\rho_L\sigma_R}(A,B)$ may fail to be differentiable at $t = 0^+$: the ordinary limit of the difference quotient might not exist because of oscillations or a lack of smoothness of the interpolation path. However, the lim inf always exists (possibly infinite), thereby ensuring that the entropy form $S_{\rho\sigma}(A,B)$ is always well-defined.
2. 
The function $t \mapsto \gamma^t_{\rho_L\sigma_R}(A,B)$ is convex in $t$: it arises from the interpolation $f_t(x,y) = x^{1-t} y^{t}$, which, for fixed $x, y > 0$, is a convex function of $t$. For convex functions, the left and right derivatives at an endpoint may differ, and the correct notion of derivative from the right at $t = 0$ is the lower right Dini derivative, i.e., the lim inf of the difference quotient. Thus, the use of lim inf aligns with standard practice in convex analysis; a small numerical illustration of the non-invertible case is given right after this remark.
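The following toy computation (ours, purely illustrative) makes the first point of the remark concrete in the simplest possible situation: for $\rho = \mathrm{diag}(1/2, 1/2)$ and the non-invertible state $\sigma = \mathrm{diag}(1, 0)$, the difference quotient entering the definition of $S_{\rho\sigma}$ at the identity is unbounded below as $t \to 0^+$, so only the lim inf (here $-\infty$) is available; this is consistent with $S(\rho\|\sigma) = +\infty$ when the support of $\rho$ is not contained in that of $\sigma$.

```python
import numpy as np

# rho = diag(1/2, 1/2) and the non-invertible sigma = diag(1, 0):
# Tr(rho^{1-t} sigma^t) equals (1/2)^{1-t} for t > 0 but equals Tr(rho) = 1 at t = 0,
# so the difference quotient below diverges to -infinity as t -> 0+ and only its
# lim inf is available; accordingly S(rho || sigma) = +infinity, since the support
# of rho is not contained in the support of sigma.
f0 = 1.0                                  # value of the interpolation at t = 0
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    ft = 0.5 ** (1.0 - t)                 # Tr(rho^{1-t} sigma^t) for t > 0
    print(t, (ft - f0) / t)               # difference quotient, unbounded below
```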
The relative entropy between states ρ , σ can be recovered as follows.
Theorem 7.
For all density operators $\rho, \sigma \in B_H(H_{ab})$, we have
$S(\rho\|\sigma) = S_{\rho\sigma}(\mathrm{id}_{ab}, \mathrm{id}_{ab}).$
Proof. 
Let us first consider the case of invertible density operators $\rho > 0$ and $\sigma > 0$. Then,
$S_{\rho\sigma}(\mathrm{id}_{ab}, \mathrm{id}_{ab}) = -\liminf_{t\to 0^+} \frac{\gamma^t_{\rho_L\sigma_R}(\mathrm{id}_{ab}, \mathrm{id}_{ab}) - \rho_L(\mathrm{id}_{ab}, \mathrm{id}_{ab})}{t} = -\liminf_{t\to 0^+} \frac{\langle \mathrm{id}_{ab}, L_\rho^{1-t} R_\sigma^{t}\, \mathrm{id}_{ab}\rangle - \langle \mathrm{id}_{ab}, L_\rho\, \mathrm{id}_{ab}\rangle}{t} = -\liminf_{t\to 0^+} \frac{\mathrm{Tr}(\rho^{1-t}\sigma^{t}) - \mathrm{Tr}(\rho)}{t} = -\frac{d}{dt}\Big|_{t=0} \mathrm{Tr}(\rho^{1-t}\sigma^{t}) = -\frac{d}{dt}\Big|_{t=0} \mathrm{Tr}\big[\exp((1-t)\log\rho)\exp(t\log\sigma)\big] = -\mathrm{Tr}\big[-\exp((1-t)\log\rho)\log\rho\,\exp(t\log\sigma) + \exp((1-t)\log\rho)\exp(t\log\sigma)\log\sigma\big]\Big|_{t=0} = -\mathrm{Tr}\big[-\rho\log\rho + \rho\log\sigma\big] = \mathrm{Tr}\big[\rho\log\rho - \rho\log\sigma\big] = S(\rho\|\sigma).$
When $\rho$ and $\sigma$ are not invertible, we can use the regularized version of the relative entropy. Since we have seen an equivalent definition of the relative entropy in Equation (40), it is natural to consider
$\lim_{\epsilon\to 0^+} S_{\rho_\epsilon\sigma_\epsilon}(\mathrm{id}_{ab}, \mathrm{id}_{ab}).$
Now, both $\rho_\epsilon$ and $\sigma_\epsilon$ are positive definite, so $\log\rho_\epsilon$ and $\log\sigma_\epsilon$ are well-defined. By repeating the steps in (134), we get
$\lim_{\epsilon\to 0^+} S_{\rho_\epsilon\sigma_\epsilon}(\mathrm{id}_{ab}, \mathrm{id}_{ab}) = \lim_{\epsilon\to 0^+} \mathrm{Tr}(\rho_\epsilon\log\rho_\epsilon - \rho_\epsilon\log\sigma_\epsilon).$
We obtain the same definition of relative entropy as in (27). □
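A quick numerical sanity check of Theorem 7 in the invertible case can be performed by comparing $\mathrm{Tr}(\rho\log\rho - \rho\log\sigma)$ with minus the one-sided difference quotient of $t \mapsto \mathrm{Tr}(\rho^{1-t}\sigma^{t})$ at $t = 0$, as in the chain of equalities (134). The sketch below uses NumPy and random full-rank states; the helper names are ours, and the finite-difference step is only meant to give agreement to a few digits.

```python
import numpy as np

def herm_fun(A, f):
    # Apply a real function to a Hermitian matrix through its spectral decomposition.
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

def rand_density(d, rng):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T + 0.05 * np.eye(d)   # strictly positive, hence invertible
    return rho / np.real(np.trace(rho))

rng = np.random.default_rng(3)
d = 4
rho, sigma = rand_density(d, rng), rand_density(d, rng)

# Relative entropy S(rho || sigma) = Tr(rho log rho - rho log sigma).
S = np.real(np.trace(rho @ (herm_fun(rho, np.log) - herm_fun(sigma, np.log))))

def trace_interp(t):
    # t -> Tr(rho^{1-t} sigma^t), the interpolation evaluated at the identity.
    return np.real(np.trace(herm_fun(rho, lambda w: w ** (1 - t))
                            @ herm_fun(sigma, lambda w: w ** t)))

t = 1e-6
S_from_derivative = -(trace_interp(t) - trace_interp(0.0)) / t   # minus the right derivative at 0
print(S, S_from_derivative)   # the two values agree to a few digits
```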

4.3. Proof of the Monotonicity of the Relative Entropy Under Partial Trace

Thanks to formula (133), the monotonicity of the relative entropy under the partial trace $\mathrm{Tr}_b : B(H_{ab}) \to B(H_a)$ will be proven if we show that
$S_{\mathrm{Tr}_b(\rho)\,\mathrm{Tr}_b(\sigma)}(\mathrm{id}_a, \mathrm{id}_a) \le S_{\rho\sigma}(\mathrm{id}_{ab}, \mathrm{id}_{ab}).$
As in Petz’s proof, the adjoint of the partial trace plays a central, though conceptually distinct, role in establishing the monotonicity of relative entropy.
Specifically, in Uhlmann’s approach, $\mathrm{Tr}_b^\dagger$ acts as a pull-back map; i.e., we define $\psi := \mathrm{Tr}_b^\dagger : B(H_a) \to B(H_{ab})$ and use it to pull back the positive sesquilinear forms $\rho_L$ and $\sigma_R$ introduced in Equations (129) and (130), respectively. Thanks to Theorem 6, we have
$\psi^*\gamma^t_{\rho_L\sigma_R} \le \gamma^t_{\psi^*\rho_L\,\psi^*\sigma_R},$
i.e.,
$\gamma^t_{\rho_L\sigma_R}\big(\mathrm{Tr}_b^\dagger(X), \mathrm{Tr}_b^\dagger(X)\big) = \psi^*\gamma^t_{\rho_L\sigma_R}(X,X) \le \gamma^t_{\psi^*\rho_L\,\psi^*\sigma_R}(X,X),$
for all $X \in B(H_a)$. Now, we can use the fact that $\mathrm{Tr}_b$ preserves density operators and that $\mathrm{Tr}_b^\dagger$ is a Schwarz map to write
$\psi^*\rho_L(X,X) = \rho_L\big(\mathrm{Tr}_b^\dagger(X), \mathrm{Tr}_b^\dagger(X)\big) = \mathrm{Tr}\big(\mathrm{Tr}_b^\dagger(X)^\dagger\, \rho\, \mathrm{Tr}_b^\dagger(X)\big) = \mathrm{Tr}\big(\rho\, \mathrm{Tr}_b^\dagger(X)\, \mathrm{Tr}_b^\dagger(X)^\dagger\big) \le \mathrm{Tr}\big(\rho\, \mathrm{Tr}_b^\dagger(X X^\dagger)\big) = \langle \rho, \mathrm{Tr}_b^\dagger(X X^\dagger)\rangle_{ab} = \langle \mathrm{Tr}_b(\rho), X X^\dagger\rangle_{a} = \mathrm{Tr}\big(\mathrm{Tr}_b(\rho)\, X X^\dagger\big) = \mathrm{Tr}_b(\rho)_L(X,X).$
Replacing ρ L with σ R , we find
$\psi^*\sigma_R(X,X) \le \mathrm{Tr}_b(\sigma)_R(X,X).$
Thus, we have proven that $\psi^*\rho_L \le \mathrm{Tr}_b(\rho)_L$ and $\psi^*\sigma_R \le \mathrm{Tr}_b(\sigma)_R$, and so Theorem 5 implies that, for all $t \in [0,1]$, it holds that
$\gamma^t_{\psi^*\rho_L\,\psi^*\sigma_R} \le \gamma^t_{\mathrm{Tr}_b(\rho)_L\,\mathrm{Tr}_b(\sigma)_R}.$
This result and inequality (139) imply
$\gamma^t_{\rho_L\sigma_R}\big(\mathrm{Tr}_b^\dagger(X), \mathrm{Tr}_b^\dagger(X)\big) \le \gamma^t_{\mathrm{Tr}_b(\rho)_L\,\mathrm{Tr}_b(\sigma)_R}(X,X),$
for all $X \in B(H_a)$. By considering, in particular, $X = \mathrm{id}_a$ and recalling that $\mathrm{Tr}_b^\dagger$ is a unital map, i.e., $\mathrm{Tr}_b^\dagger(\mathrm{id}_a) = \mathrm{id}_{ab}$, we obtain
$\gamma^t_{\rho_L\sigma_R}(\mathrm{id}_{ab}, \mathrm{id}_{ab}) \le \gamma^t_{\mathrm{Tr}_b(\rho)_L\,\mathrm{Tr}_b(\sigma)_R}(\mathrm{id}_a, \mathrm{id}_a).$
Now, since Tr b is trace-preserving, we have
$\mathrm{Tr}_b(\rho)_L(\mathrm{id}_a, \mathrm{id}_a) = \langle \mathrm{id}_a, L_{\mathrm{Tr}_b(\rho)}\, \mathrm{id}_a\rangle_a = \mathrm{Tr}(\mathrm{Tr}_b(\rho)) = \mathrm{Tr}(\rho) = \langle \mathrm{id}_{ab}, L_\rho\, \mathrm{id}_{ab}\rangle_{ab} = \rho_L(\mathrm{id}_{ab}, \mathrm{id}_{ab}),$
and, thus, we can rewrite inequality (144) as follows:
$\gamma^t_{\rho_L\sigma_R}(\mathrm{id}_{ab}, \mathrm{id}_{ab}) - \rho_L(\mathrm{id}_{ab}, \mathrm{id}_{ab}) \le \gamma^t_{\mathrm{Tr}_b(\rho)_L\,\mathrm{Tr}_b(\sigma)_R}(\mathrm{id}_a, \mathrm{id}_a) - \mathrm{Tr}_b(\rho)_L(\mathrm{id}_a, \mathrm{id}_a).$
If we divide both sides by $t > 0$, take the lim inf as $t \to 0^+$, and invoke the definition (132) (whose overall sign reverses the inequality) together with Theorem 7, i.e., Equation (133), this last inequality becomes exactly the monotonicity of the relative entropy under the partial trace.
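For completeness, the inequality just established can also be probed numerically. The following sketch (NumPy, our own helper functions, random full-rank bipartite states on $H_a \otimes H_b$ with $\dim H_a = 2$ and $\dim H_b = 3$) checks that $S(\mathrm{Tr}_b(\rho)\|\mathrm{Tr}_b(\sigma)) \le S(\rho\|\sigma)$ on a sample of random pairs; it is an illustration of the statement, not an independent proof.

```python
import numpy as np

def herm_log(A):
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def rel_entropy(rho, sigma):
    # S(rho || sigma) = Tr(rho log rho - rho log sigma), for invertible states.
    return np.real(np.trace(rho @ (herm_log(rho) - herm_log(sigma))))

def rand_density(d, rng):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T + 0.05 * np.eye(d)
    return rho / np.real(np.trace(rho))

def partial_trace_b(X, da, db):
    # Tr_b on B(H_a (x) H_b), with dim H_a = da and dim H_b = db.
    return np.trace(X.reshape(da, db, da, db), axis1=1, axis2=3)

rng = np.random.default_rng(4)
da, db = 2, 3
for _ in range(100):
    rho, sigma = rand_density(da * db, rng), rand_density(da * db, rng)
    full = rel_entropy(rho, sigma)
    reduced = rel_entropy(partial_trace_b(rho, da, db), partial_trace_b(sigma, da, db))
    assert reduced <= full + 1e-10   # monotonicity under the partial trace
print("monotonicity holds on all sampled pairs")
```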

5. Conclusions

We revisited the monotonicity of relative entropy under the action of quantum channels by focusing on two important proofs: those by Petz and Uhlmann. While both approaches are foundational, their complexity has often hindered their pedagogical dissemination.
Our aim was to clarify and reconstruct these strategies within a finite-dimensional operator framework. In particular, we pointed out a subtle flaw in Petz’s original argument, whose validity was nonetheless restored by Petz and Nielsen soon after, and we showed how to rigorously extend this approach to incorporate non-invertible density operators.
It is also worth noting that our explicit construction of the isometric operator defined in Equation (60) sheds new light on its structural role within the broader context of quantum information theory. In particular, this operator can be seen as an essential component of the Petz recovery map, originally introduced in [9] in the setting of von Neumann algebras. Given a quantum channel $C$ and a fixed full-rank state $\sigma \in B_H(H_{ab})$, the Petz recovery map associated with $C$ and $\sigma$ is defined by
$P_{\sigma,C}(\rho) := \sigma^{1/2}\, C^\dagger\big( C(\sigma)^{-1/2}\, \rho\, C(\sigma)^{-1/2} \big)\, \sigma^{1/2},$
or, equivalently, in terms of superoperators,
$P_{\sigma,C} = L_{\sigma^{1/2}}\, R_{\sigma^{1/2}}\, C^\dagger\, R^{a}_{C(\sigma)^{-1/2}}\, L^{a}_{C(\sigma)^{-1/2}}.$
When C = Tr b , this reduces to
$P_{\sigma,\mathrm{Tr}_b} = L_{\sigma^{1/2}}\big( R_{\sigma^{1/2}}\, \mathrm{Tr}_b^\dagger\, R^{a}_{\mathrm{Tr}_b(\sigma)^{-1/2}} \big)\, L^{a}_{\mathrm{Tr}_b(\sigma)^{-1/2}} = L_{\sigma^{1/2}}\, V_\sigma\, L^{a}_{\mathrm{Tr}_b(\sigma)^{-1/2}},$
or
$V_\sigma = L_{\sigma^{-1/2}}\, P_{\sigma,\mathrm{Tr}_b}\, L^{a}_{\mathrm{Tr}_b(\sigma)^{1/2}}.$
So, just as the Petz recovery map characterizes the reversibility of quantum channels and identifies conditions for saturation of the monotonicity inequality, the operator V σ explicitly captures the mechanism by which relative entropy is contracted under partial trace.
In recent developments, particularly in the work of Fawzi and Renner [29], the Petz recovery map plays a central role in quantitative refinements of the data processing inequality. Specifically, for states ρ and σ and a channel C , the inequality
$S(\rho\|\sigma) - S(C(\rho)\|C(\sigma)) \ge -2\log F\big(\rho, (P_{\sigma,C}\circ C)(\rho)\big),$
bounds the loss of distinguishability in terms of the fidelity $F$ between the original state $\rho$ and its recovered approximation via $P_{\sigma,C}\circ C$.
The explicit identification of V σ offers a concrete realization of this recovery mechanism, reinforcing its interpretive clarity and suggesting further applications in entropy inequalities and recoverability conditions.
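As a concrete companion to this discussion, the Petz recovery map for the partial trace can be implemented directly from the definition recalled above, using the fact that the Hilbert–Schmidt adjoint of $\mathrm{Tr}_b$ is $Y \mapsto Y \otimes \mathrm{id}_b$. The short NumPy sketch below (our own helper names, a random full-rank $\sigma$) verifies the basic recoverability property $P_{\sigma,\mathrm{Tr}_b}(\mathrm{Tr}_b(\sigma)) = \sigma$, which underlies the role of $V_\sigma$ discussed above; it is a minimal illustration, not a full implementation of the recovery machinery.

```python
import numpy as np

def herm_power(A, p):
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.conj().T

def rand_density(d, rng):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T + 0.05 * np.eye(d)   # full-rank state
    return rho / np.real(np.trace(rho))

def partial_trace_b(X, da, db):
    return np.trace(X.reshape(da, db, da, db), axis1=1, axis2=3)

def petz_recovery_partial_trace(sigma, da, db):
    # P_{sigma, Tr_b}(X) = sigma^{1/2} (Tr_b(sigma)^{-1/2} X Tr_b(sigma)^{-1/2} (x) id_b) sigma^{1/2},
    # using that the Hilbert-Schmidt adjoint of Tr_b is Y -> Y (x) id_b.
    s_half = herm_power(sigma, 0.5)
    sa_mhalf = herm_power(partial_trace_b(sigma, da, db), -0.5)
    def recover(X):
        return s_half @ np.kron(sa_mhalf @ X @ sa_mhalf, np.eye(db)) @ s_half
    return recover

rng = np.random.default_rng(5)
da, db = 2, 3
sigma = rand_density(da * db, rng)
recover = petz_recovery_partial_trace(sigma, da, db)

# The Petz recovery map associated with Tr_b and sigma recovers sigma exactly
# from its reduction Tr_b(sigma).
print(np.allclose(recover(partial_trace_b(sigma, da, db)), sigma))   # True
```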

Author Contributions

Conceptualization, S.M., F.B. and E.P.; Formal analysis, S.M., F.B. and E.P.; Writing—original draft, S.M., F.B. and E.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful to Mark M. Wilde for his valuable insights on the literature concerning relative entropy and, in particular, for clarifying the correction of the flawed Petz argument through the proof that the operator $V_\rho$ is an isometry.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Umegaki, H. Conditional expectation in an operator algebra, IV (entropy and information). Kodai Math. Semin. Rep. 1962, 14, 59–85.
2. Araki, H. Relative Entropy of States of von Neumann Algebras. Publ. Res. Inst. Math. Sci. 1976, 11, 809–833.
3. Lanford, O.E.; Robinson, D.W. Mean Entropy of States in Quantum Statistical Mechanics. J. Math. Phys. 1968, 9, 1120–1125.
4. Lieb, E.H. Convex Trace Functions and the Wigner-Yanase-Dyson Conjecture. Adv. Math. 1973, 11, 267–288.
5. Lieb, E.H.; Ruskai, M.B. Proof of the Strong Subadditivity of Quantum-Mechanical Entropy. J. Math. Phys. 1973, 14, 1938–1941.
6. Lindblad, G. Completely Positive Maps and Entropy Inequalities. Commun. Math. Phys. 1975, 40, 147–151.
7. Uhlmann, A. Relative entropy and the Wigner-Yanase-Dyson-Lieb concavity in an interpolation theory. Commun. Math. Phys. 1977, 54, 21–32.
8. Petz, D. Sufficient subalgebras and the relative entropy of states of a von Neumann algebra. Commun. Math. Phys. 1986, 105, 123–131.
9. Petz, D. Sufficiency of channels over von Neumann algebras. Q. J. Math. 1988, 39, 97–108.
10. Petz, D. Monotonicity of Quantum Relative Entropy Revisited. Rev. Math. Phys. 2003, 15, 79–91.
11. Müller-Hermes, A.G.; Reeb, D. Monotonicity of the Quantum Relative Entropy under Positive Maps. Ann. Henri Poincaré 2017, 18, 1777–1788.
12. Sharma, N. More on a trace inequality in quantum information theory. arXiv 2015, arXiv:1512.00226.
13. Datta, N.; Wilde, M.M. Quantum Markov chains, sufficiency of quantum channels, and Rényi information measures. J. Phys. A Math. Theor. 2015, 48, 505301.
14. Zhang, L. A strengthened monotonicity inequality of quantum relative entropy: A unifying approach via Rényi relative entropy. Lett. Math. Phys. 2016, 106, 557–573.
15. Junge, M.; Renner, R.; Sutter, D.; Wilde, M.M.; Winter, A. Universal recovery maps and approximate sufficiency of quantum relative entropy. Ann. Henri Poincaré 2018, 19, 2955–2978.
16. Bhatia, R. Matrix Analysis; Springer: New York, NY, USA, 1997.
17. Hansen, F.; Pedersen, G.K. Jensen’s operator inequality. Bull. Lond. Math. Soc. 2003, 35, 553–564.
18. Heinosaari, T.; Ziman, M. The Mathematical Language of Quantum Theory: From Uncertainty to Entanglement; Cambridge University Press: Cambridge, UK, 2011.
19. Stinespring, W.F. Positive functions on C*-algebras. Proc. Am. Math. Soc. 1955, 6, 211–216.
20. Choi, M.D. A Schwarz inequality for positive linear maps on C*-algebras. Ill. J. Math. 1974, 18, 565–574.
21. Moretti, V. Spectral Theory and Quantum Mechanics; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
22. Wilde, M.M. Quantum Information Theory, 2nd ed.; Cambridge University Press: Cambridge, UK, 2017.
23. Vedral, V. The role of relative entropy in quantum information theory. Rev. Mod. Phys. 2002, 74, 197.
24. Nielsen, M.A.; Petz, D. A simple proof of the strong subadditivity inequality. Quantum Inf. Comput. 2005, 5, 480–486.
25. Tomamichel, M.; Colbeck, R.; Renner, R. A Fully Quantum Asymptotic Equipartition Property. IEEE Trans. Inf. Theory 2009, 55, 5840–5847.
26. Khatri, S.; Wilde, M.M. Principles of Quantum Communication Theory: A Modern Approach. arXiv 2024, arXiv:2011.04672.
27. Pérez-Pardo, J.M. On Uhlmann’s proof of the Monotonicity of the Relative Entropy. In Particles, Fields and Topology: Celebrating AP Balachandran; World Scientific: Singapore, 2023; pp. 145–155.
28. Pusz, W.; Woronowicz, S.L. Functional Calculus for Sesquilinear Forms and the Purification Map. Rep. Math. Phys. 1975, 8, 159–170.
29. Fawzi, O.; Renner, R. Quantum Conditional Mutual Information and Approximate Markov Chains. Commun. Math. Phys. 2015, 340, 575–611.
Figure 1. Comparison of $(\alpha x \alpha + \xi)^{-1}$ and $\alpha (x + \xi)^{-1} \alpha$, illustrating the failure of the Jensen-type inequality in the scalar case, with $\alpha = 0.5$ and $\xi = 0.5$.
Figure 2. Counterexample showing that the inequality $\log(\alpha x \alpha) \ge \alpha \log(x)\, \alpha$ fails for $\alpha = 0.5$.