Article

Variations on the Expectation Due to Changes in the Probability Measure

by Samir M. Perlaza 1,2,3,* and Gaetan Bisson 3
1 Centre Inria d’Université Côte d’Azur, INRIA, 06902 Sophia Antipolis, France
2 Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
3 GAATI Mathematics Laboratory, University of French Polynesia, 98702 Faaa, French Polynesia
* Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 865; https://doi.org/10.3390/e27080865
Submission received: 23 July 2025 / Revised: 11 August 2025 / Accepted: 12 August 2025 / Published: 14 August 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

In this paper, closed-form expressions for the variation of the expectation of a given function due to changes in the probability measure (probability distribution drifts) are presented. These expressions unveil interesting connections with Gibbs probability measures, information projections, Pythagorean identities for relative entropy, mutual information, and lautum information.

1. Introduction

Let $m$ be a positive integer and denote by $\triangle(\mathbb{R}^m)$ the set of all probability measures on the measurable space $(\mathbb{R}^m, \mathscr{B}(\mathbb{R}^m))$, with $\mathscr{B}(\mathbb{R}^m)$ being the Borel $\sigma$-algebra on $\mathbb{R}^m$. Given a Borel measurable function $h: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, consider the functional
$G_h: \mathbb{R}^n \times \triangle(\mathbb{R}^m) \times \triangle(\mathbb{R}^m) \to \mathbb{R}, \qquad (x, P_1, P_2) \mapsto \int h(x,y)\,\mathrm{d}P_1(y) - \int h(x,y)\,\mathrm{d}P_2(y).$   (1)
The functional $G_h$ in (1) is defined whenever both integrals exist and are finite. Hence,
$G_h(x, P_1, P_2) = \int h(x,y)\,\mathrm{d}P_1(y) - \int h(x,y)\,\mathrm{d}P_2(y)$   (2)
is the variation of the expectation of the measurable function $h$ due to a change in the probability measure from $P_2$ to $P_1$. Such changes in the probability measure are often referred to as probability distribution drifts in some application areas; see, for instance, [1,2,3] and references therein.
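As a quick illustration of (2), the following sketch evaluates the variation of the expectation on a finite alphabet (Python/NumPy). The alphabet, the function $h$, and the two probability vectors are illustrative choices, not taken from the paper.

```python
# A minimal numerical sketch of the variation in (2) on a finite alphabet.
import numpy as np

y = np.array([0.0, 1.0, 2.0, 3.0])          # support of Y (finite, for simplicity)
h = lambda x, y: (y - x) ** 2                # an arbitrary measurable function h(x, y)

x = 0.5                                      # fixed x
P1 = np.array([0.1, 0.2, 0.3, 0.4])          # "new" probability measure P1
P2 = np.array([0.4, 0.3, 0.2, 0.1])          # "old" probability measure P2

# G_h(x, P1, P2) = E_{P1}[h(x, Y)] - E_{P2}[h(x, Y)], as in (2)
G = np.sum(h(x, y) * P1) - np.sum(h(x, y) * P2)
print("G_h(x, P1, P2) =", G)
```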
In order to define the expectation of $G_h(x, P_1, P_2)$ in (2) when $x$ is obtained by sampling a probability measure in $\triangle(\mathbb{R}^n)$, the structure formalized below is required.
Definition 1.
A family $P_{Y|X} \triangleq (P_{Y|X=x})_{x \in \mathbb{R}^n}$ of elements of $\triangle(\mathbb{R}^m)$ indexed by $\mathbb{R}^n$ is said to be a conditional probability measure if, for all sets $\mathcal{A} \in \mathscr{B}(\mathbb{R}^m)$, the map
$\mathbb{R}^n \to [0,1], \qquad x \mapsto P_{Y|X=x}(\mathcal{A}),$
is Borel measurable. The set of all such conditional probability measures is denoted by $\triangle(\mathbb{R}^m \,|\, \mathbb{R}^n)$.
In this setting, consider the functional
$\overline{G}_h: \triangle(\mathbb{R}^m \,|\, \mathbb{R}^n) \times \triangle(\mathbb{R}^m \,|\, \mathbb{R}^n) \times \triangle(\mathbb{R}^n) \to \mathbb{R}, \qquad \big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) \mapsto \int G_h\big(x, P^{(1)}_{Y|X=x}, P^{(2)}_{Y|X=x}\big)\,\mathrm{d}P_X(x).$   (3)
Hence,
$\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) = \int G_h\big(x, P^{(1)}_{Y|X=x}, P^{(2)}_{Y|X=x}\big)\,\mathrm{d}P_X(x)$   (4)
$\quad = \int \Big( \int h(x,y)\,\mathrm{d}P^{(1)}_{Y|X=x}(y) - \int h(x,y)\,\mathrm{d}P^{(2)}_{Y|X=x}(y) \Big)\,\mathrm{d}P_X(x)$   (5)
$\quad = \int h(x,y)\,\mathrm{d}\big(P^{(1)}_{Y|X} P_X\big)(y,x) - \int h(x,y)\,\mathrm{d}\big(P^{(2)}_{Y|X} P_X\big)(y,x),$   (6)
where the functional $G_h$ is defined in (1), is the variation of the integral (expectation) of the function $h$ when the probability measure changes from the joint probability measure $P^{(2)}_{Y|X} P_X$ to another joint probability measure $P^{(1)}_{Y|X} P_X$, both in $\triangle(\mathbb{R}^m \times \mathbb{R}^n)$.
Special attention is given to the quantity $\overline{G}_h(P_Y, P_{Y|X}, P_X)$, for some $P_{Y|X} \in \triangle(\mathbb{R}^m\,|\,\mathbb{R}^n)$, with $P_Y$ being the marginal on $\mathbb{R}^m$ of the joint probability measure $P_{Y|X} P_X$ on $\mathbb{R}^m \times \mathbb{R}^n$. That is, for all sets $\mathcal{A} \in \mathscr{B}(\mathbb{R}^m)$,
$P_Y(\mathcal{A}) = \int P_{Y|X=x}(\mathcal{A})\,\mathrm{d}P_X(x).$   (7)
The relevance of the quantity $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ stems from the fact that it captures the variation of the expectation of the function $h$ when the probability measure changes from the joint probability measure $P_{Y|X} P_X$ to the product of its marginals $P_Y P_X$. That is,
$\overline{G}_h(P_Y, P_{Y|X}, P_X) = \int G_h\big(x, P_Y, P_{Y|X=x}\big)\,\mathrm{d}P_X(x)$   (8)
$\quad = \int \Big( \int h(x,y)\,\mathrm{d}P_Y(y) - \int h(x,y)\,\mathrm{d}P_{Y|X=x}(y)\Big)\,\mathrm{d}P_X(x)$   (9)
$\quad = \int h(x,y)\,\mathrm{d}\big(P_Y P_X\big)(y,x) - \int h(x,y)\,\mathrm{d}\big(P_{Y|X} P_X\big)(y,x),$   (10)
where the functional $G_h$ is defined in (1).
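The expected variation in (8)-(10) can be checked on finite alphabets by comparing the expectation of $h$ under the joint measure $P_{Y|X} P_X$ with its expectation under the product of the marginals. The following sketch does so; all probability tables and the function $h$ are illustrative.

```python
# A small sketch of (8)-(10): the expected variation when the joint measure
# P_{Y|X} P_X is replaced by the product of its marginals P_Y P_X.
import numpy as np

h = lambda x, y: x * y                       # arbitrary measurable function
xs = np.array([0.0, 1.0])                    # finite support of X
ys = np.array([-1.0, 0.0, 1.0])              # finite support of Y

P_X = np.array([0.3, 0.7])
P_Y_given_X = np.array([[0.2, 0.5, 0.3],     # P_{Y|X=x} for each x (rows sum to 1)
                        [0.6, 0.1, 0.3]])

P_Y = P_X @ P_Y_given_X                      # marginal of Y, averaging P_{Y|X=x} over P_X

H = np.array([[h(x, y) for y in ys] for x in xs])        # table of h(x, y)

E_product = np.sum(P_X[:, None] * P_Y[None, :] * H)      # expectation under P_Y P_X
E_joint   = np.sum(P_X[:, None] * P_Y_given_X * H)       # expectation under P_{Y|X} P_X

print("G_bar_h(P_Y, P_{Y|X}, P_X) =", E_product - E_joint)
```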

1.1. Novelty and Contributions

This work makes two key contributions: First, it provides a closed-form expression for the variation $G_h(x, P_1, P_2)$ in (2) for a fixed $x \in \mathbb{R}^n$ and two arbitrary probability measures $P_1$ and $P_2$, formulated explicitly in terms of relative entropies. Second, it derives a closed-form expression for the expected variation $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4), again in terms of information measures, for arbitrary conditional probability measures $P^{(1)}_{Y|X}$ and $P^{(2)}_{Y|X}$ and an arbitrary probability measure $P_X$.
A further contribution of this work is the derivation of specific closed-form expressions for $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10), which reveal deep connections with both mutual information [4] and lautum information [5]. Notably, when $P_{Y|X}$ is a Gibbs conditional probability measure, this variation simplifies (up to a constant factor) to the sum of the mutual and lautum information induced by the joint distribution $P_{Y|X} P_X$.
These results were originally discovered in the analysis of the generalization error of machine-learning algorithms; see, for instance, [6,7,8,9,10]. Therein, the function $h$ in (2) was assumed to represent an empirical risk. This paper presents such results in a comprehensive and general setting that is no longer tied to such assumptions. Strong connections with information projections and Pythagorean identities [11,12] are also discussed. This new general presentation not only unifies previously scattered insights but also makes the results applicable across a broad range of domains in which probability distribution shifts are relevant.

1.2. Applications

The study of the variation of the integral (expectation) of $h$ (for some fixed $x \in \mathbb{R}^n$) due to a measure change from $P_2$ to $P_1$, i.e., the value $G_h(x, P_1, P_2)$ in (2), plays a central role in the definition of integral probability metrics (IPMs) [13,14]. Using the notation in (2), an IPM results from the optimization problem
$\sup_{h \in \mathcal{H}} \big| G_h(x, P_1, P_2) \big|,$   (11)
for some fixed $x \in \mathbb{R}^n$ and a particular class of functions $\mathcal{H}$. Note, for instance, that the maximum mean discrepancy [15] and the Wasserstein distance of order one [16,17,18,19] are both IPMs.
Other areas of mathematics in which the variation $G_h(x, P_1, P_2)$ in (2) plays a key role are distributionally robust optimization (DRO) [20,21] and optimization with relative entropy regularization [7,8]. In these areas, the variation $G_h(x, P_1, P_2)$ is a central tool; see, for instance, [6,22] and references therein.
Variations of the form $G_h(x, P_1, P_2)$ in (2) have also been studied in [9,10] in the particular case of statistical machine learning for the analysis of the generalization error. The central observation is that the generalization error of machine-learning algorithms can be written in the form $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10). This observation is the main building block of the method of gaps introduced in [10], which leads to a number of closed-form expressions for the generalization error involving mutual information and lautum information, among other information measures.

2. Preliminaries

The main results presented in this work involve Gibbs conditional probability measures. Such measures are parametrized by a Borel measurable function $h: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$; a $\sigma$-finite measure $Q$ on $\mathbb{R}^m$; and a real $\lambda \in \mathbb{R}$. The measure $Q$ is often called the reference measure, and the real parameter $\lambda$ is called the temperature parameter. The use of a $\sigma$-finite reference measure was first introduced in [7]. This class of measures includes the Lebesgue measure and the counting measure, enabling a unified treatment of random variables with either probability density functions or probability mass functions. The standard case in which the reference measure is a probability measure representing a prior is also covered. Finally, note that the variable $x$ remains inactive until Section 4; it is introduced now for consistency.
Consider the following function:
$K_{h,Q,x}: \mathbb{R} \to \mathbb{R}, \qquad t \mapsto \log \int \exp\big(t\, h(x,y)\big)\,\mathrm{d}Q(y).$   (12)
Under the assumption that $Q$ is a probability measure, the function $K_{h,Q,x}$ in (12) is the cumulant generating function of the random variable $h(x, Y)$, for some fixed $x \in \mathbb{R}^n$ and $Y \sim Q$. Using this notation, the definition of the Gibbs conditional probability measure is presented hereunder.
Definition 2
(Gibbs Conditional Probability Measure). Given a Borel measurable function $h: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$; a $\sigma$-finite measure $Q$ on $\mathbb{R}^m$; and a $\lambda \in \mathbb{R}$, the probability measure $P^{(h,Q,\lambda)}_{Y|X} \in \triangle(\mathbb{R}^m\,|\,\mathbb{R}^n)$ is said to be an $(h,Q,\lambda)$-Gibbs conditional probability measure if
$\forall x \in \mathcal{X}, \quad K_{h,Q,x}(-\lambda) < +\infty,$   (13)
for some set $\mathcal{X} \subseteq \mathbb{R}^n$; and, for all $(x,y) \in \mathcal{X} \times \operatorname{supp} Q$,
$\frac{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}{\mathrm{d}Q}(y) = \exp\Big( -\lambda\, h(x,y) - K_{h,Q,x}(-\lambda) \Big),$   (14)
where the function $K_{h,Q,x}$ is defined in (12); the set $\operatorname{supp} Q$ is the support of the $\sigma$-finite measure $Q$; and the function $\frac{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}{\mathrm{d}Q}$ is the Radon-Nikodym derivative [23,24] of the probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ with respect to $Q$.
Note that, while $P^{(h,Q,\lambda)}_{Y|X}$ is an $(h,Q,\lambda)$-Gibbs conditional probability measure, the measure $P^{(h,Q,\lambda)}_{Y|X=x}$, obtained by conditioning it upon a given vector $x \in \mathcal{X}$, is referred to as an $(h,Q,\lambda)$-Gibbs probability measure.
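The following sketch builds an $(h,Q,\lambda)$-Gibbs probability measure on a finite alphabet, under the sign convention reconstructed in (12)-(14) (density proportional to $\exp(-\lambda h)$ with respect to $Q$). The reference weights, $\lambda$, and $h$ are illustrative choices.

```python
# A minimal sketch of Definition 2 on a finite alphabet, with a sigma-finite reference
# measure Q given by arbitrary nonnegative weights (Q need not be a probability measure).
import numpy as np

ys = np.arange(5)                          # finite support of Q
q = np.array([1.0, 1.0, 2.0, 0.5, 1.0])    # sigma-finite reference measure Q (weights)
lam = 2.0                                  # temperature parameter lambda != 0
h = lambda x, y: np.abs(y - x)             # arbitrary measurable function

def K(t, x):
    # K_{h,Q,x}(t) = log( integral of exp(t h(x, y)) dQ(y) ), as in (12)
    return np.log(np.sum(q * np.exp(t * h(x, ys))))

def gibbs_pmf(x, lam):
    # dP^{(h,Q,lambda)}_{Y|X=x}/dQ (y) = exp(-lambda h(x,y) - K_{h,Q,x}(-lambda)), as in (14)
    return q * np.exp(-lam * h(x, ys) - K(-lam, x))

p_gibbs = gibbs_pmf(x=2.0, lam=lam)
print("Gibbs pmf:", p_gibbs, "sums to", p_gibbs.sum())   # sums to 1 by construction
```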
The condition in (13) is easily met under certain assumptions. For instance, if $h$ is a nonnegative function and $Q$ is a finite measure, then it holds for all $\lambda \in (0, +\infty)$. Let $\triangle_Q(\mathbb{R}^m) \triangleq \{P \in \triangle(\mathbb{R}^m): P \ll Q\}$, with $P \ll Q$ standing for "$P$ absolutely continuous with respect to $Q$". The relevance of $(h,Q,\lambda)$-Gibbs probability measures relies on the fact that, under some conditions, they are the unique solutions to problems of the form
$\min_{P \in \triangle_Q(\mathbb{R}^m)} \int h(x,y)\,\mathrm{d}P(y) + \frac{1}{\lambda} D(P\|Q), \quad \text{and}$   (15)
$\max_{P \in \triangle_Q(\mathbb{R}^m)} \int h(x,y)\,\mathrm{d}P(y) + \frac{1}{\lambda} D(P\|Q),$   (16)
where $\lambda \in \mathbb{R}\setminus\{0\}$, $x \in \mathbb{R}^n$, and $D(P\|Q)$ denotes the relative entropy (or KL divergence) of $P$ with respect to $Q$.
Definition 3
(Relative Entropy). Given two $\sigma$-finite measures $P$ and $Q$ on the same measurable space, such that $P$ is absolutely continuous with respect to $Q$, the relative entropy of $P$ with respect to $Q$ is
$D(P\|Q) = \int \frac{\mathrm{d}P}{\mathrm{d}Q}(x)\,\log\Big(\frac{\mathrm{d}P}{\mathrm{d}Q}(x)\Big)\,\mathrm{d}Q(x),$   (17)
where the function $\frac{\mathrm{d}P}{\mathrm{d}Q}$ is the Radon-Nikodym derivative of $P$ with respect to $Q$.
The key observation is that, when $\lambda > 0$, the objective function in (15) is convex in $P$. Alternatively, when $\lambda < 0$, the objective function in (16) is concave. The connection between the optimization problems (15) and (16) and the Gibbs probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ in (14) has been pointed out by several authors; see, for instance, Theorem 3 in [7] and [6,25,26,27,28,29,30,31,32,33] for the former, and Theorem 1 in [9], together with [34,35,36], for the latter. In these references, a variety of assumptions and proof techniques have been used to highlight such connections. A general and unified statement of these observations is presented hereunder.
Lemma 1.
Assume that the optimization problem in (15) (respectively, in (16)) admits a solution. Then, if $\lambda > 0$ (respectively, if $\lambda < 0$), the probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ in (14) is the unique solution.
Proof.
For the case in which $\lambda > 0$, the proof follows the same approach as the proof of Theorem 3 in [7]. Alternatively, for the case in which $\lambda < 0$, the proof follows along the lines of the proof of Theorem 1 in [9]. □
The following lemma highlights a key property of $(h,Q,\lambda)$-Gibbs probability measures.
Lemma 2.
Given an $(h,Q,\lambda)$-Gibbs probability measure, denoted by $P^{(h,Q,\lambda)}_{Y|X=x}$, with $x \in \mathbb{R}^n$,
$-\frac{1}{\lambda} K_{h,Q,x}(-\lambda) = \int h(x,y)\,\mathrm{d}Q(y) - \frac{1}{\lambda} D\big(Q \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big)$   (18)
$\quad = \int h(x,y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}(y) + \frac{1}{\lambda} D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big);$   (19)
moreover, if $\lambda > 0$,
$-\frac{1}{\lambda} K_{h,Q,x}(-\lambda) = \min_{P \in \triangle_Q(\mathbb{R}^m)} \int h(x,y)\,\mathrm{d}P(y) + \frac{1}{\lambda} D(P\|Q);$   (20)
alternatively, if $\lambda < 0$,
$-\frac{1}{\lambda} K_{h,Q,x}(-\lambda) = \max_{P \in \triangle_Q(\mathbb{R}^m)} \int h(x,y)\,\mathrm{d}P(y) + \frac{1}{\lambda} D(P\|Q),$   (21)
where the function $K_{h,Q,x}$ is defined in (12).
Proof.
The proof of (19) follows from taking the logarithm of both sides of (14) and integrating with respect to $P^{(h,Q,\lambda)}_{Y|X=x}$. As for the proof of (18), it follows by noticing that for all $(x,y) \in \mathbb{R}^n \times \operatorname{supp} Q$, the Radon-Nikodym derivative $\frac{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}{\mathrm{d}Q}(y)$ in (14) is strictly positive. Thus, from Theorem 5 in [37], it holds that $\frac{\mathrm{d}Q}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y) = \Big(\frac{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}{\mathrm{d}Q}(y)\Big)^{-1}$. Hence, taking the negative logarithm on both sides of (14) and integrating with respect to $Q$ leads to (18). Finally, the equalities in (20) and (21) follow from Lemma 1 and (19). □
The equalities (19)-(21) in Lemma 2 can be seen as an immediate restatement of the Donsker-Varadhan variational representation of the relative entropy [38]. Alternative interesting proofs of (18) have been presented by several authors, including [9,33]. A proof of (19) appears in [6] (Lemma 3), in the specific case in which $\lambda > 0$.
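For a finite alphabet and a reference probability measure $Q$, the identities (18)-(21) can be verified numerically. The following sketch evaluates both sides of (18) and (19); all values are illustrative.

```python
# A numerical sanity check of Lemma 2 on a finite alphabet, with Q a probability
# measure so that both (18) and (19) are well defined.
import numpy as np

ys = np.arange(4)
q = np.array([0.1, 0.4, 0.3, 0.2])           # reference probability measure Q
lam = 1.5
hv = np.array([0.3, 1.0, 2.0, 0.5])          # values h(x, y) for the fixed x

def D(p, r):
    # relative entropy D(P || R) for pmfs with common support, as in (17)
    return np.sum(p * np.log(p / r))

K = lambda t: np.log(np.sum(q * np.exp(t * hv)))     # K_{h,Q,x}(t), as in (12)
g = q * np.exp(-lam * hv - K(-lam))                  # Gibbs pmf, as in (14)

lhs = -K(-lam) / lam
rhs_18 = np.sum(hv * q) - D(q, g) / lam              # right-hand side of (18)
rhs_19 = np.sum(hv * g) + D(g, q) / lam              # right-hand side of (19)
print(lhs, rhs_18, rhs_19)                           # the three numbers coincide
```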
The following lemma introduces the main building block of this work, which is a characterization of the variation of the expectation of the function $h(x, \cdot): \mathbb{R}^m \to \mathbb{R}$ when the probability measure changes from the probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ in (14) to an arbitrary measure $P \in \triangle_Q(\mathbb{R}^m)$, i.e., $G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big)$, for some fixed $x \in \mathbb{R}^n$. Such a result appeared for the first time in [6] (Theorem 1) for the case in which $\lambda > 0$, and in [9] (Theorem 6) for the case in which $\lambda < 0$, in different contexts of statistical machine learning. A general and unified statement of such results is presented hereunder.
Lemma 3.
Consider an $(h,Q,\lambda)$-Gibbs probability measure, denoted by $P^{(h,Q,\lambda)}_{Y|X=x} \in \triangle(\mathbb{R}^m)$, with $\lambda \neq 0$ and $x \in \mathbb{R}^n$. For all $P \in \triangle_Q(\mathbb{R}^m)$,
$G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big) = \frac{1}{\lambda}\Big( D\big(P \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) + D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big) - D(P\|Q) \Big).$   (22)
Proof.
The proof follows along the lines of the proofs of Theorem 1 in [6] and Theorem 6 in [9] for the cases in which $\lambda > 0$ and $\lambda < 0$, respectively. A unified proof is presented hereunder by noticing that for all $P \in \triangle_Q(\mathbb{R}^m)$,
$D\big(P \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) = \int \log\Big(\frac{\mathrm{d}P}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P(y)$   (23)
$\quad = \int \log\Big(\frac{\mathrm{d}Q}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\,\frac{\mathrm{d}P}{\mathrm{d}Q}(y)\Big)\,\mathrm{d}P(y)$   (24)
$\quad = \int \log\Big(\frac{\mathrm{d}Q}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P(y) + D(P\|Q)$   (25)
$\quad = \lambda \int h(x,y)\,\mathrm{d}P(y) + K_{h,Q,x}(-\lambda) + D(P\|Q)$   (26)
$\quad = \lambda\, G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big) + D(P\|Q),$   (27)
where (24) follows from Theorem 4 in [37]; (26) follows from Theorem 5 in [37] and from (14); and (27) follows from (19). Rearranging the terms in (27) yields (22). □
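Lemma 3 can be verified numerically on a finite alphabet, as in the following sketch; the reference measure, the function $h$, the value of $\lambda$, and the measure $P$ are illustrative choices.

```python
# A numerical check of Lemma 3 / equation (22) on a finite alphabet.
import numpy as np

q = np.array([0.25, 0.25, 0.25, 0.25])       # reference probability measure Q
hv = np.array([1.0, 2.0, 0.5, 3.0])          # h(x, .) for the fixed x
lam = -0.7                                   # lambda may be negative as well
p = np.array([0.4, 0.1, 0.3, 0.2])           # arbitrary P absolutely continuous w.r.t. Q

D = lambda a, b: np.sum(a * np.log(a / b))
g = q * np.exp(-lam * hv); g /= g.sum()      # (h, Q, lambda)-Gibbs pmf, as in (14)

lhs = np.sum(hv * p) - np.sum(hv * g)                     # G_h(x, P, P^{(h,Q,lambda)})
rhs = (D(p, g) + D(g, q) - D(p, q)) / lam                 # right-hand side of (22)
print(lhs, rhs)                                           # equal up to float error
```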
It is interesting to highlight that $G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big)$ in (22) characterizes the variation of the expectation of the function $h(x,\cdot): \mathbb{R}^m \to \mathbb{R}$ when $\lambda > 0$ (respectively, $\lambda < 0$) and the probability measure changes from the solution to the optimization problem in (15) (respectively, in (16)) to an alternative measure $P$. This result takes another perspective if it is seen in the context of information projections [12]. Let $Q$ be a probability measure and let $S \subseteq \triangle_Q(\mathbb{R}^m)$ be a convex set. From Theorem 1 in [12], it holds that for all measures $P \in S$,
$D(P\|Q) \geq D(P\|P^\star) + D(P^\star\|Q),$   (28)
where $P^\star$ satisfies
$P^\star \in \arg\min_{P \in S} D(P\|Q).$   (29)
In the particular case in which the set $S$ in (29) satisfies
$S \triangleq \Big\{ P \in \triangle_Q(\mathbb{R}^m): \int h(x,y)\,\mathrm{d}P(y) = c \Big\},$   (30)
for some real $c$, with the vector $x$ and the function $h$ defined in Lemma 3, the optimal measure $P^\star$ in (29) is the Gibbs probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ in (14), with $\lambda > 0$ chosen to satisfy
$\int h(x,y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}(y) = c.$   (31)
The case in which the measure $Q$ in (29) is a $\sigma$-finite measure, for instance the Lebesgue measure or the counting measure, leads to the classical frameworks of differential-entropy and discrete-entropy maximization, respectively, which have been studied under particular assumptions on the set $S$ in [34,35,36].
When the reference measure $Q$ is a probability measure, under the assumption that (31) holds, it follows from Theorem 3 in [12] that for all $P \in S$, with $S$ in (30),
$D(P\|Q) = D\big(P \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) + D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big),$   (32)
which is known as the Pythagorean theorem for relative entropy. Such a geometric interpretation follows from admitting relative entropy as an analog of squared Euclidean distance. The first appearance of such a "Pythagorean theorem" was in [11], and it was later revisited in [12]. Interestingly, the same result can be obtained from Lemma 3 by noticing that for all $P \in S$, with $S$ in (30),
$G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big) = 0.$   (33)
The converse of the Pythagorean theorem, e.g., Proposition 48 in [39], together with Lemma 3, leads to the geometric construction shown in Figure 1. A similar interpretation was also presented in [10] in the context of the generalization error of machine-learning algorithms. Nonetheless, the interpretation in Figure 1 is general and independent of such an application.
The relevance of Lemma 3 in the context of information projections follows from the fact that $Q$ might be a $\sigma$-finite measure. The class of $\sigma$-finite measures includes the class of probability measures, and thus Lemma 3 unifies the results separately obtained in the realms of maximum-entropy methods and information-projection methods.
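The following sketch illustrates the Pythagorean identity (32) on a three-point alphabet: a pmf $P$ with the same expectation of $h$ as the Gibbs measure is built by an expectation-preserving perturbation, and both sides of (32) are compared. All numerical choices are illustrative.

```python
# A sketch of (32): on a finite alphabet, any pmf P with the same expectation of h as
# the Gibbs measure satisfies D(P||Q) = D(P||P_G) + D(P_G||Q).
import numpy as np

q = np.array([0.2, 0.5, 0.3])                # reference probability measure Q
hv = np.array([1.0, 2.0, 3.0])               # h(x, .) for the fixed x
lam = 1.2

D = lambda a, b: np.sum(a * np.log(a / b))
g = q * np.exp(-lam * hv); g /= g.sum()      # Gibbs pmf; its expectation of h defines c

# Build another pmf with the same expectation of h: add t * v, where v is orthogonal
# to both the all-ones vector and hv (here hv is affine in the index, so v = (1,-2,1)).
v = np.array([1.0, -2.0, 1.0])
p = g + 0.05 * v                             # stays a valid pmf for this small step
assert np.all(p > 0) and np.isclose(p.sum(), 1.0)
assert np.isclose(np.sum(hv * p), np.sum(hv * g))   # P lies in the set S of (30)

print(D(p, q), D(p, g) + D(g, q))            # the two sides of (32) coincide
```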
The following lemma highlights that $(h,Q,\lambda)$-Gibbs conditional probability measures are related to another class of optimization problems.
Lemma 4.
Assume that the following optimization problems possess at least one solution for some $x \in \mathbb{R}^n$:
$\min_{P \in \triangle_Q(\mathbb{R}^m)} \int h(x,y)\,\mathrm{d}P(y), \quad \text{subject to } D(P\|Q) \leq \rho;$   (34)
and
$\max_{P \in \triangle_Q(\mathbb{R}^m)} \int h(x,y)\,\mathrm{d}P(y), \quad \text{subject to } D(P\|Q) \leq \rho.$   (35)
Consider the $(h,Q,\lambda)$-Gibbs probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ in (14), with $\lambda \in \mathbb{R}\setminus\{0\}$ such that $\rho = D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big)$. Then, the $(h,Q,\lambda)$-Gibbs probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$ is a solution to (34) if $\lambda > 0$, or to (35) if $\lambda < 0$.
Proof.
Note that if $\lambda > 0$, then $\frac{1}{\lambda} D\big(P \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) \geq 0$. Hence, from Lemma 3, it holds that for all probability measures $P$ such that $D(P\|Q) \leq \rho$,
$G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big) \geq \frac{1}{\lambda}\Big( D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big) - D(P\|Q)\Big)$   (36)
$\quad = \frac{1}{\lambda}\big( \rho - D(P\|Q) \big)$   (37)
$\quad \geq 0,$   (38)
with equality in (36) if $D\big(P \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) = 0$. This implies that $P^{(h,Q,\lambda)}_{Y|X=x}$ is a solution to (34). Note also that if $\lambda < 0$, from Lemma 3, it holds that for all probability measures $P$ such that $D(P\|Q) \leq \rho$,
$G_h\big(x, P, P^{(h,Q,\lambda)}_{Y|X=x}\big) \leq \frac{1}{\lambda}\Big( D\big(P^{(h,Q,\lambda)}_{Y|X=x} \,\big\|\, Q\big) - D(P\|Q)\Big)$   (39)
$\quad = \frac{1}{\lambda}\big( \rho - D(P\|Q) \big)$   (40)
$\quad \leq 0,$   (41)
with equality in (39) if $D\big(P \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) = 0$. This implies that $P^{(h,Q,\lambda)}_{Y|X=x}$ is a solution to (35). □
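A crude numerical illustration of Lemma 4 (case $\lambda > 0$) is given below: random pmfs satisfying the relative entropy constraint are sampled, and none attains an expectation of $h$ below that of the Gibbs measure. The alphabet, $h$, $Q$, and $\lambda$ are illustrative.

```python
# A numerical illustration of Lemma 4 (case lambda > 0): among pmfs P with
# D(P||Q) <= rho, the Gibbs measure attains the smallest expectation of h.
import numpy as np

rng = np.random.default_rng(0)
q = np.array([0.3, 0.3, 0.2, 0.2])
hv = np.array([0.0, 1.0, 2.0, 4.0])
lam = 0.8

D = lambda a, b: np.sum(a * np.log(a / b))
g = q * np.exp(-lam * hv); g /= g.sum()
rho = D(g, q)                                  # constraint level matched to the Gibbs measure

best = np.inf
for _ in range(20000):                         # crude random search over the feasible set
    p = rng.dirichlet(np.ones(4))
    if D(p, q) <= rho:
        best = min(best, np.sum(hv * p))

print("Gibbs expectation  :", np.sum(hv * g))
print("best feasible found:", best)            # never below the Gibbs expectation
```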

3. Characterization of $G_h(x, P_1, P_2)$ in (2)

The main result of this section is the following theorem.
Theorem 1.
For all probability measures $P_1$ and $P_2$, both absolutely continuous with respect to a given $\sigma$-finite measure $Q$ on $\mathbb{R}^m$, the variation $G_h(x, P_1, P_2)$ in (2) satisfies
$G_h(x, P_1, P_2) = \frac{1}{\lambda}\Big( D\big(P_1 \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P_2 \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) + D(P_2\|Q) - D(P_1\|Q) \Big),$   (42)
where the probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$, is an $(h,Q,\lambda)$-Gibbs probability measure.
Proof.
The proof follows from Lemma 3 and by observing that
$G_h(x, P_1, P_2) = G_h\big(x, P_1, P^{(h,Q,\lambda)}_{Y|X=x}\big) - G_h\big(x, P_2, P^{(h,Q,\lambda)}_{Y|X=x}\big),$
which completes the proof. □
Theorem 1 can be particularly simplified in the case in which the reference measure $Q$ is a probability measure. Consider, for instance, the case in which $P_1 \ll P_2$ (or $P_2 \ll P_1$). In such a case, the reference measure can be chosen as $P_2$ (or $P_1$), as shown hereunder.
Corollary 1.
Consider the variation $G_h(x, P_1, P_2)$ in (2). If the probability measure $P_1$ is absolutely continuous with respect to $P_2$, then
$G_h(x, P_1, P_2) = \frac{1}{\lambda}\Big( D\big(P_1 \,\big\|\, P^{(h,P_2,\lambda)}_{Y|X=x}\big) - D\big(P_2 \,\big\|\, P^{(h,P_2,\lambda)}_{Y|X=x}\big) - D(P_1\|P_2) \Big).$   (43)
Alternatively, if the probability measure $P_2$ is absolutely continuous with respect to $P_1$, then
$G_h(x, P_1, P_2) = \frac{1}{\lambda}\Big( D\big(P_1 \,\big\|\, P^{(h,P_1,\lambda)}_{Y|X=x}\big) - D\big(P_2 \,\big\|\, P^{(h,P_1,\lambda)}_{Y|X=x}\big) + D(P_2\|P_1) \Big),$   (44)
where the probability measures $P^{(h,P_1,\lambda)}_{Y|X=x}$ and $P^{(h,P_2,\lambda)}_{Y|X=x}$ are, respectively, $(h,P_1,\lambda)$- and $(h,P_2,\lambda)$-Gibbs probability measures, with $\lambda \neq 0$.
In the case in which neither $P_1$ is absolutely continuous with respect to $P_2$, nor $P_2$ is absolutely continuous with respect to $P_1$, the reference measure $Q$ in Theorem 1 can always be chosen as a convex combination of $P_1$ and $P_2$. That is, for all Borel sets $\mathcal{A} \in \mathscr{B}(\mathbb{R}^m)$, $Q(\mathcal{A}) = \alpha P_1(\mathcal{A}) + (1-\alpha) P_2(\mathcal{A})$, with $\alpha \in (0,1)$.
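The following sketch checks Theorem 1 numerically, choosing the reference measure $Q$ as a convex combination of $P_1$ and $P_2$, as suggested above. All values are illustrative.

```python
# A numerical check of Theorem 1 / equation (42), taking the reference measure Q as a
# convex combination of P1 and P2.
import numpy as np

hv = np.array([1.0, -0.5, 2.0, 0.0])
p1 = np.array([0.1, 0.2, 0.3, 0.4])
p2 = np.array([0.4, 0.3, 0.2, 0.1])
alpha, lam = 0.5, 2.5
q = alpha * p1 + (1 - alpha) * p2            # reference measure Q; P1 << Q and P2 << Q

D = lambda a, b: np.sum(a * np.log(a / b))
g = q * np.exp(-lam * hv); g /= g.sum()      # (h, Q, lambda)-Gibbs pmf

lhs = np.sum(hv * p1) - np.sum(hv * p2)                      # G_h(x, P1, P2), as in (2)
rhs = (D(p1, g) - D(p2, g) + D(p2, q) - D(p1, q)) / lam      # right-hand side of (42)
print(lhs, rhs)
```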
Theorem 1 can be specialized to the cases in which $Q$ is either the Lebesgue measure or the counting measure.
If $Q$ is the Lebesgue measure, then the probability measures $P_1$ and $P_2$ in (42) admit probability density functions $f_1$ and $f_2$, respectively. Moreover, the terms $-D(P_1\|Q)$ and $-D(P_2\|Q)$ are Shannon's differential entropies [4] induced by $P_1$ and $P_2$, denoted by $h(P_1)$ and $h(P_2)$, respectively. That is, for all $i \in \{1,2\}$,
$h(P_i) \triangleq -\int f_i(x) \log f_i(x)\,\mathrm{d}x.$   (45)
The probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$, $x \in \mathbb{R}^n$, and $Q$ the Lebesgue measure, possesses a probability density function, denoted by $f^{(h,Q,\lambda)}_{Y|X=x}: \mathbb{R}^m \to (0, +\infty)$, which satisfies
$f^{(h,Q,\lambda)}_{Y|X=x}(y) = \frac{\exp\big(-\lambda\, h(x,y)\big)}{\int \exp\big(-\lambda\, h(x,y)\big)\,\mathrm{d}y}.$   (46)
If $Q$ is the counting measure, then the probability measures $P_1$ and $P_2$ in (42) admit probability mass functions $p_1: \mathcal{Y} \to [0,1]$ and $p_2: \mathcal{Y} \to [0,1]$, with $\mathcal{Y}$ a countable subset of $\mathbb{R}^m$. Moreover, $-D(P_1\|Q)$ and $-D(P_2\|Q)$ are, respectively, Shannon's discrete entropies [4] induced by $P_1$ and $P_2$, denoted by $H(P_1)$ and $H(P_2)$. That is, for all $i \in \{1,2\}$,
$H(P_i) \triangleq -\sum_{y \in \mathcal{Y}} p_i(y) \log p_i(y).$   (47)
The probability measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$ and $Q$ the counting measure, possesses a probability mass function, denoted by $p^{(h,Q,\lambda)}_{Y|X=x}: \mathcal{Y} \to (0, +\infty)$, which satisfies
$p^{(h,Q,\lambda)}_{Y|X=x}(y) = \frac{\exp\big(-\lambda\, h(x,y)\big)}{\sum_{y' \in \mathcal{Y}} \exp\big(-\lambda\, h(x,y')\big)}.$   (48)
These observations lead to the following corollary of Theorem 1.
Corollary 2.
Given two probability measures $P_1$ and $P_2$, with probability density functions $f_1$ and $f_2$, respectively, the variation $G_h(x, P_1, P_2)$ in (2) satisfies
$G_h(x, P_1, P_2) = \frac{1}{\lambda}\Big( D\big(P_1 \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P_2 \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - h(P_2) + h(P_1) \Big),$   (49)
where the probability density function of the measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$ and $Q$ the Lebesgue measure, is defined in (46); and the entropy functional $h(\cdot)$ is defined in (45). Alternatively, given two probability measures $P_1$ and $P_2$, with probability mass functions $p_1$ and $p_2$, respectively, the variation $G_h(x, P_1, P_2)$ in (2) satisfies
$G_h(x, P_1, P_2) = \frac{1}{\lambda}\Big( D\big(P_1 \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P_2 \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - H(P_2) + H(P_1) \Big),$   (50)
where the probability mass function of the measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$ and $Q$ the counting measure, is defined in (48); and the entropy functional $H(\cdot)$ is defined in (47).

4. Characterizations of $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4)

The main result of this section is a characterization of $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4).
Theorem 2.
Consider the variation $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4) and assume that, for all $x \in \operatorname{supp} P_X$, the probability measures $P^{(1)}_{Y|X=x}$ and $P^{(2)}_{Y|X=x}$ are both absolutely continuous with respect to a $\sigma$-finite measure $Q$. Then,
$\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) = \frac{1}{\lambda} \int \Big( D\big(P^{(1)}_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P^{(2)}_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) + D\big(P^{(2)}_{Y|X=x} \,\big\|\, Q\big) - D\big(P^{(1)}_{Y|X=x} \,\big\|\, Q\big) \Big)\,\mathrm{d}P_X(x),$   (51)
where the probability measure $P^{(h,Q,\lambda)}_{Y|X}$, with $\lambda \neq 0$, is an $(h,Q,\lambda)$-Gibbs conditional probability measure.
Proof.
The proof follows from (4) and Theorem 1. □
Two special cases are particularly noteworthy. When the reference measure $Q$ is the Lebesgue measure, both $-\int D\big(P^{(1)}_{Y|X=x} \,\big\|\, Q\big)\,\mathrm{d}P_X(x)$ and $-\int D\big(P^{(2)}_{Y|X=x} \,\big\|\, Q\big)\,\mathrm{d}P_X(x)$ in (51) become Shannon's conditional differential entropies, denoted by $h\big(P^{(1)}_{Y|X} \big| P_X\big)$ and $h\big(P^{(2)}_{Y|X} \big| P_X\big)$, respectively. That is, for all $i \in \{1,2\}$,
$h\big(P^{(i)}_{Y|X} \big| P_X\big) \triangleq \int h\big(P^{(i)}_{Y|X=x}\big)\,\mathrm{d}P_X(x),$   (52)
where $h(\cdot)$ is the entropy functional in (45).
When the reference measure $Q$ is the counting measure, both $-\int D\big(P^{(1)}_{Y|X=x} \,\big\|\, Q\big)\,\mathrm{d}P_X(x)$ and $-\int D\big(P^{(2)}_{Y|X=x} \,\big\|\, Q\big)\,\mathrm{d}P_X(x)$ in (51) become Shannon's conditional discrete entropies, denoted by $H\big(P^{(1)}_{Y|X} \big| P_X\big)$ and $H\big(P^{(2)}_{Y|X} \big| P_X\big)$, respectively. That is, for all $i \in \{1,2\}$,
$H\big(P^{(i)}_{Y|X} \big| P_X\big) \triangleq \int H\big(P^{(i)}_{Y|X=x}\big)\,\mathrm{d}P_X(x),$   (53)
where $H(\cdot)$ is the entropy functional in (47).
These observations lead to the following corollary of Theorem 2.
Corollary 3.
Consider the variation $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4) and assume that, for all $x \in \operatorname{supp} P_X$, the probability measures $P^{(1)}_{Y|X=x}$ and $P^{(2)}_{Y|X=x}$ possess probability density functions. Then,
$\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) = \frac{1}{\lambda} \int \Big( D\big(P^{(1)}_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P^{(2)}_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) \Big)\,\mathrm{d}P_X(x) - \frac{1}{\lambda} h\big(P^{(2)}_{Y|X} \big| P_X\big) + \frac{1}{\lambda} h\big(P^{(1)}_{Y|X} \big| P_X\big),$   (54)
where the probability density function of the measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$ and $Q$ the Lebesgue measure, is defined in (46); and, for all $i \in \{1,2\}$, the conditional entropy $h\big(P^{(i)}_{Y|X} \big| P_X\big)$ is defined in (52). Alternatively, assume that, for all $x \in \operatorname{supp} P_X$, the probability measures $P^{(1)}_{Y|X=x}$ and $P^{(2)}_{Y|X=x}$ possess probability mass functions. Then,
$\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) = \frac{1}{\lambda} \int \Big( D\big(P^{(1)}_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P^{(2)}_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) \Big)\,\mathrm{d}P_X(x) - \frac{1}{\lambda} H\big(P^{(2)}_{Y|X} \big| P_X\big) + \frac{1}{\lambda} H\big(P^{(1)}_{Y|X} \big| P_X\big),$   (55)
where the probability mass function of the measure $P^{(h,Q,\lambda)}_{Y|X=x}$, with $\lambda \neq 0$ and $Q$ the counting measure, is defined in (48); and, for all $i \in \{1,2\}$, the conditional entropy $H\big(P^{(i)}_{Y|X} \big| P_X\big)$ is defined in (53).
The general expression for the expected variation $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4) can be simplified along the lines of Corollary 1. For instance, if, for all $x \in \operatorname{supp} P_X$, the probability measure $P^{(1)}_{Y|X=x}$ is absolutely continuous with respect to $P^{(2)}_{Y|X=x}$, then the measure $P^{(2)}_{Y|X=x}$ can be chosen to be the reference measure in the calculation of $G_h\big(x, P^{(1)}_{Y|X=x}, P^{(2)}_{Y|X=x}\big)$, with the functional $G_h$ in (1). This observation leads to the following corollary of Theorem 2.
Corollary 4.
Consider the variation $\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big)$ in (4) and assume that, for all $x \in \operatorname{supp} P_X$, $P^{(1)}_{Y|X=x} \ll P^{(2)}_{Y|X=x}$. Then,
$\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) = \frac{1}{\lambda} \int \Big( D\big(P^{(1)}_{Y|X=x} \,\big\|\, P^{(h,P^{(2)}_{Y|X=x},\lambda)}_{Y|X=x}\big) - D\big(P^{(2)}_{Y|X=x} \,\big\|\, P^{(h,P^{(2)}_{Y|X=x},\lambda)}_{Y|X=x}\big) - D\big(P^{(1)}_{Y|X=x} \,\big\|\, P^{(2)}_{Y|X=x}\big) \Big)\,\mathrm{d}P_X(x).$   (56)
Alternatively, if, for all $x \in \operatorname{supp} P_X$, the probability measure $P^{(2)}_{Y|X=x}$ is absolutely continuous with respect to $P^{(1)}_{Y|X=x}$, then
$\overline{G}_h\big(P^{(1)}_{Y|X}, P^{(2)}_{Y|X}, P_X\big) = \frac{1}{\lambda} \int \Big( D\big(P^{(1)}_{Y|X=x} \,\big\|\, P^{(h,P^{(1)}_{Y|X=x},\lambda)}_{Y|X=x}\big) - D\big(P^{(2)}_{Y|X=x} \,\big\|\, P^{(h,P^{(1)}_{Y|X=x},\lambda)}_{Y|X=x}\big) + D\big(P^{(2)}_{Y|X=x} \,\big\|\, P^{(1)}_{Y|X=x}\big) \Big)\,\mathrm{d}P_X(x),$   (57)
where the measures $P^{(h,P^{(1)}_{Y|X=x},\lambda)}_{Y|X=x}$ and $P^{(h,P^{(2)}_{Y|X=x},\lambda)}_{Y|X=x}$ are, respectively, $\big(h,P^{(1)}_{Y|X=x},\lambda\big)$- and $\big(h,P^{(2)}_{Y|X=x},\lambda\big)$-Gibbs probability measures.
The Gibbs probability measures $P^{(h,P^{(1)}_{Y|X=x},\lambda)}_{Y|X=x}$ and $P^{(h,P^{(2)}_{Y|X=x},\lambda)}_{Y|X=x}$ in Corollary 4 are particularly interesting, as their reference measures depend on $x$. Gibbs measures of this form appear, for instance, in Corollary 10 in [7].

5. Characterizations of $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10)

The main result of this section is a characterization of $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10), which describes the variation of the expectation of the function $h$ when the probability measure changes from the joint probability measure $P_{Y|X} P_X$ to the product of its marginals $P_Y P_X$.
This result is presented hereunder and involves the mutual information $I(P_{Y|X}; P_X)$ and the lautum information $L(P_{Y|X}; P_X)$, defined as follows:
$I(P_{Y|X}; P_X) \triangleq \int D\big(P_{Y|X=x} \,\big\|\, P_Y\big)\,\mathrm{d}P_X(x); \quad \text{and}$   (58)
$L(P_{Y|X}; P_X) \triangleq \int D\big(P_Y \,\big\|\, P_{Y|X=x}\big)\,\mathrm{d}P_X(x).$   (59)
Theorem 3.
Consider the expected variation $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10) and assume that, for all $x \in \operatorname{supp} P_X$:
1. The probability measures $P_Y$ and $P_{Y|X=x}$ are both absolutely continuous with respect to a given $\sigma$-finite measure $Q$; and
2. The probability measures $P_Y$ and $P_{Y|X=x}$ are mutually absolutely continuous.
Then,
$\overline{G}_h(P_Y, P_{Y|X}, P_X) = \frac{1}{\lambda}\Big( I(P_{Y|X}; P_X) + L(P_{Y|X}; P_X) + \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x) - \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x) \Big),$   (60)
where the probability measure $P^{(h,Q,\lambda)}_{Y|X}$, with $\lambda \neq 0$, is an $(h,Q,\lambda)$-Gibbs conditional probability measure.
Proof. 
The proof is presented in Appendix A. □
An alternative expression for $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10), involving only relative entropies, is presented in the following theorem.
Theorem 4.
Consider the expected variation $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10) and assume that, for all $x \in \operatorname{supp} P_X$, the probability measure $P_{Y|X=x}$ is absolutely continuous with respect to a given $\sigma$-finite measure $Q$. Then, it follows that
$\overline{G}_h(P_Y, P_{Y|X}, P_X) = \frac{1}{\lambda} \int\!\!\int \Big( D\big(P_{Y|X=x_2} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_1}\big) - D\big(P_{Y|X=x_2} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_2}\big) \Big)\,\mathrm{d}P_X(x_1)\,\mathrm{d}P_X(x_2),$   (61)
where $P^{(h,Q,\lambda)}_{Y|X}$, with $\lambda \neq 0$, is an $(h,Q,\lambda)$-Gibbs conditional probability measure.
Proof. 
The proof is presented in Appendix B. □
Theorem 4 expresses the variation $\overline{G}_h(P_Y, P_{Y|X}, P_X)$ in (10) as the difference of two relative entropies, where $x_1$ and $x_2$ are independently sampled from the same probability measure $P_X$. The former compares the conditional measure $P_{Y|X=x_2}$ with the Gibbs measure $P^{(h,Q,\lambda)}_{Y|X=x_1}$, i.e., measures conditioned on different elements of $\operatorname{supp} P_X$. The latter compares these two measures conditioning on the same element; that is, it compares $P_{Y|X=x_2}$ with $P^{(h,Q,\lambda)}_{Y|X=x_2}$.
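Theorem 4 can be checked on finite alphabets by computing the expected variation directly, as in (10), and via the double integral of relative entropies in (61). The following sketch does so; all probability tables, the function $h$, and $\lambda$ are illustrative.

```python
# A numerical check of Theorem 4 / equation (61) on finite alphabets.
import numpy as np

hv = np.array([[0.0, 1.0, 2.0],              # h(x, y) for x in {0, 1}, y in {0, 1, 2}
               [1.0, 0.0, 3.0]])
P_X = np.array([0.4, 0.6])
P_Y_given_X = np.array([[0.5, 0.3, 0.2],
                        [0.1, 0.6, 0.3]])
q = np.array([1.0, 1.0, 1.0])                # counting reference measure on the y-alphabet
lam = 1.3

D = lambda a, b: np.sum(a * np.log(a / b))
gibbs = q * np.exp(-lam * hv)                # row x: (h, Q, lambda)-Gibbs pmf given X = x
gibbs /= gibbs.sum(axis=1, keepdims=True)

P_Y = P_X @ P_Y_given_X                      # marginal of Y
lhs = np.sum(P_X[:, None] * P_Y[None, :] * hv) - np.sum(P_X[:, None] * P_Y_given_X * hv)

rhs = sum(P_X[i1] * P_X[i2] *
          (D(P_Y_given_X[i2], gibbs[i1]) - D(P_Y_given_X[i2], gibbs[i2])) / lam
          for i1 in range(2) for i2 in range(2))
print(lhs, rhs)                              # direct computation matches (61)
```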
An interesting observation from Theorems 3 and 4 is that the last two terms on the right-hand side of (60) are both zero in the case in which $P_{Y|X}$ is an $(h,Q,\lambda)$-Gibbs conditional probability measure. Similarly, in such a case, the second term on the right-hand side of (61) is also zero. This observation is highlighted by the following corollary.
Corollary 5.
Consider an $(h,Q,\lambda)$-Gibbs conditional probability measure, denoted by $P^{(h,Q,\lambda)}_{Y|X} \in \triangle(\mathbb{R}^m\,|\,\mathbb{R}^n)$, with $\lambda \neq 0$, and a probability measure $P_X \in \triangle(\mathbb{R}^n)$. Let the measure $P^{(h,Q,\lambda)}_Y \in \triangle(\mathbb{R}^m)$ be such that for all sets $\mathcal{A} \in \mathscr{B}(\mathbb{R}^m)$,
$P^{(h,Q,\lambda)}_Y(\mathcal{A}) = \int P^{(h,Q,\lambda)}_{Y|X=x}(\mathcal{A})\,\mathrm{d}P_X(x).$   (62)
Then,
$\overline{G}_h\big(P^{(h,Q,\lambda)}_Y, P^{(h,Q,\lambda)}_{Y|X}, P_X\big) = \frac{1}{\lambda}\Big( I\big(P^{(h,Q,\lambda)}_{Y|X}; P_X\big) + L\big(P^{(h,Q,\lambda)}_{Y|X}; P_X\big) \Big)$   (63)
$\quad = \frac{1}{\lambda} \int\!\!\int D\big(P^{(h,Q,\lambda)}_{Y|X=x_2} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_1}\big)\,\mathrm{d}P_X(x_1)\,\mathrm{d}P_X(x_2).$   (64)
Note that mutual information and lautum information are both nonnegative information measures. From Corollary 5, this implies that $\overline{G}_h\big(P^{(h,Q,\lambda)}_Y, P^{(h,Q,\lambda)}_{Y|X}, P_X\big)$ in (64) is either nonnegative or nonpositive, depending exclusively on the sign of the regularization factor $\lambda$. The following corollary exploits such an observation to present a property of Gibbs conditional probability measures and their corresponding marginal probability measures.
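The following sketch illustrates Corollary 5 (and the sign behavior noted above) on finite alphabets: when $P_{Y|X}$ is itself an $(h,Q,\lambda)$-Gibbs conditional measure, the expected variation coincides with $\frac{1}{\lambda}$ times the sum of the mutual and lautum information in (58)-(59). All numerical choices are illustrative; here $\lambda < 0$, so the variation is nonpositive.

```python
# A numerical check of Corollary 5 on finite alphabets.
import numpy as np

hv = np.array([[0.0, 1.0, 2.0],
               [2.0, 0.5, 0.0]])
P_X = np.array([0.5, 0.5])
q = np.array([0.2, 0.3, 0.5])                # reference probability measure Q
lam = -1.0                                   # the sign of lambda decides the sign of (63)

D = lambda a, b: np.sum(a * np.log(a / b))
G = q * np.exp(-lam * hv)                    # (h, Q, lambda)-Gibbs conditional pmfs
G /= G.sum(axis=1, keepdims=True)
P_Y = P_X @ G                                # marginal in (62)

lhs = np.sum(P_X[:, None] * P_Y[None, :] * hv) - np.sum(P_X[:, None] * G * hv)
I = sum(P_X[i] * D(G[i], P_Y) for i in range(2))      # mutual information, as in (58)
L = sum(P_X[i] * D(P_Y, G[i]) for i in range(2))      # lautum information, as in (59)
print(lhs, (I + L) / lam)                    # the two quantities coincide
```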
Corollary 6.
Given a probability measure $P_X \in \triangle(\mathbb{R}^n)$, the $(h,Q,\lambda)$-Gibbs conditional probability measure $P^{(h,Q,\lambda)}_{Y|X}$ in (14) and the probability measure $P^{(h,Q,\lambda)}_Y$ in (62) satisfy
$\int\!\!\int h(x,y)\,\mathrm{d}P^{(h,Q,\lambda)}_Y(y)\,\mathrm{d}P_X(x) \;\geq\; \int\!\!\int h(x,y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}(y)\,\mathrm{d}P_X(x) \quad \text{if } \lambda > 0;$   (65)
or
$\int\!\!\int h(x,y)\,\mathrm{d}P^{(h,Q,\lambda)}_Y(y)\,\mathrm{d}P_X(x) \;\leq\; \int\!\!\int h(x,y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}(y)\,\mathrm{d}P_X(x) \quad \text{if } \lambda < 0.$   (66)
Corollary 6 highlights the fact that a deviation from the joint probability measure $P^{(h,Q,\lambda)}_{Y|X} P_X \in \triangle(\mathbb{R}^m \times \mathbb{R}^n)$ to the product of its marginals $P^{(h,Q,\lambda)}_Y P_X \in \triangle(\mathbb{R}^m \times \mathbb{R}^n)$ might increase or decrease the expectation of the function $h$, depending on the sign of $\lambda$.

6. Examples

An immediate application of the results presented above is the analysis of the generalization error of machine-learning algorithms [10], which was the scenario in which these results were originally discovered. In the remainder of this section, such results are presented as consequences of the more general results presented in this work.
Let $\mathcal{M}$, $\mathcal{X}$ and $\mathcal{Y}$, with $\mathcal{M} \subseteq \mathbb{R}^d$, be sets of models, patterns, and labels, respectively. A pair $(x, y) \in \mathcal{X} \times \mathcal{Y}$ is referred to as a data point. A dataset $\boldsymbol{z} \in (\mathcal{X} \times \mathcal{Y})^n$ is a tuple of $n$ data points of the form
$\boldsymbol{z} = \big( (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) \big) \in (\mathcal{X} \times \mathcal{Y})^n.$   (67)
Consider the function
$L: (\mathcal{X} \times \mathcal{Y})^n \times \mathcal{M} \to [0, +\infty], \qquad (\boldsymbol{z}, \theta) \mapsto \frac{1}{n} \sum_{i=1}^{n} \ell(x_i, y_i, \theta),$   (68)
where $\ell(x_i, y_i, \theta)$ is the risk, or loss, induced by the model $\theta$ with respect to the data point $(x_i, y_i)$.
Given a fixed dataset $\boldsymbol{z}$ of the form in (67), consider also the functional
$R_{\boldsymbol{z}}: \triangle(\mathcal{M}) \to \mathbb{R}, \qquad P \mapsto \int L(\boldsymbol{z}, \theta)\,\mathrm{d}P(\theta),$   (69)
where the function $L$ is defined in (68). Using this notation, the empirical risk induced by the model $\theta$ with respect to the dataset $\boldsymbol{z}$ is $L(\boldsymbol{z}, \theta)$. The expectation of the empirical risk with respect to a fixed dataset $\boldsymbol{z}$, when models are sampled from a probability measure $P \in \triangle(\mathcal{M})$, is $R_{\boldsymbol{z}}(P)$.
A machine-learning algorithm is represented by a conditional probability measure $P_{\Theta|Z} \in \triangle\big(\mathcal{M} \,\big|\, (\mathcal{X}\times\mathcal{Y})^n\big)$. The instance of such an algorithm generated by training it upon the dataset $\boldsymbol{z}$ in (67) is represented by the probability measure $P_{\Theta|Z=\boldsymbol{z}} \in \triangle(\mathcal{M})$. The generalization error induced by the algorithm $P_{\Theta|Z}$ is defined as follows.
Definition 4
(Generalization Error). The generalization error induced by the algorithm $P_{\Theta|Z} \in \triangle\big(\mathcal{M} \,\big|\, (\mathcal{X}\times\mathcal{Y})^n\big)$, under the assumption that training and test datasets are independently sampled from a probability measure $P_Z \in \triangle\big((\mathcal{X}\times\mathcal{Y})^n\big)$, is denoted by $\overline{\overline{G}}(P_{\Theta|Z}, P_Z)$, and
$\overline{\overline{G}}(P_{\Theta|Z}, P_Z) \triangleq \int\!\!\int \Big( R_{\boldsymbol{u}}\big(P_{\Theta|Z=\boldsymbol{z}}\big) - R_{\boldsymbol{z}}\big(P_{\Theta|Z=\boldsymbol{z}}\big) \Big)\,\mathrm{d}P_Z(\boldsymbol{u})\,\mathrm{d}P_Z(\boldsymbol{z}),$   (70)
where the functionals $R_{\boldsymbol{u}}$ and $R_{\boldsymbol{z}}$ are defined in (69).
Often, the term $R_{\boldsymbol{u}}\big(P_{\Theta|Z=\boldsymbol{z}}\big)$ is recognized to be the test error induced by the algorithm $P_{\Theta|Z}$ with respect to a test dataset $\boldsymbol{u} \in (\mathcal{X}\times\mathcal{Y})^n$ when it is trained upon the dataset $\boldsymbol{z} \in (\mathcal{X}\times\mathcal{Y})^n$. Alternatively, the term $R_{\boldsymbol{z}}\big(P_{\Theta|Z=\boldsymbol{z}}\big)$ is recognized to be the training error induced by the algorithm $P_{\Theta|Z}$ when it is trained upon the dataset $\boldsymbol{z} \in (\mathcal{X}\times\mathcal{Y})^n$. From this perspective, the generalization error $\overline{\overline{G}}(P_{\Theta|Z}, P_Z)$ in (70) is the expectation of the difference between the test error and the training error when the test and training datasets are independently sampled from the same probability measure $P_Z$. The key observation is that such a generalization error $\overline{\overline{G}}(P_{\Theta|Z}, P_Z)$ can be written as a variation of the expectation of the empirical risk function $L$ in (68), as shown hereunder.
Lemma 5
(Lemma 3 in [10]). Consider the generalization error $\overline{\overline{G}}(P_{\Theta|Z}, P_Z)$ in (70) and assume that, for all $\boldsymbol{z} \in (\mathcal{X}\times\mathcal{Y})^n$, the probability measure $P_{\Theta|Z=\boldsymbol{z}}$ is absolutely continuous with respect to the probability measure $P_\Theta \in \triangle(\mathcal{M})$, which satisfies, for all measurable subsets $\mathcal{C}$ of $\mathcal{M}$,
$P_\Theta(\mathcal{C}) = \int P_{\Theta|Z=\boldsymbol{z}}(\mathcal{C})\,\mathrm{d}P_Z(\boldsymbol{z}).$   (71)
Then,
$\overline{\overline{G}}(P_{\Theta|Z}, P_Z) = \overline{G}_L\big(P_\Theta, P_{\Theta|Z}, P_Z\big),$   (72)
where the functional $\overline{G}_L$ and the function $L$ are defined in (3) and (68), respectively.
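Lemma 5 can be illustrated on finite sets of (abstract) datasets and models, where all integrals become finite sums. The following sketch computes the generalization error in (70) directly and as the expected variation $\overline{G}_L(P_\Theta, P_{\Theta|Z}, P_Z)$; the loss table, $P_Z$, and the algorithm $P_{\Theta|Z}$ are illustrative.

```python
# A toy numerical check of Lemma 5 / equation (72) on finite sets.
import numpy as np

L_tab = np.array([[0.2, 1.0, 0.5],           # L(z, theta) for 3 datasets z and 3 models theta
                  [0.8, 0.1, 0.4],
                  [0.3, 0.6, 0.9]])
P_Z = np.array([0.2, 0.5, 0.3])
P_Theta_given_Z = np.array([[0.7, 0.2, 0.1], # the "algorithm": a pmf over models per dataset
                            [0.1, 0.8, 0.1],
                            [0.2, 0.3, 0.5]])

P_Theta = P_Z @ P_Theta_given_Z              # marginal over models, as in (71)

# Left-hand side of (72): expected test error minus expected training error, as in (70).
test  = sum(P_Z[u] * P_Z[z] * np.sum(L_tab[u] * P_Theta_given_Z[z])
            for u in range(3) for z in range(3))
train = sum(P_Z[z] * np.sum(L_tab[z] * P_Theta_given_Z[z]) for z in range(3))

# Right-hand side of (72): the variation of the expectation of L when the joint measure
# P_{Theta|Z} P_Z changes to the product of its marginals P_Theta P_Z.
rhs = (np.sum(P_Z[:, None] * P_Theta[None, :] * L_tab)
       - np.sum(P_Z[:, None] * P_Theta_given_Z * L_tab))

print(test - train, rhs)                     # the two quantities coincide
```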
From Theorem 3 and Lemma 5, the following holds.
Theorem 5
(Theorem 14 in [10]). Consider the generalization error $\overline{\overline{G}}(P_{\Theta|Z}, P_Z)$ in (70) and assume that, for all $\boldsymbol{z} \in (\mathcal{X}\times\mathcal{Y})^n$:
(a) The probability measures $P_\Theta$ in (71) and $P_{\Theta|Z=\boldsymbol{z}}$ are both absolutely continuous with respect to some $\sigma$-finite measure $Q$ on $\mathcal{M}$;
(b) The measure $Q$ is absolutely continuous with respect to $P_\Theta$; and
(c) The measure $P_\Theta$ is absolutely continuous with respect to $P_{\Theta|Z=\boldsymbol{z}}$.
Then,
$\overline{\overline{G}}(P_{\Theta|Z}, P_Z) = \lambda\Big( I(P_{\Theta|Z}; P_Z) + L(P_{\Theta|Z}; P_Z) \Big) + \lambda \int\!\!\int \log\Big(\frac{\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}}{\mathrm{d}P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}}(\theta)\Big)\,\mathrm{d}P_\Theta(\theta)\,\mathrm{d}P_Z(\boldsymbol{z}) - \lambda \int\!\!\int \log\Big(\frac{\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}}{\mathrm{d}P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}}(\theta)\Big)\,\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}(\theta)\,\mathrm{d}P_Z(\boldsymbol{z}),$   (73)
where the measure $P^{(Q,\lambda)}_{\Theta|Z}$ is an $(L,Q,\lambda)$-Gibbs conditional probability measure, with the function $L$ defined in (68).
Theorem 5 shows one of the many closed-form expressions that can be obtained for the generalization error $\overline{\overline{G}}(P_{\Theta|Z}, P_Z)$ in (70) in terms of information measures. A complete exposition of several equivalent alternative expressions, as well as a discussion of their relevance, is presented in [10].
The important observation in this example is that the measure $P^{(Q,\lambda)}_{\Theta|Z}$ in (73) is an $(L,Q,\lambda)$-Gibbs conditional probability measure, which represents the celebrated Gibbs algorithm in statistical machine learning [40]. Thus, the term $\log\Big(\frac{\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}}{\mathrm{d}P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}}(\theta)\Big)$ in (73) can be interpreted as a log-likelihood ratio in a hypothesis test in which the objective is to distinguish the probability measures $P_{\Theta|Z=\boldsymbol{z}}$ and $P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}$ based on the observation of the model $\theta$. The former represents the algorithm under study trained upon $\boldsymbol{z}$, whereas the latter represents a Gibbs algorithm trained upon the same dataset $\boldsymbol{z}$.
From this perspective, the difference between the last two terms in (73), i.e.,
$\lambda \int\!\!\int \log\Big(\frac{\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}}{\mathrm{d}P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}}(\theta)\Big)\,\mathrm{d}P_\Theta(\theta)\,\mathrm{d}P_Z(\boldsymbol{z}) - \lambda \int\!\!\int \log\Big(\frac{\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}}{\mathrm{d}P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}}(\theta)\Big)\,\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}(\theta)\,\mathrm{d}P_Z(\boldsymbol{z}),$   (74)
can be interpreted as the variation of the expectation of the log-likelihood ratio
$\log\Big(\frac{\mathrm{d}P_{\Theta|Z=\boldsymbol{z}}}{\mathrm{d}P^{(Q,\lambda)}_{\Theta|Z=\boldsymbol{z}}}(\theta)\Big)$   (75)
when the probability measure from which the model $\theta$ and the dataset $\boldsymbol{z}$ are drawn changes from the ground-truth distribution $P_{\Theta|Z} P_Z$ to the product of the corresponding marginals $P_\Theta P_Z$. As originally suggested in [10], Theorem 5 establishes an interesting connection between hypothesis testing, information measures, and the generalization error. Nonetheless, this connection goes beyond this application in statistical machine learning, as the same connection can be established directly from Theorem 3. This establishes a connection between the variation of the expectation due to changes in the probability measure, information measures, and hypothesis testing.

7. Final Remarks

A simple reformulation of the Donsker-Varadhan variational representation of relative entropy (Lemma 2) yields an explicit expression for the variation of the expectation of a real function when the probability measure shifts from a Gibbs measure to an arbitrary probability measure (Lemma 3). This result connects directly with information-projection methods, Pythagorean identities for relative entropy, and optimization problems with a constraint on the relative entropy of the measure to be optimized with respect to a reference measure (Lemma 4). A simple algebraic manipulation of Lemma 3 provides a general formula describing how the expectation changes with respect to changes in the probability measure (Theorem 1). The result is general, and the only assumption is that both the initial (before the variation) and final (after the variation) measures are absolutely continuous with respect to a common reference. A key insight is the central role of Gibbs measures in this framework: the change in expectation is described through relative entropy comparisons between the initial and final measures, each with respect to a specific Gibbs measure built from the function under study. Notably, the reference measure of these Gibbs distributions need not be a probability measure; it can be a $\sigma$-finite measure, such as the Lebesgue measure or the counting measure. In such cases, the resulting expressions include Shannon's fundamental information measures, namely entropy and conditional entropy (Corollary 2). Building on these results, the variation of expectations under changes in joint probability measures is studied. Two cases are of special interest. In the former, one marginal remains unchanged (Theorem 2). In the latter, the joint measure changes to the product of its marginals (Theorems 3 and 4). In the case of Gibbs joint measures, the resulting expressions involve only standard information quantities: mutual information, lautum information, and relative entropy. These results show a broad connection between the variation in the expectation of measurable functions, induced by changes in the probability measure, and information measures such as mutual and lautum information.

Author Contributions

All authors have equally contributed to all tasks. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the European Commission through the H2020-MSCA-RISE-2019 project 872172; the French National Agency for Research (ANR) through the Project ANR-21-CE25-0013 and the project ANR-22-PEFT-0010 of the France 2030 program PEPR Réseaux du Futur; and in part by the Agence de l’innovation de défense (AID) through the project UK-FR 2024352.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 3.
The proof follows from Theorem 2, which holds under the first assumption of Theorem 3 and leads to
$\overline{G}_h(P_Y, P_{Y|X}, P_X) = \frac{1}{\lambda} \int \Big( D\big(P_Y \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) - D\big(P_{Y|X=x} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big) + D\big(P_{Y|X=x} \,\big\|\, Q\big) - D\big(P_Y \,\big\|\, Q\big) \Big)\,\mathrm{d}P_X(x).$   (A1)
The proof continues by noticing that
$\int D\big(P_{Y|X=x} \,\big\|\, Q\big)\,\mathrm{d}P_X(x) = \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}Q}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x)$   (A2)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\,\frac{\mathrm{d}P_Y}{\mathrm{d}Q}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x)$   (A3)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x) + \int\!\!\int \log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}Q}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x)$   (A4)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x) + \int\!\!\int \frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\,\log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}Q}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x)$   (A5)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x) + \int \log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}Q}(y)\Big)\Big( \int \frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\,\mathrm{d}P_X(x) \Big)\,\mathrm{d}P_Y(y)$   (A6)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\Big)\,\mathrm{d}P_{Y|X=x}(y)\,\mathrm{d}P_X(x) + \int \log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}Q}(y)\Big)\,\mathrm{d}P_Y(y)$   (A7)
$\quad = I(P_{Y|X}; P_X) + D(P_Y\|Q),$   (A8)
where (A3) follows from Theorem 4 in [37]; (A5) follows from Theorem 2 in [37]; and (A7) follows from Theorem 10 in [37], which implies that for all $y \in \mathbb{R}^m$, $\int \frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y)\,\mathrm{d}P_X(x) = 1$.
Note also that
$\int D\big(P_Y \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x}\big)\,\mathrm{d}P_X(x) = \int\!\!\int \log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x)$   (A9)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}P_{Y|X=x}}(y)\,\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x)$   (A10)
$\quad = \int\!\!\int \log\Big(\frac{\mathrm{d}P_Y}{\mathrm{d}P_{Y|X=x}}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x) + \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x)$   (A11)
$\quad = L(P_{Y|X}; P_X) + \int\!\!\int \log\Big(\frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x}}(y)\Big)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(x),$   (A12)
where (A10) follows from Theorem 4 in [37]. Finally, using (A8) and (A12) in (A1) yields (60), which completes the proof. □

Appendix B

Proof of Theorem 4.
The proof follows by observing that the functional $\overline{G}_h$ in (10) satisfies
$\overline{G}_h(P_Y, P_{Y|X}, P_X) = \int\!\!\int \Big( \int h(x_2, y)\,\mathrm{d}P_{Y|X=x_1}(y) - \int h(x_1, y)\,\mathrm{d}P_{Y|X=x_1}(y) \Big)\,\mathrm{d}P_X(x_2)\,\mathrm{d}P_X(x_1).$   (A13)
Using the functional $G_h$ in (1), the terms above can be written as follows:
$\int h(x_1, y)\,\mathrm{d}P_{Y|X=x_1}(y) = G_h\big(x_1, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_1}\big) + \int h(x_1, y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x_1}(y),$   (A14)
and
$\int h(x_2, y)\,\mathrm{d}P_{Y|X=x_1}(y) = G_h\big(x_2, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_2}\big) + \int h(x_2, y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x_2}(y).$   (A15)
Using (A14) and (A15) in (A13) yields
$\overline{G}_h(P_Y, P_{Y|X}, P_X) = \int\!\!\int \Big( G_h\big(x_2, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_2}\big) + \int h(x_2, y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x_2}(y) - G_h\big(x_1, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_1}\big) - \int h(x_1, y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x_1}(y) \Big)\,\mathrm{d}P_X(x_2)\,\mathrm{d}P_X(x_1)$
$\quad = \int\!\!\int \Big( G_h\big(x_2, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_2}\big) - G_h\big(x_1, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_1}\big) \Big)\,\mathrm{d}P_X(x_2)\,\mathrm{d}P_X(x_1)$
$\quad = \frac{1}{\lambda} \int\!\!\int \Big( D\big(P_{Y|X=x_2} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_1}\big) - D\big(P_{Y|X=x_2} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_2}\big) \Big)\,\mathrm{d}P_X(x_1)\,\mathrm{d}P_X(x_2),$
where the second equality follows from the fact that the terms $\int\!\!\int h(x_2,y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x_2}(y)\,\mathrm{d}P_X(x_2)$ and $\int\!\!\int h(x_1,y)\,\mathrm{d}P^{(h,Q,\lambda)}_{Y|X=x_1}(y)\,\mathrm{d}P_X(x_1)$ are identical and thus cancel out; and the last equality holds from Lemma 3 (after exchanging the roles of the variables of integration $x_1$ and $x_2$ in the first double integral), which implies
$\int\!\!\int G_h\big(x_2, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_2}\big)\,\mathrm{d}P_X(x_2)\,\mathrm{d}P_X(x_1) = \frac{1}{\lambda} \int\!\!\int D\big(P_{Y|X=x_1} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_2}\big)\,\mathrm{d}P_X(x_2)\,\mathrm{d}P_X(x_1) + \frac{1}{\lambda} \int D\big(P^{(h,Q,\lambda)}_{Y|X=x_2} \,\big\|\, Q\big)\,\mathrm{d}P_X(x_2) - \frac{1}{\lambda} \int D\big(P_{Y|X=x_1} \,\big\|\, Q\big)\,\mathrm{d}P_X(x_1),$
and
$\int\!\!\int G_h\big(x_1, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_1}\big)\,\mathrm{d}P_X(x_2)\,\mathrm{d}P_X(x_1) = \int G_h\big(x_1, P_{Y|X=x_1}, P^{(h,Q,\lambda)}_{Y|X=x_1}\big)\,\mathrm{d}P_X(x_1)$
$\quad = \frac{1}{\lambda} \int D\big(P_{Y|X=x_1} \,\big\|\, P^{(h,Q,\lambda)}_{Y|X=x_1}\big)\,\mathrm{d}P_X(x_1) + \frac{1}{\lambda} \int D\big(P^{(h,Q,\lambda)}_{Y|X=x_1} \,\big\|\, Q\big)\,\mathrm{d}P_X(x_1) - \frac{1}{\lambda} \int D\big(P_{Y|X=x_1} \,\big\|\, Q\big)\,\mathrm{d}P_X(x_1),$
which completes the proof. □

References

  1. Gama, J.; Medas, P.; Castillo, G.; Rodrigues, P. Learning with drift detection. In Proceedings of the 17th Brazilian Symposium on Artificial Intelligence, Sao Luis, Maranhao, Brazil, 29 September–1 October 2004; pp. 286–295. [Google Scholar]
  2. Webb, G.I.; Lee, L.K.; Goethals, B.; Petitjean, F. Analyzing concept drift and shift from sample data. Data Min. Knowl. Discov. 2018, 32, 1179–1199. [Google Scholar] [CrossRef]
  3. Oliveira, G.H.F.M.; Minku, L.L.; Oliveira, A.L. Tackling virtual and real concept drifts: An adaptive Gaussian mixture model approach. IEEE Trans. Knowl. Data Eng. 2021, 35, 2048–2060. [Google Scholar] [CrossRef]
  4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  5. Palomar, D.P.; Verdú, S. Lautum information. IEEE Trans. Inf. Theory 2008, 54, 964–975. [Google Scholar] [CrossRef]
  6. Perlaza, S.M.; Esnaola, I.; Bisson, G.; Poor, H.V. On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023. [Google Scholar]
  7. Perlaza, S.M.; Bisson, G.; Esnaola, I.; Jean-Marie, A.; Rini, S. Empirical Risk Minimization with Relative Entropy Regularization. IEEE Trans. Inf. Theory 2024, 70, 5122–5161. [Google Scholar] [CrossRef]
  8. Zou, X.; Perlaza, S.M.; Esnaola, I.; Altman, E. Generalization Analysis of Machine Learning Algorithms via the Worst-Case Data-Generating Probability Measure. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024. [Google Scholar]
  9. Zou, X.; Perlaza, S.M.; Esnaola, I.; Altman, E.; Poor, H.V. The Worst-Case Data-Generating Probability Measure in Statistical Learning. IEEE J. Sel. Areas Inf. Theory 2024, 5, 175–189. [Google Scholar] [CrossRef]
  10. Perlaza, S.M.; Zou, X. The Generalization Error of Machine Learning Algorithms. arXiv 2024, arXiv:2411.12030. [Google Scholar] [CrossRef]
  11. Chentsov, N.N. Nonsymmetrical distance between probability distributions, entropy and the theorem of Pythagoras. Math. Notes Acad. Sci. USSR 1968, 4, 686–691. [Google Scholar] [CrossRef]
  12. Csiszár, I.; Matus, F. Information projections revisited. IEEE Trans. Inf. Theory 2003, 49, 1474–1490. [Google Scholar] [CrossRef]
  13. Müller, A. Integral probability metrics and their generating classes of functions. Adv. Appl. Probab. 1997, 29, 429–443. [Google Scholar] [CrossRef]
  14. Zolotarev, V.M. Probability metrics. Teor. Veroyatnostei i ee Primen. 1983, 28, 264–287. [Google Scholar] [CrossRef]
  15. Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A. A Kernel Two-Sample Test. J. Mach. Learn. Res. 2012, 13, 723–773. [Google Scholar]
  16. Villani, C. Optimal Transport: Old and New, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  17. Liu, W.; Yu, G.; Wang, L.; Liao, R. An Information-Theoretic Framework for Out-of-Distribution Generalization with Applications to Stochastic Gradient Langevin Dynamics. arXiv 2024, arXiv:2403.19895. [Google Scholar] [CrossRef]
  18. Liu, W.; Yu, G.; Wang, L.; Liao, R. An Information-Theoretic Framework for Out-of-Distribution Generalization. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Athens, Greece, 7–12 July 2024; pp. 2670–2675. [Google Scholar]
  19. Agrawal, R.; Horel, T. Optimal Bounds between f-Divergences and Integral Probability Metrics. J. Mach. Learn. Res. 2021, 22, 1–59. [Google Scholar]
  20. Rahimian, H.; Mehrotra, S. Frameworks and results in distributionally robust optimization. Open J. Math. Optim. 2022, 3, 1–85. [Google Scholar] [CrossRef]
  21. Xu, C.; Lee, J.; Cheng, X.; Xie, Y. Flow-based distributionally robust optimization. IEEE J. Sel. Areas Inf. Theory 2024, 5, 62–77. [Google Scholar] [CrossRef]
  22. Hu, Z.; Hong, L.J. Kullback-Leibler divergence constrained distributionally robust optimization. Optim. Online 2013, 1, 9. [Google Scholar]
  23. Radon, J. Theorie und Anwendungen der Absolut Additiven Mengenfunktionen, 1st ed.; Hölder: Vienna, Austria, 1913. [Google Scholar]
  24. Nikodym, O. Sur une généralisation des intégrales de MJ Radon. Fundam. Math. 1930, 15, 131–179. [Google Scholar] [CrossRef]
  25. Aminian, G.; Bu, Y.; Toni, L.; Rodrigues, M.; Wornell, G. An Exact Characterization of the Generalization Error for the Gibbs Algorithm. Adv. Neural Inf. Process. Syst. 2021, 34, 8106–8118. [Google Scholar]
  26. Perlaza, S.M.; Bisson, G.; Esnaola, I.; Jean-Marie, A.; Rini, S. Empirical Risk Minimization with Relative Entropy Regularization: Optimality and Sensitivity. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 684–689. [Google Scholar]
  27. Jiang, W.; Tanner, M.A. Gibbs posterior for variable selection in high-dimensional classification and data mining. Ann. Stat. 2008, 36, 2207–2231. [Google Scholar] [CrossRef]
  28. Alquier, P.; Ridgway, J.; Chopin, N. On the properties of variational approximations of Gibbs posteriors. J. Mach. Learn. Res. 2016, 17, 8374–8414. [Google Scholar]
  29. Bu, Y.; Aminian, G.; Toni, L.; Wornell, G.W.; Rodrigues, M. Characterizing and understanding the generalization error of transfer learning with Gibbs algorithm. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), Virtual Conference, 28–30 March 2022; pp. 8673–8699. [Google Scholar]
  30. Raginsky, M.; Rakhlin, A.; Tsao, M.; Wu, Y.; Xu, A. Information-theoretic analysis of stability and bias of learning algorithms. In Proceedings of the IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 26–30. [Google Scholar]
  31. Zou, B.; Li, L.; Xu, Z. The Generalization Performance of ERM algorithm with Strongly Mixing Observations. Mach. Learn. 2009, 75, 275–295. [Google Scholar] [CrossRef]
  32. He, H.; Aminian, G.; Bu, Y.; Rodrigues, M.; Tan, V.Y. How Does Pseudo-Labeling Affect the Generalization Error of the Semi-Supervised Gibbs Algorithm? In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS), Valencia, Spain, 25–27 April 2023; pp. 8494–8520. [Google Scholar]
  33. Hellström, F.; Durisi, G.; Guedj, B.; Raginsky, M. Generalization Bounds: Perspectives from Information Theory and PAC-Bayes. Found. Trends® Mach. Learn. 2025, 18, 1–223. [Google Scholar] [CrossRef]
  34. Jaynes, E.T. Information Theory and Statistical Mechanics I. Phys. Rev. J. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  35. Jaynes, E.T. Information Theory and Statistical Mechanics II. Phys. Rev. J. 1957, 108, 171–190. [Google Scholar] [CrossRef]
  36. Kapur, J.N. Maximum Entropy Models in Science and Engineering, 1st ed.; Wiley: New York, NY, USA, 1989. [Google Scholar]
  37. Bermudez, Y.; Bisson, G.; Esnaola, I.; Perlaza, S.M. Proofs for Folklore Theorems on the Radon-Nikodym Derivative; Technical Report RR-9591; Centre Inria d’Université Côte d’Azur, INRIA: Sophia Antipolis, France, 2025. [Google Scholar]
  38. Donsker, M.D.; Varadhan, S.S. Asymptotic evaluation of certain Markov process expectations for large time, I. Commun. Pure Appl. Math. 1975, 28, 1–47. [Google Scholar] [CrossRef]
  39. Heath, T.L. The Thirteen Books of Euclid’s Elements, 2nd ed.; Dover Publications, Inc.: New York, NY, USA, 1956. [Google Scholar]
  40. Azizian, W.; Lutzeler, F.; Malick, J.; Mertikopoulos, P. What is the Long-Run Distribution of Stochastic Gradient Descent? A Large Deviations Analysis. In Proceedings of the 41st International Conference on Machine Learning, 21–27 July 2024; Volume 235, pp. 2168–2229. [Google Scholar]
Figure 1. Geometric interpretation of Lemma 3, with $Q$ a probability measure.