A Utility-Based Approach to Some Information Measures

We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem – probability estimation – in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.


Introduction
It is well known that there are a number of ways to motivate the fundamental quantities of information theory, for example:
• Entropy can be defined as essentially the only quantity that is consistent with a plausible set of information axioms (the approach taken by Shannon (1948) for a communication system; see also Csiszár and Körner (1997) for additional axiomatic approaches).
• The definition of entropy is related to the definition of entropy from thermodynamics (see, for example, Brillouin (1962) and Jaynes (1957)).
There is a substantial literature on various generalizations of entropy, for example, the Tsallis entropy (with an extensive bibliography in Group of Statistical Physics (2006)), which was introduced by Cressie and Read (1984) and Cressie and Read (1989) and used for statistical decisions. There is a literature on income inequality which generalizes entropy (see, for example, Cowell (1981)). A generalization of relative entropy, relative Tsallis entropy (alpha-divergence), was introduced by Liese and Vajda (1987). Österreicher and Vajda (1993), Stummer (1999), Stummer (2001), and Stummer (2004) discussed Tsallis entropy and optimal Bayesian decisions. A number of these generalizations are closely related to the material in this paper. In this paper, we discuss entropy and Kullback-Leibler relative entropy from a decision theoretic point of view; in particular, we review utility-based motivations and generalizations of these information theoretic quantities, based on fundamental principles from decision theory applied in a particular "market" context. As we shall see, some generalizations share important properties with entropy and relative entropy. We then formulate a statistical learning problem (estimation of probability measures) in terms of these generalized quantities and discuss optimality properties of the solution.
The connection between information theoretic quantities and gambling is well known. Cover and Thomas (1991) (see Theorem 6.1.2), for example, show that in the horse race setting, an expected wealth growth rate maximizing investor who correctly understands the probabilities has an expected wealth growth rate equal to that of a clairvoyant (someone who wins every bet) minus the entropy of the horse race. Thus, the entropy can be interpreted as the difference between the wealth growth rates attained by a clairvoyant and by an expected wealth growth rate maximizing investor who allocates according to the true measure p. It follows from Cover and Thomas (1991), Theorem 6.1.2 and Equation (6.13), that an investor who allocates according to the measure q rather than p suffers a loss in expected wealth growth rate equal to the Kullback-Leibler relative entropy D(p||q). Investors who maximize expected wealth growth rates can be viewed as expected logarithmic utility maximizers; thus, the entropy can be interpreted as the difference in expected logarithmic utility attained by a clairvoyant and by an expected logarithmic utility maximizing investor who allocates according to the true measure p. By similar reasoning, an investor who allocates according to the measure q rather than p suffers a loss in expected logarithmic utility of D(p||q). Friedman and Sandow (2003b) extend these ideas and define a generalized entropy and a generalized Kullback-Leibler relative entropy for investors with general utility functions.
In Section 2, we review the generalizations and various properties of the generalized quantities introduced in Friedman and Sandow (2003b). In Section 3, we consider these quantities in a new and natural special case. We show that the resulting quantities, which are still more general than entropy and Kullback-Leibler relative entropy, share many of the properties of entropy and relative entropy, for example, the data processing inequality and the second law of thermodynamics. We note that one of the most popular families of utility functions from finance leads to a monotone transformation of the Tsallis entropy. In Section 4, we formulate a decision-theoretic approach to the problem of learning a probability measure from data. This approach, formulated in terms of the quantities described in Section 3, generalizes the minimum relative entropy approach to estimating probabilistic models. This more general formulation specifically incorporates the risk preferences of an investor who would use the probability measure that is to be learned in order to make decisions in a particular market setting. Moreover, we show that the probability measure that is "learned" is robust in the sense that an investor allocating according to the "learned" measure maximizes his expected utility in the most adverse environment consistent with certain data consistency constraints.

(U, O)-Entropy
According to utility theory (see, for example, Theorem 3, p. 31 of Ingersoll (1987)), a rational investor acts to maximize his expected utility based on the model he believes. Friedman and Sandow (2003b) use this fact to construct general information theoretic quantities that depend on the probability measures associated with potential states of a horse race (defined precisely below).
In this section, we review the horse race setting, notions from utility theory, and the definitions and some of the basic properties of the generalized entropy ((U, O)-entropy) and the generalized relative entropy (relative (U, O)-entropy) from Friedman and Sandow (2003b).

Probabilistic Model and Horse Race
We consider the random variable X with values, x, in a finite set X. Let q(x) denote prob{X = x} under the probability measure q. We adopt the viewpoint of an investor who uses the model to place bets on a horse race, which we define as follows:

Definition 1 A horse race is a market characterized by the discrete random variable X with possible states x ∈ X, where an investor can place a bet that X = x, which pays the odds ratio O(x) > 0 for each dollar wagered if X = x, and 0, otherwise.
We assume that our investor allocates the fraction b(x) of his wealth to the event X = x, where

∑_{x∈X} b(x) = 1. (1)

We note that we have not required b(x) to be nonnegative. For some utility functions, the investor may choose to "short" a particular horse. We also note that we can ease the constraint (1) so as to allow our investor to leverage or withhold cash. This will not lead to any significant change in the final results of this paper (see Friedman and Sandow (2004)). The investor's wealth after the bet is

W = b(x)O(x), (2)

where x denotes the winning horse. We note that an investor who allocates $1 of capital, investing B/O(x) on each state x, where

B = (∑_{x∈X} 1/O(x))^{-1}, (3)

receives the payoff B with certainty. This motivates the following definition:

Definition 2 The riskless bank account payoff, B, is given by

B = (∑_{x∈X} 1/O(x))^{-1}. (4)

In a horse race with superfair odds, B > 1; in a horse race with subfair odds (there is a "track take"), B < 1. In financial markets with positive "interest rates," B > 1. If the "interest rate" is zero, B = 1.
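The following minimal sketch illustrates Definition 2 (the odds ratios are hypothetical, not from the paper): allocating B/O(x) to each state produces the riskless payoff B.

```python
import numpy as np

O = np.array([2.5, 4.0, 5.0])   # hypothetical odds ratios O(x) > 0
B = 1.0 / np.sum(1.0 / O)       # riskless bank account payoff, equation (4)
b = B / O                       # the riskless allocation; b sums to one
assert np.isclose(b.sum(), 1.0)
assert np.allclose(b * O, B)    # post-bet wealth is B no matter which horse wins
```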

Utility and Optimal Betting Weights
In order to quantify the benefits, to an investor, of the information in the model q, we consider the investor's utility function, U .
Assumption 1 Our investor subscribes to the axioms of utility theory. The investor has a utility function, U, that (i) is strictly increasing, (ii) is strictly concave, (iii) is twice differentiable, (iv) has the property (0, ∞) ⊆ range(U′), i.e., there exist a 'blow-up point', W_b, with lim_{W↓W_b} U′(W) = ∞, and a 'saturation point', W_s, with lim_{W↑W_s} U′(W) = 0, and (v) is compatible with the market in the sense that W_b < B < W_s.
One can see easily that condition (v) just means that the wealth associated with the bank account lies in the domain of U. We note that many popular utility functions (for example, the logarithmic, exponential and power utilities; see Luenberger (1998)) are consistent with these conditions. According to utility theory, a rational investor who believes the measure q allocates so as to maximize the expectation, under q, of his utility function (as applied to his post-bet wealth). In conjunction with (2), this means that this investor allocates with

b*(q) = arg max_{b∈B} E_q[U(b(X)O(X))], (8)

where

E_q[U(b(X)O(X))] = ∑_{x∈X} q(x)U(b(x)O(x)) (9)

and

B = {b : ∑_{x∈X} b(x) = 1}. (10)

The following lemma states how the optimal betting weights can be computed:

Lemma 1 An expected utility maximizing investor who believes the probability measure q and invests in a horse race with odds ratios O will allocate

b*(x) = (1/O(x)) (U′)^{-1}(λ/(q(x)O(x))),

where (U′)^{-1} denotes the inverse of the function U′, and λ is the solution of the following equation:

∑_{x∈X} (1/O(x)) (U′)^{-1}(λ/(q(x)O(x))) = 1. (11)

Under the assumptions of this paper, the solution to (11) exists, is unique, and the optimal allocation, b*(q), depends continuously on q.
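Lemma 1 suggests a simple numerical scheme: since U′ is decreasing, the left-hand side of (11) is decreasing in λ, so λ can be found by bracketing. A minimal sketch (the utilities and numbers below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

def optimal_allocation(q, O, u_prime_inv):
    """Betting weights from Lemma 1: b*(x) = (1/O(x)) (U')^{-1}(lambda/(q(x)O(x))),
    with lambda chosen so that the weights sum to one, i.e., equation (11)."""
    excess = lambda lam: np.sum((1.0 / O) * u_prime_inv(lam / (q * O))) - 1.0
    lam = brentq(excess, 1e-12, 1e12)   # excess is decreasing in lambda
    return (1.0 / O) * u_prime_inv(lam / (q * O))

q = np.array([0.25, 0.25, 0.5])
O = np.array([2.5, 4.0, 5.0])

# Logarithmic utility: U'(W) = 1/W, so (U')^{-1}(y) = 1/y; Lemma 1 then gives b* = q.
assert np.allclose(optimal_allocation(q, O, lambda y: 1.0 / y), q)

# Power utility with kappa = 2: (U')^{-1}(y) = y**(-1/2).
b_pow = optimal_allocation(q, O, lambda y: y ** -0.5)
```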

Generalization of Entropy and Relative Entropy
We define a utility-based generalization of Kullback-Leibler relative entropy via:

Definition 3 The relative (U, O)-entropy from the probability measure p to the probability measure q is given by

D_{U,O}(p||q) = E_p[U(b*(p), O)] − E_p[U(b*(q), O)], (12)

where U(b, O) denotes the random variable U(b(X)O(X)).

Interpretation: relative (U, O)-entropy is the difference between (i) the expected (under the measure p) utility of the payoffs if we allocate optimally according to p, and (ii) the expected (under the measure p) utility of the payoffs if we allocate optimally according to the misspecified model, q.
Definition 4 The (U, O)-entropy of the probability measure p is given by

H_{U,O}(p) = E_p[U(O(X))] − E_p[U(b*(p), O)]. (13)

Interpretation: (U, O)-entropy is the difference between the expected utility of the payoffs for a clairvoyant who wins every race and the expected utility under the optimal allocation. H_{U,O}(p) and D_{U,O}(p||q) have important properties summarized in the following theorem.

Known Result 1 The relative (U, O)-entropy, D_{U,O}(p||q), and the (U, O)-entropy, H_{U,O}(p), have the following properties: (i) D_{U,O}(p||q) ≥ 0 with equality if and only if p = q, (ii) D_{U,O}(p||q) is a strictly convex function of p, (iii) H_{U,O}(p) ≥ 0, and (iv) H_{U,O}(p) is a strictly concave function of p.

Proof: See Theorem 2 in Friedman and Sandow (2003b).
Note that for U(W) = log(W), we recover the entropy and Kullback-Leibler relative entropy under any system of odds ratios. We note that, solving (12) for E_p[U(b*(q), O)] and substituting from (13), one can obtain the following decomposition for the expected utility, under the measure p, of an investor who allocates under the (misspecified) measure q:

E_p[U(b*(q), O)] = E_p[U(O(X))] − H_{U,O}(p) − D_{U,O}(p||q). (14)

On the right hand side of (14), the first term is the expected utility of an investor who wins every bet (a clairvoyant), the second term is the expected utility loss the investor incurs because the world (characterized by p) is uncertain, and the third term is the expected utility loss related to the investor's model misspecification.
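A quick numerical sanity check of (14) in the logarithmic case, where b*(q) = q, H_{U,O} reduces to the Shannon entropy, and D_{U,O}(p||q) = D(p||q) (the probabilities and odds below are hypothetical):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
O = np.array([2.5, 4.0, 5.0])                    # hypothetical odds ratios

lhs = np.sum(p * np.log(q * O))                  # E_p[U(b*(q), O)] with b*(q) = q
clairvoyant = np.sum(p * np.log(O))              # E_p[U(O(X))]
H_UO = clairvoyant - np.sum(p * np.log(p * O))   # (13); equals -sum(p log p) here
D_UO = np.sum(p * np.log(p / q))                 # Kullback-Leibler D(p||q)
assert np.isclose(lhs, clairvoyant - H_UO - D_UO)   # decomposition (14)
```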

Connection with Kullback-Leibler Relative Entropy
It so happens that relative (U, O)-entropy essentially reduces to Kullback-Leibler relative entropy for a logarithmic family of utilities:

Known Result 2 The relative (U, O)-entropy, D_{U,O}(p||q), is independent of the odds ratios, O, for any candidate model p and prior measure, q, if and only if the utility function, U, is a member of the logarithmic family

U(W) = γ_1 log(W − γ) + γ_2, (15)

where γ_1 > 0, γ_2 and γ < B are constants. In this case,

D_{U,O}(p||q) = γ_1 D(p||q). (16)

Proof: See Friedman and Sandow (2003a), Theorem 3.

It follows that relative (U, O)-entropy reduces, up to a multiplicative constant, to Kullback-Leibler relative entropy, if and only if the utility is a member of the logarithmic family (15).

U-Entropy
We have seen that D_{U,O}(p||q) ≥ 0 represents the expected gain in utility from allocating under the true measure p, rather than allocating according to the misspecified measure q, under market odds O, for an investor with utility U. One of the main goals of this paper is to explore what happens when we set q(x) equal to the homogeneous expected return measure,

q(x) = B/O(x). (17)

Readers familiar with finance will recognize this as the risk neutral pricing measure generated by the odds ratios. In finance, a risk neutral pricing measure is a measure under which the price of any perfectly hedgeable contingent claim is equal to the discounted (by the bank account) expectation of the payoff of the contingent claim. In the horse race setting, there is only one such measure, given by (17). We shall see in Section 4 that there are compelling reasons to consider this case. In this case,

O(x) = B/q(x), (18)

and

D_{U,B/q}(p||q) = ∑_{x∈X} p(x) U(b*(x)B/q(x)) − U(B), (19)

where b* = b*(p) is the optimal allocation under p. In this special case, D_{U,B/q}(p||q) can be interpreted as the excess performance from allocating according to the real world measure p over the (risk-neutral) measure q derived from the odds ratios. Likewise, under (18), in the special case where q is the uniform distribution, q(x) = 1/|X|, which we denote by 1/|X|, we obtain

H_{U,B|X|}(p) = U(B|X|) − ∑_{x∈X} p(x) U(b*(x)B|X|). (20)

For simplicity, we assume that B = 1 and U(B) = 0 from now on.
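For concreteness, a short illustration of (17) with hypothetical odds; the assertion checks that the homogeneous expected return measure is indeed a probability measure:

```python
import numpy as np

O = np.array([2.5, 4.0, 5.0])        # hypothetical odds ratios
B = 1.0 / np.sum(1.0 / O)            # bank account payoff (4)
q_rn = B / O                         # homogeneous expected return measure (17)
assert np.isclose(q_rn.sum(), 1.0)
```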

Definitions
Motivated by (19), we make the following definition:

Definition 5 The U-relative entropy from the probability measure p to the probability measure q is given by

D_U(p||q) = ∑_{x∈X} p(x) U(b*(x)/q(x)),

where b* = b*(p) is the optimal allocation for an investor who believes p and faces the odds ratios O(x) = 1/q(x).

Motivated by (20), we define the U-entropy:

Definition 6 The U-entropy of the probability measure p is given by

H_U(p) = U(|X|) − ∑_{x∈X} p(x) U(b*(x)|X|),

where b* = b*(p) is the optimal allocation for an investor who believes p and faces the odds ratios O(x) = |X|.

Of course, these specializations of (U, O)-entropy and relative (U, O)-entropy inherit all of the properties stated in Section 2. It is easy to show that for logarithmic utilities they reduce to entropy and Kullback-Leibler relative entropy, respectively, so these quantities are generalizations of entropy and Kullback-Leibler relative entropy. Next, we generalize the above definitions to conditional probability measures. To keep our notation simple, we use p(x) and p(y) to represent the probability distributions for the random variables X and Y, and we use both H_U(p) and H_U(X) to represent the U-entropy for the random variable X, which has probability measure p.
Definition 7 The conditional relative U-entropy from the probability measure p to the probability measure q is given by

D_U(p(y|x)||q(y|x)) = ∑_{x∈X} p(x) D_U(p(·|x)||q(·|x)).

Definition 8 The conditional U-entropy of Y given X, under the probability measure p, is given by

H_U(Y|X) = ∑_{x∈X} p(x) H_U(p(·|x)).

We note that we could have stated the preceding definitions as special cases of conditional (U, O)-entropy and conditional relative (U, O)-entropy, as in Friedman and Sandow (2003b). The latter quantities can be interpreted in the context of a conditional horse race.
The following definition generalizes mutual information.
Definition 9 The mutual U-information between the random variables X and Y is defined as

I_U(X; Y) = D_U(p(x, y)||p(x)p(y)).

Properties of U-Entropy and Relative U-Entropy

As a special case of Known Result 1, we have

Corollary 1 The generalized relative entropy, D_U(p||q), and the generalized entropy, H_U(p), have the following properties: (i) D_U(p||q) ≥ 0 with equality if and only if p = q, (ii) D_U(p||q) is a strictly convex function of p, (iii) H_U(p) ≥ 0, and (iv) H_U(p) is a strictly concave function of p.

We now establish that many properties that hold for the classical quantities of information theory also hold in this more general setting.
Proof. By definition. 2

Corollary 3 (Shuffles increase entropy) If T is a random shuffle (permutation) of a deck of cards, X is the initial position of the cards in the deck, and the choice of shuffle T is independent of X, then

H_U(TX) ≥ H_U(X),

where TX is the permutation of the deck induced by the shuffle T.
Proof. We follow the proof sketched on page 48 of Cover and Thomas (1991). First, notice that for any fixed permutation T = t, from the definition of H_U, we have H_U(tX) = H_U(X), since we have only reordered the states but have not changed the probabilities associated with the states. So

H_U(TX) ≥ H_U(TX|T) = H_U(X). 2
Proof. We prove the first inequality by noting that it follows from Corollary 2 and from Theorem 2. The second inequality follows from the stationarity of the process. 2

Corollary 5 The conditional entropy H_U(X_n|X_1) increases with n for a stationary Markov process.
Proof. The proof is essentially the same as that given on page 36 of Cover and Thomas (1991). 2

In general, the chain rule for relative entropy does not hold for relative U-entropy, i.e., in general,

D_U(p(x, y)||q(x, y)) ≠ D_U(p(x)||q(x)) + D_U(p(y|x)||q(y|x)).
However, we do have the following two inequalities.
Theorem 3

Proof. By definition. 2

Theorem 4 (More refined horse races have higher relative U-entropies) Let T : X → Y be any onto function, and let p_T(y), q_T(y) be the induced probabilities on Y, i.e.,

p_T(y) = ∑_{x: T(x)=y} p(x) and q_T(y) = ∑_{x: T(x)=y} q(x).

Then

D_U(p||q) ≥ D_U(p_T||q_T).

Proof. 2

Theorem 4 can also be regarded as a special case of Theorem 3, part (i).
Theorem 5 (Second law of thermodynamics) Let µ_n and µ′_n be the probability distributions (at time n) of two different Markov chains arising from the same transition mechanism but from different starting distributions. Then D_U(µ_n||µ′_n) decreases with n.
Proof. Let the corresponding joint mass functions be denoted by p(x_n, x_{n+1}) and q(x_n, x_{n+1}), and let r(·|·) denote the probability transition function for the Markov chain. Then p(x_n, x_{n+1}) = µ_n(x_n) r(x_{n+1}|x_n) and q(x_n, x_{n+1}) = µ′_n(x_n) r(x_{n+1}|x_n). (Here, we have used Jensen's inequality and the fact that the two chains share the same transition function r.) By Theorem 3, D_U(µ_{n+1}||µ′_{n+1}) ≤ D_U(µ_n||µ′_n). 2

Corollary 6 The relative U-entropy D_U(µ_n||µ) between a distribution µ_n on the states at time n and a stationary distribution µ decreases with n.
Proof: This is a special case of Theorem 5, where µ′_n = µ for any n. 2

Corollary 7 If the uniform distribution 1/|X| is a stationary distribution, then the U-entropy H_U(µ_n) increases with n.
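As an empirical illustration of Theorem 5, the sketch below iterates two hypothetical chains with a common transition matrix and checks that the relative U-entropy is nonincreasing; it uses the power-utility closed form for D_U derived in the Power Utility section below:

```python
import numpy as np

KAPPA = 2.0   # hypothetical power-utility parameter

def d_u(p, q, kappa=KAPPA):
    # D_U(p||q) for the power utility (closed form; see the Power Utility section)
    z = np.sum(p ** (1 / kappa) * q ** (1 - 1 / kappa))
    return (z ** kappa - 1.0) / (1.0 - kappa)

R = np.array([[0.9, 0.1, 0.0],    # hypothetical Markov transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
mu, mu2 = np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.8])
prev = np.inf
for n in range(25):
    d = d_u(mu, mu2)
    assert d <= prev + 1e-12       # monotone decrease, as Theorem 5 asserts
    prev, mu, mu2 = d, mu @ R, mu2 @ R
```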
Theorem 6

Theorem 7 (Data processing inequality) If the random variables X, Y, Z form a Markov chain X → Y → Z, then I_U(X; Y) ≥ I_U(X; Z).

Proof. We will first show that I_U(X; Y) = I_U(X; (Y, Z)). From the previous result, I_U(X; (Y, Z)) ≤ I_U(X; Y); on the other hand, I_U(X; (Y, Z)) ≥ I_U(X; Y). The data processing inequality then follows, since I_U(X; (Y, Z)) ≥ I_U(X; Z). 2

Power Utility
Consider the power utility, given by

U_κ(W) = (W^{1−κ} − 1)/(1 − κ), κ > 0, κ ≠ 1.

We note that, for each fixed wealth level, taking the limit as κ → 1, the power utility approaches the logarithmic utility.
In this section, we compute the power utility's U-entropy and relative U-entropy, explore limiting behavior, and make comparisons with the Tsallis entropy. We note that the power utility is commonly used (see, for example, Morningstar (2002)), since it has desirable optimality properties (see Stutzer (2003)) and the constant relative risk aversion property. The latter property follows from the fact that, for the power utility, the relative risk aversion coefficient (see, for example, Luenberger (1998)) is

−W U″_κ(W)/U′_κ(W) = κ. (66)

From Friedman and Sandow (2003b), with O(x) = 1/q(x), we have

b*(x) = λ^{−1/κ} p(x)^{1/κ} q(x)^{1−1/κ}, (67)

where

λ^{−1/κ} = (∑_{x∈X} p(x)^{1/κ} q(x)^{1−1/κ})^{−1}. (68)

After some algebra, we obtain

D_{U_κ}(p||q) = ((∑_{x∈X} p(x)^{1/κ} q(x)^{1−1/κ})^κ − 1)/(1 − κ). (69)
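A numerical cross-check of (69) against Definition 5 with the weights (67)-(68), plus the κ → 1 limit, with hypothetical p and q:

```python
import numpy as np

def d_u_power(p, q, kappa):
    """Relative U-entropy (69) for the power utility U(W) = (W**(1-kappa) - 1)/(1-kappa)."""
    z = np.sum(p ** (1 / kappa) * q ** (1 - 1 / kappa))
    return (z ** kappa - 1.0) / (1.0 - kappa)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
kappa = 2.0

# Definition 5 directly: optimal weights (67)-(68), then expected utility of b*/q.
b = p ** (1 / kappa) * q ** (1 - 1 / kappa)
b /= b.sum()
direct = np.sum(p * ((b / q) ** (1 - kappa) - 1.0) / (1.0 - kappa))
assert np.isclose(direct, d_u_power(p, q, kappa))

# As kappa -> 1, (69) approaches the Kullback-Leibler relative entropy.
kl = np.sum(p * np.log(p / q))
assert np.isclose(d_u_power(p, q, 1.0 + 1e-6), kl, atol=1e-4)
```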

U-Entropy for Large Relative Risk Aversion
Let us consider the limit of infinite risk aversion, which for the power utility corresponds to κ → ∞, as can be seen from (66). It follows from (67) and (68) that a power utility investor with infinite risk aversion invests all his money in the bank account, i.e., allocates according to

b*(x) = q(x),

no matter what his belief-measure, p, is, so that his after-bet wealth is B = 1 with certainty. Such an investor makes no use of the information provided by the model, so no model can outperform another and, therefore,

lim_{κ→∞} D_{U_κ}(p||q) = 0

for any measures p and q; the same conclusion holds, in the limit of infinite relative risk aversion, for any utility function U. What happens if the relative risk aversion of a power-utility investor, i.e., κ, is large but finite?
To answer this question, we expand (69) as follows.
D_{U_κ}(p||q) = (1/(κ−1)) (1 − e^{∑_x q(x) log(p(x)/q(x))} (1 + (1/(2κ)) var_q(log(p(x)/q(x))))) + O(1/κ³),

where the last equality follows from the fact that

(∑_x p(x)^{1/κ} q(x)^{1−1/κ})^κ = e^{∑_x q(x) log(p(x)/q(x))} (1 + (1/(2κ)) var_q(log(p(x)/q(x))) + O(1/κ²)).

Noting that ∑_x q(x) log(p(x)/q(x)) = −D(q||p), we have thus related, for large κ, D_{U_κ}(p||q) to Kullback-Leibler relative entropy.
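A numerical check of this asymptotic relation at leading order, i.e., (κ − 1) D_{U_κ}(p||q) → 1 − e^{−D(q||p)}, with hypothetical p and q:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])

def d_u_power(p, q, kappa):
    z = np.sum(p ** (1 / kappa) * q ** (1 - 1 / kappa))
    return (z ** kappa - 1.0) / (1.0 - kappa)

d_qp = np.sum(q * np.log(q / p))                 # D(q||p)
limit = 1.0 - np.exp(-d_qp)
for kappa in (1e2, 1e3, 1e4):
    print(kappa, (kappa - 1) * d_u_power(p, q, kappa), limit)
```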

Relation with Tsallis Entropy
Another generalization of entropy, which recently has been a topic of increasing interest in physics, is the Tsallis entropy (see Tsallis (1988)). The Tsallis entropy provides a theoretical framework for a nonextensive thermodynamics, i.e., for a thermodynamics for which the entropy of a system made up of two independent subsystems is not simply the sum of the entropies of the subsystems.
Examples of physical applications of such a theory might be astronomical self-gravitating systems (see, for example, Plastino and Plastino (1999)). The Tsallis entropy can be generalized to the Tsallis relative entropy (see, for example, Tsallis and Brigatti (2004)) as follows:

D^T_α(p||q) = (∑_x p(x)^α q(x)^{1−α} − 1)/(α − 1).

This relative entropy recovers the standard Boltzmann-Gibbs (Kullback-Leibler) form as α → 1. It turns out that there is a simple monotonic relationship between our relative U-entropy and the relative Tsallis entropy: setting α = 1/κ,

D_{U_κ}(p||q) = ((1 + (α − 1) D^T_α(p||q))^{1/α} − 1)/(1 − 1/α),

which is a strictly increasing function of D^T_α(p||q). Also note that

H_{U_κ}(p) = |X|^{1−κ} (1 − (∑_x p(x)^{1/κ})^κ)/(1 − κ).

Comparing this to the Tsallis entropy,

S_α(p) = (1 − ∑_x p(x)^α)/(α − 1),

we have, again with α = 1/κ, the following simple monotonic relationship between the two entropies:

H_{U_κ}(p) = |X|^{1−κ} (1 − (1 − (α − 1) S_α(p))^{1/α})/(1 − κ).

Application: Probability Estimation via Relative U-Entropy Minimization

Information theoretic ideas such as the maximum entropy principle (see, for example, Jaynes (1957) and Golan et al. (1996)) or the closely related minimum relative entropy (MRE) principle (see, for example, Lebanon and Lafferty (2001)) have assumed major roles in statistical learning.
We discuss these principles from the point of view of a model user who would use a probabilistic model to make bets in the horse race setting of Section 2. As we shall see, probability models estimated by these principles are robust in the sense that they maximize "worst case" model performance, in terms of expected utility, for a horse race investor with the three parameter logarithmic utility (15) in a particular market (horse race).
As such, probability models estimated by MRE methods are special in the sense that they are tailored to the risk preferences of logarithmic-family-utility investors. However, not all risk preferences can be expressed by utility functions in the logarithmic family. In the financial community, a substantial percentage, if not a majority, of practitioners implement utility functions outside the logarithmic family (15) (see, for example, Morningstar (2002)). This is not surprising; other utility functions may more accurately reflect the risk preferences of certain investors, or possess important defining properties or optimality properties.
In order to tailor the statistical learning problem to the specific utility function of the investor, one can replace the usual MRE problem with a generalization, based on relative U-entropy, as indicated below. Before stating results for minimum relative U-entropy (MRUE) probability estimation, we review results for MRE probability estimation.

MRE and Dual Formulation
The usual MRE formulation (see, for example, Lebanon and Lafferty (2001) and Kitamura and Stutzer (1997)) is given by:

Problem 1 Find

p* = arg min_{p∈Q} D(p||p_0)

subject to the feature constraints

E_p[f_j(X)] = E_p̃[f_j(X)], j = 1, ..., J, (79)

where each f_j represents a feature (i.e., a function that maps X to R), p_0 is a probability measure, p̃ represents the empirical measure, and Q is the set of all probability measures.
The feature constraints (79) can be viewed as a mechanism to enforce "consistency" between the model and the data. Typically, p_0 is interpreted as representing prior beliefs, before modification in light of model training data. Below, we discuss what happens when p_0 is a benchmark measure or the pricing measure generated by the odds ratios. The solution to Problem 1 will be the measure satisfying the feature constraints that is closest to p_0 (in the sense of relative entropy). In many settings, the above (primal) problem is not as easy to solve numerically as the following dual problem:

Problem 2 Find

β* = arg max_β ℓ(β),

where

ℓ(β) = ∑_{x∈X} p̃(x) log p_β(x), with p_β(x) = (1/Z_β) p_0(x) e^{∑_j β_j f_j(x)} and Z_β = ∑_{x∈X} p_0(x) e^{∑_j β_j f_j(x)},

is the log-likelihood function, and p̃(x) denotes the empirical probability that X = x.
We note that the primal problem (Problem 1) and the dual problem (Problem 2) have the same solution, in the sense that p* = p_{β*}.
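A minimal sketch of the MRE dual, assuming the standard exponential-family form of p_β above; the prior, feature, and empirical measure are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

p0 = np.array([0.5, 0.3, 0.2])         # hypothetical prior measure
f = np.array([[0.0, 1.0, 2.0]])        # one feature, J = 1
p_emp = np.array([0.2, 0.5, 0.3])      # hypothetical empirical measure

def p_beta(beta):
    w = p0 * np.exp(beta @ f)          # p_beta(x) proportional to p0(x) exp(sum_j beta_j f_j(x))
    return w / w.sum()

neg_loglik = lambda beta: -np.sum(p_emp * np.log(p_beta(beta)))
beta_star = minimize(neg_loglik, np.zeros(1), tol=1e-12).x
p_star = p_beta(beta_star)
# At the optimum, the feature constraints (79) hold: E_{p*}[f] = E_{p_emp}[f].
assert np.allclose(f @ p_star, f @ p_emp, atol=1e-4)
```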
Robustness of MRE when the Prior Measure is the Odds Ratio Pricing Measure

In a certain circumstance, the MRE measure (i.e., the solution of Problem 1) has a desirable robustness property, which we discuss in this section. First, we state the following minimax result from Grünwald and Dawid (2004).
Known Result 3 If there exists a probability measure π ∈ K with π_x > 0, ∀x ∈ X, then

inf_{p∈K} sup_{p′∈Q} E_p[log p′(X)] = sup_{p′∈Q} inf_{p∈K} E_p[log p′(X)], (84)

where Q is the set of all possible probability measures and K is a closed convex set, such as the set of all p ∈ Q that satisfy the feature constraints (79). Moreover, a saddle point is attained.
Replacing log p′ with log p′ − log p_0 in (84) doesn't change the finiteness or the convexity properties of the function under the inf sup or the sup inf. Since the proof of Known Result 3 relies only on those properties of this function, we have

inf_{p∈K} sup_{p′∈Q} E_p[log(p′(X)/p_0(X))] = sup_{p′∈Q} inf_{p∈K} E_p[log(p′(X)/p_0(X))], (85)

and a saddle point is attained here as well. Further using the fact that

sup_{p′∈Q} E_p[log p′(X)] = E_p[log p(X)], (86)

which follows from the information inequality (see, for example, Cover and Thomas (1991)), we obtain

inf_{p∈K} E_p[log(p(X)/p_0(X))] = sup_{p′∈Q} inf_{p∈K} E_p[log(p′(X)/p_0(X))]. (87)

Using further b*(p) = p for U(W) = log W, we see that (87) leads to the following corollary:

Corollary 9 If U(W) = log W and

p_0(x) = 1/O(x), ∀x ∈ X, (88)

then

min_{p′∈K} E_{p′}[U(b*(p*)(X)O(X))] = max_{p∈Q} min_{p′∈K} E_{p′}[U(b*(p)(X)O(X))], (89)

where p* is the solution of Problem 1, Q is the set of all possible probability measures, and K is the set of all p ∈ Q that satisfy the feature constraints (79). Thus, if p_0 is given by the pricing measure generated by the odds ratios, then the MRE measure, p*, is robust in the following sense: by choosing p*, a rational (expected-utility optimizing) investor maximizes his (model-based optimal) expected utility in the most adverse environment consistent with the feature constraints.
Here, p_0 has a specific and non-traditional interpretation. It no longer represents prior beliefs about the real world measure that we estimate with p*. Rather, it represents the risk neutral pricing measure consistent with the odds ratios. If a logarithmic utility investor wants the measure that he estimates to have the optimality property (89), he is not free to choose p_0 to represent his prior beliefs, in general. However, if he sets p_0 to the risk neutral pricing measure determined by the odds ratios, then the optimality property (89) will hold. In this case, p* can be thought of as a function of the odds ratios, rather than an update of prior beliefs.
It is this property, (89), that we seek to generalize to accommodate risk preferences more general than the ones compatible with a logarithmic utility function.

General Risk Preferences and Robust Relative Performance
In this section, we state a known result that we will use below to generalize the robustness property (89).
The MRE problem is generalized in Friedman and Sandow (2003a) to a minimum relative (U, O)-entropy problem. The solution to this generalized problem is robust in the following sense:

Known Result 4 If p* solves min_{p∈K} D_{U,O}(p||p_0), then

min_{p′∈K} (E_{p′}[U(b*(p*)(X)O(X))] − E_{p′}[U(b*(p_0)(X)O(X))]) = max_{p∈Q} min_{p′∈K} (E_{p′}[U(b*(p)(X)O(X))] − E_{p′}[U(b*(p_0)(X)O(X))]), (90)

where Q is the set of all possible probability measures and K is the set of all p ∈ Q that satisfy the feature constraints (79).
This is only a slight modification of a result from Grünwald and Dawid (2002) and is based on the logic from Topsøe (1979). A proof can be found in Friedman and Sandow (2003a). Known Result 4 states the following: minimizing D_{U,O}(p||p_0) with respect to p ∈ K is equivalent to searching for the measure p* ∈ Q that maximizes the worst-case (with respect to the potential true measures, p′ ∈ K) relative model performance (over the benchmark model, p_0), in the sense of expected utility. The optimal model, p*, is robust in the sense that for any other model, p, the worst (over potential true measures p′ ∈ K) relative performance is even worse than the worst-case relative performance under p*. We do not know the true measure; an investor who makes allocation decisions based on p* is prepared for the worst that "nature" can offer.

Robust Absolute Performance and MRUE
We now address the following question: Is it possible to formulate a generalized MRE problem for which the solution is robust in the absolute sense of (89), as opposed to the relative sense in Known Result 4? This is indeed possible, as the following corollary, which follows directly from Known Result 4 (applied with p_0(x) = 1/O(x), so that b*(p_0)(x)O(x) = 1 and the benchmark terms vanish, since U(1) = 0), indicates:

min_{p′∈K} E_{p′}[U(b*(p*)(X)O(X))] = max_{p∈Q} min_{p′∈K} E_{p′}[U(b*(p)(X)O(X))], (91)

where Q is the set of all possible probability measures and K is the set of all p ∈ Q that satisfy the feature constraints (79).
This corollary states that, by choosing p*, a rational (expected-utility optimizing) investor maximizes his (model-based optimal) expected utility in the most adverse environment consistent with the feature constraints. Therefore, an investor with a general utility function, who bets on this horse race and wants to maximize his worst case (in the sense of (91)) expected utility, can set p_0 to the risk neutral pricing measure determined by the odds ratios and solve the following problem:

Problem 3 (MRUE Problem) Find

p* = arg min_{p∈Q} D_U(p||p_0)

subject to the feature constraints

E_p[f_j(X)] = E_p̃[f_j(X)], j = 1, ..., J,

where each f_j represents a feature (i.e., a function that maps X to R), p_0 represents the prior measure, p̃ represents the empirical measure, and Q is the set of all probability measures.
As before, p_0, here, has a specific and non-traditional interpretation. It no longer represents prior beliefs about the real world measure that we estimate with p*. Rather, it represents the risk neutral pricing measure consistent with the odds ratios. If an investor wants the measure that he estimates to have the optimality property (91), he is not free to choose p_0 to represent his prior beliefs, in general. To attain the optimality property (91), he can set p_0 to represent the risk neutral pricing measure determined by the odds ratios. The MRUE measure, p*, is a function of the odds ratios (which are incorporated in the measure p_0). In order to obtain the dual to Problem 3, we specialize, in Problem 3 of Friedman and Sandow (2003a), relative (U, O)-entropy to relative U-entropy by setting the "prior" measure to the risk neutral pricing measure generated by the odds ratios, p_0(x) = 1/O(x). We obtain a dual problem, Problem 4, which is easy to interpret. The objective function of Problem 4 is the utility (of the expected utility maximizing investor) averaged over the training sample. Thus, our dual problem is a maximization of the training-sample averaged utility, where the utility function, U, is the utility function on which the U-entropy depends. We note that the primal problem (Problem 3) and the dual problem (Problem 4) have the same solution, in the sense that p* = p_{β*}. The dual problem is a J-dimensional (J is the number of features), unconstrained, concave maximization problem (see Boyd and Vandenberghe (2004), p. 159 for the concavity). The primal problem, Problem 3, on the other hand, is an m-dimensional (m is the number of states) convex minimization with linear constraints. The dual problem, Problem 4, may be easier to solve than the primal problem, Problem 3, if m > J. For conditional probability models, the dual problem will always be easier to solve than the primal problem.
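For completeness, here is a sketch of solving the MRUE primal (Problem 3) directly, assuming the power utility and the closed form (69); the odds, feature, and data are hypothetical, and a general-purpose constrained optimizer stands in for the dual approach described above:

```python
import numpy as np
from scipy.optimize import minimize

kappa = 2.0
O = np.array([2.0, 3.0, 6.0])             # hypothetical odds ratios (here B = 1)
p0 = (1.0 / O) / np.sum(1.0 / O)          # risk neutral pricing measure
f = np.array([[0.0, 1.0, 2.0]])           # one feature, J = 1
f_emp = np.array([0.9])                   # hypothetical empirical feature expectation

def d_u(p):                               # relative U-entropy (69) for the power utility
    z = np.sum(p ** (1 / kappa) * p0 ** (1 - 1 / kappa))
    return (z ** kappa - 1.0) / (1.0 - kappa)

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: f @ p - f_emp}]
res = minimize(d_u, p0, bounds=[(1e-9, 1.0)] * len(O), constraints=cons)
p_star = res.x                            # the MRUE measure
```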
As we have seen above, in cases where odds ratios are available, the MRUE problem yields a solution with the optimality property (91). However, in real statistical learning applications, as mentioned above, it is often the case that odds ratios are not observable. In this case, the builder of a statistical learning model can use assumed odds ratios, on which the model will depend. Given the relation p_0(x) = B/O(x), as a perhaps more convenient alternative, the model-builder can directly specify a risk neutral pricing measure consistent with the assumed odds ratios. Either way, the model will possess the optimality property (91) under the odds ratios consistent with the assumption. The necessity of providing a risk neutral pricing measure, perhaps, imposes an onus on the MRUE modeler comparable to the onus of finding a prior for the MRE modeler. However, we note that, as for MRE models, the importance of p_0 will diminish as the number of feature constraints grows. For logarithmic-family utility functions, as a practical matter, even though the MRE and MRUE models require different interpretations and, possibly, different choices for p_0, the numerical implementations will be the same under the appropriate assumptions.