Article

A Utility-Based Approach to Some Information Measures

Craig Friedman, Jinggang Huang and Sven Sandow

Standard & Poor’s, 55 Water Street, 46th Floor, New York, NY 10041, USA
*
Author to whom correspondence should be addressed.
Entropy 2007, 9(1), 1-26; https://doi.org/10.3390/e9010001
Submission received: 23 May 2006 / Accepted: 5 January 2007 / Published: 20 January 2007

Abstract

We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem – probability estimation – in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.

1 Introduction

It is well known that there are a number of ways to motivate the fundamental quantities of information theory, for example:
  • Entropy can be defined as essentially the only quantity that is consistent with a plausible set of information axioms (the approach taken by Shannon (1948) for a communication system – see also Csiszár and Körner (1997) for additional axiomatic approaches).
  • The definition of entropy is related to the notion of entropy from thermodynamics (see, for example, Brillouin (1962) and Jaynes (1957)).
There is a substantial literature on various generalizations of entropy, for example, the Tsallis entropy (with an extensive bibliography in Group of Statistical Physics (2006)), which was introduced by Cressie and Read (1984) and Cressie and Read (1989) and used for statistical decisions. There is a literature on income inequality which generalizes entropy (see, for example, Cowell (1981)). A generalization of relative entropy, relative Tsallis entropy (alpha-divergence), was introduced by Liese and Vajda (1987). Österreicher and Vajda (1993), Stummer (1999), Stummer (2001), and Stummer (2004) discussed Tsallis entropy and optimal Bayesian decisions. A number of these generalizations are closely related to the material in this paper. In this paper, we discuss entropy and Kullback-Leibler relative entropy from a decision theoretic point of view; in particular, we review utility-based motivations and generalizations of these information theoretic quantities, based on fundamental principles from decision theory applied in a particular “market” context. As we shall see, some generalizations share important properties with entropy and relative entropy. We then formulate a statistical learning problem (estimation of probability measures) in terms of these generalized quantities and discuss optimality properties of the solution.
The connection between information theoretic quantities and gambling is well known. Cover and Thomas (1991) (see Theorem 6.1.2), for example, show that in the horse race setting, an expected wealth growth rate maximizing investor, who correctly understands the probabilities, has expected wealth growth rate equal to that of a clairvoyant (someone who wins every bet) minus the entropy of the horse race. Thus, the entropy can be interpreted as the difference between the wealth growth rates attained by a clairvoyant and an expected wealth growth rate maximizing investor who allocates according to the true measure p. It follows from Cover and Thomas (1991), Theorem 6.1.2 and Equation (6.13), that an investor who allocates according to the measure q rather than p suffers a loss in expected wealth growth rate equal to the Kullback-Leibler relative entropy D(p||q).
Investors who maximize expected wealth growth rates can be viewed as expected logarithmic utility maximizers; thus, the entropy can be interpreted as the difference in expected logarithmic utility attained by a clairvoyant and by an expected logarithmic utility maximizing investor who allocates according to the true measure p. By similar reasoning, an investor who allocates according to the measure q rather than p suffers a loss in expected logarithmic utility of D(p||q). Friedman and Sandow (2003b) extend these ideas and define a generalized entropy and a generalized Kullback-Leibler relative entropy for investors with general utility functions.
In Section 2, we review the generalizations and various properties of the generalized quantities introduced in Friedman and Sandow (2003b). In Section 3 we consider these quantities in a new and natural special case. We show that the resulting quantities, which are still more general than entropy and Kullback-Leibler relative entropy, share many of the properties of entropy and relative entropy, for example, the data processing inequality and the second law of thermodynamics. We note that one of the most popular families of utility functions from finance leads to a monotone transformation of the Tsallis entropy. In Section 4 we formulate a decision-theoretic approach to the problem of learning a probability measure from data. This approach, formulated in terms of the quantities described in Section 3, generalizes the minimum relative entropy approach to estimating probabilistic models. This more general formulation specifically incorporates the risk preferences of an investor who would use the probability measure that is to be learned in order to make decisions in a particular market setting. Moreover, we show that the probability measure that is “learned” is robust in the sense that an investor allocating according to the “learned” measure maximizes his expected utility in the most adverse environment consistent with certain data consistency constraints.

2 Entropy 09 00001 i018-Entropy

According to utility theory (see, for example, Theorem 3, p. 31 of Ingersoll (1987)), a rational investor acts to maximize his expected utility based on the model he believes. Friedman and Sandow (2003b) use this fact to construct general information theoretic quantities that depend on the probability measures associated with potential states of a horse race (defined precisely below). In this section, we review the horse race setting, notions from utility theory, and the definitions and some basic properties of the generalized entropy (Entropy 09 00001 i018-entropy) and the generalized relative entropy (relative Entropy 09 00001 i018-entropy) from Friedman and Sandow (2003b).

2.1 Probabilistic Model and Horse Race

We consider the random variable X with values, x, in a finite set Entropy 09 00001 i019.1 Let q(x) denote prob{X = x} under the probability measure q. We adopt the viewpoint of an investor who uses the model to place bets on a horse race, which we define as follows:2
Definition 1
A horse race is a market characterized by the discrete random variable X with possible states Entropy 09 00001 i020, where an investor can place a bet that X = x, which pays the odds ratio Entropy 09 00001 i021 for each dollar wagered if X = x, and 0, otherwise.
We assume that our investor allocates b(x) to the event X = x, where
$$\sum_x b(x) = 1. \qquad (1)$$
We note that we have not required b(x) to be nonnegative. For some utility functions, the investor may choose to “short” a particular horse. We also note that we can ease the constraint (1) so as to allow our investor to leverage or withhold cash. This will not lead to any significant change in the final results of this paper (see Friedman and Sandow (2004)).
The investor’s wealth after the bet is
$$W = b(X)\,O(X), \qquad (2)$$
where X denotes the winning horse.
We note that an investor who allocates $1 of capital, investing
$$b(x) = \frac{B}{O(x)}$$
in each state x, where
$$\sum_x \frac{B}{O(x)} = 1,$$
receives the payoff B with certainty. This motivates the following definition:
Definition 2 
The riskless bank account payoff, B, is given by
$$B = \left(\sum_x \frac{1}{O(x)}\right)^{-1}.$$
In a horse race with superfair odds,3 B > 1; in a horse race with subfair odds (there is a “track take”), B < 1. In financial markets, with positive “interest rates,” B > 1. If the “interest rate” is zero, B = 1.
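To make Definition 2 concrete, here is a minimal numerical sketch (the odds ratios below are hypothetical and happen to be fair, so that B = 1); it computes B and the allocation b(x) = B/O(x) that replicates the bank account:

```python
import numpy as np

# Hypothetical odds ratios O(x) for a three-horse race (illustrative values only).
O = np.array([2.0, 3.0, 6.0])

# Riskless bank account payoff (Definition 2): B = 1 / sum_x 1/O(x).
B = 1.0 / np.sum(1.0 / O)

# Allocating b(x) = B / O(x) costs sum_x b(x) = 1 and pays b(x) * O(x) = B
# no matter which horse wins.
b_riskless = B / O

print("B =", B)                      # here B = 1.0: these particular odds are fair
print("payoffs:", b_riskless * O)    # equals B in every state
print("total stake:", b_riskless.sum())
```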

2.2 Utility and Optimal Betting Weights

In order to quantify the benefits, to an investor, of the information in the model q, we consider the investor’s utility function, U.
Assumption 1 
Our investor subscribes to the axioms of utility theory.4 The investor has a utility function, U : (Wb,Ws) → R, that
(i) 
is strictly concave,
(ii) 
is twice differentiable,
(iii) 
is strictly monotone increasing,
(iv) 
has the property (0, ∞) ⊆ range(U′), i.e., there exists a ’blow-up point’, Wb, with
$$\lim_{W \,\downarrow\, W_b} U'(W) = \infty,$$
and a ’saturation point’, Ws, with
$$\lim_{W \,\uparrow\, W_s} U'(W) = 0,$$
and
(v) 
is compatible with the market in the sense that Wb < B < Ws.
One can see easily that condition (v) just means that the wealth associated with the bank account lies in the domain of U.
We note that many popular utility functions (for example, the logarithmic, exponential and power utilities (see Luenberger (1998)) are consistent with these conditions.
According to Utility Theory, a rational investor who believes the measure q allocates so as to maximize the expectation, under q, of his utility function (as applied to his post-bet wealth). In conjunction with (2), this means that this investor allocates with
$$b^*(q) = \arg\max_{b \in \mathcal{B}} \; \mathrm{E}_q\!\left[U\big(b(X)\,O(X)\big)\right],$$
where
$$\mathcal{B} = \Big\{\, b : \sum_x b(x) = 1 \,\Big\}$$
and
$$\mathrm{E}_q\!\left[U\big(b(X)\,O(X)\big)\right] = \sum_x q(x)\,U\big(b(x)\,O(x)\big).$$
The following lemma states how the optimal betting weights can be computed:
Lemma 1 
An expected utility maximizing investor who believes the probability measure q and invests in a horse race with odds ratios O will allocate
$$b^*_x(q) = \frac{1}{O(x)}\,(U')^{-1}\!\left(\frac{\lambda}{q(x)\,O(x)}\right),$$
where $(U')^{-1}$ denotes the inverse of the function U′, and λ is the solution of the following equation:
$$\sum_x \frac{1}{O(x)}\,(U')^{-1}\!\left(\frac{\lambda}{q(x)\,O(x)}\right) = 1. \qquad (11)$$
Under the assumptions of this paper, the solution to (11) exists, is unique, and the optimal allocation, b*(q), depends continuously on q.
Proof
See, for example, Friedman and Sandow (2003b), Theorem 1.
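As an illustration of Lemma 1 (a sketch, not the authors' code), the betting weights can be computed numerically by solving (11) for λ with a root finder. The exponential utility U(W) = −exp(−aW), the parameter a, and the measures below are our own illustrative choices; any utility satisfying Assumption 1 could be substituted via its (U′)⁻¹.

```python
import numpy as np
from scipy.optimize import brentq

def optimal_allocation(q, O, uprime_inv):
    """Betting weights of Lemma 1 (a sketch, not the authors' code):
    b*(x) = (1/O(x)) * (U')^{-1}(lambda / (q(x) O(x))),
    with lambda chosen so that the weights sum to one, as in (11)."""
    def excess_stake(lam):
        return np.sum((1.0 / O) * uprime_inv(lam / (q * O))) - 1.0
    # (U')^{-1} is decreasing, so excess_stake is decreasing in lambda;
    # bracket a sign change and solve (11) by Brent's method.
    lo, hi = 1e-8, 1.0
    while excess_stake(lo) < 0:      # total stake below 1: lambda too large, shrink lo
        lo /= 10.0
    while excess_stake(hi) > 0:      # total stake above 1: lambda too small, grow hi
        hi *= 10.0
    lam = brentq(excess_stake, lo, hi)
    return (1.0 / O) * uprime_inv(lam / (q * O))

# Illustration with an exponential utility U(W) = -exp(-a W) (one of the standard
# utilities mentioned above): U'(W) = a exp(-a W), so (U')^{-1}(y) = -log(y / a) / a.
a = 2.0
uprime_inv = lambda y: -np.log(y / a) / a

q = np.array([0.5, 0.3, 0.2])        # believed measure (hypothetical)
O = np.array([2.5, 3.5, 4.5])        # odds ratios (hypothetical)
b = optimal_allocation(q, O, uprime_inv)
print(b, b.sum())   # weights sum to 1; for some utilities/odds a weight can be negative ("shorting")
```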

2.3 Generalization of Entropy and Relative Entropy

We define a utility-based generalization of Kullback-Leibler relative entropy via:
Definition 3
The relative Entropy 09 00001 i018-entropy from the probability measure p to the probability measure q is given by:
Entropy 09 00001 i015
Interpretation: relative Entropy 09 00001 i018-entropy is the difference between
(i) 
the expected (under the measure p) utility of the payoffs if we allocate optimally according to p, and,
(ii) 
the expected (under the measure p) utility of the payoffs if we allocate optimally according to the misspecified model, q.
Definition 4
The Entropy 09 00001 i018-entropy of the probability measure p is given by:
Entropy 09 00001 i016
Interpretation: Entropy 09 00001 i018-entropy is the difference between the expected utility of the payoffs for a clairvoyant who wins every race and the expected utility under the optimal allocation.
Entropy 09 00001 i022 and Entropy 09 00001 i023 have important properties summarized in the following theorem.
Known Result 1 
Entropy 09 00001 i017 and Entropy 09 00001 i024 have the following properties
(i) 
Entropy 09 00001 i017 ≥ 0 with equality if and only if p = q,
(ii) 
Entropy 09 00001 i017 is a strictly convex function of p,
(iii) 
Entropy 09 00001 i024 ≥ 0, and
(iv) 
Entropy 09 00001 i024 is a strictly concave function of p.
Proof: 
See Theorem 2 in Friedman and Sandow (2003b).
Note that for U(W) = log(W), we recover the entropy and Kullback-Leibler relative entropy under any system of odds ratios.
We note that, solving (12) for Entropy 09 00001 i025 and substituting from (13), one can obtain the following decomposition for the expected utility, under the measure p, of an investor who allocates under the misspecified measure q:
Entropy 09 00001 i026
On the right hand side of (14), the first term is the expected utility of an investor who wins every bet (a clairvoyant), the second term is the expected utility loss the investor incurs because the world (characterized by p) is uncertain, and the third term is the expected utility loss related to the investor’s model misspecification.
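A quick numerical check of the logarithmic special case noted above (a sketch; all numbers are hypothetical): for U(W) = log W the optimal allocation is b*(p) = p (Cover and Thomas (1991), Theorem 6.1.2), so Definitions 3 and 4 can be evaluated directly; they agree with the Kullback-Leibler relative entropy and the Shannon entropy for any odds ratios, and the decomposition (14) balances.

```python
import numpy as np

# Sketch: evaluate Definitions 3 and 4 for U(W) = log W, where the optimal
# allocation is b*(p) = p, and compare with the Kullback-Leibler relative
# entropy and the Shannon entropy.
p = np.array([0.5, 0.3, 0.2])      # "true" measure (hypothetical)
q = np.array([0.4, 0.4, 0.2])      # misspecified model (hypothetical)
O = np.array([2.1, 3.7, 5.3])      # arbitrary odds ratios (hypothetical)

U = np.log

# Definition 3: E_p U(b*(p) O) - E_p U(b*(q) O), with b*(.) the believed measure.
D = np.sum(p * U(p * O)) - np.sum(p * U(q * O))
# Definition 4: clairvoyant's expected utility minus the optimal expected utility.
H = np.sum(p * U(O)) - np.sum(p * U(p * O))

print(D, np.sum(p * np.log(p / q)))    # relative entropy: the two agree, for any odds
print(H, -np.sum(p * np.log(p)))       # entropy: the two agree, for any odds
# Decomposition (14): expected utility under the misspecified model q equals the
# clairvoyant's expected utility minus the entropy minus the relative entropy.
print(np.sum(p * U(q * O)), np.sum(p * U(O)) - H - D)
```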

2.4 Connection with Kullback-Leibler Relative Entropy

It so happens that relative Entropy 09 00001 i018-entropy essentially reduces to Kullback-Leibler relative entropy for a logarithmic family of utilities:
Known Result 2 
The relative Entropy 09 00001 i018-entropy, Entropy 09 00001 i027, is independent of the odds ratios, O, for any candidate model, p, and prior measure, q, if and only if the utility function, U, is a member of the logarithmic family
Entropy 09 00001 i029
where γ1 > 0, γ2 and γ < B are constants. In this case,
Entropy 09 00001 i030
Proof
See Friedman and Sandow (2003a), Theorem 3. It follows that relative Entropy 09 00001 i018-entropy reduces, up to a multiplicative constant, to Kullback-Leibler relative entropy, if and only if the utility is a member of the logarithmic family (16).

3 U-Entropy

We have seen that Entropy 09 00001 i031 represents the expected gain in utility from allocating under the true measure p, rather than allocating according to the misspecified measure q under market odds Entropy 09 00001 i032 for an investor with utility U . One of the main goals of this paper is to explore what happens when we set q(x) equal to the homogeneous expected return measure,
$$q(x) = \frac{B}{O(x)}. \qquad (17)$$
Readers familiar with finance will recognize this as the risk neutral pricing measure5 generated by the odds ratios. In finance, a risk neutral pricing measure is a measure under which the price of any perfectly hedgeable contingent claim is equal to the discounted (by the bank account) expectation of the payoff of the contingent claim. In the horse race setting, there is only one such measure, given by (17). We shall see in Section 4 that there are compelling reasons to consider this case. In this case,
Entropy 09 00001 i036
and
Entropy 09 00001 i034
so
Entropy 09 00001 i035
In this special case, Entropy 09 00001 i037 can be interpreted as the excess performance from allocating according to the real world measure p over the (risk-neutral) measure q derived from the odds ratios.6
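A short consistency check on (17) (a sketch; the surrounding displayed equations are not reproduced here): q is indeed a probability measure, and it assigns every single-horse bet the same expected payoff,
$$\sum_x q(x) = B\sum_x \frac{1}{O(x)} = 1, \qquad q(x)\,O(x) = B \ \text{ for every } x.$$
Moreover, for q of this form the first-order condition behind Lemma 1 reads $B\,U'(b(x)O(x)) = \lambda$, so $b(x)O(x)$ is the same in every state; the normalization $\sum_x b(x) = 1$ then forces $b^*_x(q) = B/O(x)$. In other words, allocating optimally under the risk neutral measure is the same as holding the bank account, so the corresponding expected utility is U(B), and the relative entropy from p to this particular q measures the excess expected utility of allocating by p over simply holding the bank account.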
Likewise, under (18), we have
Entropy 09 00001 i038
In the special case where q is the uniform distribution, Entropy 09 00001 i039 which we denote by Entropy 09 00001 i040, we obtain
Entropy 09 00001 i041
For simplicity, we assume that B = 1 and U (B) = 0 from now on.7

3.1 Definitions

Motivated by (19), we make the following definition:
Definition 5 
The U-relative entropy from the probability measure p to the probability measure q is given by
Entropy 09 00001 i042
Motivated by (20), we define the U−entropy:8
Definition 6 
The U-entropy of the probability measure p is given by
Entropy 09 00001 i043
Of course, these specializations of Entropy 09 00001 i018−entropy and relative
Entropy 09 00001 i018−entropy inherit all of the properties stated in Section 2. It is easy to show that for logarithmic utilities, they reduce to entropy and Kullback-Leibler relative entropy, respectively, so these quantities are generalizations of entropy and Kullback-Leibler relative entropy, respectively. Next, we generalize the above definitions to conditional probability measures. To keep our notation simple, we use p(x) and p(y) to represent the probability distributions for the random variables X and Y and use both HU (p) and HU (X) to represent the U-entropy for the random variable X which has probability measure p.
Let
Entropy 09 00001 i044
Definition 7 
The conditional relative U-entropy from the probability measure p to the probability measure q is given by
Entropy 09 00001 i045
Definition 8 
The conditional U-entropy of the probability measure p is given by
Entropy 09 00001 i046
We note that we could have stated the preceding definitions as special cases of conditional Entropy 09 00001 i018−entropy and conditional relative Entropy 09 00001 i018−entropy as in Friedman and Sandow (2003b). The latter quantities can be interpreted in the context of a conditional horse race.
The following definition generalizes mutual information.
Definition 9 
The mutual U-information between the probability measures p and q is defined as
Entropy 09 00001 i047

3.2 Properties of U-Entropy and Relative U-Entropy

As a special case of Known Result 1, we have
Corollary 1 
The generalized relative entropy, DU (p||q), and the generalized entropy, HU (p), have the following properties
(i) 
Entropy 09 00001 i048 ≥ 0 with equality if and only if p = q,
(ii) 
Entropy 09 00001 i048 is a strictly convex function of p,
(iii) 
Entropy 09 00001 i049 ≥ 0, and
(iv) 
Entropy 09 00001 i049 is a strictly concave function of p.
We now establish that many properties that hold for the classical quantities of information theory also hold in this more general setting.
Theorem 1 
Entropy 09 00001 i050
Proof. 
By definition,
Entropy 09 00001 i051
Entropy 09 00001 i052
Entropy 09 00001 i053
Entropy 09 00001 i054
Entropy 09 00001 i055
Entropy 09 00001 i056
Entropy 09 00001 i057
Corollary 2 
HU (Y|X) ≤ HU (Y) (Conditioning Reduces Entropy)
Corollary 3 
(Shuffles increase entropy). If T is a random shuffle (permutation) of a deck of cards and X is the initial position of the cards in the deck and if the choice of shuffle T is independent of X , then
HU (X) ≤ HU (TX)
where TX is the permutation of the deck induced by the shuffle T.
Proof. 
We follow the proof sketched on Page 48 of Cover and Thomas (1991). First, notice that for any fixed permutation T = t, from the definition of HU, we have
HU (tX) = HU (X)
(since we have only reordered the states, but we have not changed the probabilities associated with the states). So
Entropy 09 00001 i058
Theorem 2 
If the random variables X, Y, Z form a Markov chain X → Y → Z, then HU (Z|X, Y) = HU (Z|Y)
Proof. 
Entropy 09 00001 i059
Corollary 4 
If the random variables X, Y, Z form a Markov chain X → Y → Z, then HU (Z|X) ≥ HU (Z|Y) and HU (X|Z) ≥ HU (X|Y)
Proof. 
We prove the first inequality by noting that by Corollary 2,
HU (Z|X) ≥ HU (Z|X, Y),
and by Theorem 2,
HU (Z|X, Y) = HU (Z|Y),
The second inequality follows from the fact that X → Y → Z is equivalent to Z → Y → X.   ☐
Corollary 5 
The conditional entropy HU (Xn|X1) increases with n for a stationary Markov process.
Proof. 
The proof is essentially the same as that given on page 36 of Cover and Thomas (1991).    ☐
In general, the chain rule for relative entropy does not hold, i.e.
Entropy 09 00001 i060
However, we do have the following two inequalities.
Theorem 3 
(i) 
Entropy 09 00001 i061, and
(ii) 
Entropy 09 00001 i062
Proof. 
By definition,
Entropy 09 00001 i063
Entropy 09 00001 i064
Entropy 09 00001 i065
Entropy 09 00001 i066
Entropy 09 00001 i067
and
Entropy 09 00001 i068
Entropy 09 00001 i069
Entropy 09 00001 i070
Entropy 09 00001 i071
Entropy 09 00001 i072
   ☐
Theorem 4 
(More refined horse races have higher relative U − entropies) Let T : Entropy 09 00001 i073 be any onto function and Entropy 09 00001 i074 be the induced probabilities on Entropy 09 00001 i075,i.e.,
Entropy 09 00001 i076
Then
Entropy 09 00001 i077
Proof. 
Entropy 09 00001 i078
Entropy 09 00001 i079
Define Entropy 09 00001 i080 Then Entropy 09 00001 i081 and
Entropy 09 00001 i082
Entropy 09 00001 i083
   ☐
Theorem 4 can also be regarded as a special case of Theorem 3, part (i).
Theorem 5 
(Second law of thermodynamics) Let Entropy 09 00001 i084 and Entropy 09 00001 i085 be the probability distributions (at time n) of two different Markov chains arising from the same transition mechanism but from different starting distributions. Then Entropy 09 00001 i086 decreases with n.
Proof. 
Let the corresponding joint mass function be denoted by p(xn, xn+1) and q(xn, xn+1) and let r(·|·) denote the probability transition function for the Markov chain. Then p(xn, xn+1) = p(xn)r(xn+1|xn) and q(xn, xn+1) = q(xn)r(xn+1|xn).
Entropy 09 00001 i01087
By Theorem 3,
Entropy 09 00001 i088
hence, Entropy 09 00001 i089  ☐
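In the classical special case U(W) = log W, the relative U-entropy is the Kullback-Leibler divergence, and Theorem 5 can be checked numerically; the transition matrix and starting distributions below are hypothetical:

```python
import numpy as np

# Two Markov chains share the (hypothetical) transition matrix r but start from
# different distributions; D(p_n || q_n) decreases with n, as Theorem 5 asserts.
r = np.array([[0.9, 0.1],
              [0.3, 0.7]])          # row-stochastic transition matrix
p = np.array([0.8, 0.2])
q = np.array([0.1, 0.9])

def kl(p, q):
    return np.sum(p * np.log(p / q))

for n in range(6):
    print(n, kl(p, q))
    p, q = p @ r, q @ r              # propagate both chains one step
```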
Corollary 6 
The relative U-entropy Entropy 09 00001 i090 between a distribution µn on the states at time n and a stationary distribution µ decreases with n.
Proof: 
This is a special case of Theorem 5, where Entropy 09 00001 i091 for any n.   ☐
Corollary 7 
If the uniform distribution Entropy 09 00001 i092 is a stationary distribution, then entropy Entropy 09 00001 i093 increases with n.
Theorem 6 
(i) 
Entropy 09 00001 i094, and
(ii) 
Entropy 09 00001 i095
Proof: 
(i) is obvious. By definition,
Entropy 09 00001 i096
Entropy 09 00001 i097
Entropy 09 00001 i098
Entropy 09 00001 i099
   ☐
Theorem 7 
(Data processing inequality): If the random variables X,Y,Z form a Markov chain X → Y → Z, then IU(X; Y) ≥ IU(X;Z).
Proof. 
We will first show that IU(X; Y) = IU(X; Y,Z). From the previous result,
IU(X; Y) ≤ IU(X; Y,Z).
On the other hand,
Entropy 09 00001 i100
Entropy 09 00001 i101
Entropy 09 00001 i102
Entropy 09 00001 i103
Entropy 09 00001 i104
Entropy 09 00001 i105
Entropy 09 00001 i106
so Entropy 09 00001 i107. ☐
Corollary 8 
If Z = g(Y), we have IU(X; Y) ≥ IU(X; g(Y)).

3.3 Power Utility

Consider the power utility, given by
Entropy 09 00001 i108
We note that for each fixed wealth level, taking limits as κ → 1, the power utility approaches the logarithmic utility.
In this section, we compute the power utility’s U−entropy and relative U−entropy, explore limiting behavior and make comparisons with Tsallis entropy. We note that the power utility is commonly used (see, for example, Morningstar (2002)), since it has desirable optimality properties (see Stutzer (2003)) and the constant relative risk aversion property. The latter property follows from the fact that, for the power utility, the relative risk aversion coefficient (see, for example, Luenberger (1998)) is
Entropy 09 00001 i109
From Friedman and Sandow (2003b), with Entropy 09 00001 i111, we have
Entropy 09 00001 i110
where
Entropy 09 00001 i112
After some algebra, we obtain
Entropy 09 00001 i113
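To indicate where expressions like these come from without reproducing them, here is a sketch assuming the common CRRA normalization $U_\kappa(W) = \frac{W^{1-\kappa}-1}{1-\kappa}$ (so that $U_\kappa(1) = 0$ and $U_\kappa \to \log$ as $\kappa \to 1$); the paper's exact normalization may differ. Then
$$U_\kappa'(W) = W^{-\kappa}, \qquad (U_\kappa')^{-1}(y) = y^{-1/\kappa},$$
so Lemma 1 gives
$$b^*_x(q) \;=\; \frac{1}{O(x)}\left(\frac{\lambda}{q(x)O(x)}\right)^{-1/\kappa}
\;=\; \frac{\big(q(x)O(x)\big)^{1/\kappa}\big/O(x)}{\sum_{x'}\big(q(x')O(x')\big)^{1/\kappa}\big/O(x')},$$
where the second equality uses the normalization $\sum_x b^*_x(q) = 1$ to eliminate λ. As κ → ∞ the exponents 1/κ vanish and the weights tend to $(1/O(x))/\sum_{x'}(1/O(x')) = B/O(x)$, the bank-account allocation discussed in Section 3.3.1 below.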

3.3.1 U-Entropy for Large Relative Risk Aversion

Let us consider the limit of infinite risk aversion, which for the power utility corresponds to κ → ∞, as can be seen from (66). It follows from (67) and (68), that a power utility investor with infinite risk aversion invests all his money in the bank account, i.e., allocates according to
$$b(x) = \frac{B}{O(x)},$$
no matter what his belief-measure, p, is, so that his after-bet wealth is B = 1 with certainty. Such an investor makes no use of the information provided by the model, so no model can outperform another and, therefore,
Entropy 09 00001 i115
for any measures p, q and any utility function U.
What happens if the relative risk aversion of a power-utility investor, i.e., κ, is large but finite?
To answer this question, we expand (69) as follows.
Entropy 09 00001 i01117
where the last equality follows from the fact that
Entropy 09 00001 i117
So
Entropy 09 00001 i118
Thus, for large κ, we have related Entropy 09 00001 i119 to the Kullback-Leibler relative entropy.

3.3.2 Relation with Tsallis Entropy

Another generalization of entropy, which recently has been a topic of increasing interest in physics, is the Tsallis entropy (see Tsallis (1988)). The Tsallis entropy provides a theoretical framework for a nonextensive thermodynamics, i.e., for a thermodynamics for which the entropy of a system made up of two independent subsystems is not simply the sum of the entropies of the subsystems. Examples of physical applications of such a theory might be astronomical self-gravitating systems (see, for example, Plastino and Plastino (1999)).
The Tsallis entropy can be generalized to the Tsallis relative entropy (see, for example, Tsallis and Brigatti (2004)) as follows:
Entropy 09 00001 i120
This entropy recovers the standard Boltzmann-Gibbs entropy as α → 1. It turns out that there is a simple monotonic relationship between our relative U−entropy and the relative Tsallis entropies:
Entropy 09 00001 i121
Also note that
Entropy 09 00001 i122
Comparing this to the Tsallis entropy:
Entropy 09 00001 i123
we have the following simple monotonic relationship between the two entropies
Entropy 09 00001 i124
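For readers without access to the displayed equations, the standard forms of the Tsallis quantities (as in Tsallis (1988) and Tsallis and Brigatti (2004)) can be checked numerically; the sketch below verifies that both recover the classical Shannon and Kullback-Leibler quantities as α → 1. The probability vectors are hypothetical.

```python
import numpy as np

# Standard forms (a sketch; the paper's displayed versions are not reproduced here):
#   S_alpha(p)      = (1 - sum_x p(x)^alpha) / (alpha - 1)
#   D_alpha(p || q) = (sum_x p(x)^alpha q(x)^(1-alpha) - 1) / (alpha - 1)
# Both recover the Boltzmann-Gibbs/Shannon and Kullback-Leibler quantities as alpha -> 1.

def tsallis_entropy(p, alpha):
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def tsallis_relative_entropy(p, q, alpha):
    return (np.sum(p ** alpha * q ** (1.0 - alpha)) - 1.0) / (alpha - 1.0)

p = np.array([0.5, 0.3, 0.2])    # hypothetical measures
q = np.array([0.4, 0.4, 0.2])

for alpha in (0.5, 0.9, 0.999, 1.001, 2.0):
    print(alpha, tsallis_entropy(p, alpha), tsallis_relative_entropy(p, q, alpha))

# alpha near 1 reproduces the classical values:
print(-np.sum(p * np.log(p)), np.sum(p * np.log(p / q)))
```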

4 Application: Probability Estimation via Relative U-Entropy Minimization

Information theoretic ideas such as the maximum entropy principle (see, for example, Jaynes (1957) and Golan et al. (1996)) or the closely related minimum relative entropy (MRE) principle (see, for example Lebanon and Lafferty (2001)) have assumed major roles in statistical learning. We discuss these principles from the point of view of a model user who would use a probabilistic model to make bets in the horse race setting of Section 2.9 As we shall see, probability models estimated by these principles are robust in the sense that they maximize “worst case” model performance, in terms of expected utility, for a horse race investor with the three parameter logarithmic utility (15) in a particular market (horse race).
As such, probability models estimated by MRE methods are special in the sense that they are tailored to the risk preferences of logarithmic-family-utility investors. However, not all risk preferences can be expressed by utility functions in the logarithmic family. In the financial community, a substantial percentage, if not a majority, of practitioners implement utility functions outside the logarithmic family (15) (see, for example, Morningstar (2002)). This is not surprising – other utility functions may more accurately reflect the risk preferences of certain investors, or possess important defining properties or optimality properties.
In order to tailor the statistical learning problem to the specific utility function of the investor, one can replace the usual MRE problem with a generalization, based on relative U-entropy, as indicated below. Before stating results for minimum relative U-entropy (MRUE) probability estimation, we review results for MRE probability estimation.

4.1 MRE and Dual Formulation

The usual MRE formulation (see, for example, Lebanon and Lafferty (2001) and Kitamura and Stutzer (1997)) is given by:10
Problem 1 
Find
Entropy 09 00001 i125
Entropy 09 00001 i126
where each fj represents a feature (i.e., a function that maps Entropy 09 00001 i127 to Entropy 09 00001 i128), p0 is a probability measure, Entropy 09 00001 i129 represents the empirical measure, and Q is the set of all probability measures.
The feature constraints (79) can be viewed as a mechanism to enforce “consistency” between the model and the data. Typically, p0 is interpreted as representing prior beliefs, before modification in light of model training data. Below, we discuss what happens when p0 is a benchmark measure or the pricing measure generated by the odds ratios. The solution to Problem 1 will be the measure satisfying the feature constraints that is closest to p0 (in the sense of relative entropy).
In many settings, the above (primal) problem is not as easy to solve numerically as the following dual problem:11
Problem 2 
Find
Entropy 09 00001 i130
Entropy 09 00001 i131
is the log-likelihood function,
Entropy 09 00001 i132
Entropy 09 00001 i133
and Entropy 09 00001 i134 denotes the empirical probability that X = x.
We note that the primal problem (Problem 1) and the dual problem (Problem 2) have the same solution, in the sense that Entropy 09 00001 i135.
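A minimal sketch of the dual route (Problem 2) in its standard exponential-family form, p_β(x) ∝ p0(x) exp(Σ_j β_j f_j(x)); the displayed equations (80)–(83) are not reproduced here, and the feature, prior and empirical measure below are hypothetical. At the optimum, the fitted measure satisfies the feature constraints (79).

```python
import numpy as np
from scipy.optimize import minimize

def mre_fit(p0, F, p_emp):
    """Fit the MRE measure via the dual: maximize the expected log-likelihood of
    p_beta(x) proportional to p0(x) * exp(sum_j beta_j f_j(x)) under the empirical measure.
    p0: prior (m,); F: features (J, m); p_emp: empirical measure (m,)."""
    def neg_loglike(beta):
        logits = np.log(p0) + beta @ F
        logZ = np.logaddexp.reduce(logits)          # log normalizer
        return -np.sum(p_emp * (logits - logZ))     # minus expected log-likelihood
    res = minimize(neg_loglike, x0=np.zeros(F.shape[0]), method="BFGS")
    logits = np.log(p0) + res.x @ F
    p_star = np.exp(logits - np.logaddexp.reduce(logits))
    return p_star, res.x

# Toy example (all numbers hypothetical): one feature f_1 on a four-state X.
p0 = np.array([0.25, 0.25, 0.25, 0.25])
F = np.array([[0.0, 1.0, 2.0, 3.0]])
p_emp = np.array([0.1, 0.2, 0.3, 0.4])

p_star, beta = mre_fit(p0, F, p_emp)
# At the optimum the fitted measure matches the empirical feature expectation,
# i.e., it satisfies constraint (79):
print(F @ p_star, F @ p_emp)
```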

4.2 Robustness of MRE when the Prior Measure is the Odds Ratio Pricing Measure

In a certain circumstance, the MRE measure (i.e., the solution of Problem 1) has a desirable robustness property, which we discuss in this section. First we state the following minimax result from Grünwald and Dawid (2004).
Known Result 3 
If there exists a probability measure π ∈ K, with π(x) > 0, Entropy 09 00001 i136, then
Entropy 09 00001 i137
where Q is the set of all possible probability measures, K is a closed convex set, such as the set of all p ∈ Q that satisfy the feature constraints, (79). Moreover, a saddle point is attained.
Replacing Entropy 09 00001 i138 with Entropy 09 00001 i139 in (84) doesn’t change the finiteness or the convexity properties of the function under the infsup or the supinf. Since the proof of Known Result 3 relies only on those properties of this function, we have
Entropy 09 00001 i140
and a saddle point is attained here as well. Further using the fact that
Entropy 09 00001 i141
which follows from the information inequality (see, for example, Cover and Thomas (1991)), we obtain
Entropy 09 00001 i142
Using further that b*(p) = p for U(W) = log W,12 we see that (87) leads to the following corollary:
Corollary 9 
If U (W) = log W, and
Entropy 09 00001 i143
then
Entropy 09 00001 i144
where Q is the set of all possible probability measures and K is the set of all p ∈ Q that satisfy the feature constraints, (79).13
Thus, if p0 is given by the pricing measure generated by the odds ratios, then the MRE measure, p*, is robust in the following sense: by choosing p* a rational (expected-utility optimizing) investor maximizes his (model-based optimal) expected utility in the most adverse environment consistent with the feature constraints.
Here, p0 has a specific and non-traditional interpretation. It no longer represents prior beliefs about the real world measure that we estimate with p*. Rather, it represents the risk neutral pricing measure consistent with the odds ratios. If a logarithmic utility investor wants the measure that he estimates to have the optimality property (89), he is not free to choose p0 to represent his prior beliefs, in general. However, if he sets p0 to the risk neutral pricing measure determined by the odds ratios, then the optimality property (89) will hold. In this case, p*, can be thought of as a function of the odds ratios, rather than an update of prior beliefs.
It is this property, (89), that we seek to generalize to accommodate risk preferences more general than the ones compatible with a logarithmic utility function.

4.3 General Risk Preferences and Robust Relative Performance

In this section, we state a known result that we will use below to generalize the robustness property (89).
The MRE problem is generalized in Friedman and Sandow (2003a) to a minimum relative Entropy 09 00001 i018 entropy problem. The solution to this generalized problem is robust in the following sense:
Known Result 4 
Entropy 09 00001 i145
where Q is the set of all possible probability measures, K is the set of all p ∈ Q that satisfy the feature constraints, (79).
This is only a slight modification of a result from Grünwald and Dawid (2002) and is based on the logic from Topsøe (1979). A proof can be found in Friedman and Sandow (2003a).
Known Result 4 states the following: minimizing Entropy 09 00001 i146 with respect to pK is equivalent to searching for the measure p* ∈ Q that maximizes the worst-case (with respect to the potential true measures, p′ ∈ K) relative model performance (over the benchmark model, p0), in the sense of expected utility. The optimal model, p*, is robust in the sense that for any other model, p, the worst (over potential true measures p′ ∈ K) relative performance is even worse than the worst-case relative performance under p*. We do not know the true measure; an investor who makes allocation decisions based on p* is prepared for the worst that “nature” can offer.

4.4 Robust Absolute Performance and MRUE

We now address the following question: Is it possible to formulate a generalized MRE problem for which the solution is robust in the absolute sense of (89), as opposed to the relative sense in Known Result 4? This is indeed possible, as the following Corollary, which follows directly from Known Result 4, indicates:
Corollary 10 
If
Entropy 09 00001 i147
then
Entropy 09 00001 i148
where Q is the set of all possible probability measures and K is the set of all p ∈ Q that satisfy the feature constraints, (79).
This Corollary states that, by choosing p*, a rational (expected-utility optimizing) investor maximizes his (model-based optimal) expected utility in the most adverse environment consistent with the feature constraints. Therefore, an investor with a general utility function, who bets on this horse race and wants to maximize his worst case (in the sense of (91)) expected utility can set p0 to the risk neutral pricing measure determined by the odds ratios and solve the following problem:14
Problem 3 
(MRUE Problem)
Find
Entropy 09 00001 i149
Entropy 09 00001 i150
where each fj represents a feature (i.e., a function that maps Entropy 09 00001 i151 to Entropy 09 00001 i152), p0 represents the prior measure, Entropy 09 00001 i153 represents the empirical measure, and Q is the set of all probability measures.
As before, p0, here, has a specific and non-traditional interpretation. It no longer represents prior beliefs about the real world measure that we estimate with p*. Rather, it represents the risk neutral pricing measure consistent with the odds ratios. If an investor wants the measure that he estimates to have the optimality property (91), he is not free to choose p0 to represent his prior beliefs, in general. To attain the optimality property (91), he can set p0 to represent the risk neutral pricing measure determined by the odds ratios. The MRUE measure, p*, is a function of the odds ratios (which are incorporated in the measure p0).
In order to obtain the dual to Problem 3, we specialize, in Problem 3 of Friedman and Sandow (2003a), relative Entropy 09 00001 i018-entropy to relative U−entropy by setting the “prior” measure to the risk neutral pricing measure generated by the odds ratios, Entropy 09 00001 i154. We obtain
Problem 4 
(Dual of MRUE Problem)
Find
Entropy 09 00001 i155
Entropy 09 00001 i156
Entropy 09 00001 i157
Entropy 09 00001 i158
Entropy 09 00001 i159
Problem 4 is easy to interpret.15 The objective function of Problem 4 is the utility (of the expected utility maximizing investor) averaged over the training sample. Thus, our dual problem is a maximization of the training-sample averaged utility, where the utility function, U , is the utility function on which the U−entropy depends.
We note that the primal problem (Problem 3) and the dual problem (Problem 4) have the same solution, in the sense that Entropy 09 00001 i160.
This problem is a J-dimensional (J is the number of features), unconstrained, concave maximization problem (see Boyd and Vandenberghe (2004), p. 159 for the concavity). The primal problem, Problem 3, on the other hand, is an m-dimensional (m is the number of states) convex minimization with linear constraints. The dual problem, Problem 4, may be easier to solve than the primal problem, Problem 3, if m > J. For conditional probability models, the dual problem will always be easier to solve than the primal problem.
As we have seen above, in cases where odds ratios are available, the MRUE problem yields a solution with the optimality property (91). However, in real statistical learning applications, as mentioned above, it is often the case that odds ratios are not observable. In this case, the builder of a statistical learning model can use assumed odds ratios, on which the model will depend.
Given the relation
$$O(x) = \frac{B}{p_0(x)},$$
as a perhaps more convenient alternative, the model-builder can directly specify a risk neutral pricing measure consistent with the assumed odds ratios. Either way, the model will possess the optimality property (91) under the odds ratios consistent with the assumption. The necessity of providing a risk neutral pricing measure, perhaps, imposes an onus on the MRUE modeler comparable to the onus of finding a prior for the MRE modeler. However, we note that, as for MRE models, the importance of p0 will diminish as the number of feature constraints grows.
For logarithmic-family utility functions, as a practical matter, even though the MRE and MRUE models require different interpretations and, possibly, different choices for p0, the numerical implementations will be the same under the appropriate assumptions.

5 Conclusions

Motivated by decision theory, we have defined a generalization of entropy (U−entropy) and a generalization of relative entropy (relative U−entropy). We have shown that these generalizations retain a number of properties of entropy and relative entropy that are not retained by the generalizations Entropy 09 00001 i162 and Entropy 09 00001 i163 presented in Friedman and Sandow (2003a).
We have used relative U−entropy to formulate an approach to the probability measure estimation problem (Problem 3). Problem 3 generalizes the absolute robustness properties of the solution to the MRE problem, incorporating risk preferences more general than those that can be expressed via the logarithmic family (15).
Depending on the situation, the model builder may prefer one of the methods listed below:
I
General Risk Preferences (not expressible by (15)), real or assumed odds ratios available
(i)
MRUE, Problem 3: If p0 is the pricing measure generated by the odds ratios, we get robust absolute performance (in the market described by the odds ratios) in the sense of Corollary 10.
(ii)
Minimum Relative Entropy 09 00001 i018entropy Problem from Friedman and Sandow (2003a): If p0 represents a benchmark model (possibly prior beliefs), we get robust relative outperformance (relative to the benchmark model, in the market described by the odds ratios) in the sense of Known Result 4.
II
Special Case: Logarithmic Family Risk Preferences (15), odds ratios need not be available
  • MRE, Problem 1: If p0 represents a benchmark model (possibly prior beliefs), we get robust relative outperformance with respect to the benchmark model, under any odds ratios.

6 Appendix

If we specialize relative Entropy 09 00001 i018-entropy to relative U−entropy, substituting
Entropy 09 00001 i164
in Problem 4 of Friedman and Sandow (2003a), we obtain a version of the dual problem that is easier to implement, though the objective function is not as easy to interpret as the objective function in Problem 4.
Problem 5 
(Easily Implemented Version of Dual Problem)
Entropy 09 00001 i165
 
Entropy 09 00001 i166
Once Problem 5 is solved, the solution to Problem 3 is simply,
Entropy 09 00001 i167
where λ* is the positive constant that makes Entropy 09 00001 i168.

References

  1. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge, 2004. [Google Scholar]
  2. Brillouin, L. Science and Information Theory; Academic Press: New York, 1962. [Google Scholar]
  3. Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, 1991. [Google Scholar]
  4. Cowell, F. Additivity and the entropy concept: An axiomatic approach to inequality measurement. Journal of Economic Theory 1981, 25, 131–143. [Google Scholar]
  5. Cressie, N.; Read, T. Multinomial goodness of fit tests. Journal of the Royal Statistical Society, Series B 1984, 46(3), 440–464. [Google Scholar]
  6. Cressie, N.; Read, T. Pearson’s X2 and the loglikelihood ratio statistic G2: a comparative review. International Statistical Review 1989, 57(1), 19–43. [Google Scholar]
  7. Csiszár, I.; Körner, J. Information theory: Coding Theorems for Discrete Memoryless Systems; Academic Press: New York, 1997. [Google Scholar]
  8. Duffie, D. Dynamic Asset Pricing Theory; Princeton University Press: Princeton, 1996. [Google Scholar]
  9. Friedman, C.; Sandow, S. Learning probabilistic models: An expected utility maximization approach. Journal of Machine Learning Research 2003a, 4, 291. [Google Scholar]
  10. Friedman, C.; Sandow, S. Model performance measures for expected utility maximizing investors. International Journal of Theoretical and Applied Finance 2003b, 6(4), 355. [Google Scholar]
  11. Friedman, C.; Sandow, S. Model performance measures for leveraged investors. International Journal of Theoretical and Applied Finance 2004, 7(5), 541. [Google Scholar]
  12. Golan, A.; Judge, G.; Miller, D. Maximum Entropy Econometrics; Wiley: New York, 1996. [Google Scholar]
  13. Group of Statistical Physics. Nonextensive statistical mechanics and thermodynamics: Bibliography. Working Paper, 2006. http://tsallis.cat.cbpf.br/TEMUCO.pdf. [Google Scholar]
  14. Grünwald, P.; Dawid, A. Game theory, maximum generalized entropy, minimum discrepancy, robust bayes and pythagoras. Proceedings ITW 2002. [Google Scholar]
  15. Grünwald, P.; Dawid, A. Game theory, maximum generalized entropy, minimum discrepancy, and robust bayesian decision theory. Annals of Statistics 2004, 32(4), 1367–1433. [Google Scholar]
  16. Ingersoll, J. Theory of Financial Decision Making; Rowman and Littlefield: New York, 1987. [Google Scholar]
  17. Jaynes, E. T. Information theory and statistical mechanics. Physical Review 1957, 106, 620. [Google Scholar]
  18. Kitamura, Y.; Stutzer, M. An information-theoretic alternative to generalized method of moments. Econometrica 1997, 65(4), 861–874. [Google Scholar]
  19. Lebanon, G.; Lafferty, J. Boosting and maximum likelihood for exponential models. Technical Report CMU-CS-01-144 School of Computer Science Carnegie Mellon University, 2001. [Google Scholar]
  20. Liese, F.; Vajda, I. Convex Statistical Distances; Teubner, Leipzig, 1987. [Google Scholar]
  21. Luenberger, D. Investment Science; Oxford University Press: New York, 1998. [Google Scholar]
  22. Morningstar. The new morningstar ratingTM methodology. 2002. http://www.morningstar.dk/downloads/MRARdefined.pdf.
  23. Österreicher, F.; Vajda, I. Statistical information and discrimination. IEEE Transactions on Information Theory 1993, 39(3), 1036–1039. [Google Scholar]
  24. Plastino, A.; Plastino, A. R. Tsallis entropy and Jaynes’ information theory formalism. Brazilian Journal of Physics 1999, 29(1), 50. [Google Scholar]
  25. Shannon, C. E. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423 and 623–656. [Google Scholar]
  26. Slomczyński, W.; Zastawniak, T. Utility maximizing entropy and the second law of thermodynamics. Annals of Probability 2004, 32(3A), 2261. [Google Scholar]
  27. Stummer, W. On a statistical information measure of diffusion processes. Statistics and Decisions 1999, 17, 359–376. [Google Scholar]
  28. Stummer, W. On a statistical information measure for a Samuelson-Black-Scholes model. Statistics and Decisions 2001, 19, 289–313. [Google Scholar]
  29. Stummer, W. Exponentials, Diffusions, Finance, Entropy and Information; Shaker, 2004.
  30. Stutzer, M. A bayesian approach to diagnosis of asset pricing models. Journal of Econometrics 1995, 68, 367–397. [Google Scholar]
  31. Stutzer, M. Portfolio choice with endogenous utility: A large deviations approach. Journal of Econometrics 2003, 116, 365–386. [Google Scholar]
  32. Topsøe, F. Information theoretical optimization techniques. Kybernetika 1979, 15(1), 8. [Google Scholar]
  33. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics 1988, 52, 479. [Google Scholar]
  34. Tsallis, C.; Brigatti, E. Nonextensive statistical mechanics: A brief introduction. Continuum Mechanics and Thermodynamics 2004, 16, 223–235. [Google Scholar]
  • 1We have chosen this setting for the sake of simplicity. One can generalize the ideas in this paper to continuous random variables and to conditional probability models; for details, see Friedman and Sandow (2003b).
  • 2See, for example, Cover and Thomas (1991), Chapter 6.
  • 3See, for example, Cover and Thomas (1991).
  • 4Such an investor maximizes his expected utility with respect to the probability measure that he believes (see, for example, Luenberger (1998)).
  • 5See, for example Duffie (1996). We note that the risk neutral pricing measure generated by the odds ratios need not coincide with any “real world” measure.
  • 6Stutzer (1995), provides a similar interpretation in a slightly different setting.
  • 7It is straightforward to develop the material below under more general assumptions.
  • 8We note that this definition of U−entropy is quite similar to, but not the same as, the definition of u−entropy in Slomczyński and Zastawniak (2004).
  • 9It is possible to consider more general settings, such as incomplete markets, but a number of results depend in a fundamental way on the horse race setting.
  • 10MRE problems can be stated for conditional probability estimation and with regularization. We keep the context and notation as simple as possible by confining our discussion to unconditional estimation without regularization. Extensions are straightforward.
  • 11See, for example, Lebanon and Lafferty (2001).
  • 12See, for example, Cover and Thomas (1991), Theorem 6.1.2.
  • 13For ease of exposition, we have proved this result for U(·) = log(·). The same result holds for utilities in the generalized logarithmic family (15).
  • 14As noted above, we keep the context and notation as simple as possible by confining our discussion to unconditional estimation without regularization. Extensions are straightforward.
  • 15A version that is more easily implemented is given in the Appendix.
