Article

Objective Bayesianism and the Maximum Entropy Principle

Department of Philosophy, School of European Culture and Languages, University of Kent, Canterbury CT2 7NF, UK
*
Author to whom correspondence should be addressed.
Entropy 2013, 15(9), 3528-3591; https://doi.org/10.3390/e15093528
Submission received: 28 June 2013 / Revised: 21 August 2013 / Accepted: 21 August 2013 / Published: 4 September 2013
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)

Abstract
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

1. Introduction

Objective Bayesian epistemology is a theory about the strength of belief. As formulated by Williamson [1], it invokes three norms:
  • Probability: The strengths of an agent’s beliefs should satisfy the axioms of probability. That is, there should be a probability function, $P_E : S\mathcal{L} \to [0,1]$, such that for each sentence $\theta$ of the agent’s language $\mathcal{L}$, $P_E(\theta)$ measures the degree to which the agent with evidence $E$ believes sentence $\theta$. (Here, $\mathcal{L}$ will be construed as a finite propositional language and $S\mathcal{L}$ as the set of sentences of $\mathcal{L}$, formed by recursively applying the usual connectives.)
  • Calibration: The strengths of an agent’s beliefs should satisfy constraints imposed by her evidence $E$. In particular, if the evidence determines just that physical probability (aka chance), $P^*$, is in some set $\mathbb{P}^*$ of probability functions defined on $S\mathcal{L}$, then $P_E$ should be calibrated to physical probability insofar as it should lie in the convex hull, $\mathbb{E} = \langle\mathbb{P}^*\rangle$, of the set $\mathbb{P}^*$. (We assume throughout this paper that chance is probabilistic, i.e., that $P^*$ is a probability function.)
  • Equivocation: The agent should not adopt beliefs that are more extreme than is demanded by her evidence $E$. That is, $P_E$ should be a member of $\mathbb{E}$ that is sufficiently close to the equivocator function, $P_=$, which gives the same probability to each $\omega \in \Omega$, where the state descriptions or states, $\omega$, are sentences describing the most fine-grained possibilities expressible in the agent’s language.
One way of explicating these norms proceeds as follows. Measure closeness of $P_E$ to the equivocator by Kullback-Leibler divergence, $d(P_E, P_=) = \sum_{\omega\in\Omega} P_E(\omega) \log \frac{P_E(\omega)}{P_=(\omega)}$. Then, if there is some function in $\mathbb{E}$ that is closest to the equivocator, $P_E$ should be such a function. If $\mathbb{E}$ is closed, then there is guaranteed to be some function in $\mathbb{E}$ closest to the equivocator; as $\mathbb{E}$ is convex, there is at most one such function. Then we have the maximum entropy principle [2]: $P_E$ is the function in $\mathbb{E}$ that has maximum entropy $H$, where $H(P) = -\sum_{\omega\in\Omega} P(\omega) \log P(\omega)$.
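The link between divergence from the equivocator and entropy can be checked numerically. The sketch below uses a hypothetical four-state example with an invented constraint, $P(\omega_1) + P(\omega_2) = 0.8$ (not an example from the paper); it relies on the identity $d(P, P_=) = \log|\Omega| - H(P)$.

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def kl(p, q):
    return sum(x * math.log(x / q[i]) for i, x in enumerate(p) if x > 0)

equivocator = [0.25] * 4

# hypothetical evidence E = {P : P(w1) + P(w2) = 0.8}; the entropy maximiser
# spreads the mass evenly within each cell of the constraint
p_max = [0.4, 0.4, 0.1, 0.1]

# minimising divergence from the equivocator = maximising entropy
assert abs(kl(p_max, equivocator) - (math.log(4) - entropy(p_max))) < 1e-12

# any other function satisfying the constraint has strictly lower entropy
p_other = [0.5, 0.3, 0.15, 0.05]
assert math.isclose(p_other[0] + p_other[1], 0.8)
assert entropy(p_other) < entropy(p_max)
```

Because $|\Omega|$ is fixed, minimising $d(P, P_=)$ and maximising $H(P)$ pick out the same function, which is the equivalence the main text appeals to.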
The question arises as to how the three norms of objective Bayesianism should be justified, and whether the maximum entropy principle provides a satisfactory explication of the norms.
The probability norm is usually justified by a Dutch book argument. Interpret the strength of an agent’s belief in $\theta$ to be a betting quotient, i.e., a number $x$, such that the agent is prepared to bet $xS$ on $\theta$ with return $S$ if $\theta$ is true, where $S$ is an unknown stake, positive or negative. Then the only way to avoid the possibility that stakes may be chosen so as to force the agent to lose money, whatever the true state of the world, is to ensure that the betting quotients satisfy the axioms of probability (see, e.g., Theorem 3.2 in [1]).
The calibration norm may be justified by a different sort of betting argument. If the agent bets repeatedly on sentences with known chance $y$ with some fixed betting quotient $x$, then she is sure to lose money in the long run unless $x = y$ (see, e.g., pp. 40–41 in [1]). Alternatively, on a single bet with known chance $y$, the agent’s expected loss is positive unless her betting quotient $x = y$, where the expectation is determined with respect to the chance function $P^*$ (pp. 41–42 in [1]). More generally, if evidence $E$ determines that $P^* \in \mathbb{P}^*$ and the agent makes such bets, then sure loss/positive expected loss can be forced unless $P_E \in \langle\mathbb{P}^*\rangle$.
The equivocation norm may be justified by appealing to a third notion of loss. In the absence of any particular information about the loss $L(\omega, P)$ one incurs when one’s strengths of beliefs are represented by $P$ and $\omega$ turns out to be the true state, one can argue that one should take the loss function $L$ to be logarithmic, $L(\omega, P) = -\log P(\omega)$ (pp. 64–65 in [1]). Then the probability function $P$ that minimises the worst-case expected loss, subject to the information that $P^* \in \mathbb{E}$, where $\mathbb{E}$ is closed and convex, is simply the probability function $P_E$ closest to the equivocator—equivalently, the probability function in $\mathbb{E}$ that has maximum entropy [3,4].
The advantage of these three lines of justification is that they make use of the rather natural connection between strength of belief and betting. This connection was highlighted by Frank Ramsey:
All our lives, we are in a sense betting. Whenever we go to the station, we are betting that a train will really run, and if we had not a sufficient degree of belief in this, we should decline the bet and stay at home.
(p. 183 in [5])
The problem is that the three norms are justified in rather different ways. The probability norm is motivated by avoiding sure loss. The calibration norm is motivated by avoiding sure long-run loss or by avoiding positive expected loss. The equivocation norm is motivated by minimising worst-case expected loss. In particular, the loss function appealed to in the justification of the equivocation norm differs from that invoked by the justifications of the probability and calibration norms.
In this paper, we seek to rectify this problem. That is, we seek a single justification of the three norms of objective Bayesian epistemology.
The approach we take is to generalise the justification of the equivocation norm, outlined above, in order to show that only strengths of beliefs that are probabilistic, calibrated and equivocal minimise worst-case expected loss. We shall adopt the following starting point: as discussed above, $\mathbb{E} = \langle\mathbb{P}^*\rangle$ is taken to be convex and non-empty throughout this paper; we shall also assume that the strengths of the agent’s beliefs can be measured by non-negative real numbers—an assumption that is rejected by advocates of imprecise probability, a position that we will discuss separately in Section 5.3. We do not assume throughout that $\mathbb{E}$ is such that it admits some function that has maximum entropy—e.g., that $\mathbb{E}$ is closed—but we will be particularly interested in the case in which $\mathbb{E}$ does contain its entropy maximiser, in order to see whether some version of the maximum entropy principle is justifiable in that case.
In Section 2, we shall consider the scenario in which the agent’s belief function, $\mathrm{bel}$, is defined over propositions, i.e., sets of possible worlds. Using $\omega$ to denote a possible world as well as the state of $\mathcal{L}$ that picks out that possible world, we have that $\mathrm{bel}$ is a function from the power set of a finite set $\Omega$ of possible worlds $\omega$ to the non-negative real numbers, $\mathrm{bel} : \mathcal{P}\Omega \to \mathbb{R}_{\geq 0}$. When it comes to justifying the probability norm, this will give us enough structure to show that degrees of belief should be additive. Then, in Section 3, we shall consider the richer framework in which the belief function is defined over sentences, i.e., $\mathrm{bel} : S\mathcal{L} \to \mathbb{R}_{\geq 0}$. This will allow us to go further by showing that different sentences that express the same proposition should be believed to the same extent. In Section 4, we shall explain how the preceding results can be used to motivate a version of the maximum entropy principle. In Section 5, we draw out some of the consequences of our results for Bayes’ theorem. In particular, conditional probabilities and Bayes’ theorem play a less central role under this approach than they do under subjective Bayesianism. Furthermore, in Section 5 we relate our work to the imprecise probability approach and suggest that the justification of the norms of objective Bayesianism presented here can be reinterpreted in a non-pragmatic way.
The key results of the paper are intended to demonstrate the following points. Theorem 1 (which deals with beliefs defined over propositions) and Theorem 4 (which deals with beliefs defined over sentences) show that only a logarithmic loss function satisfies certain desiderata that, we suggest, any default loss function should satisfy. This allows us to focus our attention on logarithmic loss. Theorems 2 and 3 (for propositions) and Theorems 5 and 6 (for sentences) show that minimising worst-case expected logarithmic loss corresponds to maximising a generalised notion of entropy. Theorem 7 justifies maximising standard entropy, by viewing this maximiser as a limit of generalised entropy maximisers. Theorem 9 demonstrates a level of agreement between updating beliefs by Bayesian conditionalisation and updating by maximising generalised entropy. Theorem 10 shows that the generalised notion of entropy considered in this paper is pitched at precisely the right level of generalisation.
Three appendices to the paper help to shed light on the generalised notion of entropy introduced in this paper. Appendix A motivates the notion by offering justifications of generalised entropy that mirror Shannon’s original justification of standard entropy. Appendix B explores some of the properties of the functions that maximise generalised entropy. Appendix C justifies the level of generalisation of entropy to which we appeal.

2. Belief over Propositions

In this section, we shall show that if a belief function defined on propositions is to minimise worst-case expected loss, then it should be a probability function, calibrated to physical probability, which maximises a generalised notion of entropy. The argument will proceed in several steps. As a technical convenience, in Section 2.1, we shall normalise the belief functions under consideration. In Section 2.2, we introduce the appropriate generalisation of entropy. In Section 2.3, we argue that, by default, loss should be taken to be logarithmic. Then, in Section 2.4, we introduce scoring rules, which measure expected loss. Finally, in Section 2.5, we show that worst-case expected loss is minimised just when generalised entropy is maximised.
For the sake of concreteness, we will take $\Omega$ to be generated by a propositional language, $\mathcal{L} = \{A_1, \ldots, A_n\}$, with propositional variables, $A_1, \ldots, A_n$. The states, $\omega$, take the form $\pm A_1 \wedge \cdots \wedge \pm A_n$, where $+A_i$ is just $A_i$ and $-A_i$ is $\neg A_i$. Thus, there are $2^n$ states, $\omega \in \Omega = \{\pm A_1 \wedge \cdots \wedge \pm A_n\}$. We can think of each such state as representing a possible world. A proposition (or, in the terminology of the mathematical theory of probability, an ‘event’) may be thought of as a subset of $\Omega$, and a belief function, $\mathrm{bel} : \mathcal{P}\Omega \to \mathbb{R}_{\geq 0}$, thus assigns a degree of belief to each proposition that can be expressed in the agent’s language. For a proposition $F \subseteq \Omega$, we will use $\overline{F}$ to denote $\Omega \setminus F$, and $|F|$ to denote the size of proposition $F$, i.e., the number of states under which it is true.
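For illustration (a hypothetical $n = 3$, not an example from the paper), the states and the count of propositions can be enumerated directly:

```python
from itertools import product

n = 3  # hypothetical language L = {A1, A2, A3}
# each state +-A1 ^ ... ^ +-An is encoded as a tuple of signs
states = list(product([True, False], repeat=n))
assert len(states) == 2 ** n  # 2^n = 8 states

# propositions are subsets of Omega, so there are 2^(2^n) of them
num_propositions = 2 ** len(states)
assert num_propositions == 2 ** (2 ** n)  # 256 for n = 3
```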
Let $\Pi$ be the set of partitions of $\Omega$; a partition $\pi \in \Pi$ is a set of mutually exclusive and jointly exhaustive propositions. To control the proliferation of partitions, we shall take the empty set, $\emptyset$, to be contained in only one partition, namely $\{\Omega, \emptyset\}$.

2.1. Normalisation

There are finitely many propositions ($\mathcal{P}\Omega$ has $2^{2^n}$ members), so any particular belief function, $\mathrm{bel}$, takes values in some interval $[0, M] \subseteq \mathbb{R}_{\geq 0}$. It is just a matter of convention as to the scale on which belief is measured, i.e., as to what upper bound $M$ we might consider. For convenience, we shall normalise the scale to the unit interval, $[0, 1]$, so that all belief functions are considered on the same scale.
Definition 1 (Normalised belief function on propositions). Given a belief function, $\mathrm{bel} : \mathcal{P}\Omega \to \mathbb{R}_{\geq 0}$, that is not zero everywhere, let $M = \max_{\pi\in\Pi} \sum_{F\in\pi} \mathrm{bel}(F)$; its normalisation, $B : \mathcal{P}\Omega \to [0,1]$, is defined by setting $B(F) = \mathrm{bel}(F)/M$ for each $F \subseteq \Omega$. We shall denote the set of normalised belief functions by $\mathbb{B}$, so:
$$\mathbb{B} = \Big\{ B : \mathcal{P}\Omega \to [0,1] : \sum_{F\in\pi} B(F) \leq 1 \text{ for all } \pi\in\Pi \text{ and } \sum_{F\in\pi} B(F) = 1 \text{ for some } \pi \Big\}$$
Without loss of generality, we rule out of consideration the non-normalised belief function that gives zero degree of belief to each proposition; it will become clear in Section 2.4 that this belief function is of little interest, as it can never minimise worst-case expected loss. For purely technical convenience, we will often consider the convex hull, $\langle\mathbb{B}\rangle$, of $\mathbb{B}$. In that case, we rule into consideration certain belief functions that are not normalised, but which are convex combinations of normalised belief functions. Henceforth, then, we shall focus our attention on belief functions in $\mathbb{B}$ and $\langle\mathbb{B}\rangle$.
Note that we do not impose any further restrictions on the agent’s belief function—such as additivity; or the requirement that $B(G) \leq B(F)$ whenever $G \subseteq F$; or that the empty proposition, $\emptyset$, is assigned a belief of zero; or that the sure proposition, $\Omega$, is assigned a belief of one. Our aim is to show that belief functions that do not satisfy such conditions will expose the agent to avoidable loss.
For any $B \in \mathbb{B}$ and every $F \subseteq \Omega$, we have $B(F) + B(\overline{F}) \leq 1$, because $\{F, \overline{F}\}$ is a partition. Indeed:
$$\sum_{F\subseteq\Omega} B(F) = \frac{1}{2} \sum_{F\subseteq\Omega} \big( B(F) + B(\overline{F}) \big) \leq \frac{1}{2} \, |\mathcal{P}\Omega| = 2^{2^n - 1} \qquad (1)$$
Recall that a subset of $\mathbb{R}^N$ is compact if and only if it is closed and bounded.
Lemma 1 (Compactness). $\mathbb{B}$ and $\langle\mathbb{B}\rangle$ are compact.
Proof: $\mathbb{B} \subset \mathbb{R}^{|\mathcal{P}\Omega|}$ is bounded, where $\subset$ denotes strict subset inclusion. Now, consider a sequence, $(B_t)_{t\in\mathbb{N}} \subseteq \mathbb{B}$, which converges to some $B \in \mathbb{R}^{|\mathcal{P}\Omega|}$. Then, for all $\pi \in \Pi$, we find $\sum_{F\in\pi} B(F) \leq 1$. Assume that $B \notin \mathbb{B}$. Then, for all $\pi \in \Pi$, we have $\sum_{F\in\pi} B(F) < 1$. However, then there has to exist a $t_0 \in \mathbb{N}$, such that for all $t \geq t_0$ and all $\pi \in \Pi$, $\sum_{F\in\pi} B_t(F) < 1$. This contradicts $B_t \in \mathbb{B}$. Thus, $\mathbb{B}$ is closed and, hence, compact.
$\langle\mathbb{B}\rangle$ is the convex hull of a compact set. Hence, $\langle\mathbb{B}\rangle \subset \mathbb{R}^{|\mathcal{P}\Omega|}$ is closed and bounded and so compact. ■
We will be particularly interested in the subset, $\mathbb{P} \subseteq \mathbb{B}$, of belief functions defined by:
$$\mathbb{P} = \Big\{ B : \mathcal{P}\Omega \to [0,1] : \sum_{F\in\pi} B(F) = 1 \text{ for all } \pi\in\Pi \Big\}$$
$\mathbb{P}$ is the set of probability functions:
Proposition 1. $P \in \mathbb{P}$ if and only if $P : \mathcal{P}\Omega \to [0,1]$ satisfies the axioms of probability:
P1: $P(\Omega) = 1$ and $P(\emptyset) = 0$.
P2: If $F \cap G = \emptyset$, then $P(F) + P(G) = P(F \cup G)$.
Proof: Suppose $P \in \mathbb{P}$. $P(\Omega) = 1$, because $\{\Omega\}$ is a partition. $P(\emptyset) = 0$, because $\{\Omega, \emptyset\}$ is a partition and $P(\Omega) = 1$. If $F, G \subseteq \Omega$ are disjoint, then $P(F) + P(G) = P(F \cup G)$, because $\{F, G, \overline{F\cup G}\}$ and $\{F\cup G, \overline{F\cup G}\}$ are both partitions, so $P(F) + P(G) = 1 - P(\overline{F\cup G}) = P(F\cup G)$.
On the other hand, suppose P1 and P2 hold. $\sum_{F\in\pi} P(F) = 1$ can be seen by induction on the size of $\pi$. If $|\pi| = 1$, then $\pi = \{\Omega\}$ and $P(\Omega) = 1$ by P1. Suppose, then, that $\pi = \{F_1, \ldots, F_{k+1}\}$ for $k \geq 1$. Now, $\sum_{i=1}^{k-1} P(F_i) + P(F_k \cup F_{k+1}) = 1$ by the induction hypothesis and $P(F_k \cup F_{k+1}) = P(F_k) + P(F_{k+1})$ by P2, so $\sum_{F\in\pi} P(F) = 1$, as required. ■
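The ‘only if’ direction of Proposition 1 can be spot-checked by brute force: build $P$ additively from point masses (the masses below are hypothetical) and confirm that it sums to one on every partition of a four-element $\Omega$.

```python
def partitions(elems):
    # generate every partition of a finite set, recursively
    if not elems:
        yield []
        return
    head, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield [[head]] + part

masses = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}   # hypothetical point masses, sum to 1
P = lambda F: sum(masses[w] for w in F)      # additivity (P2); P(Omega) = 1 (P1)

# P1 and P2 force every partition sum to equal one, as the induction shows
for pi in partitions(list(masses)):
    assert abs(sum(P(block) for block in pi) - 1) < 1e-12

assert sum(1 for _ in partitions(list(masses))) == 15   # Bell number b_4
```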
Example 1 (Contrasting $\mathbb{B}$ with $\mathbb{P}$). Using Equation (1), we find $\sum_{F\subseteq\Omega} P(F) = \frac{|\mathcal{P}\Omega|}{2} \geq \sum_{F\subseteq\Omega} B(F)$ for all $P \in \mathbb{P}$ and $B \in \mathbb{B}$. For probability functions, $P \in \mathbb{P}$, probability is evenly distributed among the propositions of fixed size in the following sense:
$$\sum_{\substack{F\subseteq\Omega \\ |F| = t}} P(F) = \sum_{\omega\in\Omega} P(\omega) \cdot |\{F \subseteq \Omega : |F| = t \text{ and } \omega \in F\}| = \sum_{\omega\in\Omega} P(\omega) \binom{|\Omega|-1}{t-1} = \binom{|\Omega|-1}{t-1}$$
where $P(\omega)$ abbreviates $P(\{\omega\})$. For $B \in \mathbb{B}$ and $t > \frac{|\Omega|}{2}$, we have, in general, only the following inequality:
$$0 \leq \sum_{\substack{F\subseteq\Omega \\ |F| = t}} B(F) \leq |\{F \subseteq \Omega : |F| = t\}| = \binom{|\Omega|}{t}$$
For $B_1 \in \mathbb{B}$, defined as $B_1(\{\omega\}) = 1$, for some specific $\omega$, and $B_1(F) = 0$ for all other $F \subseteq \Omega$, the lower bound is tight. For $B_2 \in \mathbb{B}$, defined as $B_2(F) = 1$, for $|F| = t$, and $B_2(F) = 0$, for all other $F \subseteq \Omega$, the upper bound is tight.
To illustrate the potentially uneven distribution of beliefs for a $B \in \mathbb{B}$, let $A_1, A_2$ be the propositional variables in $\mathcal{L}$, so $\Omega$ contains four elements. Now, consider the $B \in \mathbb{B}$ such that $B(\emptyset) = 0$, $B(F) = \frac{1}{100}$ for $|F| = 1$, $B(F) = \frac{1}{2}$ for $|F| = 2$, $B(F) = \frac{99}{100}$ for $|F| = 3$ and $B(\Omega) = 1$. Note, in particular, that there is no $P \in \mathbb{P}$ such that $B(F) \leq P(F)$ for all $F \subseteq \Omega$.
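The claims of Example 1 about this particular $B$ can be checked by brute force: every partition of the four-element $\Omega$ sums to at most one (with some partition reaching one, so $B$ is normalised), and no probability function dominates $B$, since the four three-element propositions would each need probability at least $\frac{99}{100}$ while their probabilities always sum to exactly three.

```python
from itertools import combinations

def partitions(elems):
    # generate every partition of a finite set, recursively
    if not elems:
        yield []
        return
    head, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield [[head]] + part

omega = [1, 2, 3, 4]
# Example 1's belief function depends only on the size of F
B = lambda F: {0: 0.0, 1: 0.01, 2: 0.5, 3: 0.99, 4: 1.0}[len(F)]

# every partition sums to at most 1, and some partition sums to exactly 1
sums = [sum(B(block) for block in p) for p in partitions(omega)]
assert max(sums) <= 1 + 1e-12 and any(abs(s - 1) < 1e-12 for s in sums)

# no P in P dominates B: dominating the four 3-element propositions
# would require a total of 3.96, yet their probabilities sum to 3
needed = sum(B(set(c)) for c in combinations(omega, 3))
assert needed > 3
```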

2.2. Entropy

The entropy of a probability function is standardly defined as:
$$H_\Omega(P) := -\sum_{\omega\in\Omega} P(\omega) \log P(\omega)$$
We shall adopt the usual convention that $-x \log 0 = x \cdot \infty = \infty$, if $x > 0$, and $0 \log 0 = 0$.
We will need to extend the standard notion of entropy to apply to normalised belief functions, not just to probability functions. Note that the standard entropy only takes into account those propositions that are in the partition $\{\{\omega\} : \omega\in\Omega\}$, which partitions $\Omega$ into states. This is appropriate when entropy is applied to probability functions, because a probability function is determined by its values on the states. However, this is not appropriate if entropy is to be applied to belief functions: in that case, one cannot simply disregard all those propositions that are not in the partition of $\Omega$ into states—one needs to consider propositions in other partitions, too. In fact, there is a range of entropies of a belief function, according to how much weight is given to each partition $\pi$ in the entropy sum:
Definition 2 (g-entropy). Given a weighting function $g : \Pi \to \mathbb{R}_{\geq 0}$, the generalised entropy or g-entropy of a normalised belief function is defined as:
$$H_g(B) := -\sum_{\pi\in\Pi} g(\pi) \sum_{F\in\pi} B(F) \log B(F)$$
The standard entropy, $H_\Omega$, corresponds to $g_\Omega$-entropy, where
$$g_\Omega(\pi) = \begin{cases} 1 & \pi = \{\{\omega\} : \omega\in\Omega\} \\ 0 & \text{otherwise} \end{cases}$$
We can define the partition entropy, $H_\Pi$, to be the $g_\Pi$-entropy, where $g_\Pi(\pi) = 1$ for all $\pi \in \Pi$. Then:
$$H_\Pi(B) = -\sum_{\pi\in\Pi} \sum_{F\in\pi} B(F) \log B(F) = -\sum_{F\subseteq\Omega} \mathrm{par}(F) \, B(F) \log B(F)$$
where $\mathrm{par}(F)$ is the number of partitions in which $F$ occurs. Note that according to our convention, $\mathrm{par}(\emptyset) = 1$ and $\mathrm{par}(\Omega) = 2$, because $\Omega$ occurs in the partitions $\{\Omega, \emptyset\}$ and $\{\Omega\}$. Otherwise, $\mathrm{par}(F) = b_{|\overline{F}|}$, where $b_k := \sum_{i=1}^{k} \frac{1}{i!} \sum_{j=0}^{i} (-1)^{i-j} \binom{i}{j} j^k$ is the $k$’th Bell number, i.e., the number of partitions of a set of $k$ elements.
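The double-sum formula for $b_k$ can be verified against the familiar Bell numbers 1, 1, 2, 5, 15, 52 (taking $b_0 = 1$ by convention); the inner sum is the Stirling partition number $S(k, i)$:

```python
from math import comb, factorial

def bell(k):
    # b_k = sum_{i=1}^{k} (1/i!) * sum_{j=0}^{i} (-1)^(i-j) * C(i,j) * j^k
    if k == 0:
        return 1
    total = 0.0
    for i in range(1, k + 1):
        inner = sum((-1) ** (i - j) * comb(i, j) * j ** k for j in range(i + 1))
        total += inner / factorial(i)
    return round(total)

assert [bell(k) for k in range(6)] == [1, 1, 2, 5, 15, 52]

# e.g., a proposition F whose complement has two states lies in par(F) = b_2 = 2
# partitions: pair F with the whole complement, or with its two singletons
assert bell(2) == 2
```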
We can define the proposition entropy, $H_{\mathcal{P}\Omega}$, to be the $g_{\mathcal{P}\Omega}$-entropy, where
$$g_{\mathcal{P}\Omega}(\pi) = \begin{cases} 1 & |\pi| = 2 \\ 0 & \text{otherwise} \end{cases}$$
Then:
$$H_{\mathcal{P}\Omega}(B) = -\sum_{\substack{\pi\in\Pi \\ |\pi| = 2}} \sum_{F\in\pi} B(F) \log B(F) = -\sum_{F\subseteq\Omega} B(F) \log B(F)$$
In general, we can express $H_g(B)$ in the following way, which reverses the order of the summations:
$$H_g(B) = -\sum_{F\subseteq\Omega} \Big( \sum_{\substack{\pi\in\Pi \\ F\in\pi}} g(\pi) \Big) B(F) \log B(F)$$
As noted above, one might reasonably demand of a measure of the entropy of a belief function that each belief should contribute to the entropy sum, i.e., that for each $F \subseteq \Omega$, $\sum_{\pi\in\Pi,\, F\in\pi} g(\pi) > 0$:
Definition 3 (Inclusive weighting function). A weighting function $g : \Pi \to \mathbb{R}_{\geq 0}$ is inclusive if for all $F \subseteq \Omega$, there is some partition $\pi$ containing $F$ such that $g(\pi) > 0$.
This desideratum rules out the standard entropy in favour of other candidate measures, such as the partition entropy and the proposition entropy.
We have seen so far that g-entropy is a natural generalisation of standard entropy from probability functions to belief functions. In Section 2.5, we shall see that g-entropy is of particular interest, because maximising g-entropy corresponds to minimising worst-case expected loss—this is our main reason for introducing the concept. However, there is a third reason why g-entropy is of interest. Shannon (§6 in [6]) provided an axiomatic justification of standard entropy as a measure of the uncertainty encapsulated in a probability function. Interestingly, as we show in Appendix A, Shannon’s argument can be adapted to give a justification of our generalised entropy measure. Thus, g-entropy can also be thought of as a measure of the uncertainty of a belief function.
In the remainder of this section, we will examine some of the properties of g-entropy.
Lemma 2. The function $-\log : [0,1] \to [0,\infty]$ is continuous in the standard topology on $\mathbb{R}_{\geq 0} \cup \{+\infty\}$.
Proof: To obtain the standard topology on $\mathbb{R}_{\geq 0} \cup \{+\infty\}$, take as open sets arbitrary unions and finite intersections over the open sets of $\mathbb{R}_{\geq 0}$ and sets of the form $(r, \infty]$, where $r \in \mathbb{R}$. In this topology on $[0, \infty]$, a set $M \subseteq \mathbb{R}_{\geq 0}$ is open if and only if it is open in the standard topology on $\mathbb{R}_{\geq 0}$. Hence, $-\log$ is continuous in this topology on $(0,1]$.
Let $(a_t)_{t\in\mathbb{N}}$ be a sequence in $[0,1]$ with limit zero. For all $\epsilon > 0$, there exists a $T \in \mathbb{N}$ such that $-\log a_t > \frac{1}{\epsilon}$ for all $t > T$. Hence, for all open sets $U$ containing $+\infty$, there exists a $K$ such that $-\log a_m \in U$ if $m > K$. Therefore, $-\log a_t$ converges to $+\infty$. Thus, $\lim_{t\to\infty} (-\log a_t) = +\infty = -\log \lim_{t\to\infty} a_t$. ■
Proposition 2. g-entropy is non-negative and, for inclusive $g$, strictly concave on $\langle\mathbb{B}\rangle$.
Proof: $B(F) \in [0,1]$ for all $F$, so $-\log B(F) \geq 0$, and $-g(\pi) \sum_{F\in\pi} B(F) \log B(F) \geq 0$. Hence, $-\sum_{\pi\in\Pi} g(\pi) \sum_{F\in\pi} B(F) \log B(F) \geq 0$, i.e., g-entropy is non-negative.
Take distinct $B_1, B_2 \in \langle\mathbb{B}\rangle$ and $\lambda \in (0,1)$, and let $B = \lambda B_1 + (1-\lambda) B_2$. Now, $x \log x$ is strictly convex on $[0,1]$, so
$$-B(F) \log B(F) \geq -\lambda B_1(F) \log B_1(F) - (1-\lambda) B_2(F) \log B_2(F)$$
with equality just when $B_1(F) = B_2(F)$.
Consider an inclusive weighting function, $g$.
$$H_g(\lambda B_1 + (1-\lambda) B_2) = -\sum_{\pi\in\Pi} g(\pi) \sum_{F\in\pi} B(F) \log B(F) \geq -\sum_{\pi\in\Pi} g(\pi) \sum_{F\in\pi} \big[ \lambda B_1(F) \log B_1(F) + (1-\lambda) B_2(F) \log B_2(F) \big] = \lambda H_g(B_1) + (1-\lambda) H_g(B_2)$$
with equality iff $B_1(F) = B_2(F)$ for all $F$, since $g$ is inclusive. However, $B_1$ and $B_2$ are distinct, so equality does not obtain. In other words, g-entropy is strictly concave. ■
Corollary 1. For inclusive $g$, if g-entropy is maximised by a function $P$ in convex $\mathbb{E} \subseteq \mathbb{P}$, it is uniquely maximised by $P$ in $\mathbb{E}$.
Corollary 2. For inclusive $g$, g-entropy is uniquely maximised in the closure, $[\mathbb{E}]$, of $\mathbb{E}$.
If $g$ is not inclusive, concavity is not strict. For example, if the standard entropy, $H_\Omega$, is maximised by $B$, then it is also maximised by any belief function $C$ that agrees with $B$ on the states $\omega \in \Omega$.
Note that different g-entropy measures can have different maximisers on a convex subset $\mathbb{E}$ of probability functions. For example, when $\Omega = \{\omega_1, \omega_2, \omega_3, \omega_4\}$ and $\mathbb{E} = \{P \in \mathbb{P} : P(\omega_1) + 2.75\, P(\omega_2) + 7.1\, P(\omega_3) = 1.7,\ P(\omega_4) = 0\}$, then the proposition entropy maximiser, the standard entropy maximiser and the partition entropy maximiser are all different, as can be seen from Figure 1.
Figure 1. Plotted are the partition entropy, the standard entropy and the proposition entropy under the constraints $P(\omega_1) + P(\omega_2) + P(\omega_3) + P(\omega_4) = 1$, $P(\omega_1) + 2.75\, P(\omega_2) + 7.1\, P(\omega_3) = 1.7$, $P(\omega_4) = 0$, as a function of $P(\omega_2)$. The dotted lines indicate the respective maxima, which obtain for different values of $P(\omega_2)$.
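The qualitative claim of Figure 1, that the standard entropy and proposition entropy maximisers occur at different values of $P(\omega_2)$, can be reproduced with a coarse grid search over the constraint set (a numerical sketch; the grid resolution and tolerance are our own choices):

```python
import math
from itertools import combinations

def nlogn(x):
    return -x * math.log(x) if x > 0 else 0.0

def point(x):
    # solve the two linear constraints for P(w1), P(w3), given x = P(w2)
    p3 = (0.7 - 1.75 * x) / 6.1
    return [1 - x - p3, x, p3, 0.0]

def standard_entropy(p):
    return sum(nlogn(q) for q in p)

def proposition_entropy(p):
    # -sum of P(F) log P(F) over all 16 propositions F
    return sum(nlogn(sum(p[i] for i in idx))
               for r in range(5) for idx in combinations(range(4), r))

feasible = [x for x in (i / 5000 for i in range(1, 2000))
            if all(0 <= q <= 1 for q in point(x))]
best_std = max(feasible, key=lambda x: standard_entropy(point(x)))
best_prop = max(feasible, key=lambda x: proposition_entropy(point(x)))
assert abs(best_std - best_prop) > 0.001  # distinct maximisers, as in Figure 1
```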

2.3. Loss

As Ramsey observed, all our lives, we are, in a sense, betting. The strengths of our beliefs guide our actions and expose us to possible losses. If we go to the station when the train happens not to run, we incur a loss: a wasted journey to the station and a delay in getting to where we want to go. Normally, when we are deliberating about how strongly to believe a proposition, we have no realistic idea as to the losses to which that belief will expose us. That is, when determining a belief function $B$, we do not know the true loss function, $L^*$.
Now, a loss function $L$ is standardly defined as a function $L : \Omega \times \mathbb{P} \to (-\infty, \infty]$, where $L(\omega, P)$ is the loss one incurs by adopting probability function $P \in \mathbb{P}$ when $\omega$ is the true state of the world. Note that a standard loss function will only evaluate an agent’s beliefs about the states, not the extent to which she believes other propositions. This is appropriate when belief is assumed to be probabilistic, because a probability function is determined by its values on the states. But we are concerned with justifying the probability norm here and, hence, need to consider the full range of the agent’s beliefs, in order to show that they should satisfy the axioms of probability. Hence, we need to extend the concept of a loss function to evaluate all of the agent’s beliefs:
Definition 4 (Loss function). A loss function is a function $L : \mathcal{P}\Omega \times \langle\mathbb{B}\rangle \to (-\infty, \infty]$.
$L(F, B)$ is the loss incurred by a belief function $B$ when proposition $F$ turns out to be true. We shall interpret this loss as the loss that is attributable to $F$ in isolation from all other propositions, rather than the total loss incurred when proposition $F$ turns out to be true. When $F$ turns out to be true, so does any proposition $G$ with $F \subseteq G$. Thus, the total loss when $F$ turns out to be true includes $L(G, B)$, as well as $L(F, B)$. The total loss on $F$ turning out to be true might therefore be represented by $\sum_{G \supseteq F} L(G, B)$, with $L(F, B)$ being the loss distinctive to $F$, i.e., the loss on $F$ turning out to be true over and above the loss incurred by the propositions $G \supsetneq F$.
Is there anything that one can presume about a loss function in the absence of any information about the true loss function, $L^*$? Plausibly:
L1. $L(F, B) = 0$ if $B(F) = 1$.
L2. $L(F, B)$ strictly increases as $B(F)$ decreases from one towards zero.
L3. $L(F, B)$ depends only on $B(F)$.
To express the next condition, we need some notation. Suppose $\mathcal{L} = \mathcal{L}_1 \cup \mathcal{L}_2$: say that $\mathcal{L} = \{A_1, \ldots, A_n\}$, $\mathcal{L}_1 = \{A_1, \ldots, A_m\}$, $\mathcal{L}_2 = \{A_{m+1}, \ldots, A_n\}$ for some $1 < m < n$. Then, $\omega \in \Omega$ takes the form $\omega_1 \wedge \omega_2$, where $\omega_1 \in \Omega_1$ is a state of $\mathcal{L}_1$, and $\omega_2 \in \Omega_2$ is a state of $\mathcal{L}_2$. Given propositions, $F_1 \subseteq \Omega_1$ and $F_2 \subseteq \Omega_2$, we can define $F_1 \times F_2 := \{\omega = \omega_1 \wedge \omega_2 : \omega_1 \in F_1, \omega_2 \in F_2\}$, a proposition of $\mathcal{L}$. Given a fixed belief function, $B$, such that $B(\Omega) = 1$, $\mathcal{L}_1$ and $\mathcal{L}_2$ are independent sublanguages, written $\mathcal{L}_1 \perp\!\!\!\perp_B \mathcal{L}_2$, if $B(F_1 \times F_2) = B(F_1) \cdot B(F_2)$ for all $F_1 \subseteq \Omega_1$ and $F_2 \subseteq \Omega_2$, where $B(F_1) := B(F_1 \times \Omega_2)$ and $B(F_2) := B(\Omega_1 \times F_2)$. The restriction, $B_{\mathcal{L}_1}$, of $B$ to $\mathcal{L}_1$ is a belief function on $\mathcal{L}_1$ defined by $B_{\mathcal{L}_1}(F_1) = B(F_1) = B(F_1 \times \Omega_2)$, and similarly for $\mathcal{L}_2$.
L4. Losses are additive when the language is composed of independent sublanguages: if $\mathcal{L} = \mathcal{L}_1 \cup \mathcal{L}_2$ with $\mathcal{L}_1 \perp\!\!\!\perp_B \mathcal{L}_2$, then $L(F_1 \times F_2, B) = L_1(F_1, B_{\mathcal{L}_1}) + L_2(F_2, B_{\mathcal{L}_2})$, where $L_1, L_2$ are loss functions defined on $\mathcal{L}_1, \mathcal{L}_2$, respectively.
L1 says that one should presume that fully believing a true proposition will not incur loss. L2 says that one should presume that the less one believes a true proposition, the more loss will result. L3 expresses the interpretation of $L(F, B)$ as the loss attributable to $F$ in isolation from all other propositions. This condition, which is sometimes called locality, rules out that $L(F, B)$ depends on $B(F')$ for $F' \neq F$; it also rules out a dependence on $|F|$, for instance. L4 expresses the intuition that, at least if one supposes two propositions to be unrelated, one should presume that the loss on both turning out to be true is the sum of the losses on each. (These four conditions correspond to conditions L1–4 of pp. 64–65 in [1], which were put forward in the special case of loss functions defined over probability functions, as opposed to belief functions.)
The four conditions taken together tightly constrain the form of a presumed loss function, L:
Theorem 1. If loss functions are assumed to satisfy L1–4, then $L(F, B) = -k \log B(F)$ for some constant, $k > 0$, that does not depend on $\mathcal{L}$.
Proof: We shall first focus on a loss function, $L$, defined with respect to a language, $\mathcal{L}$, that contains at least two propositional variables.
L3 implies that $L(F, B) = f_\mathcal{L}(B(F))$, for some function, $f_\mathcal{L} : [0,1] \to (-\infty, \infty]$.
For our fixed $\mathcal{L}$ and each $x, y \in [0,1]$, choose some particular $B \in \langle\mathbb{B}\rangle$, $\mathcal{L}_1$, $\mathcal{L}_2$, $F_1 \subseteq \Omega_1$, $F_2 \subseteq \Omega_2$, such that $\mathcal{L} = \mathcal{L}_1 \cup \mathcal{L}_2$, where $\mathcal{L}_1 \perp\!\!\!\perp_B \mathcal{L}_2$, $B(F_1) = x$ and $B(F_2) = y$. This is possible, because $\mathcal{L}$ has at least two propositional variables. Note in particular that since $\mathcal{L}_1$ and $\mathcal{L}_2$ are independent sublanguages, we have $B(\Omega) = 1$.
Note that
$$1 = B(\Omega) = B(\Omega_1 \times \Omega_2) = B_{\mathcal{L}_1}(\Omega_1)$$
and, similarly, $B_{\mathcal{L}_2}(\Omega_2) = 1$. By L1, then, $L_1(\Omega_1, B_{\mathcal{L}_1}) = L_2(\Omega_2, B_{\mathcal{L}_2}) = 0$.
Therefore, by applying L4 twice:
$$f_\mathcal{L}(xy) = f_\mathcal{L}(B(F_1) \cdot B(F_2)) = L(F_1 \times F_2, B) = L_1(F_1, B_{\mathcal{L}_1}) + L_2(F_2, B_{\mathcal{L}_2}) = [L(F_1 \times \Omega_2, B) - L_2(\Omega_2, B_{\mathcal{L}_2})] + [L(\Omega_1 \times F_2, B) - L_1(\Omega_1, B_{\mathcal{L}_1})] = L(F_1 \times \Omega_2, B) + L(\Omega_1 \times F_2, B) = f_\mathcal{L}(x) + f_\mathcal{L}(y)$$
The negative logarithm on $(0,1]$ is characterisable up to a multiplicative constant, $k_\mathcal{L}$, in terms of this additivity, together with the condition that $f_\mathcal{L}(x) \geq 0$, which is implied by L1–2 (see, e.g., Theorem 0.2.5 in [7]). L2 ensures that $f_\mathcal{L}$ is not zero everywhere, so $k_\mathcal{L} > 0$.
We thus know that $f_\mathcal{L}(x) = -k_\mathcal{L} \log x$ for $x \in (0,1]$. Now, note that for all $y \in (0,1]$, it needs to be the case that $f_\mathcal{L}(0) = f_\mathcal{L}(0 \cdot y) = f_\mathcal{L}(0) + f_\mathcal{L}(y)$, if $f_\mathcal{L}$ is to satisfy $f_\mathcal{L}(x \cdot y) = f_\mathcal{L}(x) + f_\mathcal{L}(y)$ for all $x, y \in [0,1]$. Since $f_\mathcal{L}$ takes values in $(-\infty, +\infty]$, it follows that $f_\mathcal{L}(0) = +\infty$.
Thus far, we have shown that for a fixed language, $\mathcal{L}$, with at least two propositional variables, $L(F, B) = -k_\mathcal{L} \log B(F)$.
Now consider an arbitrary language, $\mathcal{L}_1$, and a loss function $L_1$ on $\mathcal{L}_1$ which satisfies L1–4. There exists some other language, $\mathcal{L}_2$, and a belief function $B$ on $\mathcal{L} = \mathcal{L}_1 \cup \mathcal{L}_2$ such that $\mathcal{L}_1 \perp\!\!\!\perp_B \mathcal{L}_2$. By the above, for the loss function $L$ on $\mathcal{L}$, it holds that $L(F, B) = -k_\mathcal{L} \log B(F)$. By reasoning analogous to that above:
$$L_1(F_1, B_{\mathcal{L}_1}) = L(F_1 \times \Omega_2, B) = f_\mathcal{L}(B(F_1 \times \Omega_2)) = f_\mathcal{L}(B_{\mathcal{L}_1}(F_1))$$
Therefore, the loss function for $\mathcal{L}_1$ is $L_1(F_1, B_{\mathcal{L}_1}) = -k_\mathcal{L} \log B_{\mathcal{L}_1}(F_1)$. Thus, the constant $k_\mathcal{L}$ does not depend on the particular language $\mathcal{L}$ after all.
In general, then, $L(F, B) = -k \log B(F)$ for some positive $k$. ■
Since multiplication by a constant is equivalent to a change of base, we can take log to be the natural logarithm. Since we will be interested in the belief functions that minimise loss, rather than in the absolute value of any particular losses, we can take k = 1 without loss of generality. Theorem 1 thus allows us to focus on the logarithmic loss function:
$$L_{\log}(F, B) := -\log B(F)$$
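As a concrete illustration, the logarithmic loss can be computed directly; the following sketch (the function name is ours, not the paper's) also exhibits properties L1–2 and the convention f_L(0) = +∞:

```python
import math

def log_loss(belief_in_F):
    """Loss incurred when proposition F obtains, given degree of belief belief_in_F in F."""
    if belief_in_F == 0:
        return math.inf          # f(0) = +infinity, as derived above
    return -math.log(belief_in_F)

print(log_loss(1.0) == 0)        # True: full belief in a truth costs nothing (L1)
print(round(log_loss(0.5), 3))   # 0.693: loss strictly increases as belief shrinks (L2)
```

Since only relative losses matter for minimisation, the constant k is fixed at 1 here, matching the convention adopted above.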

2.4. Score

In this paper, we are concerned with showing that the norms of objective Bayesianism must hold if an agent is to control her worst-case expected loss. Now, an expected loss function or scoring rule is standardly defined as S_Ω^L : ℙ × ℙ → [−∞, ∞] such that S_Ω^L(P, Q) = ∑_{ω∈Ω} P(ω) L_Ω(ω, Q). This is interpretable as the expected loss incurred by adopting probability function Q as one's belief function, when the probabilities are actually determined by P. (This is the statistical notion of a scoring rule as defined in [8]. More recently, a different, ‘epistemic’ notion of a scoring rule has been considered in the literature on non-pragmatic justifications of Bayesian norms; see, e.g., [9,10] and, also, a forthcoming paper by Landes, where similarities and differences of these two notions of a scoring rule are discussed. One difference that is significant to our purposes is that Predd et al.'s result in [11]—that for every epistemic scoring rule that is continuous and strictly proper, the set of non-dominated belief functions is the set ℙ of probability functions—does not apply to statistical scoring rules. Furthermore, Predd et al. are only interested in justifying the probability norm by appealing to dominance as a decision-theoretic norm. We are concerned with justifying three norms at once using worst-case loss avoidance as a desideratum. The epistemic approach is considered further in Section 5.4.)
While this standard definition of scoring rule is entirely appropriate when belief is assumed to be probabilistic, we make no such assumption here and need to consider scoring rules that evaluate all the agent’s beliefs, not just those concerning the states. In line with our discussion of entropy in Section 2.2, we shall consider the following generalisation:
Definition 5 (g-score). Given a loss function, L, and an inclusive weighting function, g : Π → ℝ≥0, the g-expected loss function or g-scoring rule or, simply, g-score is S_g^L : ℙ × 𝔹 → [−∞, ∞], such that
$$S_g^L(P, B) = \sum_{\pi\in\Pi} g(\pi) \sum_{F\in\pi} P(F)\, L(F, B)$$
Clearly, S_Ω^L corresponds to S_{g_Ω}^L, where g_Ω, which is not inclusive, is defined as in Section 2.2. We require that g be inclusive in Definition 5, since only in that case does the g-score genuinely evaluate all the agent's beliefs. We will focus on S_g^log(P*, B), i.e., the case in which the loss function is logarithmic and the expectation is taken with respect to the chance function, P*, in order to show that an agent should satisfy the norms of objective Bayesianism if she is to control her worst-case g-expected logarithmic loss when her evidence determines that the chance function, P*, is in E.
For example, with the logarithmic loss function, the partition Π-score is defined by setting g = g_Π:
$$S_{\Pi}^{\log}(P, B) = -\sum_{\pi\in\Pi}\sum_{F\in\pi} P(F)\log B(F)$$
Similarly, the proposition PΩ-score is defined by setting g = g_{PΩ}:
$$S_{P\Omega}^{\log}(P, B) = -\sum_{F\subseteq\Omega} P(F)\log B(F)$$
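To make the proposition score concrete, here is a small worked computation on a two-state space (the data structures and names are our own illustration, not the paper's):

```python
import math

# Proposition score on Omega = {0, 1}: -sum over F of P(F) log B(F),
# with the convention that terms with P(F) = 0 contribute nothing.
Omega = (0, 1)
P = {frozenset(): 0.0, frozenset({0}): 0.7, frozenset({1}): 0.3, frozenset(Omega): 1.0}
B = {frozenset(): 0.0, frozenset({0}): 0.5, frozenset({1}): 0.5, frozenset(Omega): 1.0}

def term(p, b):
    if p == 0:
        return 0.0                               # 0 * log(...) = 0 by convention
    return math.inf if b == 0 else -p * math.log(b)

prop_score = sum(term(P[F], B[F]) for F in P)
print(round(prop_score, 4))   # 0.6931: only the two singleton propositions contribute
```

The tautology Ω and the contradiction ∅ contribute zero here, since B(Ω) = 1 and P(∅) = 0.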
It turns out that the various logarithmic scoring rules have the following useful property:
Definition 6 (Strictly proper g-score). A scoring rule, S_g^L : ℙ × 𝔹 → [−∞, ∞], is strictly proper if, for all P ∈ ℙ, the function S_g^L(P, ·) : 𝔹 → [−∞, ∞] has a unique global minimum at B = P.
Definition 6 can be generalised: a scoring rule is strictly X-proper if it is strictly proper for belief functions taken to be from a set X. In Definition 6, X = 𝔹. The logarithmic scoring rule in the standard sense, i.e., ∑_{ω∈Ω} P(ω) L(ω, Q), is well known to be the only strictly ℙ-proper local scoring rule—see McCarthy [12] (p. 654), who credits Andrew Gleason for the uniqueness result; Shuford et al. [13] (p. 136) for the case of continuous scoring rules; Aczél and Pfanzagl [14] (Theorem 3, p. 101) for the case of differentiable scoring rules; and Savage [15] (§9.4). The logarithmic score in our sense, i.e., ∑_{F⊆Ω} P(F) L(F, B), is not strictly Y-proper when Y is the set of non-normalised belief functions: S(P, bel) is a global minimum, where bel is the belief function such that bel(F) = 1 for all F. (While Joyce [9] (p. 276) suggests that a logarithmic score is strictly Y-proper for Y a set of non-normalised belief functions, he is referring to a logarithmic scoring rule that is different from the usual one considered above and that does not satisfy the locality condition, L3.)
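Strict propriety can be checked numerically in the simplest case. Under the assumption that g puts weight one on the partition of states of a two-state space, the expected score −p log q − (1 − p) log(1 − q) should be uniquely minimised at q = p; the sketch below (our own) confirms this on a grid:

```python
import math

p = 0.7   # the "true" chance of the first state

def expected_score(q):
    # Expected log loss when the chances are (p, 1-p) but the belief is (q, 1-q)
    return -p * math.log(q) - (1 - p) * math.log(1 - q)

qs = [i / 1000 for i in range(1, 1000)]
best_q = min(qs, key=expected_score)
print(best_q)   # 0.7: the expected score is uniquely minimised at q = p
```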
On the way to showing that logarithmic g-scores are strictly proper, it will be useful to consider the following natural generalisation of Kullback-Leibler divergence to our framework:
Definition 7 (g-divergence). For a weighting function, g : Π → ℝ≥0, the g-divergence is the function, d_g : ℙ × 𝔹 → [−∞, ∞], defined by:
$$d_g(P, B) = \sum_{\pi\in\Pi} g(\pi) \sum_{F\in\pi} P(F)\log\frac{P(F)}{B(F)}$$
Here, we adopt the usual convention that 0 log(0/0) = 0 and x log(x/0) = +∞ for x ∈ (0, 1].
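When g concentrates all its weight on a single partition, d_g reduces to the ordinary Kullback-Leibler divergence; a brief numerical sketch (our own, with illustrative names):

```python
import math

def kl_term(p, b):
    if p == 0:
        return 0.0                    # convention: 0 * log(0/0) = 0
    return math.inf if b == 0 else p * math.log(p / b)

def d_g(P, B):
    # g-divergence for one partition with weight 1, i.e. KL divergence
    return sum(kl_term(p, b) for p, b in zip(P, B))

print(d_g((0.7, 0.3), (0.5, 0.5)))   # positive: B differs from P
print(d_g((0.7, 0.3), (0.7, 0.3)))   # 0.0: the divergence vanishes when B = P
```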
We shall see that d g ( P , B ) is a sensible notion of the divergence of P from B by appealing to the following useful inequality (see, e.g., Theorem 2.7.1 in [16]):
Lemma 3 (Log sum inequality). For x_i, y_i ∈ ℝ≥0, i = 1, …, n,
$$\Big(\sum_{i=1}^n x_i\Big)\log\frac{\sum_{i=1}^n x_i}{\sum_{i=1}^n y_i} \;\leq\; \sum_{i=1}^n x_i\log\frac{x_i}{y_i}$$
with equality iff x_i = c y_i for some constant, c, and all i = 1, …, n.
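A quick empirical spot check of the inequality on random non-negative vectors (our own sketch, not part of the proof):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    x = [random.uniform(0.01, 1.0) for _ in range(4)]
    y = [random.uniform(0.01, 1.0) for _ in range(4)]
    lhs = sum(x) * math.log(sum(x) / sum(y))
    rhs = sum(xi * math.log(xi / yi) for xi, yi in zip(x, y))
    assert lhs <= rhs + 1e-9          # the log sum inequality
print("log sum inequality verified on 1000 random samples")
```

Equality obtains exactly when x is proportional to y; for example, x = (2, 4) and y = (1, 2) make both sides equal to 6 log 2.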
Proposition 3. The following are equivalent:
  • d_g(P, B) ≥ 0, with equality iff B = P.
  • g is inclusive.
Proof: First, we shall see that if g is inclusive, then d_g(P, B) ≥ 0 with equality iff B = P.
$$d_g(P, B) = \sum_{\pi\in\Pi} g(\pi)\sum_{F\in\pi} P(F)\log\frac{P(F)}{B(F)} \;\geq\; \sum_{\pi\in\Pi} g(\pi)\Big(\sum_{F\in\pi} P(F)\Big)\log\frac{\sum_{F\in\pi} P(F)}{\sum_{F\in\pi} B(F)} \;\geq\; \sum_{\pi\in\Pi} g(\pi)\cdot 1\cdot\log\frac{1}{1} = 0$$
where the first inequality is an application of the log sum inequality and the second inequality is a consequence of B being in 𝔹. There is equality at the first inequality iff, for each π such that g(π) > 0, there is a constant, c_π, with P(F) = c_π B(F) for all F ∈ π; there is equality at the second inequality iff ∑_{F∈π} B(F) = 1 for all π such that g(π) > 0.
Clearly, if B(F) = P(F) for all F, then these two equalities obtain. Conversely, suppose the two equalities obtain. Then, for each F, there is some π = {F = F_1, F_2, …, F_k} such that g(π) > 0, because g is inclusive. The first equality condition implies that P(F_i) = c_π B(F_i) for i = 1, …, k. The second equality implies that ∑_{i=1}^{k} B(F_i) = 1. Hence, 1 = ∑_{i=1}^{k} P(F_i) = c_π ∑_{i=1}^{k} B(F_i) = c_π, and so, P(F_i) = B(F_i) for i = 1, …, k. In particular, B(F) = P(F).
Next, we shall see that the condition that g is inclusive is essential.
If g were not inclusive, then there would be some F ⊆ Ω such that g(π) = 0 for all π ∈ Π with F ∈ π. There are two cases.
(i)
F ∉ {∅, Ω}. Take some P ∈ ℙ such that P(F) > 0. Now, define B(F) := 0 and B(F′) := P(F′) for all other F′ ⊆ Ω. Then, B(Ω) = 1 and ∑_{G∈π} B(G) ≤ 1 for all other π ∈ Π, so B ∈ 𝔹 and B ≠ P. Furthermore, d_g(P, P) = d_g(P, B) = 0.
(ii)
F = ∅ or F = Ω. Define B(∅) := B(Ω) := 0.5 and B(F′) := P(F′) for all F′ ∈ PΩ ∖ {∅, Ω}. Then, B(∅) + B(Ω) = 1 and ∑_{G∈π} B(G) ≤ 1 for all other π ∈ Π, so B ∈ 𝔹 and B ≠ P. Furthermore, d_g(P, P) = d_g(P, B) = 0.
In either case, then, d_g(P, ·) is not uniquely minimised at B = P.  ■
Corollary 3. The logarithmic g-score is strictly proper.
Proof: Recall that in the context of a g-score, g is inclusive.
$$S_g^{\log}(P, B) - S_g^{\log}(P, P) = -\sum_{\pi\in\Pi} g(\pi)\sum_{F\in\pi} P(F)\log\frac{B(F)}{P(F)} = \sum_{\pi\in\Pi} g(\pi)\sum_{F\in\pi} P(F)\log\frac{P(F)}{B(F)} = d_g(P, B)$$
Proposition 3 then implies that S_g^log(P, B) − S_g^log(P, P) ≥ 0, with equality iff B = P, i.e., S_g^log is strictly proper.  ■
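The identity established in this proof, S_g^log(P, B) − S_g^log(P, P) = d_g(P, B), can be verified numerically for a single partition with weight one (an illustration with names of our choosing):

```python
import math

P = (0.6, 0.4)
B = (0.2, 0.8)

def score(P, B):
    # expected log loss over one partition with weight 1
    return sum(-p * math.log(b) for p, b in zip(P, B))

def divergence(P, B):
    # KL divergence: the g-divergence for this g
    return sum(p * math.log(p / b) for p, b in zip(P, B))

gap = score(P, B) - score(P, P)
print(abs(gap - divergence(P, B)) < 1e-12)   # True: the gap is exactly d_g(P, B)
```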
Finally, logarithmic g-scores are non-negative strictly convex functions in the following qualified sense:
Proposition 4. The logarithmic g-score, S_g^log(P, B), is non-negative and convex as a function of B ∈ 𝔹. Convexity is strict, i.e., S_g^log(P, λB_1 + (1 − λ)B_2) < λ S_g^log(P, B_1) + (1 − λ) S_g^log(P, B_2) for λ ∈ (0, 1), unless B_1 and B_2 agree everywhere, except where P(F) = 0.
Proof: The logarithmic g-score is non-negative because B(F), P(F) ∈ [0, 1] for all F; so, −log B(F) ≥ 0, −P(F) log B(F) ≥ 0, and g(π) ≥ 0.
That S_g^log(P, B) is strictly convex as a function of B follows from the strict concavity of log x. Take distinct B_1, B_2 ∈ 𝔹 and λ ∈ (0, 1), and let B = λB_1 + (1 − λ)B_2. Now:
$$-P(F)\log B(F) = -P(F)\log\big(\lambda B_1(F) + (1-\lambda)B_2(F)\big) \leq -P(F)\big(\lambda\log B_1(F) + (1-\lambda)\log B_2(F)\big) = -\lambda P(F)\log B_1(F) - (1-\lambda)P(F)\log B_2(F)$$
with equality iff either P ( F ) = 0 or B 1 ( F ) = B 2 ( F ) .
Hence:
$$S_g^{\log}(P, B) = -\sum_{\pi\in\Pi} g(\pi)\sum_{F\in\pi} P(F)\log B(F) \leq \lambda\, S_g^{\log}(P, B_1) + (1-\lambda)\, S_g^{\log}(P, B_2)$$
with equality iff B 1 and B 2 agree everywhere, except possibly where P ( F ) = 0 .  ■
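The strict convexity claim of Proposition 4 is easy to probe numerically in the single-partition case (our own sketch, with illustrative values):

```python
import math

P = (0.5, 0.5)
B1 = (0.9, 0.1)
B2 = (0.3, 0.7)
lam = 0.4
mix = tuple(lam * a + (1 - lam) * b for a, b in zip(B1, B2))

def score(B):
    # expected log loss against P over one partition with weight 1
    return sum(-p * math.log(b) for p, b in zip(P, B))

# Strict convexity: the score of the mixture lies strictly below the mixture of scores,
# since B1 and B2 differ on propositions with positive probability.
print(score(mix) < lam * score(B1) + (1 - lam) * score(B2))   # True
```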

2.5. Minimising the Worst-Case Logarithmic g-Score

In this section, we shall show that the g-entropy maximiser minimises the worst-case logarithmic g-score.
In order to prove our main result (Theorem 2), we would like to apply a game-theoretic minimax theorem, which will allow us to conclude that:
$$\inf_{B\in\mathbb{B}}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B) = \sup_{P\in\mathbb{E}}\inf_{B\in\mathbb{B}} S_g^{\log}(P, B)$$
Note that the expression on the left-hand side describes the minimal worst-case g-score, where the worst case refers to P ranging over E. In game-theoretic terms: the player moving first on the left-hand side aims to find the belief function(s) that minimise worst-case g-expected loss; again, the worst case is taken with respect to varying P.
For this approach to work, we would normally need 𝔹 to be some set of mixed strategies, and it is not obvious how 𝔹 could be represented as a mixture of finitely many pure strategies. However, there exists a broad literature on minimax theorems [17], and we shall apply a theorem proven in König [18]. This theorem requires that certain level sets, in the set of functions from which the minimising player may choose, are connected. To apply König's result, we will thus allow the belief functions, B, to range over a convex superset of 𝔹, which has this property. It will follow that the belief functions in this superset that lie outside 𝔹 are never good choices for the minimising player playing first: the best choice is in E, which is a subset of 𝔹.
Having established that the inf and the sup commute, the rest is straightforward. Since the scoring rule we employ, S_g^log, is strictly proper, we have that the best strategy for the minimising player, answering a move by the maximising player, is to select the same function as the maximising player. Thus, it is best for the maximising player playing first to choose a (or the) function that maximises S_g^log(P, P). We will thus find that:
$$\sup_{P\in\mathbb{E}}\inf_{B\in\mathbb{B}} S_g^{\log}(P, B) = \sup_{P\in\mathbb{E}}\inf_{B\in\{P\}} S_g^{\log}(P, B) = \sup_{P\in\mathbb{E}} S_g^{\log}(P, P) = \sup_{P\in\mathbb{E}} H_g(P)$$
Thus, worst-case g-expected loss and g-entropy have the same value. In game-theoretic terms: we find that our zero-sum g-log-loss game has a value. It remains to be shown that both players, when playing first, have a unique best choice, P†.
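Before turning to the formal argument, the informal reasoning above can be illustrated numerically. In the following toy computation (entirely our own, with g putting weight one on the partition of states of a two-state space), the g-entropy maximiser over a convex E and the minimiser of worst-case expected log loss coincide, as Theorem 2 below asserts:

```python
import math

def score(P, B):
    # expected log loss over the partition of states, weight 1
    return sum(-p * math.log(b) for p, b in zip(P, B))

# E = {P : P(w1) >= 0.6}, discretised; candidate belief functions, also discretised
E = [(0.6 + i / 1000, 0.4 - i / 1000) for i in range(400)]
Bs = [(q / 1000, 1 - q / 1000) for q in range(1, 1000)]

max_ent = max(E, key=lambda P: score(P, P))                    # maximises H_g(P) = S(P, P)
min_worst = min(Bs, key=lambda B: max(score(P, B) for P in E)) # minimises worst-case loss
print(max_ent, min_worst)   # (0.6, 0.4) (0.6, 0.4): maxent = minimax, up to the grid
```

Here the entropy maximiser on E is (0.6, 0.4), and adopting any other belief function strictly increases the worst-case expected loss.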
First, then, we shall apply König’s result.
Definition 8 (König [18], p. 56). For F : X × Y → [−∞, ∞], we call I ⊆ ℝ a border interval of F if and only if I is an interval of the form I = (sup_{x∈X} inf_{y∈Y} F(x, y), +∞). Λ ⊆ ℝ is called a border set of F if and only if inf Λ = sup_{x∈X} inf_{y∈Y} F(x, y).
For λ ∈ ℝ and K ⊆ Y, define s_λ and σ_λ to consist of X and of subsets of X of the form:
$$\bigcap_{y\in K}\,[F(\cdot, y) > \lambda] \qquad\text{respectively}\qquad \bigcap_{y\in K}\,[F(\cdot, y) \geq \lambda]$$
For λ ∈ ℝ and finite H ⊆ X, define t_λ and τ_λ to consist of subsets of Y of the form:
$$\bigcap_{x\in H}\,[F(x, \cdot) < \lambda] \qquad\text{respectively}\qquad \bigcap_{x\in H}\,[F(x, \cdot) \leq \lambda]$$
The following may be found in König [18] (Theorem 1.3, p. 57):
Lemma 4 (König's Minimax). Let X, Y be topological spaces, with Y compact and Hausdorff, and let F : X × Y → [−∞, ∞] be lower semicontinuous. Then, if Λ is some border set, I some border interval of F, and if at least one of the following conditions holds:
  • for all λ ∈ Λ, all members of s_λ and τ_λ are connected;
  • for all λ ∈ Λ, all members of s_λ are connected, and for all λ ∈ I, all members of t_λ are connected;
  • for all λ ∈ Λ, all members of σ_λ and t_λ are connected;
  • for all λ ∈ Λ, all members of σ_λ are connected, and for all λ ∈ I, all members of τ_λ are connected;
then:
$$\inf_{y\in Y}\sup_{x\in X} F(x, y) = \sup_{x\in X}\inf_{y\in Y} F(x, y)$$
Lemma 5. S_g^log : E × 𝔹 → [0, ∞] is lower semicontinuous.
Proof: It suffices to show that {(P, B) ∈ E × 𝔹 | S_g^log(P, B) ≤ r} is closed for all r ∈ ℝ. For r ∈ ℝ, consider a sequence (P_t, B_t)_{t∈ℕ} with lim_{t→∞}(P_t, B_t) = (P, B), such that S_g^log(P_t, B_t) ≤ r for all t. Then:
$$S_g^{\log}(P, B) = -\sum_{\pi\in\Pi} g(\pi)\sum_{F\in\pi} P(F)\log B(F) = -\sum_{\pi\in\Pi}\ \sum_{\substack{F\in\pi\\ g(\pi)P(F)>0}} g(\pi)P(F)\log B(F).$$
If g(π)P(F) > 0 and B_t(F) converges to zero, then there is a T ∈ ℕ such that for all t ≥ T, −g(π)P_t(F) log B_t(F) > r + 1. Thus, B_t(F) cannot converge to zero if g(π)P(F) > 0. Since (B_t(F)) converges, it has to converge to some B(F) > 0. Thus, when g(π)P(F) > 0, we have that −g(π)P(F) log B(F) = lim_{t→∞} −g(π)P_t(F) log B_t(F) ≤ r. From S_g^log(P_t, B_t) ≤ r, we conclude that:
$$-\sum_{\pi\in\Pi}\ \sum_{\substack{F\in\pi\\ g(\pi)P(F)>0}} g(\pi)P(F)\log B(F) = \lim_{t\to\infty}\, -\sum_{\pi\in\Pi}\ \sum_{\substack{F\in\pi\\ g(\pi)P(F)>0}} g(\pi)P_t(F)\log B_t(F) \leq r$$
 ■
Proposition 5. For all E:
$$\inf_{B\in\mathbb{B}}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B) = \sup_{P\in\mathbb{E}}\inf_{B\in\mathbb{B}} S_g^{\log}(P, B)$$
Proof: It suffices to verify that the conditions of Lemma 4 are satisfied.
E and 𝔹 are subsets of ℝ^{|Ω|} and ℝ^{|PΩ|}, respectively, and are thus naturally equipped with the induced topology. 𝔹 is compact and Hausdorff (see Lemma 1). S_g^log : E × 𝔹 → [0, ∞] is lower semicontinuous (see Lemma 5).
We need to show that one of the connectivity conditions holds. In fact, they all hold, as we shall see.
Note that E and 𝔹 are connected, since they are convex.
For the s_λ and σ_λ, consider any B ∈ 𝔹 and suppose that P, P′ ∈ E are such that S_g^log(P, B) > λ and S_g^log(P′, B) > λ. Then, for η ∈ (0, 1), we have:
$$S_g^{\log}\big(\eta P + (1-\eta)P', B\big) = -\sum_{\pi\in\Pi} g(\pi)\sum_{F\in\pi}\big(\eta P + (1-\eta)P'\big)(F)\log B(F) = \eta\, S_g^{\log}(P, B) + (1-\eta)\, S_g^{\log}(P', B) > \lambda$$
Thus,
$$\{P \in \mathbb{E} \mid S_g^{\log}(P, B) > \lambda\}$$
is convex for all B ∈ 𝔹.
Thus, every intersection of such sets is convex. Hence, these intersections are connected. (If any such intersection is empty, then it is trivially connected.)
For the t_λ and τ_λ, note that for every P ∈ ℙ, we have that
$$\{B \in \mathbb{B} \mid S_g^{\log}(P, B) < \lambda\}$$
is convex, which follows from Proposition 4 by noting that for a convex function (here, S_g^log(P, ·)) on a convex set (here, 𝔹), the set of elements in the domain that are mapped to a number (strictly) less than λ is convex for all λ ∈ ℝ.
Thus, every intersection of such sets is convex. Hence, these intersections are connected.  ■
The suprema and infima referred to in Proposition 5 may not be achieved at points of E . If not, they will be achieved instead at points in the closure, [ E ] , of E . We shall use arg sup P E (and arg inf P E ) to refer to the points in [ E ] that achieve the supremum (respectively, infimum), whether or not these points are in E .
Theorem 2. As usual, E is taken to be convex and g inclusive. We have that:
$$\arg\sup_{P\in\mathbb{E}} H_g(P) = \arg\inf_{B\in\mathbb{B}}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B)$$
Proof: We shall prove the following slightly stronger equality, allowing B to range over the convex superset of 𝔹 discussed above, instead of 𝔹 itself:
$$\arg\sup_{P\in\mathbb{E}} H_g(P) = \arg\inf_{B}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B)$$
The theorem then follows from the following fact. The right-hand side of Equation (34) is an optimisation problem whose optimum (here, we look for the infimum of sup_{P∈E} S_g^log(P, ·)) uniquely obtains at a certain value (here, P†). Restricting the domain of the variables in the optimisation problem (here, from the superset to 𝔹) to a subdomain that contains the optimum, P† ∈ [E] ⊆ 𝔹, changes neither where the optimum obtains nor the value of the optimum.
Note that:
$$\sup_{P\in\mathbb{E}} H_g(P) = \sup_{P\in\mathbb{E}} S_g^{\log}(P, P) = \sup_{P\in\mathbb{E}}\inf_{B} S_g^{\log}(P, B) = \inf_{B}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B)$$
The first equality is simply the definition of H_g. The second equality follows directly from strict propriety (Corollary 3). To obtain the third equality, we apply Proposition 5.
It remains to show that we can introduce arg on both sides of Equation (33).
The following sort of argument seems to be folklore in game theory; we here adapt (Lemma 4.1 on p. 1384 in [3]) for our purposes. We have:
$$P^{\dagger} := \arg\sup_{P\in\mathbb{E}} S_g^{\log}(P, P)$$
$$= \arg\sup_{P\in\mathbb{E}}\inf_{B} S_g^{\log}(P, B)$$
The arg sup in Equation (36) is unique (Corollary 2). Equation (37) follows from strict propriety of S_g^log (Corollary 3). Now let:
$$B' \in \arg\inf_{B}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B)$$
Then:
$$S_g^{\log}(P^{\dagger}, P^{\dagger}) = \sup_{P\in\mathbb{E}}\inf_{B} S_g^{\log}(P, B) = \inf_{B} S_g^{\log}(P^{\dagger}, B) \leq S_g^{\log}(P^{\dagger}, B') \leq \sup_{P\in\mathbb{E}} S_g^{\log}(P, B')$$
$$= \inf_{B}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B)$$
The first equality follows from the definition of P†; see Equations (36) and (37). That we may drop the sup again follows from the definition of P†, since P† maximises inf_B S_g^log(·, B). The inequalities hold, since dropping a minimisation and introducing a maximisation can only lead to an increase. The final equality is immediate from the definition of B′ as minimising sup_{P∈E} S_g^log(P, ·).
By Proposition 5, all inequalities above are, in fact, equalities. From S_g^log(P†, P†) = S_g^log(P†, B′) and strict propriety, we may now infer that B′ = P†.  ■
In sum, then, if an agent is to minimise her worst-case g-score, then her belief function needs to be the probability function in E that maximises g-entropy, as long as this entropy maximiser is in E . That the belief function is to be a probability function is the content of the probability norm; that it is to be in E is the content of the calibration norm; that it is to maximise g-entropy is related to the equivocation norm. We shall defer a full discussion of the equivocation norm to Section 4. In the next section, we shall show that the arguments of this section generalise to belief as defined over sentences rather than propositions. This will imply that logically equivalent sentences should be believed to the same extent—an important component of the probability norm in the sentential framework.
We shall conclude this section by providing a slight generalisation of the previous result. Note that, thus far, when considering the worst-case g-score, this worst case is with respect to a chance function taken to be in E = ⟨P*⟩, the convex hull of P*. However, the evidence determines something more precise, namely that the chance function is in P*, which is not assumed to be convex. The following result indicates that our main argument will carry over to this more precise setting.
Theorem 3. Suppose P* ⊆ ℙ is such that the unique g-entropy maximiser, P†, for [E] = [⟨P*⟩], is in [P*]. Then:
$$P^{\dagger} = \arg\sup_{P\in\mathbb{E}} H_g(P) = \arg\inf_{B\in\mathbb{B}}\sup_{P\in P^*} S_g^{\log}(P, B)$$
Proof: As in the previous proof, we shall prove a slightly stronger equality, with B ranging over the convex superset of 𝔹:
$$P^{\dagger} = \arg\sup_{P\in\mathbb{E}} H_g(P) = \arg\inf_{B}\sup_{P\in P^*} S_g^{\log}(P, B)$$
The result follows for the same reasons given in the proof of Theorem 2.
From the strict propriety of S_g^log, we have:
$$S_g^{\log}(P^{\dagger}, P^{\dagger}) = \inf_{B\in\mathbb{B}} S_g^{\log}(P^{\dagger}, B) \leq \inf_{B\in\mathbb{B}}\sup_{P\in P^*} S_g^{\log}(P, B) \leq \inf_{B\in\mathbb{B}}\sup_{P\in\mathbb{E}} S_g^{\log}(P, B) = \sup_{P\in\mathbb{E}} S_g^{\log}(P, P) = S_g^{\log}(P^{\dagger}, P^{\dagger})$$
where the last two equalities are simply Theorem 2. Hence:
$$\inf_{B\in\mathbb{B}}\sup_{P\in P^*} S_g^{\log}(P, B) = S_g^{\log}(P^{\dagger}, P^{\dagger}) = \sup_{P\in\mathbb{E}} H_g(P) = \sup_{P\in P^*} H_g(P)$$
That is, the lowest worst-case expected loss is the same whether the worst case is taken with respect to P ranging in P* or in E = ⟨P*⟩.
Furthermore, since S_g^log(P†, P†) = sup_{P∈[P*]} S_g^log(P, P†) and since P† ∈ [P*], we have S_g^log(P†, P†) = sup_{P∈P*} S_g^log(P, P†). Thus, B = P† minimises sup_{P∈P*} S_g^log(P, B).
Now, suppose that B ∈ 𝔹 is different from P†. Then:
$$\sup_{P\in P^*} S_g^{\log}(P, B) \geq S_g^{\log}(P^{\dagger}, B) > S_g^{\log}(P^{\dagger}, P^{\dagger})$$
where the strict inequality follows from strict propriety. This shows that adopting B ≠ P† leads to an avoidably bad score.
Hence, B = P† is the unique function in 𝔹 which minimises sup_{P∈P*} S_g^log(P, B).  ■

3. Belief over Sentences

Armed with our results for beliefs defined over propositions, we now tackle the case of beliefs defined over sentences, S L , of a propositional language, L . The plan is as follows. First, we normalise the belief functions in Section 3.1. In Section 3.2, we motivate the use of logarithmic loss as a default loss function. We are able to define our logarithmic scoring rule in Section 3.3, and we show there that, with respect to our scoring rule, the generalised entropy maximiser is the unique belief function that minimises the worst-case expected loss.
Again, we shall not impose any restriction—such as additivity—on the agent’s belief function, now defined on the sentences of the propositional language L . In particular, we do not assume that the agent’s belief function assigns logically equivalent sentences the same degree of belief. We shall show that any belief function violating this property incurs an avoidable loss. Thus, the results of this section allow us to show more than we could in the case of belief functions defined over propositions.
Several of the proofs in this section are analogous to the proofs of corresponding results presented in Section 2. They are included here in full for the sake of completeness; the reader may wish to skim over those details that are already familiar.

3.1. Normalisation

S_L is the set of sentences of the propositional language L, formed as usual by recursively applying the connectives ¬, ∧, ∨, →, ↔ to the propositional variables A_1, …, A_n. A non-normalised belief function, bel : S_L → ℝ≥0, is thus a function that maps any sentence of the language to a non-negative real number. As in Section 2.1, for technical convenience, we shall focus our attention on normalised belief functions.
Definition 9 (Representation). A sentence, θ ∈ S_L, represents the proposition F = {ω : ω ⊨ θ}. Let F be a set of pairwise distinct propositions. We say that Θ ⊆ S_L is a set of representatives of F if and only if each sentence in Θ represents some proposition in F and each proposition in F is represented by a unique sentence in Θ. A set, ρ, of representatives of PΩ will be called a representation. We denote by ϱ the set of all representations. For a set of pairwise distinct propositions, F, and a representation, ρ ∈ ϱ, we denote by ρ(F) ⊆ S_L the set of sentences in ρ that represent the propositions in F.
We call π_L ⊆ S_L a partition of S_L if and only if it is a set of representatives of some partition π ∈ Π of propositions. We denote by Π_L the set of these π_L.
Definition 10 (Normalised belief function on sentences). Define the set of normalised belief functions on S_L as:
$$\mathbb{B}_{\mathcal{L}} := \Big\{ B_{\mathcal{L}} : S_{\mathcal{L}} \to [0, 1] \;\Big|\; \sum_{\varphi\in\pi_{\mathcal{L}}} B_{\mathcal{L}}(\varphi) \leq 1 \text{ for all } \pi_{\mathcal{L}}\in\Pi_{\mathcal{L}} \text{ and } \sum_{\varphi\in\pi_{\mathcal{L}}} B_{\mathcal{L}}(\varphi) = 1 \text{ for some } \pi_{\mathcal{L}}\in\Pi_{\mathcal{L}} \Big\}$$
The set of probability functions is defined as:
$$\mathbb{P}_{\mathcal{L}} := \Big\{ P_{\mathcal{L}} : S_{\mathcal{L}} \to [0, 1] \;\Big|\; \sum_{\varphi\in\pi_{\mathcal{L}}} P_{\mathcal{L}}(\varphi) = 1 \text{ for all } \pi_{\mathcal{L}}\in\Pi_{\mathcal{L}} \Big\}$$
As in the proposition case, we have:
Proposition 6. P_L ∈ ℙ_L iff P_L : S_L → [0, 1] satisfies the axioms of probability:
P1:
P_L(τ) = 1 for all tautologies, τ.
P2:
If ⊨ ¬(φ ∧ ψ), then P_L(φ ∨ ψ) = P_L(φ) + P_L(ψ).
Proof: Suppose P_L ∈ ℙ_L. For any tautology, τ ∈ S_L, it holds that P_L(τ) = 1, because {τ} is a partition in Π_L. P_L(¬τ) = 0, because {τ, ¬τ} is a partition in Π_L and P_L(τ) = 1.
Suppose that φ, ψ ∈ S_L are such that ⊨ ¬(φ ∧ ψ). We shall proceed by cases to show that P_L(φ ∨ ψ) = P_L(φ) + P_L(ψ). In the first three cases, one of the sentences is a contradiction; in the last two cases, there are no contradictions.
(i)
⊨ φ and ⊨ ¬ψ; then ⊨ φ ∨ ψ. Thus, by the above, P_L(φ) = 1 and P_L(ψ) = 0, and hence, P_L(φ ∨ ψ) = 1 = P_L(φ) + P_L(ψ).
(ii)
⊨ ¬φ and ⊨ ¬ψ; then ⊨ ¬(φ ∨ ψ). Thus, P_L(φ ∨ ψ) = 0 = P_L(φ) + P_L(ψ).
(iii)
⊭ ¬φ, ⊭ φ, and ⊨ ¬ψ; then {φ ∨ ψ, ¬φ ∨ ψ} and {φ, ¬φ ∨ ψ} are both partitions in Π_L. Thus, P_L(φ ∨ ψ) + P_L(¬φ ∨ ψ) = 1 = P_L(φ) + P_L(¬φ ∨ ψ). Putting these observations together, we now find P_L(φ ∨ ψ) = P_L(φ) = P_L(φ) + P_L(ψ).
(iv)
⊭ ¬φ, ⊭ ¬ψ and ⊨ φ ↔ ¬ψ; then {φ, ψ} is a partition and φ ∨ ψ is a tautology. Hence, P_L(φ) + P_L(ψ) = 1 and P_L(φ ∨ ψ) = 1. This now yields P_L(φ) + P_L(ψ) = P_L(φ ∨ ψ).
(v)
⊭ ¬φ, ⊭ ¬ψ and ⊭ φ ↔ ¬ψ; then none of the following sentences is a tautology or a contradiction: φ, ψ, φ ∨ ψ, ¬(φ ∨ ψ). Since {φ, ψ, ¬(φ ∨ ψ)} and {φ ∨ ψ, ¬(φ ∨ ψ)} are both partitions in Π_L, we obtain P_L(φ) + P_L(ψ) = 1 − P_L(¬(φ ∨ ψ)) = P_L(φ ∨ ψ). So, P_L(φ) + P_L(ψ) = P_L(φ ∨ ψ).
On the other hand, suppose P1 and P2 hold. That ∑_{φ∈π_L} P_L(φ) = 1 holds for all π_L ∈ Π_L can be seen by induction on the size of π_L. If |π_L| = 1, then π_L = {τ} for some tautology, τ ∈ S_L, and P_L(τ) = 1 by P1. Suppose then that π_L = {φ_1, …, φ_{k+1}} for k ≥ 1. Now, ∑_{i=1}^{k−1} P_L(φ_i) + P_L(φ_k ∨ φ_{k+1}) = 1 by the induction hypothesis. Furthermore, P_L(φ_k ∨ φ_{k+1}) = P_L(φ_k) + P_L(φ_{k+1}) by P2, so ∑_{φ∈π_L} P_L(φ) = 1, as required.  ■
Definition 11 (Respects logical equivalence). We say that a belief function, B_L ∈ 𝔹_L, respects logical equivalence if and only if φ ≡ ψ implies B_L(φ) = B_L(ψ).
Proposition 7. The probability functions P L P L respect logical equivalence.
Proof: Suppose P_L ∈ ℙ_L and assume that φ, ψ ∈ S_L are logically equivalent. Note that ψ ∧ ¬φ ≡ A_1 ∧ ¬A_1, ψ ∨ ¬φ ≡ A_1 ∨ ¬A_1, and that {φ, ¬φ} and {ψ, ¬φ} are partitions in Π_L. Hence:
P L ( φ ) + P L ( ¬ φ ) = 1 = P L ( ψ ) + P L ( ¬ φ )
Therefore, P L ( φ ) = P L ( ψ ) .
Thus, the P L P L assign logically equivalent formulae the same probability.  ■
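Proposition 7 can be illustrated computationally: when a probability function over sentences is induced by a distribution over truth assignments, logically equivalent sentences automatically receive the same probability (the encoding below is our own, not the paper's):

```python
from itertools import product

# Sentences are modelled as Boolean functions of a valuation of the atoms;
# the weights form an arbitrary example distribution over the four worlds.
atoms = ("A1", "A2")
worlds = list(product([True, False], repeat=len(atoms)))
weights = dict(zip(worlds, (0.1, 0.2, 0.3, 0.4)))

def prob(sentence):
    # probability = total weight of the worlds at which the sentence is true
    return sum(weights[w] for w in worlds if sentence(dict(zip(atoms, w))))

phi = lambda v: v["A1"] or v["A2"]
psi = lambda v: not (not v["A1"] and not v["A2"])   # equivalent to phi by De Morgan

print(prob(phi) == prob(psi))   # True: equivalent sentences get equal probability
```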

3.2. Loss

By analogy with the line of argument of Section 2.3, we shall suppose that a default loss function, L : S_L × 𝔹_L → (−∞, ∞], satisfies the following requirements:
L1. 
L ( φ , B L ) = 0 , if B L ( φ ) = 1 .
L2. 
L ( φ , B L ) strictly increases as B L ( φ ) decreases from one towards zero.
L3. 
L ( φ , B L ) only depends on B L ( φ ) .
Suppose we have a fixed belief function, B_L ∈ 𝔹_L, such that B_L(τ) = 1 for any tautology, τ, and L = L_1 ∪ L_2, where L_1 and L_2 are independent sublanguages, written L_1 ⫫_{B_L} L_2, i.e., B_L(ϕ_1 ∧ ϕ_2) = B_L(ϕ_1) · B_L(ϕ_2) for all ϕ_1 ∈ S_{L_1} and ϕ_2 ∈ S_{L_2}. Let B_{L_1}(ϕ_1) := B_L(ϕ_1) and B_{L_2}(ϕ_2) := B_L(ϕ_2).
L4. 
Losses are additive when the language is composed of independent sublanguages: if L = L_1 ∪ L_2 for L_1 ⫫_{B_L} L_2, then L(ϕ_1 ∧ ϕ_2, B_L) = L_1(ϕ_1, B_{L_1}) + L_2(ϕ_2, B_{L_2}), where L_1, L_2 are loss functions defined on L_1, L_2, respectively.
Theorem 4. If a loss function, L, on S_L × 𝔹_L satisfies L1–4, then L(φ, B_L) = −k log B_L(φ), where the constant, k > 0, does not depend on the language, L.
Proof: We shall first focus on a loss function, L, defined with respect to a language, L , that contains at least two propositional variables.
L3 implies that L(φ, B_L) = f_L(B_L(φ)) for some function, f_L : [0, 1] → (−∞, ∞]. For our fixed L and all x, y ∈ [0, 1], choose some B_L ∈ 𝔹_L such that L = L_1 ∪ L_2, L_1 ⫫_{B_L} L_2, B_L(ϕ_1) = x, and B_L(ϕ_2) = y for some ϕ_1 ∈ S_{L_1}, ϕ_2 ∈ S_{L_2}. This is possible, because L contains at least two propositional variables.
Note that since L 1 and L 2 are independent sublanguages, given some specific tautology, τ 1 , of L 1 :
1 = B L ( τ 1 ) = B L 1 ( τ 1 )
B L ( τ 1 ) is well defined, since τ 1 is a tautology of S L 1 , and every sentence in S L 1 is a sentence in S L . Similarly, B L 2 ( τ 2 ) = 1 for some specific tautology τ 2 of L 2 . By L1, then, L 1 ( τ 1 , B L 1 ) = L 2 ( τ 2 , B L 2 ) = 0 , where L 1 , respectively, L 2 , are the loss functions with respect to S L 1 and S L 2 satisfying L1–4. Thus:
$$
\begin{aligned}
f_{\mathcal{L}}(x\cdot y) &= f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_1)\cdot B_{\mathcal{L}}(\phi_2)\big) \overset{\mathrm{L3}}{=} L(\phi_1\wedge\phi_2, B_{\mathcal{L}}) \overset{\mathrm{L4}}{=} L_1(\phi_1, B_{\mathcal{L}_1}) + L_2(\phi_2, B_{\mathcal{L}_2})\\
&\overset{\mathrm{L4}}{=} \big[L(\phi_1\wedge\tau_2, B_{\mathcal{L}}) - L_2(\tau_2, B_{\mathcal{L}_2})\big] + \big[L(\tau_1\wedge\phi_2, B_{\mathcal{L}}) - L_1(\tau_1, B_{\mathcal{L}_1})\big]\\
&\overset{\mathrm{L1}}{=} L(\phi_1\wedge\tau_2, B_{\mathcal{L}}) + L(\tau_1\wedge\phi_2, B_{\mathcal{L}}) \overset{\mathrm{L3}}{=} f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_1\wedge\tau_2)\big) + f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_2\wedge\tau_1)\big)\\
&= f_{\mathcal{L}}\big(B_{\mathcal{L}_1}(\phi_1)\cdot B_{\mathcal{L}_2}(\tau_2)\big) + f_{\mathcal{L}}\big(B_{\mathcal{L}_1}(\tau_1)\cdot B_{\mathcal{L}_2}(\phi_2)\big)\\
&\overset{(46)}{=} f_{\mathcal{L}}\big(B_{\mathcal{L}_1}(\phi_1)\big) + f_{\mathcal{L}}\big(B_{\mathcal{L}_2}(\phi_2)\big) = f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_1)\big) + f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_2)\big) = f_{\mathcal{L}}(x) + f_{\mathcal{L}}(y)
\end{aligned}
$$
The negative logarithm on (0, 1] is characterisable up to a multiplicative constant, k_L, in terms of this additivity, together with the condition that f_L(x) ≥ 0, which is implied by L1–2 (see, e.g., Theorem 0.2.5 in [7]). L2 ensures that f_L is not zero everywhere, so k_L > 0. As in the corresponding proof for propositions, it follows that f_L(0) = +∞.
Thus far, we have shown that for a fixed language, L, with at least two propositional variables, L(φ, B_L) = −k_L log B_L(φ) for B_L(φ) ∈ [0, 1].
Now, focus on an arbitrary language, L 1 , and a corresponding loss function, L 1 . We can choose L 2 , L , B L such that L is composed of independent sublanguages, L 1 and L 2 . By reasoning analogous to that above:
$$f_{\mathcal{L}_1}\big(B_{\mathcal{L}_1}(\phi_1)\big) = L_1(\phi_1, B_{\mathcal{L}_1}) = L(\phi_1\wedge\tau_2, B_{\mathcal{L}}) = f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_1\wedge\tau_2)\big) = f_{\mathcal{L}}\big(B_{\mathcal{L}}(\phi_1)\cdot 1\big) = -k_{\mathcal{L}}\log B_{\mathcal{L}_1}(\phi_1)$$
Therefore, the loss function for L_1 is L_1(ϕ_1, B_{L_1}) = −k_L log B_{L_1}(ϕ_1). Thus, the constant, k_L, does not depend on L after all.
In general, then, L(φ, B_L) = −k log B_L(φ) for some positive k.  ■
Since multiplication by a constant is equivalent to a change of base, we can take log to be the natural logarithm. Since we will be interested in the belief functions that minimise loss, rather than in the absolute value of any particular losses, we can take k = 1 without loss of generality. Theorem 4 thus allows us to focus on the logarithmic loss function:
$$L_{\log}(\varphi, B_{\mathcal{L}}) := -\log B_{\mathcal{L}}(\varphi)$$

3.3. Score, Entropy and Their Connection

In the case of belief over sentences, the expected loss varies according to which sentences are used to represent the various partitions of propositions. We can define the g-score to be the worst-case expected loss, where this worst case is taken over all possible representations:
Definition 12 (g-score). Given a loss function, L, an inclusive weighting function, g : Π → ℝ≥0, and a representation, ρ ∈ ϱ, we define the representation-relative g-score, S_{g,ρ}^L : ℙ_L × 𝔹_L → [−∞, ∞], by
$$S_{g,\rho}^{L}(P_{\mathcal{L}}, B_{\mathcal{L}}) := \sum_{\pi\in\Pi} g(\pi)\sum_{\varphi\in\rho(\pi)} P_{\mathcal{L}}(\varphi)\, L(\varphi, B_{\mathcal{L}})$$
and the (representation-independent) g-score, S_{g,L}^L : ℙ_L × 𝔹_L → [−∞, ∞], by
$$S_{g,\mathcal{L}}^{L}(P_{\mathcal{L}}, B_{\mathcal{L}}) := \sup_{\rho\in\varrho} S_{g,\rho}^{L}(P_{\mathcal{L}}, B_{\mathcal{L}})$$
In particular, for the logarithmic loss function under consideration here, we have:
$$S_{g,\rho}^{\log}(P_{\mathcal{L}}, B_{\mathcal{L}}) := -\sum_{\pi\in\Pi} g(\pi)\sum_{\varphi\in\rho(\pi)} P_{\mathcal{L}}(\varphi)\log B_{\mathcal{L}}(\varphi)$$
and:
$$S_{g,\mathcal{L}}^{\log}(P_{\mathcal{L}}, B_{\mathcal{L}}) := \sup_{\rho\in\varrho} S_{g,\rho}^{\log}(P_{\mathcal{L}}, B_{\mathcal{L}})$$
We can thus define the g-entropy of a belief function on S L as:
H g , L ( B L ) : = S g , L log ( B L , B L )
There is a canonical one-to-one correspondence between the B L B L which respect logical equivalence and the B B . In particular, P L can be identified with P . Moreover, any convex E P is in one-to-one correspondence with a convex E L P L . In the following, we shall make frequent use of this correspondence. For a B L B L which respects logical equivalence, we denote by B the function in B with which it stands in one-to-one correspondence.
Lemma 6. If B L B L respects logical equivalence, then for all ρ ϱ , we have S g , L log ( P L , B L ) = sup ρ ϱ S g , ρ log ( P L , B L ) = S g log ( P , B ) .
Proof: Simply note that S g , ρ log ( P L , B L ) does not depend on ρ .  ■
Lemma 7. For all convex E_L ⊆ ℙ_L:
$$B_{\mathcal{L}} \in \arg\inf_{B_{\mathcal{L}}\in\mathbb{B}_{\mathcal{L}}}\ \sup_{P_{\mathcal{L}}\in\mathbb{E}_{\mathcal{L}}}\ \sup_{\rho\in\varrho} S_{g,\rho}^{\log}(P_{\mathcal{L}}, B_{\mathcal{L}})$$
respects logical equivalence.
Proof: Suppose that:
B L arg inf B L B L sup P L E L sup ρ ϱ S g , ρ log ( P L , B L )
and assume that B L does not respect logical equivalence. Then, define:
B L inf ( φ ) : = inf θ S L θ φ B L ( θ )
Since B L does not respect logical equivalence, there are logically equivalent φ , ψ such that B L ( φ ) B L ( ψ ) . Hence, B L inf ( φ ) < max { B L ( φ ) , B L ( ψ ) } . Thus, for every π L Π L with φ π L , we have χ π L B L inf ( χ ) < 1 . Thus, B L inf P L . B L inf respects logical equivalence by definition.
Now, consider the function B inf : P Ω → [ 0 , 1 ] which is determined by B L inf . Clearly, B inf ∉ P . There are two cases to consider.
(a) B inf ∈ B ∖ P . Since B inf ∉ P , by Theorem 2, we have that:
sup P E S g log ( P , B inf ) > inf B B sup P E S g log ( P , B )
(b) B inf ∉ B . Then, define B by B ( F ) : = B inf ( F ) + δ for all F ⊆ Ω , where δ ∈ ( 0 , 1 ] is minimal such that B ∈ B . In particular, B ( ∅ ) ≥ δ > 0 ; thus, B ∉ P . Moreover, whenever P ( F ) > 0 , it holds that − P ( F ) log B inf ( F ) > − P ( F ) log B ( F ) , and the latter is finite. For the remainder of this proof, we shall extend the definition of the logarithmic g-score, S g log ( P , B ) , by allowing the belief function, B, to be any non-negative function defined on P Ω , rather than just B ∈ B —if B ∉ B , we shall be careful not to appeal to results that assume B ∈ B . We thus find for all P ∈ P that S g log ( P , B inf ) > S g log ( P , B ) , with the latter finite. Thus, by Theorem 2, we obtain the sharp inequality in the following:
sup P E S g log ( P , B inf ) sup P E S g log ( P , B ) > inf B B sup P E S g log ( P , B )
In both cases, we obtain a contradiction:
S g log ( P , P ) = sup P E S g log ( P , P )
= sup P L E L sup ρ ϱ S g , ρ log ( P L , P L )
inf B L B L sup P L E L sup ρ ϱ S g , ρ log ( P L , B L )
= ( 55 ) sup P L E L sup ρ ϱ S g , ρ log ( P L , B L )
= sup P L E L π Π g ( π ) φ ρ ( π ) P L ( φ ) inf θ S L θ φ log B L ( θ ) for all ρ ϱ
= sup P L E L S g , ρ log ( P L , B L inf ) for all ρ ϱ
= sup P E S g log ( P , B inf )
> inf B B sup P E S g log ( P , B )
= S g log ( P , P )
We obtain Equation (59) by noticing that P is the unique function minimising worst-case g-expected loss (Theorem 2) and recalling that the expressions in Equation (39) and Equation (40) are equal.
Equation (60) is immediate, as probability functions respect logical equivalence. For Equation (63), note that P L respects logical equivalence. Furthermore, since − log ( · ) is strictly decreasing, a smaller value of B L ( φ ) leads to a greater score.
Equation (64) follows from Equation (56) and Lemma 6, since B L inf respects logical equivalence. Hence, S g , ρ log ( P L , B L inf ) does not depend on the representation, ρ .
Inequality (66) was established above in the two cases, Equation (57) and Equation (58). Equation (67) is again implied by Theorem 2.
We have thus found a contradiction. Hence, the
B L arg inf B L B L sup P L E L sup ρ ϱ S g , ρ log ( P L , B L )
have to respect logical equivalence.  ■
Theorem 2, the key result in the case of belief over propositions, generalises to the case of belief over sentences:
Theorem 5. As usual, E L ⊆ P L is taken to be convex and g inclusive. We have that:
arg sup P L E L H g , L ( P L ) = arg inf B L B L sup P L E L S g , L log ( P L , B L )
Proof: As in the corresponding theorem for propositions (Theorem 2), we shall prove a slightly stronger equality:
arg sup P L E L H g , L ( P L ) = arg inf B L B L sup P L E L S g , L log ( P L , B L )
Theorem 5 then follows for the same reasons given in the previous section.
Denote by B L l e B L the convex hull of functions B L B L that respect logical equivalence. Let R e p : B B L l e be the bijective map that assigns to any B B the unique B L B L which represents it (i.e., B ( F ) = B L ( φ ) , whenever F Ω is represented by φ S L ).
arg inf B L B L sup P L E L S g , L log ( P L , B L ) = arg inf B L B L l e sup P L E L S g , L log ( P L , B L )
= R e p ( arg inf B B sup P E S g log ( P , B ) )
= R e p ( P )
= P L
Equation (69) is simply Lemma 7. Equation (70) follows directly from applying Lemma 6, and Equation (71) is simply Theorem 2.  ■
In the above, we used P L to denote the probability function in E L which represents the g-entropy maximiser, P E . Now, note that H g , L ( P L ) = H g ( P ) . Thus, P L is not only the function representing P ; it is also the unique function in E L which maximises g-entropy H g , L .
Theorem 3 also extends to the sentence framework. As we shall now see, the worst-case g-score can be taken with respect to a chance function in P L * , rather than in E L = [ P L * ] .
Theorem 6. If P L * ⊆ P L is such that the unique g-entropy maximiser, P L , of [ E L ] = [ P L * ] is in [ P L * ] , then:
P L = arg sup P L ∈ E L H g , L ( P L ) = arg inf B L ∈ B L sup P L ∈ P L * S g , L log ( P L , B L )
Proof: Again, we shall prove a slightly stronger statement with B L ranging in B L .
Since g is inclusive, we have that S g log is a strictly proper scoring rule. Hence, for a fixed ρ ϱ , S g , ρ log ( P L , · ) is minimal if and only if P L ( φ ) = B L ( φ ) for all φ ρ .
Now, suppose B L B L is different from a fixed P L P L . Then, there is some φ S L such that B L ( φ ) P L ( φ ) . Now, pick some ρ ϱ such that φ ρ . Then, strict propriety implies the sharp inequality below:
S g , L log ( P L , B L ) = sup ρ ϱ S g , ρ log ( P L , B L ) S g , ρ log ( P L , B L ) > S g , ρ log ( P L , P L ) = sup ρ ϱ S g , ρ log ( P L , P L ) = S g , L log ( P L , P L )
The second equality follows since the P L P L respect logical equivalence, and hence, S g , ρ L ( P L , P L ) does not depend on ρ . Thus, for all P L P L , we find arg inf B L B L S g log ( P L , B L ) = P L . Hence, for P L = P L , we obtain:
S g , L log ( P L , P L ) = inf B L B L S g , L log ( P L , B L ) inf B L B L sup P L P L * S g , L log ( P L , B L ) inf B L B L sup P L P L * S g , L log ( P L , B L ) = sup P L P L * S g , L log ( P L , P L ) = S g , L log ( P L , P L )
where the last two equalities are simply Theorem 5. Hence:
inf B L B L sup P L P L * S g , L log ( P L , B L ) = S g , L log ( P L , P L ) = sup P L P L * H g , L ( P )
That is, the lowest worst-case expected loss is the same for P L ranging in P L * as for P L ranging in [ P L * ] .
Furthermore, since S g , L log ( P L , P L ) = sup P L P L * S g , L log ( P L , P L ) and since P L [ P L * ] , we have S g , L log ( P L , P L ) = sup P L P L * S g , L log ( P L , P L ) . Thus, B L = P L minimises sup P L P L * S g , L log ( P L , B L ) .
Now, suppose that B L B L is different from P L . Then:
sup P L P L * S g , L log ( P L , B L ) S g , L log ( P L , B L ) > S g , L log ( P L , P L )
where the strict inequality follows as seen above. This now shows that adopting B L P L leads to an avoidably bad score.
Hence, B L = P L is the unique function in B L which minimises sup P L P L * S g , L log ( P L , B L ) .  ■
We see, then, that the results of Section 2 concerning beliefs defined on propositions extend naturally to beliefs defined on the sentences of a propositional language. In light of these findings, our subsequent discussions will, for ease of exposition, solely focus on propositions. It should be clear how our remarks generalise to sentences.

4. Relationship to Standard Entropy Maximisation

We have seen so far that there is a sense in which our notions of entropy and expected loss depend on the weight given to each partition under consideration—i.e., on the weighting function, g. It is natural to demand that no proposition should be entirely dismissed from consideration by being given zero weight—that is, that g be inclusive. In that case, the belief function that minimises worst-case g-expected loss is just the probability function in E that maximises g-entropy, if there is such a function. This result provides a single justification of the three norms of objective Bayesianism: the belief function should be a probability function, it should be in E , i.e., calibrated to evidence of physical probability, and it should otherwise be equivocal, where the degree to which a belief function is equivocal can be measured by its g-entropy.
This line of argument gives rise to two questions. Which g-entropy should be maximised? Does the standard entropy maximiser count as a rational belief function?
On the former question, the task is to isolate some set, G , of appropriate weighting functions. Thus far, the only restriction imposed on a weighting function, g, has been that it should be inclusive; this is required in order that scoring rules evaluate all beliefs, rather than just a select few. We shall put forward two further conditions that can help to narrow down a proper subclass, G , of weighting functions.
A second natural desideratum is the following:
Definition 13 (Symmetric weighting function). A weighting function, g, is symmetric, if and only if whenever π ′ can be obtained from π by permuting the ω i in π , then g ( π ′ ) = g ( π ) .
For example, for | Ω | = 4 and symmetric g, we have that g ( { { ω 1 , ω 2 } , { ω 3 } , { ω 4 } } ) = g ( { { ω 1 , ω 4 } , { ω 2 } , { ω 3 } } ) . Note that g Ω , g P Ω and g Π are all symmetric. The symmetry condition can also be stated as follows: g ( π ) is only a function of the spectrum of π, i.e., of the multi-set of sizes of the members of π. In the above example, the spectrum of both partitions is { 2 , 1 , 1 } .
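The symmetry condition is easy to operationalise: a weighting computed from the spectrum alone is automatically symmetric. A minimal sketch (our own code; the spectrum-based weighting g below is a hypothetical example, not one from the paper):

```python
from collections import Counter

def spectrum(pi):
    """The spectrum of a partition: the multiset of sizes of its members."""
    return Counter(len(F) for F in pi)

def g(pi):
    """A hypothetical symmetric weighting: any function of the spectrum alone will do."""
    return 1.0 / len(pi)   # the number of blocks is determined by the spectrum

pi1 = [{'w1', 'w2'}, {'w3'}, {'w4'}]
pi2 = [{'w1', 'w4'}, {'w2'}, {'w3'}]
assert spectrum(pi1) == spectrum(pi2) == Counter({1: 2, 2: 1})  # spectrum {2, 1, 1}
assert g(pi1) == g(pi2)                                         # so g weights them equally
```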
It turns out that inclusive and symmetric weighting functions lead to g-entropy maximisers that satisfy a variety of intuitive and plausible properties—see Appendix B.
In addition, it is natural to suppose that if π ′ is a refinement of partition π , then g should give no less weight to π ′ than it does to π —there are no grounds to favour coarser partitions over more fine-grained partitions; although, as Keynes (Chapter 4 in [19]) argued, there may be grounds to prefer finer-grained partitions over coarser partitions.
Definition 14 (Refined weighting function). A weighting function, g, is refined, if and only if whenever π ′ refines π , then g ( π ′ ) ≥ g ( π ) .
g Π and g Ω are refined, but g P Ω is not.
Let G 0 be the set of weighting functions that are inclusive, symmetric and refined. One might plausibly set G = G 0 . We would at least suggest that all the weighting functions in G 0 are appropriate weighting functions for scoring rules; we shall leave it open as to whether G should contain some weighting functions—such as the proposition weighting, g P Ω —that lie outside G 0 . We shall thus suppose in what follows that the set G of appropriate weighting functions is such that G 0 G G inc , where G inc is the set of inclusive weighting functions.
One might think that the second question posed above—does the standard entropy maximiser count as a rational belief function?—should be answered in the negative. We saw in Section 2.2 that standard entropy, g Ω -entropy, has a weighting function, g Ω , that is not inclusive. Therefore, there is no guarantee that the standard entropy maximiser minimises worst-case g-expected loss for some g ∈ G . Indeed, Figure 1 showed that the standard entropy maximiser need coincide with neither the partition entropy maximiser nor the proposition entropy maximiser.
However, it would be too hasty to conclude that the standard entropy maximiser fails to qualify as a rational belief function. Recall that the equivocation norm says that an agent’s belief function should be sufficiently equivocal, rather than maximally equivocal. This qualification is essential to cope with the situation in which there is no maximally equivocal function in E , i.e., the situation in which for any function in E , there is another function in E that is more equivocal. This arises, for instance, when one has evidence that a coin is biased in favour of tails, E = P * = { P : P ( Tails ) > 1 / 2 } . In this case, sup P E H g ( P ) is achieved by the probability function which gives probability 1 / 2 to tails, which is outside E . This situation also arises in certain cases when evidence is determined by quantified propositions (§2 in [20]). The best one can do in such a situation is adopt a probability function in E that is sufficiently equivocal, where what counts as sufficiently equivocal may depend on pragmatic factors, such as the required numerical accuracy of predictions and the computational resources available to isolate a suitable function.
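The coin example can be checked numerically. Below is a small sketch of our own: the standard entropy H ( p ) = − p log p − ( 1 − p ) log ( 1 − p ) strictly increases as P ( Tails ) = p decreases towards 1 / 2 , so the supremum over E = { P : P ( Tails ) > 1 / 2 } equals log 2 , yet no member of E attains it:

```python
import math

def H(p):
    """Standard entropy of a coin with P(Tails) = p."""
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# A sequence in E = {P : P(Tails) > 1/2} approaching the excluded fair coin.
vals = [H(0.5 + 10 ** -k) for k in range(1, 7)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # ever more equivocal as p -> 1/2
assert all(v < math.log(2) for v in vals)          # the sup, log 2, is never attained in E
```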
Let ⇓ E be the set of belief functions that are sufficiently equivocal. Plausibly:
E1:
⇓ E ≠ ∅ . An agent is always entitled to hold some beliefs.
E2:
⇓ E ⊆ E . Sufficiently equivocal belief functions are calibrated with evidence.
E3:
For all g ∈ G , there is some ϵ > inf B ∈ B sup P ∈ E S g log ( P , B ) such that if R ∈ E and sup P ∈ E S g log ( P , R ) < ϵ , then R ∈ ⇓ E ; i.e., if R has sufficiently low worst-case g-expected loss for some appropriate g, then R is sufficiently equivocal.
E4:
⇓ E = ⇓ ⇓ E . Any function that is sufficiently equivocal, from those that are calibrated with evidence, is also sufficiently equivocal, from those that are calibrated with evidence and sufficiently equivocal, and vice versa.
E5:
If P is a limit point of ⇓ E and P ∈ E , then P ∈ ⇓ E .
A closely related set of conditions was put forward in [20]. Note that we will not need to appeal to E4 in this paper. E1 is a consequence of the other principles together with the fact that E ≠ ∅ .
Conditions E2, E3 and E5 allow us to answer our two questions. Which g-entropy should be maximised? By E3, it is rational to adopt any g-entropy maximiser that is in E , for any g ∈ G ⊇ G 0 . Does the standard entropy maximiser count as a rational belief function? Yes, if it is in E (which is the case, for instance, if E is closed):
Theorem 7 (Justification of maxent). If E contains its standard entropy maximiser, P Ω : = arg sup P ∈ E H Ω ( P ) , then P Ω ∈ ⇓ E , i.e., P Ω is sufficiently equivocal.
Proof: We shall first see that there is a sequence ( g t ) t ∈ N in G such that the g t -entropy maximisers P t ∈ [ E ] converge to P Ω . All respective entropy maximisers are unique, due to Corollary 2.
Let g t ( { { ω } : ω Ω } ) = 1 , and put g t ( π ) : = 1 t for all other π Π . The g t are in G , because they are inclusive, symmetric and refined. g t -entropy has the following form:
H t : = sup P ∈ E H g t ( P ) = sup P ∈ E ( − ∑ π ∈ Π g t ( π ) ∑ F ∈ π P ( F ) log P ( F ) )
Now note that g t ( π ) converges to g Ω ( π ) and that P ( F ) log P ( F ) is finite for all F ⊆ Ω . Thus, for all P ∈ P , H g t ( P ) converges to H Ω ( P ) as t approaches infinity. Hence, sup P ∈ E H g t ( P ) = H t tends to sup P ∈ E H Ω ( P ) = H Ω .
Let us now compute:
| H Ω ( P t ) − H Ω ( P Ω ) | = | H Ω ( P t ) − H g t ( P t ) + H g t ( P t ) − H Ω ( P Ω ) | ≤ | H Ω ( P t ) − H g t ( P t ) | + | H g t ( P t ) − H Ω ( P Ω ) | = | H Ω ( P t ) − H g t ( P t ) | + | H t − H Ω |
As we noted above, g t converges to g Ω . Furthermore, ( P t ) t N is a bounded sequence. Hence, H g t ( P t ) converges to H Ω ( P t ) . Furthermore, recall that H t tends to H Ω . Overall, we find that lim t H Ω ( P t ) = H Ω ( P Ω ) .
Since H Ω ( · ) is a strictly concave function on [ E ] and [ E ] is convex, it follows that P t converges to P Ω .
Note that the P t are not necessarily in E . However, they are in [ E ] , and there is some sequence of functions in E , close to the P t , that converges to P Ω , as we shall now see.
If P t ∈ E , then simply take P t itself, which is sufficiently equivocal by E3.
If P t ∉ E , then there exists a P ∈ E which is different from P t , such that all the points on the line segment between P t and P are in E , with the exception of P t . Now define P t , δ t ( ω ) = ( 1 − δ t ) P t ( ω ) + δ t P ( ω ) = P t ( ω ) + δ t ( P ( ω ) − P t ( ω ) ) . Note that for 0 < δ t < 1 , we have, for all ω ∈ Ω , that P t ( ω ) > 0 implies P t , δ t ( ω ) > 0 .
Then, with
m t : = min { P t ( ω ) : ω ∈ Ω , P t ( ω ) > 0 }
and 0 < δ t < m t , it follows from Proposition 18 that for all F ⊆ Ω and all P ∈ E , P ( F ) > 0 implies P t ( F ) > 0 . Thus, for such an F, we have P t ( F ) ≥ m t > δ t > 0 .
Adopting the purely notational convention that 0 · ( log 0 − log 0 ) = 0 , we find for P ∈ [ E ] and m t > δ t that:
| S g t log ( P , P t , δ t ) S g t log ( P , P t ) | π Π g t ( π ) | F π P ( F ) log P t , δ t ( F ) log P t ( F ) | π Π g t ( π ) F π P ( F ) > 0 P ( F ) | log P t , δ t ( F ) log P t ( F ) | π Π g t ( π ) F π P ( F ) > 0 P ( F ) | log P t ( F ) δ t · | P ( F ) P t ( F ) | P t ( F ) | π Π g t ( π ) F π P ( F ) > 0 P ( F ) | log P t ( F ) δ t P t ( F ) | π Π g t ( π ) F π P ( F ) > 0 P ( F ) | log m t δ t m t | = | log m t δ t m t | π Π g t ( π )
For fixed g t and all P ∈ [ E ] , | S g t log ( P , P t , δ t ) − S g t log ( P , P t ) | becomes arbitrarily small for small δ t ; moreover, the upper bound we established does not depend on P . In particular, for all χ t > 0 , there exists a T ∈ N such that for all U t > T and all P ∈ [ E ] , it holds that | S g t log ( P , P t , 1 / U t ) − S g t log ( P , P t ) | < χ t .
Now, let ϵ t > inf B ∈ B sup P ∈ E S g t log ( P , B ) = H t . Then, with χ t = ( ϵ t − H t ) / 2 > 0 , we have for big enough U t that:
sup P ∈ E S g t log ( P , P t , 1 / U t ) − sup P ∈ E S g t log ( P , P t ) ≤ χ t
Thus:
sup P ∈ E S g t log ( P , P t , 1 / U t ) ≤ χ t + sup P ∈ E S g t log ( P , P t ) = ( ϵ t − H t ) / 2 + H t < ϵ t
Hence, P t , δ t is sufficiently equivocal, by E3, for small enough δ t , since the worst-case g t -expected loss of P t , δ t becomes arbitrarily close to H t .
Now, pick a sequence δ t → 0 , such that each δ t is small enough to ensure that P t , δ t is sufficiently equivocal. The sequence ( P t , δ t ) t ∈ N converges to the limit of the sequence ( P t ) t ∈ N , i.e., to P Ω , which is, by our assumption, in E .
By E5, P Ω is itself sufficiently equivocal.  ■
So far, we have seen that, as long as the standard entropy maximiser is not ruled out by the available evidence, it is sufficiently equivocal, and hence, it is rational for an agent to adopt this function as her belief function. On the other hand, the above considerations also imply that if the entropy maximiser, P Ω , is ruled out by the available evidence (i.e., P Ω ∈ [ E ] ∖ E ), it is rational to adopt some function P ∈ E close enough to P Ω , because such a function will be sufficiently equivocal:
Corollary 4. For all ϵ > 0 , there exists a sufficiently equivocal P ∈ E such that | P ( ω ) − P Ω ( ω ) | < ϵ for all ω ∈ Ω .
Proof: Consider the same sequence, g t , as in the above proof. Recall that P t converges to P Ω . Now, pick a t such that | P t ( ω ) − P Ω ( ω ) | < ϵ / 2 for all ω ∈ Ω . For this t, P t , δ t is sufficiently equivocal for small enough δ t , and P t , δ t converges to P t as δ t → 0 . Thus, for small enough δ t , we have | P t ( ω ) − P t , δ t ( ω ) | < ϵ / 2 for all ω ∈ Ω . Thus, | P t , δ t ( ω ) − P Ω ( ω ) | < ϵ for all ω ∈ Ω .  ■
Is there anything that makes the standard entropy maximiser stand out among all those functions that are sufficiently equivocal? One consideration is language invariance. Suppose g L is a family of weighting functions, defined for each L . g L is language invariant, as long as merely adding new propositional variables to the language does not undermine the g L -entropy maximiser:
Definition 15 (Language invariant family of weighting functions). Suppose we are given, as usual, a set E of probability functions on a fixed language L . For any L ′ extending L , let E ′ = E × P L ′ ∖ L be the translation of E into the richer language L ′ . A family of weighting functions is language invariant, if for any such E and L , any P ∈ arg sup P ∈ E H g L ( P ) on L and any language L ′ extending L , there is some P ′ ∈ arg sup P ′ ∈ E ′ H g L ′ ( P ′ ) on L ′ such that P ′ restricted to L equals P , i.e., P ′ ( ω ) = P ( ω ) for each state ω of L .
It turns out that many families of weighting functions—including the partition weightings and the proposition weightings—are not language invariant:
Proposition 8. The family of partition weightings, g Π , and the family of proposition weightings, g P Ω , are not language invariant.
Proof: Let L = { A 1 , A 2 } and E = { P ∈ P : P ( ω 1 ) + 2 P ( ω 2 ) + 3 P ( ω 3 ) + 4 P ( ω 4 ) = 1.7 } . The partition entropy maximiser, P Π , and the proposition entropy maximiser, P P Ω , for this language and this set E of calibrated functions are given in the first two rows of the table below.
Table 1. Partition entropy and proposition entropy maximisers on L and L ′ .

            ω 1       ω 2       ω 3       ω 4
P Π         0.5331    0.2841    0.1324    0.0504
P P Ω       0.5192    0.3008    0.1408    0.0392

            χ 1       χ 2       χ 3       χ 4       χ 5       χ 6       χ 7       χ 8
P Π ′       0.2649    0.2649    0.1441    0.1441    0.0671    0.0671    0.0239    0.0239
P P Ω ′     0.2510    0.2510    0.1594    0.1594    0.0783    0.0783    0.0113    0.0113
We now add one propositional variable, A 3 , to L and, thus, obtain L ′ . Denote the states of L ′ by χ 1 = ω 1 ∧ ¬ A 3 , χ 2 = ω 1 ∧ A 3 , and so on. Assuming that we have no information at all concerning A 3 , the set of calibrated probability functions is given by the solutions of the constraint, ( P ( χ 1 ) + P ( χ 2 ) ) + 2 ( P ( χ 3 ) + P ( χ 4 ) ) + 3 ( P ( χ 5 ) + P ( χ 6 ) ) + 4 ( P ( χ 7 ) + P ( χ 8 ) ) = 1.7 . Language invariance would now entail that P ( ω 1 ) = P ′ ( χ 1 ) + P ′ ( χ 2 ) , P ( ω 2 ) = P ′ ( χ 3 ) + P ′ ( χ 4 ) , P ( ω 3 ) = P ′ ( χ 5 ) + P ′ ( χ 6 ) and P ( ω 4 ) = P ′ ( χ 7 ) + P ′ ( χ 8 ) . However, neither the partition entropy maximisers nor the proposition entropy maximisers form a language invariant family, as can be seen from the last two rows of the above table.  ■
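For contrast, the same calculation can be run for the standard entropy maximiser. Under a single linear constraint, the standard maximiser takes the Gibbs form P ( ω i ) ∝ exp ( − λ c i ) , with λ fixed by the constraint; the bisection solver below is our own sketch, not code from the paper. Duplicating the coefficients, which mimics the move from L to L ′ , leaves the marginals unchanged:

```python
import math

def maxent(coeffs, target):
    """Standard entropy maximiser subject to sum_i coeffs[i] * P_i = target.
    Uses the Gibbs form P_i ∝ exp(-lam * coeffs[i]); lam is found by bisection."""
    def moment(lam):
        w = [math.exp(-lam * c) for c in coeffs]
        return sum(c * x for c, x in zip(coeffs, w)) / sum(w)
    lo, hi = -50.0, 50.0   # moment is decreasing in lam, from ~max(coeffs) to ~min(coeffs)
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if moment(mid) > target else (lo, mid)
    w = [math.exp(-((lo + hi) / 2) * c) for c in coeffs]
    return [x / sum(w) for x in w]

P = maxent([1, 2, 3, 4], 1.7)               # standard maximiser on L
P2 = maxent([1, 1, 2, 2, 3, 3, 4, 4], 1.7)  # on L', each coefficient duplicated

# Marginalising the L' maximiser back to L recovers P: standard maxent is invariant here.
marginals = [P2[2 * i] + P2[2 * i + 1] for i in range(4)]
assert all(abs(a - b) < 1e-6 for a, b in zip(marginals, P))
```

The duplicated problem yields the same λ, so the marginal sums agree exactly, unlike the partition and proposition maximisers recorded in the table above.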
On the other hand, it is well known that standard entropy maximisation is language invariant (p. 76 in [21]). This can be seen to follow from the fact that certain families of weighting functions that only assign positive weight to a single partition are language invariant:
Lemma 8. Suppose a function f picks out a partition for any language, in such a way that if L ⊆ L ′ , then f ( L ′ ) is a refinement of f ( L ) , with each F ∈ f ( L ) being refined into the same number k of members F 1 ′ , … , F k ′ ∈ f ( L ′ ) , for k ≥ 1 . Suppose g L is such that for any L , g L ( f ( L ) ) = c > 0 , but g L ( π ) = 0 for all other partitions π . Then, g L is language invariant.
Proof: Let P denote a g L -entropy maximiser (in [ E ] ), and let P ′ denote a g L ′ -entropy maximiser in [ E ] × P L ′ ∖ L . Since g L and g L ′ need not be inclusive, H g , L and H g , L ′ need not be strictly concave; thus, there need not be unique entropy maximisers. Given F ⊆ Ω refined into subsets F 1 ′ , … , F k ′ of Ω ′ , the corresponding F ′ ⊆ Ω ′ is defined by F ′ : = F 1 ′ ∪ … ∪ F k ′ . One can restrict P ′ to L by setting P ′ ( ω ) : = ∑ { P ′ ( ω ′ ) : ω ′ ∈ Ω ′ , ω ′ ⊨ ω } for ω ∈ Ω ; so, in particular, P ′ ( F ) = P ′ ( F ′ ) = P ′ ( F 1 ′ ) + … + P ′ ( F k ′ ) for F ⊆ Ω .
The g L -entropy of P is closely related to the g L -entropy of P :
c F f ( L ) P ( F ) log P ( F ) c F f ( L ) P ( F ) log P ( F ) = c F f ( L ) ( P ( F 1 ) + + P ( F k ) ) log ( P ( F 1 ) + + P ( F k ) ) = c F f ( L ) ( P ( F 1 ) + + P ( F k ) ) log k + log P ( F 1 ) + + P ( F k ) k L S I c log k c F f ( L ) P ( F 1 ) log P ( F 1 ) + + P ( F k ) log P ( F k ) = c log k c G f ( L ) P ( G ) log P ( G ) = c log k c F f ( L ) P ( F 1 ) log P ( F 1 ) + + P ( F k ) log P ( F k ) c log k c F f ( L ) P ( F ) k log P ( F ) k + + P ( F ) k log P ( F ) k = c log k c F f ( L ) P ( F ) log P ( F ) k = c F f ( L ) P ( F ) log P ( F )
LSI refers to the log sum inequality introduced in Lemma 3. The first and last inequalities above follow from the fact that P and P ′ are entropy maximisers over L , L ′ , respectively. Hence, all inequalities are indeed equalities. These entropy maximisers are unique on f ( L ) , f ( L ′ ) , so P ′ ( F ) = k · P ′ ( F 1 ′ ) = … = k · P ′ ( F k ′ ) = P ( F ) for F ∈ f ( L ) .
Now, take an arbitrary P ∈ arg sup P ∈ E H g L ( P ) . Any P ′ such that P ′ ( ω ) = P ( ω ) for each state ω of L and P ′ ( F 1 ′ ) = … = P ′ ( F k ′ ) = P ( F ) / k will be a g L ′ -entropy maximiser on L ′ . Thus, g L is language invariant.
Note that if, for some L , f ( L ) = { Ω L , ∅ } , where Ω L denotes the set of states of L , then H g L ( P ) = − P ( Ω L ) log P ( Ω L ) − P ( ∅ ) log P ( ∅ ) = 0 − 0 = 0 . Likewise, if f ( L ) = { Ω L } , then H g L ( P ) = 0 . For such g-entropies, every probability function maximises g-entropy trivially, since all probability functions have the same g-entropy.  ■
Taking f ( L ) = { { ω } : ω Ω } and c = 1 , we have the language invariance of standard entropy maximisation:
Corollary 5. The family of weighting functions g Ω is language invariant.
While giving weight in this way to just one partition is sufficient for language invariance, it is not necessary, as we shall now see. Define a family of weighting functions, the substate weighting functions, by giving weight to just those partitions that are partitions of states of sublanguages. For any sublanguage, L ′ ⊆ L = { A 1 , … , A n } , let Ω ′ be the set of states of L ′ , and let π ′ be the partition of propositions of L that represents the partition of states of the sublanguage, L ′ , i.e., π ′ = { { ω ∈ Ω : ω ⊨ ω ′ } : ω ′ ∈ Ω ′ } . Then,
g L ( π ) = 1 , if π = π ′ for some L ′ ⊆ L , and g L ( π ) = 0 , otherwise.
Example 2. For L = { A 1 , A 2 } , there are three sublanguages: L itself and the two proper sublanguages, { A 1 } and { A 2 } . Then, g L assigns the following three partitions of Ω the same positive weight: { { A 1 ∧ A 2 , A 1 ∧ ¬ A 2 } , { ¬ A 1 ∧ A 2 , ¬ A 1 ∧ ¬ A 2 } } , { { A 1 ∧ A 2 , ¬ A 1 ∧ A 2 } , { A 1 ∧ ¬ A 2 , ¬ A 1 ∧ ¬ A 2 } } and { { A 1 ∧ A 2 } , { A 1 ∧ ¬ A 2 } , { ¬ A 1 ∧ A 2 } , { ¬ A 1 ∧ ¬ A 2 } } . g L assigns all other π ∈ Π weight zero.
Note that there are 2^n − 1 non-empty sublanguages of L , so g L gives positive weight to 2^n − 1 partitions.
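This construction is easy to make concrete. In the sketch below (our own code), states are bit-vectors over the propositional variables, and each non-empty sublanguage induces the partition of states that agree on it:

```python
from itertools import combinations, product

n = 2
variables = list(range(n))                  # indices standing for A1, ..., An
states = list(product([0, 1], repeat=n))    # the 2^n states of L

def substate_partition(sub):
    """Partition of the states of L by agreement on the sublanguage `sub`."""
    blocks = {}
    for w in states:
        blocks.setdefault(tuple(w[i] for i in sub), []).append(w)
    return frozenset(frozenset(b) for b in blocks.values())

sublanguages = [s for r in range(1, n + 1) for s in combinations(variables, r)]
assert len(sublanguages) == 2 ** n - 1      # the 2^n - 1 non-empty sublanguages

weighted = {substate_partition(s) for s in sublanguages}
assert len(weighted) == 3                   # the three partitions of Example 2
# e.g. the sublanguage {A1} induces the partition into the A1-states and the ¬A1-states:
assert substate_partition((0,)) == frozenset({
    frozenset({(0, 0), (0, 1)}), frozenset({(1, 0), (1, 1)})})
```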
Proposition 9. The family of substate weighting functions is language invariant.
Proof: Consider an extension, L ′ = { A 1 , … , A n , A n + 1 } , of L . Let P , P ′ be g-entropy maximisers on L , L ′ , respectively. For simplicity of exposition, we shall view these functions as defined over sentences, so that we can talk of P ′ ( A n + 1 ∧ ω ) , etc. For the purposes of the following calculation, we shall consider the empty language to be a language. Entropies over the empty language vanish. Summing over the empty language ensures, for example, that the expression P ′ ( A n + 1 ) log P ′ ( A n + 1 ) appears in Equation (81).
2 H g L ( P ) = 2 L L ω Ω P ( ω ) log P ( ω ) 2 L L ω Ω P ( ω ) log P ( ω ) = L L ω Ω P ( ω ) log P ( ω ) L L ω Ω P ( A n + 1 ω ) + P ( ¬ A n + 1 ω ) × log P ( A n + 1 ω ) + P ( ¬ A n + 1 ω ) = L L ω Ω P ( ω ) log P ( ω ) L L ω Ω P ( A n + 1 ω ) + P ( ¬ A n + 1 ω ) × log 2 · P ( A n + 1 ω ) + P ( ¬ A n + 1 ω ) 1 + 1 L L ω Ω P ( ω ) log P ( ω ) L L ω Ω [ log 2 + P ( A n + 1 ω ) log P ( A n + 1 ω ) + P ( ¬ A n + 1 ω ) log P ( ¬ A n + 1 ω ) ]
= c log 2 L L { A n + 1 } L ω Ω P ( ω ) log P ( ω ) L L { A n + 1 } L ω Ω P ( ω ) log P ( ω ) = c log 2 L L ω Ω P ( ω ) log P ( ω ) = c log 2 + H g L ( P ) = c log 2 L L ω Ω P ( ω ) log P ( ω ) L L ω Ω [ P ( A n + 1 ω ) log P ( A n + 1 ω ) + P ( ¬ A n + 1 ω ) log P ( ¬ A n + 1 ω ) ] c log 2 L L ω Ω P ( ω ) log P ( ω ) L L ω Ω P ( ω ) log P ( ω ) 2 = 2 L L ω Ω P ( ω ) log P ( ω ) = 2 H g L ( P )
where c is some constant and where the second inequality is an application of the log sum inequality. As in the previous proof, all inequalities are thus equalities, P ′ ( ± A n + 1 ∧ ω ) = P ( ω ) / 2 and P ′ extends P , as required.  ■
In general, the substate entropy maximisers differ from the standard entropy maximisers, as well as from the partition entropy maximisers and the proposition entropy maximisers:
Example 3. For L = { A 1 , A 2 } and the substate weighting function, g L , on L (see Example 2), we find for E = { P ∈ P : P ( A 1 ∧ A 2 ) + 2 P ( A 1 ∧ ¬ A 2 ) = 0.1 } that the standard entropy maximiser, the partition entropy maximiser, the proposition entropy maximiser and the substate weighting entropy maximiser are pairwise different.
Table 2. Standard, partition, proposition and substate entropy maximisers.

            A 1 ∧ A 2    A 1 ∧ ¬ A 2    ¬ A 1 ∧ A 2    ¬ A 1 ∧ ¬ A 2
P Ω         0.0752      0.0124        0.4562        0.4562
P Π         0.0856      0.0072        0.4536        0.4536
P P Ω       0.0950      0.0025        0.4513        0.4513
P g L       0.0950      0.0025        0.4293        0.4732
Observe that the standard entropy maximiser, the partition entropy maximiser and the proposition entropy maximiser are all symmetric in ¬ A 1 ∧ A 2 and ¬ A 1 ∧ ¬ A 2 , while the substate weighting entropy maximiser is not. This break of symmetry is caused by the fact that g L is not symmetric in ¬ A 1 ∧ A 2 and ¬ A 1 ∧ ¬ A 2 .
We have seen that the substate weighting functions are not symmetric. Neither are they inclusive nor refined. We conjecture that if G = G 0 , the set of inclusive, symmetric and refined g, then the only language invariant family, g L , that gives rise to entropy maximisers that are sufficiently equivocal is the family that underwrites standard entropy maximisation: if g L is language invariant and the g L -entropy maximiser is in E , then g L = g Ω .
In sum, there is a compelling reason to prefer the standard entropy maximiser over other g-entropy maximisers: the standard entropy maximiser is language invariant, while other—perhaps, all other—appropriate g-entropy maximisers are not. In Appendix B.3, we show that there are three further ways in which the standard entropy maximiser differs from other g-entropy maximisers: it satisfies the principles of irrelevance, relativisation and independence.

5. Discussion

5.1. Summary

In this paper, we have seen how the standard concept of entropy generalises rather naturally to the notion of g-entropy, where g is a function that weights the partitions that contribute to the entropy sum. If loss is taken to be logarithmic, as is forced by desiderata L1–4 for a default loss function, then the belief function that minimises worst-case g-expected loss, where the expectation is taken with respect to a chance function known to lie in a convex set E , is the probability function in E that maximises g-entropy, if there is such a function. This applies whether belief functions are thought of as defined over the sentences of an agent’s language or over the propositions picked out by those sentences.
This fact suggests a justification of the three norms of objective Bayesianism: a belief function should be a probability function, it should lie in the set E of potential chance functions and it should otherwise be equivocal in that it should have maximum g-entropy.
However, the probability function with maximum g-entropy may lie outside E , on its boundary, in which case that function is ruled out of contention by available evidence. Therefore, objective Bayesianism only requires that a belief function be sufficiently equivocal—not that it be maximally equivocal. Principles E1–5 can be used to constrain the set E , of sufficiently equivocal functions. Arguably, if the standard entropy maximiser is in E , then it is also in E . Moreover, the standard entropy maximiser stands out as being language invariant. This then provides a qualified justification of the standard maximum entropy principle: while an agent is rationally entitled to adopt any sufficiently equivocal probability function in E as her belief function, if the standard entropy maximiser is in E , then that function is a natural choice.
Some questions arise. First, what are the consequences of this sort of account for conditionalisation and Bayes’ theorem? Second, how does this account relate to imprecise probability, advocates of which reject our starting assumption that the strengths of an agent’s beliefs are representable by a single belief function? Third, the arguments of this paper are overtly pragmatic; can they be reformulated in a non-pragmatic way? We shall tackle these questions in turn.

5.2. Conditionalisation, Conditional Probabilities and Bayes’ Theorem

Subjective Bayesians endorse the probability norm and often also some sort of calibration norm, but do not go so far as to insist on equivocation. This leads to relatively weak constraints on degrees of belief, so subjective Bayesians typically appeal to Bayesian conditionalisation as a means to tightly constrain the way in which degrees of belief change in the light of new evidence. Objective Bayesians do not need to invoke Bayesian conditionalisation as a norm of belief change, because the three norms of objective Bayesianism already tightly constrain any new belief function that an agent can adopt. In fact, if the objective Bayesian adopts the policy of adopting the standard entropy maximiser as her belief function, then objective Bayesian updating often agrees with updating by conditionalisation, as shown by Seidenfeld (Result 1 in [22]):
Theorem 8. Suppose that E is the set of probability functions calibrated with evidence E, and that E can be written as the set of probability functions which satisfy finitely many constraints of the form c i = ∑ ω ∈ Ω d i , ω P ( ω ) . Suppose E ′ is the set of probability functions calibrated with evidence E ∪ { G } , and that P E , P E ∪ { G } are functions in E , E ′ , respectively, that maximise standard entropy. If:
(i) 
G ⊆ Ω ,
(ii) 
the only constraints imposed by E ∪ { G } are the constraints c i = ∑ ω ∈ Ω d i , ω P ( ω ) imposed by E , together with the constraint P ( G ) = 1 ,
(iii) 
the constraints in (ii) are consistent, and
(iv) 
P E ( · | G ) ∈ E ′ ,
then P E ∪ { G } ( F ) = P E ( F | G ) for all F ⊆ Ω .
This fact has various consequences. First, it provides a qualified justification of Bayesian conditionalisation: a standard entropy maximiser can be thought of as applying Bayesian conditionalisation in many natural situations. Second, if conditions (i)–(iv) of Theorem 8 hold, then there is no need to maximise standard entropy to compute the agent’s new degrees of belief—instead, Bayesian conditionalisation can be used to calculate these degrees of belief. Third, conditions (i)–(iv) of Theorem 8 can each fail, so the two forms of updating do not always agree, and Bayesian conditionalisation is less central to an objective Bayesian who maximises standard entropy than it is to a subjective Bayesian. As pointed out in Williamson [1] (Chapter 4) and Williamson [23] (§§8,9), standard entropy maximisation is to be preferred over Bayesian conditionalisation where any of these conditions fail. Fourth, conditional probabilities, which are crucial to subjective Bayesianism on account of their use in Bayesian conditionalisation, are less central to the objective Bayesian, because conditionalisation is only employed in a qualified way. For the objective Bayesian, conditional probabilities are merely ratios of unconditional probabilities—they are not generally interpretable as conditional degrees of belief (§4.4.1 in [1]). Fifth, Bayes’ theorem, which is an important tool for calculating conditional probabilities, used routinely in Bayesian statistics, for example, is less central to objective Bayesianism, because of the less significant role played by conditional probabilities.
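The agreement asserted by Theorem 8 can be checked numerically. The following sketch is our own illustration, not from the paper: a four-state language, the single linear constraint P(A) = 0.8 as evidence, new evidence G = B, and a simple grid search standing in for a proper convex optimiser.

```python
import math

def entropy(p):
    """Shannon entropy with the convention 0 log 0 = 0."""
    return -sum(x * math.log(x) for x in p if x > 0)

# States: w1 = A&B, w2 = A&~B, w3 = ~A&B, w4 = ~A&~B.
# Evidence E imposes the single linear constraint P(A) = 0.8.
def maxent_given_PA(pa, steps=200):
    """Grid-search the standard entropy maximiser subject to P(A) = pa."""
    best_h, best_p = -1.0, None
    for i in range(steps + 1):
        for j in range(steps + 1):
            p1 = pa * i / steps            # mass on A&B
            p3 = (1 - pa) * j / steps      # mass on ~A&B
            p = (p1, pa - p1, p3, (1 - pa) - p3)
            h = entropy(p)
            if h > best_h:
                best_h, best_p = h, p
    return best_p

P_E = maxent_given_PA(0.8)
# The maximiser equivocates within A and within ~A:
assert all(abs(a - b) < 1e-6 for a, b in zip(P_E, (0.4, 0.4, 0.1, 0.1)))

# New evidence G = B adds the constraint P(B) = 1 (conditions (i)-(iii)).
# Together with P(A) = 0.8, this pins down a single probability function,
# which is therefore the new entropy maximiser:
P_new = (0.8, 0.0, 0.2, 0.0)

# Conditionalising P_E on B yields the same function, as Theorem 8 predicts:
pb = P_E[0] + P_E[2]
P_cond = (P_E[0] / pb, 0.0, P_E[2] / pb, 0.0)
assert all(abs(a - b) < 1e-6 for a, b in zip(P_cond, P_new))
```

Condition (iv) holds here because P_E(·|B) still satisfies P(A) = 0.8; when one of conditions (i)–(iv) fails, the two update rules can disagree, as noted below.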
Interestingly, while Theorem 8 appeals to standard entropy maximisation, an analogous result holds for g-entropy maximisation, for any inclusive g, as we show in Appendix B.2:
Theorem 9. Suppose that E, convex and closed, is the set of probability functions calibrated with evidence E, and E′ is the set of probability functions calibrated with evidence E ∪ {G}. Furthermore, suppose that P_E, P_{E∪{G}} are functions in E, E′, respectively, that maximise g-entropy for some fixed g ∈ G_inc ∪ {g_Ω}. If:
(i) G ⊆ Ω,
(ii) the only constraints imposed by E ∪ {G} are the constraints imposed by E together with the constraint P(G) = 1,
(iii) the constraints in (ii) are consistent, and
(iv) P_E(·|G) ∈ E′,
then P_{E∪{G}}(F) = P_E(F|G) for all F ⊆ Ω.
Thus, the preceding comments apply equally in the more general context of this paper.

5.3. Imprecise Probability

Advocates of imprecise probability argue that an agent’s belief state is better represented by a set of probability functions—for example, by the set E of probability functions calibrated with evidence—than by a single belief function [24]. This makes decision making harder. An agent whose degrees of belief are represented by a single probability function can use that probability function to determine which of the available acts maximises expected utility. However, an imprecise agent will typically find that the acts that maximise expected utility vary according to which probability function in her imprecise belief state is used to determine the expectation. The question then arises, with respect to which probability function in her belief state should such expectations be taken?
This question motivates a two-step procedure for imprecise probability: first, isolate a set of probability functions as one’s belief state; then, choose a probability function from within this set for decision making—this might be done in advance of any particular decision problem arising—and use that function to make decisions by maximising expected utility. While this sort of procedure is not the only way of thinking about imprecise probability, it does have some adherents. It is a component of the transferable belief model of Smets and Kennes [25], for instance, and Keynes advocated a similar sort of view:
the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.
(p. 214 in [26])
(We are very grateful to an anonymous referee for pointing out that Smets and Kennes adopt this sort of position, and to Hykel Hosni for alerting us to this view of Keynes.)
The results of this paper can be applied at the second step of this two-step procedure. If one wants a probability function for decision making that controls worst-case g-expected default loss, then one should choose a function in one’s belief state with sufficiently high g-entropy (or a limit point of such functions), where g is in G , the set of appropriate weighting functions. The resulting approach to imprecise probability is conceptually different to objective Bayesian epistemology, but the two approaches are formally equivalent, with the decision function for imprecise probability corresponding to the belief function for objective Bayesian epistemology.
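The two-step procedure can be illustrated with a toy credal set. This is our own example, not the paper's: a binary event with imprecise probability in [0.6, 0.9] and two hypothetical acts.

```python
import math

def entropy(p):
    """Entropy of a belief assigning p to A and 1 - p to its negation."""
    return -sum(x * math.log(x) for x in (p, 1 - p) if x > 0)

# Step 1: the belief state is the credal set {P : 0.6 <= P(A) <= 0.9},
# discretised for simplicity.
candidates = [0.6 + 0.3 * k / 1000 for k in range(1001)]

# Step 2: for decision making, pick the function in the set with maximum
# entropy, i.e. the most equivocal calibrated function (here the endpoint
# of the interval closest to 1/2).
p_star = max(candidates, key=entropy)
assert abs(p_star - 0.6) < 1e-9

# Decide by expected utility under p_star.  Hypothetical acts:
# act1 pays 1 if A obtains and 0 otherwise; act2 pays 0.7 for sure.
eu_act1 = p_star * 1.0 + (1 - p_star) * 0.0
eu_act2 = 0.7
assert eu_act2 > eu_act1   # the sure payoff wins under p_star
```

Under the account of this paper, one would choose a function of sufficiently high g-entropy; the standard entropy used above is the special case g = g_Ω.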

5.4. A Non-Pragmatic Justification

The line of argument in this paper is thoroughly pragmatic: one ought to satisfy the norms of objective Bayesianism in order to control worst-case expected loss. However, the question has recently arisen as to whether one can adapt arguments that appeal to scoring rules to provide a non-pragmatic justification of the norms of rational belief—see, e.g., Joyce [9]. There appears to be some scope for reinterpreting the arguments of this paper in non-pragmatic terms, along the following lines. Instead of viewing L1–4 as isolating an appropriate default loss function, one can view them as postulates on a measure of the inaccuracy of one’s belief in a true proposition: believing a true proposition does not expose one to inaccuracy; inaccuracy strictly increases as the degree of belief in the true proposition decreases; inaccuracy with respect to a proposition only depends on the degree of belief in that proposition; inaccuracy is additive over independent sublanguages. (L4 would need to be changed insofar as it would need to be physical probability, P*, rather than the agent’s belief function, B, that determines whether sublanguages are independent. This change does not affect the formal results.) A g-scoring rule then measures expected inaccuracy. Strict propriety implies that the physical probability function has minimum expected inaccuracy. (If P* is deterministic, i.e., P*(ω) = 1 for some ω ∈ Ω, then the unique probability function that puts all mass on ω has minimum expected inaccuracy. In this sense, we can say that strictly proper scoring rules are truth-tracking, which is an important epistemic good.) In order to minimise worst-case g-expected inaccuracy, one would need degrees of belief that are probabilities, that are calibrated to physical probability and that maximise g-entropy.
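The strict propriety claim can be checked numerically for the logarithmic score in the binary case. This is an illustrative check of our own, with an arbitrary choice of physical probability 0.3:

```python
import math

def expected_log_inaccuracy(p_star, q):
    """Expected inaccuracy of belief q when the physical probability is
    p_star, for the ordinary logarithmic score on a binary partition."""
    return -(p_star * math.log(q) + (1 - p_star) * math.log(1 - q))

p_star = 0.3
grid = [k / 1000 for k in range(1, 1000)]
best_q = min(grid, key=lambda q: expected_log_inaccuracy(p_star, q))

# Strict propriety: expected inaccuracy is uniquely minimised at the
# physical probability itself, so the score is truth-tracking.
assert abs(best_q - p_star) < 1e-9
```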
The main difference between the pragmatic and the non-pragmatic interpretations of the arguments of this paper appears to lie in the default nature of the conclusions under a pragmatic interpretation. It is argued here that loss should be taken to be logarithmic in the absence of knowledge of the true loss function. If one does know the true loss function, L*, and this loss function turns out not to be logarithmic, then one should arguably do something other than minimising worst-case expected logarithmic loss—one should minimise worst-case expected L*-loss. Under a non-pragmatic interpretation, on the other hand, one might argue that L1–4 characterise the correct measure of the inaccuracy of a belief in a true proposition, not a measure that is provisional in the sense that logarithmic loss is. Thus, the conclusions of this paper are arguably firmer—less provisional—under a non-pragmatic construal.

5.5. Questions for Further Research

We noted above that if one knows the true loss function, L*, then one should arguably minimise worst-case expected L*-loss. The authors of [3] generalise standard entropy in a different direction to that pursued in this paper, in order to argue that minimising worst-case expected L*-loss requires maximising entropy in their generalised sense. One interesting question for further research is whether one can generalise the notion of g-entropy in an analogous way, to try to show that minimising worst-case g-expected L*-loss requires maximising g-entropy in this further generalised sense.
A second question concerns whether one can extend the discussion of belief over sentences in Section 3 to predicate, rather than propositional, languages. A third question is whether other justifications of the logarithmic score can be used to justify the logarithmic g-score—for example, is the logarithmic g-score the only local strictly proper g-score? Fourth, we suspect that Theorem 3 can be further generalised. Finally, it would be interesting to investigate language invariance in more detail in order to test the conjecture at the end of Section 4.

Acknowledgements

This research was conducted as a part of the project, From objective Bayesian epistemology to inductive logic. We are grateful to the UK Arts and Humanities Research Council for funding this research and to Teddy Groves, Jeff Paris and three anonymous referees for very helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

A. Entropy of Belief Functions

Axiomatic characterizations of standard entropy on probability functions have featured heavily in the literature—see [27]. In this appendix, we provide two characterizations of g-entropy on belief functions, which closely resemble the original axiomatisation provided by Shannon (§6 in [6]). (We appeal to these characterisations in the proof of Proposition 12 in Section B.2.)
We shall need some new notation. Let k ∈ N and x ∈ R; denote by x@k the tuple ⟨x, x, …, x⟩ ∈ R^k. For x ∈ R and y ∈ R^l, we denote by x·y the vector ⟨x·y_1, …, x·y_l⟩ ∈ R^l. For a vector x ∈ R^k, let |x|_1 = x_1 + … + x_k. Assume in the following that all x_i and all components y_i^j are in [0, 1]. Furthermore, let k and l henceforth denote the number of components of x and y, respectively.
Proposition 10 (First characterisation). Let H(B) = Σ_{π∈Π} g(π) f(π, B), where f(π, B) := h(B(F_1), …, B(F_k)) for π = {F_1, …, F_k} and:
h : ⋃_{k≥1} {⟨x_1, …, x_k⟩ : x_i ≥ 0 & Σ_{i=1}^k x_i ≤ 1} → [0, ∞)
Suppose also that the following conditions hold:
H1: h is continuous;
H2: if 1 ≤ t_1 < t_2 ∈ N, then h((1/t_1)@t_1) < h((1/t_2)@t_2);
H3: if 0 < |x|_1 ≤ 1 and if |y_i|_1 = 1 for 1 ≤ i ≤ k, then
h(x_1·y_1, …, x_k·y_k) = h(x_1, …, x_k) + Σ_{i=1}^k x_i h(y_i);
H4: q·h(1/t) = h((1/t)@q) for 1 ≤ q ≤ t ∈ N;
then, H(B) = −Σ_{π∈Π} g(π) Σ_{F∈π} B(F) log B(F).
Proof: We first apply the proof of Paris [21] (pp. 77–78), which implies (using only H1, H2 and H3) that:
h(x) = −c Σ_{i=1}^k x_i log x_i    (83)
for all x with |x|_1 = 1, where c ∈ R_{>0} is some constant.
Now suppose 0 < |x|_1 < 1. Then, with y_i := x_i/|x|_1, we have x = |x|_1·y and |y|_1 = 1. Thus:
h(x) = h(|x|_1·y) =_{H3} h(|x|_1) + |x|_1 h(y) =_{(83)} h(|x|_1) − |x|_1 c Σ_{i=1}^l y_i log y_i.
We will next show that h(x) = −c x log x for scalar x. To this end, note that h(1/t) =_{H4} (1/t) h((1/t)@t) =_{(83)} (1/t)(−c t (1/t) log(1/t)) = −c (1/t) log(1/t). For 1 ≤ q ≤ t ∈ N, we now find:
−c (1/t) log(1/t) = h(1/t) =_{H4} (1/q) h((1/t)@q) = (1/q) h((q/t)·((1/q)@q)) =_{H3} (1/q) h(q/t) + (1/t) h((1/q)@q) =_{(83)} (1/q) h(q/t) − c (1/t) log(1/q)
Thus:
h(q/t) = −c (q/t) (log(1/t) − log(1/q)) = −c (q/t) log(q/t)
Hence, h is of the claimed form for rational numbers in (0, 1]. The continuity axiom, H1, now guarantees that h(x) = −c x log x for all x ∈ [0, 1]. Putting our results together, we obtain:
h(x) = −c |x|_1 log|x|_1 − c |x|_1 Σ_{i=1}^l y_i log y_i = −c |x|_1 (Σ_{i=1}^l y_i log|x|_1 + Σ_{i=1}^l y_i log y_i) = −c |x|_1 Σ_{i=1}^l y_i log(|x|_1·y_i) = −c Σ_{i=1}^l x_i log x_i
Finally, note that h does satisfy all the axioms. The constant, c, can then be absorbed into the weighting function, g, to give H(B) = −Σ_{π∈Π} g(π) Σ_{F∈π} B(F) log B(F), as required.  ■
A tighter analysis reveals that the axiomatic characterization above may be weakened. We may replace H3 by the following two instances of H3:
A: if |x|_1 = 1 and if |y_i|_1 = 1 for 1 ≤ i ≤ k, then
h(x_1·y_1, …, x_k·y_k) = h(x_1, …, x_k) + Σ_{i=1}^k x_i h(y_i)
B: if 0 < x < 1 and if |y|_1 = 1, then
h(x·y) = h(x) + x h(y)
Property A is, of course, Shannon’s original axiom H3. The axiom H3 used above is the straightforward generalization of Shannon’s H3 to vectors x summing to less than one.
Proposition 11 (Second characterisation). Let H(B) = Σ_{π∈Π} g(π) f(π, B), where f(π, B) := h(B(F_1), …, B(F_k)) for π = {F_1, …, F_k} and:
h : ⋃_{k≥1} {⟨x_1, …, x_k⟩ : x_i ≥ 0 & Σ_{i=1}^k x_i ≤ 1} → [0, ∞)
Suppose also that the following conditions hold:
H1: h is continuous;
H2: if 1 ≤ t_1 < t_2 ∈ N, then h((1/t_1)@t_1) < h((1/t_2)@t_2);
A: if |x|_1 = 1 and if |y_i|_1 = 1 for 1 ≤ i ≤ k, then
h(x_1·y_1, …, x_k·y_k) = h(x_1, …, x_k) + Σ_{i=1}^k x_i h(y_i);
B: if 0 < x < 1 and if |y|_1 = 1, then h(x·y) = h(x) + x h(y);
C: for 0 < x, y < 1, it holds that h(x·y) = x h(y) + y h(x);
D: for 0 < x < 1, it holds that h(x) = h(x, 1 − x) − h(1 − x);
then, H(B) = −Σ_{π∈Π} g(π) Σ_{F∈π} B(F) log B(F).
Proof: We shall again invoke the proof in Paris [21] (pp. 77–78) to show (using only H1, H2 and A) that:
h(x) = −c Σ_{i=1}^k x_i log x_i    (88)
for all x with |x|_1 = 1 and some constant c ∈ R_{>0}.
Now suppose 0 < |x|_1 < 1. Then, with y_i := x_i/|x|_1, we have x = |x|_1·y and |y|_1 = 1. Thus:
h(x) = h(|x|_1·y) =_{B} h(|x|_1) + |x|_1 h(y) =_{(88)} h(|x|_1) − |x|_1 c Σ_{i=1}^l y_i log y_i
As we have seen in the previous proof, it now only remains to show that h(x) = −c x log x for x ∈ [0, 1].
We next show by induction that, for all non-zero t ∈ N, h(1/2^t) = −c (1/2^t) log(1/2^t).
The base case is immediate; observe that:
h(1/2) =_{D} (1/2) h(1/2, 1/2) =_{(88)} −c (1/2) log(1/2)
Using the induction hypothesis (IH), the inductive step is straightforward, too:
h(1/2^t) =_{C} (1/2^{t−1}) h(1/2) + (1/2) h(1/2^{t−1}) =_{IH} −c ((1/2^t) log(1/2) + (1/2^t) log(1/2^{t−1})) = −c (1/2^t) log(1/2^t)
We next show by induction on t ≥ 1 that, for all non-zero natural numbers m < 2^t, h(m/2^t) = −c (m/2^t) log(m/2^t).
For the base case, simply note that t = m = 1, and thus:
h(1/2^1) = −c (1/2) log(1/2)
The inductive step follows for m < 2^{t−1}:
h(m/2^t) =_{C} (m/2^{t−1}) h(1/2) + (1/2) h(m/2^{t−1}) =_{IH} −c (m/2^{t−1}) (1/2) log(1/2) − c (1/2) (m/2^{t−1}) log(m/2^{t−1}) = −c (m/2^t) log(m/2^t)
For 2^{t−1} ≤ m < 2^t, we find:
h(m/2^t) =_{D} h(m/2^t, (2^t − m)/2^t) − h((2^t − m)/2^t) =_{(88)} −c ((m/2^t) log(m/2^t) + ((2^t − m)/2^t) log((2^t − m)/2^t)) − h((2^t − m)/2^t) =_{IH} −c ((m/2^t) log(m/2^t) + ((2^t − m)/2^t) log((2^t − m)/2^t)) + c ((2^t − m)/2^t) log((2^t − m)/2^t) = −c (m/2^t) log(m/2^t)
Since rational numbers of the form m/2^t are dense in [0, 1], we can use the continuity axiom, H1, to conclude that h has to be of the desired form.
Finally, note that h does satisfy all the axioms. The constant, c, can then be absorbed into the weighting function, g, to give the required form of H ( B ) .  ■
We can combine B and C to form one single axiom, H5, which implies B and C:
H5: if 0 < x < 1 and if |y|_1 ≤ 1, then
h(x·y) = |y|_1 h(x) + x h(y)
Clearly, H5 is a natural way to generalize A to belief functions. It now follows easily that H1, H2, A, H5 and D jointly constrain h to h(x) = −c Σ_{i=1}^k x_i log x_i.
Although it is certainly possible to consider the g-entropy of a belief function, maximising standard entropy over B—as opposed to over E ⊆ P—has bizarre consequences. For |Ω| = 2, the set of standard entropy maximisers is {B_z ∈ B : z ∈ [0, 1], B_z(Ω) = z, B_z(∅) = 1 − z, B_z(ω_1) = B_z(ω_2) = 1/e}. This follows from considering the following optimization problem:
maximize  −B(ω_1) log B(ω_1) − B(ω_2) log B(ω_2)
subject to  0 ≤ B(∅), B(Ω), B(ω_1), B(ω_2) ≤ 1
  B(ω_1) + B(ω_2) ≤ 1
  B(∅) + B(Ω) ≤ 1
  B(∅) + B(Ω) = 1 or B(ω_1) + B(ω_2) = 1
Putting B(∅) + B(Ω) = 1 ensures that the last two constraints are satisfied and permits the choice of B(ω_1), B(ω_2) such that B(ω_1) + B(ω_2) < 1. For non-negative B(ω), −B(ω) log B(ω) attains its unique maximum at B(ω) = 1/e. The claimed optimality result follows.
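The 1/e phenomenon is easy to verify numerically: on [0, 1], the function −x log x peaks at x = 1/e ≈ 0.368, so freeing B(ω_1), B(ω_2) from the normalisation constraint lets each take this unconstrained maximum. A small check of our own:

```python
import math

# -x log x on [0, 1], with the value 0 at x = 0.
def f(x):
    return -x * math.log(x) if x > 0 else 0.0

# The maximum lies at x = 1/e, where f also takes the value 1/e.
x_star = max((k / 100000 for k in range(100001)), key=f)
assert abs(x_star - 1 / math.e) < 1e-4
assert abs(f(x_star) - 1 / math.e) < 1e-4

# With B(empty) + B(Omega) = 1, the remaining constraints are satisfied
# even though B(w1) = B(w2) = 1/e sums to 2/e < 1:
assert 2 / math.e < 1
```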
It is worth pointing out that this phenomenon does not depend on the base of the logarithm. For |Ω| ≥ 3, however, intuition honed by considering the entropy of probability functions does not lead one astray: any belief function B with B(ω) = 1/|Ω| for ω ∈ Ω does maximize standard entropy.
Similarly bizarre consequences also obtain in the case of other g-entropies. For |Ω| = 2 and g({Ω}) + g({Ω, ∅}) ≤ g({{ω_1}, {ω_2}}), belief functions maximizing g-entropy satisfy B(ω_1) = B(ω_2) = 1/e. To see this, simply note that, for such g, the optimum obtains for B(Ω) + B(∅) = 1.
For the proposition entropy for |Ω| = 2, there are two entropy maximisers in B. They are B_1(∅) = B_1(Ω) = 1/2, B_1(ω_1) = B_1(ω_2) = 1/e and B_2(∅) = B_2(Ω) = 1/e, B_2(ω_1) = B_2(ω_2) = 1/2.
Thus, an agent adopting a belief function maximizing g-entropy over B may violate the probability norm. Furthermore, the agent may have to choose a belief function from finitely or infinitely many such non-probabilistic functions. For an agent minimizing worst-case g-expected loss, these bizarre situations do not arise. From Theorem 2, we know that, for inclusive g, minimizing worst-case g-expected loss forces the agent to adopt a probability function that maximizes g-entropy over the set E of calibrated probability functions. By Corollary 2, this probability function is unique.

B. Properties of g-Entropy Maximisation

The general properties of standard entropy (defined on probability functions) have been widely studied in the literature. Here, we examine general properties of the g-entropy of a probability function, for g G . We have already seen one difference between standard and g-entropy in Section 4: standard entropy satisfies language invariance; g-entropy, in general, need not. Surprisingly, language invariance seems to be an exception. Standard entropy and g-entropy behave in many respects in the same way.

B.1. Preserving the Equivocator

For example, as we shall see now, if g is inclusive and symmetric, then the probability function that is deemed most equivocal—i.e., the function, out of all probability functions, with maximum g-entropy—is the equivocator function, P_=, which gives each state the same probability.
Definition 16 (Equivocator-preserving). A weighting function g is called equivocator-preserving if and only if arg sup_{P∈P} H_g(P) = P_=.
That symmetry and inclusiveness are sufficient for g to be equivocator-preserving will follow from the following lemma:
Lemma 9. For inclusive g, g is equivocator-preserving if and only if:
z(ω) := Σ_{F⊆Ω : ω∈F} Σ_{π∈Π : F∈π} g(π) (1 − log|Ω| + log|F|) = c
for some constant, c.
Proof: Recall from Proposition 2 that g-entropy is strictly concave on P. Thus, every critical point in the interior of P is the unique maximiser of H_g(·) on P.
Now consider the Lagrange function, Lag:
Lag(P) = λ(−1 + Σ_{ω∈Ω} P(ω)) + H_g(P) = λ(−1 + Σ_{ω∈Ω} P(ω)) − Σ_{π∈Π} g(π) Σ_{F∈π} (Σ_{ω∈F} P(ω)) log(Σ_{ω∈F} P(ω))
For fixed ω ∈ Ω and π ∈ Π, denote by F_{ω,π} the unique F ⊆ Ω such that ω ∈ F and F ∈ π. Taking derivatives, we obtain:
∂Lag(P)/∂P(ω) = λ − Σ_{π∈Π} g(π) (1 + log Σ_{ν∈F_{ω,π}} P(ν))  for all ω ∈ Ω
Now, if P_= maximises g-entropy, then, for all ω ∈ Ω, the following must vanish:
∂Lag(P_=)/∂P(ω) = λ − Σ_{π∈Π} g(π) (1 + log P_=(F_{ω,π})) = λ − Σ_{π∈Π} g(π) (1 + log(|F_{ω,π}|/|Ω|)) = λ − Σ_{π∈Π} g(π) (1 − log|Ω| + log|F_{ω,π}|) = λ − Σ_{F⊆Ω : ω∈F} Σ_{π∈Π : F∈π} g(π) (1 − log|Ω| + log|F|)
Since this expression has to vanish for all ω ∈ Ω, it does not depend on ω.
On the other hand, if g is such that:
Σ_{F⊆Ω : ω∈F} Σ_{π∈Π : F∈π} g(π) (1 − log|Ω| + log|F|)
does not depend on ω, then P_= is a critical point of Lag(P) and, thus, is the entropy maximiser.  ■
Corollary 6. If g is symmetric and inclusive, then it is equivocator-preserving.
Proof: By Lemma 9, we only need to show that:
Σ_{F⊆Ω : ω∈F} Σ_{π∈Π : F∈π} g(π) (1 − log|Ω| + log|F|)
does not depend on ω.
Denote by π^{ij}, respectively F^{ij}, the result of replacing ω_i by ω_j and vice versa in π ∈ Π, respectively F ⊆ Ω. By the symmetry of g, we have g(π) = g(π^{ij}). Since |F| = |F^{ij}|, we then find, for all ω_i, ω_j ∈ Ω:
Σ_{F⊆Ω : ω_i∈F} Σ_{π∈Π : F∈π} g(π) (1 − log|Ω| + log|F|) = Σ_{F⊆Ω : ω_i∈F} Σ_{π∈Π : F∈π} g(π^{ij}) (1 − log|Ω| + log|F^{ij}|) = Σ_{F⊆Ω : ω_i∈F} Σ_{π∈Π : F^{ij}∈π} g(π) (1 − log|Ω| + log|F^{ij}|) = Σ_{F⊆Ω : ω_j∈F} Σ_{π∈Π : F∈π} g(π) (1 − log|Ω| + log|F|)
 ■
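Corollary 6 can be illustrated numerically for a small symmetric, inclusive weighting of our own choosing: take |Ω| = 3 and give every partition of Ω into non-empty propositions the same weight. Random sampling suggests that no probability function beats the equivocator (a sanity check, not a proof):

```python
import math, random

def ent_term(x):
    """One summand -x log x of a g-entropy, with 0 log 0 = 0."""
    return -x * math.log(x) if x > 0 else 0.0

# All partitions of a three-state Omega into non-empty propositions,
# each weighted equally -- a symmetric, inclusive weighting.
partitions = [
    [(0, 1, 2)],
    [(0,), (1, 2)], [(1,), (0, 2)], [(2,), (0, 1)],
    [(0,), (1,), (2,)],
]

def g_entropy(p):
    return sum(ent_term(sum(p[i] for i in block))
               for part in partitions for block in part)

equivocator = [1 / 3] * 3
h_eq = g_entropy(equivocator)

# No randomly sampled probability function exceeds the equivocator's
# g-entropy, as equivocator-preservation predicts.
random.seed(1)
for _ in range(2000):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    assert g_entropy([x / s for x in w]) <= h_eq + 1e-12
```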
Are there any non-symmetric, inclusive g that are equivocator-preserving? We pose this as an interesting question for further research.

B.2. Updating

Next, we show that there is widespread agreement between updating by conditionalisation and updating by g-entropy maximisation, a result to which we alluded in Section 5.
Proposition 12. Suppose that E is the set of probability functions calibrated with evidence E. Let g be inclusive and G ⊆ Ω be such that E′ = {P ∈ E : P(G) = 1}, where E′ is the set of probability functions calibrated with evidence E ∪ {G}. Then, the following are equivalent:
  • P_E(·|G) ∈ [E′]
  • P_{E∪{G}}(·) = P_E(·|G),
where P_E, P_{E∪{G}} are functions in E, E′, respectively, that maximise g-entropy.
Proof: First, suppose that P_E(·|G) ∈ [E′].
Observe that if E = E′, then there is nothing to prove. Thus, suppose that E ≠ E′. Hence, there exists a function P ∈ E with P(Ḡ) > 0. By Proposition 18, inclusive g are open-minded; hence, P_E(Ḡ) > 0. (Note that the proof of Proposition 18 does not itself depend on Proposition 12.) Therefore, P_E(·|Ḡ) is well defined.
Now let P_1 := P_{E∪{G}} and P := P_E. Then, assume for contradiction that P_1(F) ≠ P(F|G) for some F ⊆ Ω. By Corollary 2, the g-entropy maximiser P_1 in [E′] is unique; furthermore, P(·|G) ∈ [E′]. It follows that:
−Σ_{π∈Π} g(π) Σ_{F∈π} P_1(F) log P_1(F) = H_g(P_1) > H_g(P(·|G)) = −Σ_{π∈Π} g(π) Σ_{F∈π} P(F|G) log P(F|G)
Now, define P′(·) = P(G) P_1(·|G) + P(Ḡ) P(·|Ḡ). Since [E] is convex, P_1, P(·|G) ∈ [E], and since P_1(·|G) = P_1, we have that P′ ∈ [E].
Using the above inequality, we observe, using axiom A of Appendix A with x = ⟨P(G), P(Ḡ)⟩, y_1 = ⟨P_1(F|G) : F ∈ π⟩ and y_2 = ⟨P(F|Ḡ) : F ∈ π⟩, that:
H_g(P′) = −Σ_{π∈Π} g(π) Σ_{F∈π} P′(F) log P′(F)
= −Σ_{π∈Π} g(π) Σ_{F∈π} (P(G) P_1(F|G) + P(Ḡ) P(F|Ḡ)) log(P(G) P_1(F|G) + P(Ḡ) P(F|Ḡ))
=_{A} −Σ_{π∈Π} g(π) (P(G) log P(G) + P(Ḡ) log P(Ḡ)) − Σ_{π∈Π} g(π) (P(G) Σ_{F∈π} P_1(F|G) log P_1(F|G) + P(Ḡ) Σ_{F∈π} P(F|Ḡ) log P(F|Ḡ))
= −Σ_{π∈Π} g(π) (P(G) log P(G) + P(Ḡ) log P(Ḡ)) − Σ_{π∈Π} g(π) (P(G) Σ_{F∈π} P_1(F) log P_1(F) + P(Ḡ) Σ_{F∈π} P(F|Ḡ) log P(F|Ḡ))
> −Σ_{π∈Π} g(π) (P(G) log P(G) + P(Ḡ) log P(Ḡ)) − Σ_{π∈Π} g(π) (P(G) Σ_{F∈π} P(F|G) log P(F|G) + P(Ḡ) Σ_{F∈π} P(F|Ḡ) log P(F|Ḡ))
= H_g(P)
This calculation contradicts the fact that P maximises g-entropy over [E]. Thus, P_1(·) = P(·|G).
Conversely, suppose that P_E(·|G) = P_{E∪{G}}(·). Now, simply observe that P_E(·|G) = P_{E∪{G}} ∈ [E′] ⊆ [E].  ■
Theorem 9. Suppose that E, convex and closed, is the set of probability functions calibrated with evidence E, and E′ is the set of probability functions calibrated with evidence E ∪ {G}. Furthermore, suppose that P_E, P_{E∪{G}} are functions in E, E′, respectively, that maximise g-entropy for some fixed g ∈ G_inc ∪ {g_Ω}. If:
(i) G ⊆ Ω,
(ii) the only constraints imposed by E ∪ {G} are the constraints imposed by E together with the constraint P(G) = 1,
(iii) the constraints in (ii) are consistent, and
(iv) P_E(·|G) ∈ E′,
then, P_{E∪{G}}(F) = P_E(F|G) for all F ⊆ Ω.
Proof: For g ∈ G_inc, this follows directly from Proposition 12. Simply note that E′ = [E′] and, thus, P_E(·|G) ∈ [E′].
The proof of Proposition 12 also goes through for g = g_Ω. This follows from the fact that all the ingredients in the proof—open-mindedness, uniqueness of the g-entropy maximiser on a convex set E and the axiomatic characterizations in Appendix A—also hold for standard entropy.  ■
This extends Seidenfeld’s result for standard entropy, Theorem 8, to arbitrary convex sets E ⊆ P and, also, to inclusive weighting functions.

B.3. Paris-Vencovská Properties

The following eight principles have played a central role in axiomatic characterizations of the maximum entropy principle by Paris and Vencovská—cf. [21,28,29,30]. The first seven principles were first put forward in [29]. In [28], all eight principles are viewed as following from a single common-sense principle: “Essentially similar problems should have essentially similar solutions.”
While Paris and Vencovská mainly considered linear constraints, we shall consider arbitrary convex sets, E , E 1 . Adopting their definitions and using our notation, we investigate the following properties:
Definition 17 (1: Equivalence). P_E only depends on E and not on the constraints that give rise to E.
This clearly holds for every weighting function g .
Definition 18 (2: Renaming). Let per be an element of the permutation group on {1, …, |Ω|}. For a proposition F ⊆ Ω with F = {ω_{i_1}, …, ω_{i_k}}, define per(F) = {ω_{per(i_1)}, …, ω_{per(i_k)}}. Next, let per(B(F)) = B(per(F)) and per(E) = {per(P) : P ∈ E}. Then, g satisfies renaming if and only if P_E(F) = P_{per(E)}(per(F)).
Proposition 13. If g is inclusive and symmetric, then g satisfies renaming.
Proof: For π ∈ Π with π = {F_{i_1}, …, F_{i_f}}, define per(π) = {per(F_{i_1}), …, per(F_{i_f})}. Using the fact that g is symmetric for the second equality, we find:
H_g(P) = −Σ_{π∈Π} g(π) Σ_{F∈π} P(F) log P(F) = −Σ_{π∈Π} g(per^{−1}(π)) Σ_{F∈π} P(F) log P(F) = −Σ_{π∈Π} g(π) Σ_{F∈per(π)} P(F) log P(F) = −Σ_{π∈Π} g(π) Σ_{F∈π} P(per(F)) log P(per(F)) = H_g(per(P))
Thus, P_{per(E)} = per(P_E), and hence, P_{per(E)}(per(F)) = per(P_E)(per(F)) = P_E(F).  ■
Weighting functions g satisfying the renaming property satisfy a further symmetry condition, as we shall see now.
Definition 19 (Symmetric complement). For P ∈ P, define the symmetric complement of P with respect to A_i, denoted by σ_i(P), as follows:
σ_i(P)(±A_1 ∧ ⋯ ∧ ±A_{i−1} ∧ ±A_i ∧ ±A_{i+1} ∧ ⋯ ∧ ±A_n) := P(±A_1 ∧ ⋯ ∧ ±A_{i−1} ∧ ∓A_i ∧ ±A_{i+1} ∧ ⋯ ∧ ±A_n)
i.e., σ_i(P)(ω) = P(ω′), where ω′ is ω but with A_i negated. A function P ∈ P is called symmetric with respect to A_i if and only if P = σ_i(P).
We call E ⊆ P symmetric with respect to A_i just when the following condition holds: P ∈ [E] if and only if σ_i(P) ∈ [E].
Corollary 7. For all symmetric and inclusive g and all E that are symmetric with respect to A_i, it holds that:
P_E = σ_i(P_E)
Thus, if E is symmetric with respect to A_i, so is P_E.
Proof: Since g is symmetric and inclusive, there is some function γ : N → R_{>0} such that H_g(P) = −Σ_{F⊆Ω} γ(|F|) P(F) log P(F) for all P ∈ P. Hence:
H_g(P_E) = −Σ_{F⊆Ω} γ(|F|) P_E(F) log P_E(F) = −Σ_{F⊆Ω} γ(|F|) · σ_i(P_E)(F) · log(σ_i(P_E)(F)) = H_g(σ_i(P_E))
Since E is symmetric with respect to A_i, we have that σ_i(P_E) ∈ [E]. Therefore, if P_E ≠ σ_i(P_E), then there are two different probability functions in [E] which both have maximum entropy. This contradicts the uniqueness of the g-entropy maximiser (Corollary 2).  ■
This Corollary explains the symmetries exhibited in the tables in the proof of Proposition 8. Since, in that proof, E is symmetric with respect to A_3, the proposition entropy and the partition entropy maximisers are symmetric with respect to A_3. Thus, P_{P_Ω,L}(ω ∧ A_3) = P_{P_Ω,L}(ω ∧ ¬A_3) and P_{Π,L}(ω ∧ A_3) = P_{Π,L}(ω ∧ ¬A_3) for all ω ∈ Ω.
Definition 20 (3: Irrelevance). Let P_1, P_2 be the sets of probability functions on disjoint languages L_1, L_2, respectively. Then, irrelevance holds if, for E_1 ⊆ P_1 and E_2 ⊆ P_2, we have that P_{E_1}(F × Ω_2) = P_{E_1×E_2}(F × Ω_2) for all propositions F of L_1, where P_{E_1}, P_{E_1×E_2} are the g-entropy maximisers on L_1 ∪ L_2 with respect to E_1 × P_2, respectively, E_1 × E_2.
Proposition 14. Neither the partition nor the proposition weighting satisfies irrelevance.
Proof: Let L_1 = {A_1, A_2}, L_2 = {A_3}, E_1 = {P ∈ P_1 : P(A_1 ∧ A_2) + 2P(¬A_1 ∧ ¬A_2) = 0.2} and E_2 = {P ∈ P_2 : P(A_3) = 0.1}. Then, with ω_1 = ¬A_1 ∧ ¬A_2 ∧ ¬A_3, ω_2 = ¬A_1 ∧ ¬A_2 ∧ A_3 and so on, we find:
Table A1. Partition entropy and proposition entropy maximisers and irrelevance.
                  ω_1      ω_2       ω_3      ω_4      ω_5      ω_6      ω_7      ω_8
P_{Π,E_1}        0.0142   0.0142    0.2071   0.2071   0.2071   0.2071   0.0715   0.0715
P_{Π,E_1×E_2}    0.0312   0.0004    0.3692   0.0466   0.3692   0.0466   0.1304   0.0064
P_{P_Ω,E_1}      0.0050   0.0050    0.2025   0.2025   0.2025   0.2025   0.0901   0.0901
P_{P_Ω,E_1×E_2}  0.0211   6.2×10⁻⁹  0.3606   0.0500   0.3606   0.0500   0.1577   2.3×10⁻⁶
Now, simply note that, for instance:
P_{Π,E_1}(¬A_1 ∧ ¬A_2) = P_{Π,E_1}(ω_1) + P_{Π,E_1}(ω_2) ≠ P_{Π,E_1×E_2}(ω_1) + P_{Π,E_1×E_2}(ω_2) = P_{Π,E_1×E_2}(¬A_1 ∧ ¬A_2)
(As we are going to see in Proposition 18, none of the values in the table can be zero. Therefore, the small numerical values found by computer approximation are not artifacts of the approximations involved.)  ■
Definition 21 (4: Relativisation). Let F ⊆ Ω, E = {P ∈ P : P(F) = z} ∩ E_1 ∩ E_2 and E′ = {P ∈ P : P(F) = z} ∩ E_1 ∩ E_2′, where E_1 is determined by a set of constraints on the P(G) with G ⊆ F, and E_2, E_2′ are determined by sets of constraints on the P(G) with G ⊆ F̄. Then, P_E(G) = P_{E′}(G) for all G ⊆ F.
Proposition 15. Neither the partition nor the proposition weighting satisfies relativisation.
Proof: Let |Ω| = 8, F = {ω_1, ω_2, ω_3, ω_4, ω_5}, P(F) = 0.5, and put E_1 = {P ∈ P : P(ω_1) + 2P(ω_2) + 3P(ω_3) + 4P(ω_4) = 0.2}, E_2 = P, E_2′ = {P ∈ P : P(ω_6) + 2P(ω_7) + 3P(ω_8) = 0.7}. Then, P_{Π,E} and P_{Π,E′} differ substantially on three out of five ω ∈ F, as do P_{P_Ω,E} and P_{P_Ω,E′}, as can be seen from the following table:
Table A2. Partition entropy and proposition entropy maximisers and relativisation.
               ω_1      ω_2      ω_3       ω_4       ω_5      ω_6      ω_7      ω_8
P_{Π,E}       0.1251   0.0308   0.0041    0.0003    0.3398   0.1667   0.1667   0.1667
P_{Π,E′}      0.1242   0.0312   0.0041    0.0003    0.3402   0.3356   0.1288   0.0356
P_{P_Ω,E}     0.1523   0.0239   5.5×10⁻⁷  6.8×10⁻⁹  0.3239   0.1667   0.1667   0.1667
P_{P_Ω,E′}    0.1495   0.0252   7.0×10⁻⁷  7.6×10⁻⁹  0.3252   0.3252   0.1495   0.0252
 ■
Definition 22 (5: Obstinacy). If E1 is a subset of E such that P_E ∈ [E1], then P_E = P_{E1}.
Proposition 16. If g is inclusive, then it satisfies the obstinacy principle.
Proof: This follows directly from the definition of P E .  ■
Definition 23 (6: Independence). If E = {P ∈ P : P(A1 ∧ A3) = α, P(A2 ∧ A3) = β, P(A3) = γ}, then for γ > 0, it holds that P_E(A1 ∧ A2 ∧ A3) = αβ/γ.
Proposition 17. Neither the partition weighting nor the proposition weighting satisfies independence.
Proof: Let L = {A1, A2, A3}, α = 0.2, β = 0.35, γ = 0.6. Then:
P_Π(A1 ∧ A2 ∧ A3) = 0.1197 ≠ 0.1167 = (0.2 · 0.35)/0.6
and
P_{PΩ}(A1 ∧ A2 ∧ A3) = 0.1237 ≠ 0.1167 = (0.2 · 0.35)/0.6  ■
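The arithmetic of the counterexample is easy to verify; the values 0.1197 and 0.1237 are the computer-approximated maximiser values reported in the proof:

```python
# Check the arithmetic in the independence counterexample:
# alpha = P(A1 & A3), beta = P(A2 & A3), gamma = P(A3).
alpha, beta, gamma = 0.2, 0.35, 0.6

# The value that the independence principle would force on P(A1 & A2 & A3):
independent_value = alpha * beta / gamma
print(round(independent_value, 4))  # 0.1167

# The maximiser values reported in the proof differ from it:
assert abs(0.1197 - independent_value) > 1e-3  # partition entropy maximiser
assert abs(0.1237 - independent_value) > 1e-3  # proposition entropy maximiser
```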
Definition 24 (7: Open-mindedness). A weighting function g is open-minded, if and only if, for all E and all F ⊆ Ω, it holds that P_g(F) = 0 if and only if P(F) = 0 for all P ∈ E.
Proposition 18. Any inclusive g is open-minded.
Proof: First, observe that P(F) = 0 for all P ∈ E, if and only if P(F) = 0 for all P ∈ [E].
Now, note that if P(F) = 0 for all P ∈ [E], then P_g(F) = 0, since P_g ∈ [E]. On the other hand, if there exists an F ⊆ Ω such that P_g(F) = 0 < P(F) for some P ∈ [E], then S_g^log(P, P_g) = ∞ > H_g(P_g). Thus, adopting P_g exposes one to an infinite loss, while, by Theorem 2, adopting the g-entropy maximiser exposes one only to the finite loss, H_g(P_g). This is a contradiction. Thus, P_g(F) > 0.
Overall, P_g(F) = 0 if and only if P(F) = 0 for all P ∈ [E].  ■
Definition 25 (8: Continuity). Let us recall the definition of the Blaschke metric, Δ, between two convex sets, E, E1 ⊆ P:
Δ(E, E1) = inf{δ | ∀P ∈ E ∃P1 ∈ E1 : |P, P1| ≤ δ  &  ∀P1 ∈ E1 ∃P ∈ E : |P, P1| ≤ δ}
where |·, ·| is the usual Euclidean metric between elements of R^{|Ω|}. g satisfies continuity, if and only if the function arg sup_{P∈[E]} H_g(P) is continuous in the Blaschke metric.
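For sets represented by finitely many points, Δ reduces to the familiar Hausdorff distance between point sets; the following sketch (with assumed toy points on a two-state space) makes the two nested conditions concrete:

```python
from math import dist

# Blaschke/Hausdorff distance between two sets of probability functions,
# each represented here by finitely many points of R^{|Omega|}.
def blaschke(E, E1):
    d1 = max(min(dist(p, q) for q in E1) for p in E)   # every P in E is within d1 of E1
    d2 = max(min(dist(p, q) for q in E) for p in E1)   # every P1 in E1 is within d2 of E
    return max(d1, d2)

# Toy example (assumed points):
E  = [(0.5, 0.5), (0.6, 0.4)]
E1 = [(0.5, 0.5)]
assert abs(blaschke(E, E1) - 0.02 ** 0.5) < 1e-12  # the distant point decides
assert blaschke(E, E) == 0.0                       # a set is at distance 0 from itself
```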
Proposition 19. Any inclusive g satisfies the continuity property.
Proof: Since the g-entropy is strictly concave (see Proposition 2), we may apply Theorem 7.5 on p. 91 in [21]. Thus, if E is determined by finitely many linear constraints, then g satisfies continuity. Paris [21] credits I. Maung for the proof of the theorem.
Now, let E ⊆ P be an arbitrary convex set. Note that we can approximate E arbitrarily closely by two sequences, E_t and E′_t, whose members are each determined by finitely many linear constraints, such that E_t ⊆ E_{t+1} ⊆ E ⊆ E′_{t+1} ⊆ E′_t. By these subset relations, we have sup_{P∈E_t} H_g(P) ≤ sup_{P∈E} H_g(P) ≤ sup_{P∈E′_t} H_g(P). With P_t := arg sup_{P∈E_t} H_g(P) and P′_t := arg sup_{P∈E′_t} H_g(P), we have lim_{t→∞} P_t = lim_{t→∞} P′_t by Maung's theorem.
Since E_t converges to E′_t in the Blaschke metric, we have, by Maung's theorem, that lim_{t→∞} sup_{P∈E_t} H_g(P) = lim_{t→∞} sup_{P∈E′_t} H_g(P) = sup_{P∈E} H_g(P). Note that lim_{t→∞} P_t ∈ [E]. Moreover, since E is convex, H_g is strictly concave and E_t converges to E, we have lim_{t→∞} H_g(P_t) = sup_{P∈E} H_g(P). By the uniqueness of the g-entropy maximiser on E, we thus find lim_{t→∞} P_t = P_E and lim_{t→∞} P′_t = P_E, where P_E is the g-entropy maximiser on E, and hence lim_{t→∞} P_t = lim_{t→∞} P′_t.
Since the sets determined by finitely many linear constraints are dense in the set of convex subsets of P, we can use a standard approximation argument, yielding that arg sup_{P∈E} H_g(P) is continuous in the Blaschke metric on the set of convex E ⊆ P.  ■

B.4. The Topology of g-Entropy

We have so far investigated g-entropy for fixed g ∈ G. We now briefly consider the location and shape of the set of g-entropy maximisers.
For standard entropy maximisation and g-entropy maximisation with inclusive and symmetric g, the respective maximisers all obtain at P_=, if P_= ∈ [E]; cf. Corollary 6.
If P_= ∉ [E], then the maxima all obtain at the boundary of E "facing" P_=. To make this latter observation precise, we denote, for P, P′ ∈ P, the line segment in P that connects P with P′, end points included, by \overline{PP′}.
Proposition 20 (g-entropy is maximised at the boundary). For inclusive and symmetric g, \overline{P_= P_g} ∩ [E] = {P_g}.
Proof: If P_= ∈ [E], then P_g = P_=, by Corollary 6, and the claim is trivial.
If P_= ∉ [E], suppose that there exists a P′ ∈ \overline{P_= P_g} ∩ [E] different from P_g. Then, by the concavity of g-entropy on P (Proposition 2) and the equivocator-preserving property (Corollary 6), we have H_g(P_=) > H_g(P′) > H_g(P_g). By the convexity of [E] and Proposition 2, we have H_g(P_g) > H_g(P) for all P ∈ [E] \ {P_g}. Contradiction.  ■
We saw in Theorem 7 that, for a particular sequence g_t converging to g_Ω, P_{g_t} converges to P_Ω. We shall now show that this is an instance of a more general phenomenon. We will demonstrate that P_g varies continuously for continuous changes in g, for g ∈ G.
Proposition 21 (Continuity of g-entropy maximisation). For all E, the function:
arg sup_{P∈[E]} H_{(·)}(P) : G → [E],  g ↦ P_g
is continuous on G.
Proof: Consider a sequence, (g_t)_{t∈N} ⊆ G, converging to some g ∈ G. We need to show that P_{g_t} converges to P_g.
From g_t converging to g, it easily follows that H_{g_t}(P) converges to H_g(P) for all P ∈ P.
Since g-entropy is strictly concave, we have that, for every P ∈ [E] \ {P_g}, there exists some ϵ > 0 such that H_g(P) + ϵ = H_g(P_g). By the fact that H_{g_t}(P) converges to H_g(P) for all P, we find that H_{g_t}(P) + ϵ/2 < H_{g_t}(P_g) for all t which are greater than some T ∈ N.
Since H_{g_t}(P_g) ≤ H_{g_t}(P_{g_t}), it follows that P cannot be a point of accumulation of the sequence, (P_{g_t})_{t∈N}.
The sequence P_{g_t} takes values in the compact set [E], so it has at least one point of accumulation. We have demonstrated above that P_g is the only possible point of accumulation. Hence, P_g is the only point of accumulation and, therefore, the limit of this sequence.  ■
The continuity of g-entropy maximisation will be instrumental in proving the next proposition, which asserts that the g-entropy maximisers are clustered together.
Proposition 22. For any E, if G ⊆ G_inc is path-connected, then the set {P_g : g ∈ G} is path-connected.
Proof: By Proposition 21, the map arg sup P E H ( · ) ( P ) is continuous. The image of a path-connected set under a continuous map is path-connected.  ■
Corollary 8. For all E, the sets {P_g : g ∈ G_inc} and {P_g : g ∈ G_0} are path-connected.
Proof: G_inc and G_0 are convex; thus, they are path-connected. Now, apply Proposition 22.  ■
It is, in general, not the case that a convex combination of weighting functions generates a convex combination of the corresponding g-entropy maximisers:
Proposition 23. For a convex combination of weighting functions, g = λg1 + (1−λ)g2, it in general fails to hold that P_g = λP_{g1} + (1−λ)P_{g2}. Moreover, in general, P_g ∉ \overline{P_{g1} P_{g2}}.
Proof: Let g1 = g_Π, g2 = g_{PΩ} and λ = 0.3. Then, for a language L with two propositional variables and E = {P ∈ P : P(ω1) + 2P(ω2) + 3P(ω3) + 4P(ω4) = 1.7}, we can see from the following table that P_{0.3g_Π+0.7g_{PΩ}} ≠ 0.3P_Π + 0.7P_{PΩ}.
Table A3. Partition entropy and proposition entropy maximisers and their convex combinations.
                                                   ω1       ω2       ω3       ω4
P_Π                                                0.5331   0.2841   0.1324   0.0504
P_{PΩ}                                             0.5192   0.3008   0.1408   0.0392
0.3 P_Π + 0.7 P_{PΩ}                               0.5234   0.2958   0.1383   0.0426
P_{0.3g_Π+0.7g_{PΩ}}                               0.5272   0.2915   0.1353   0.0459
(P_{PΩ} − P_{0.3g_Π+0.7g_{PΩ}}) / (P_{PΩ} − P_Π)   0.5755   0.5569   0.6429   0.6036
If P_{0.3g_Π+0.7g_{PΩ}} were in \overline{P_Π P_{PΩ}}, then the last line of the above table would be constant for all ω ∈ Ω. As we can see, the values in the last line do vary.  ■
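The last row of Table A3 can be recomputed from the four-decimal entries above (small rounding differences from the printed row are to be expected):

```python
# Recompute the last row of Table A3 from the reported four-decimal values:
# the per-state ratio (P_POmega - P_mix) / (P_POmega - P_Pi).
p_pi  = [0.5331, 0.2841, 0.1324, 0.0504]   # P_Pi
p_pom = [0.5192, 0.3008, 0.1408, 0.0392]   # P_POmega
p_mix = [0.5272, 0.2915, 0.1353, 0.0459]   # P_{0.3 g_Pi + 0.7 g_POmega}

ratios = [(po - pm) / (po - pp) for po, pm, pp in zip(p_pom, p_mix, p_pi)]
print([round(r, 4) for r in ratios])

# If P_mix lay on the segment between P_Pi and P_POmega, these ratios would
# all be equal; they clearly are not.
assert max(ratios) - min(ratios) > 0.05
```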

C. Level of Generalisation

In this section we shall show that the generalisation of entropy and score used in the text above is essentially the right one. We shall do this by defining broader notions of entropy and score of which the g-entropy and g-score are special cases, and showing that entropy maximisation only coincides with minimisation of worst-case score in the special case of g-entropy and g-score as they are defined above.
We will focus on the case of belief over propositions; belief over sentences behaves similarly. Our broader notions will be defined relative to a weighting γ : P_Ω → R_{≥0} of propositions, rather than a weighting g : Π → R_{≥0} of partitions.
Definition 26 (γ-entropy). Given a function γ : P_Ω → R_{≥0}, the γ-entropy of a normalised belief function is defined as:
H_γ(B) := −∑_{F⊆Ω} γ(F) B(F) log B(F)
Definition 27 (γ-score). Given a loss function, L, and a function γ : P_Ω → R_{≥0}, the γ-expected loss function, or γ-scoring rule, or simply γ-score, is S_γ^L : P × B → [−∞, ∞] such that S_γ^L(P, B) = ∑_{F⊆Ω} γ(F) P(F) L(F, B).
Definition 28 (Equivalent to a weighting of partitions). A weighting of propositions γ : P_Ω → R_{≥0} is equivalent to a weighting of partitions, if there exists a function g : Π → R_{≥0}, such that, for all F ⊆ Ω:
γ(F) = ∑_{π∈Π : F∈π} g(π)
We see then that the notions of g-entropy and g-score coincide with those of γ-entropy and γ-score, just when the weightings of propositions γ are equivalent to weightings of partitions. Next, we extend the notion of inclusivity to our more general weighting functions:
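To make Definition 28 concrete: on a toy three-element Ω with a uniform partition weighting g (an assumed illustrative choice), γ(F) simply counts the set partitions in which F occurs as a block:

```python
# Build gamma from a partition weighting g on a three-element Omega and
# check Definition 28 directly: gamma(F) = sum of g(pi) over partitions pi
# that contain F as a block. Here g(pi) = 1 for every pi (a toy choice).
from itertools import combinations

def partitions(elems):
    """Yield all set partitions of the list elems, as lists of sets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):         # put `first` into an existing block
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield [{first}] + part             # or into a block of its own

omega = [1, 2, 3]
parts = list(partitions(omega))            # the 5 set partitions of {1, 2, 3}

def gamma(F):
    return sum(1.0 for pi in parts if set(F) in pi)

# gamma is inclusive on the non-empty propositions ...
subsets = [set(c) for r in range(1, 4) for c in combinations(omega, r)]
assert all(gamma(F) > 0 for F in subsets)
# ... and, since g is uniform, symmetric: gamma(F) depends only on |F|
assert {(len(F), gamma(F)) for F in subsets} == {(1, 2.0), (2, 1.0), (3, 1.0)}
```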
Definition 29 (Inclusive weighting of propositions). A weighting of propositions γ : P_Ω → R_{≥0} is inclusive if γ(F) > 0 for all F ⊆ Ω.
We shall also consider a slight generalisation of strict propriety (cf., discussion following Definition 6):
Definition 30 (Strictly X-proper γ-score). For P ⊆ X ⊆ B, a γ-score S_γ^L : P × B → [−∞, ∞] is strictly X-proper, if for all P ∈ P, the restricted function S_γ^L(P, ·) : X → [−∞, ∞] has a unique global minimum at B = P. A γ-score is strictly proper if it is strictly B-proper. A γ-score is merely X-proper if, for some P, this minimum at B = P is not the only minimum.
Note that if a γ-score is strictly X-proper, then it is strictly Y-proper for P ⊆ Y ⊆ X. Thus, if it is strictly proper, it is also strictly B-proper and strictly P-proper.
Proposition 24. The logarithmic γ-score S_γ^log(P, B) is non-negative and convex as a function of B ∈ B. For inclusive γ, convexity is strict, i.e., S_γ^log(P, λB1 + (1−λ)B2) < λS_γ^log(P, B1) + (1−λ)S_γ^log(P, B2) for λ ∈ (0, 1), unless B1 and B2 agree everywhere except where P(F) = 0.
Proof: The logarithmic γ-score is non-negative because B(F), P(F) ∈ [0, 1] for all F, so −log B(F) ≥ 0, γ(F)P(F) ≥ 0 and −γ(F)P(F) log B(F) ≥ 0.
That S_γ^log(P, B) is strictly convex as a function of B follows from the strict concavity of log x. Take distinct B1, B2 ∈ B and λ ∈ (0, 1), and let B = λB1 + (1−λ)B2. Now:
−γ(F)P(F) log B(F) = −γ(F)P(F) log(λB1(F) + (1−λ)B2(F)) ≤ −γ(F)P(F)(λ log B1(F) + (1−λ) log B2(F)) = −λγ(F)P(F) log B1(F) − (1−λ)γ(F)P(F) log B2(F)
with equality if and only if either P(F) = 0 or B1(F) = B2(F) (since, for inclusive γ, γ(F)P(F) > 0 whenever P(F) > 0).
Hence:
S_γ^log(P, B) = −∑_{F⊆Ω} γ(F)P(F) log B(F) ≤ λS_γ^log(P, B1) + (1−λ)S_γ^log(P, B2)
with equality if and only if B 1 and B 2 agree everywhere, except, possibly, where P ( F ) = 0 .  ■
Corollary 9. For inclusive γ and fixed P ∈ P, arg inf_{B∈B} S_γ^log(P, B) is unique. For B∗ := arg inf_{B∈B} S_γ^log(P, B) and for all F ⊆ Ω, we have B∗(F) > 0 if and only if P(F) > 0. Moreover, B∗(Ω) = 1 and B∗ ∈ B.
Proof: First of all, suppose that there is an F ⊆ Ω such that P(F) > 0 and B(F) = 0. Then, S_γ^log(P, B) = ∞. Furthermore, S_γ^log(P, P) < ∞ for all P ∈ P. Hence, for B ∈ arg inf_{B∈B} S_γ^log(P, B), it holds that P(F) > 0 implies B(F) > 0.
Now, note that for P ∈ P, we have P(Ω) = 1 − P(∅) = 1. Furthermore, there are only two partitions, {Ω} and {Ω, ∅}, which contain Ω or ∅. Minimising −γ(∅)P(∅) log B(∅) − γ(Ω)P(Ω) log B(Ω), i.e., −γ(Ω) log B(Ω), subject to the constraint B(∅) + B(Ω) ≤ 1, is uniquely solved by taking B(Ω) = 1, and hence, B(∅) = 0. Thus, for any B minimising S_γ^log(P, ·), it holds that B(∅) = 0 and B(Ω) = 1. Hence, any such minimiser B is in B.
Now, consider a P ∈ P such that there is at least one F ⊆ Ω with P(F) = 0. We will show that B(F) = 0 for all B ∈ arg inf_{B∈B} S_γ^log(P, B). In the second step, we will show that there is a unique infimum, B∗.
Therefore, suppose that there is a B ∈ arg inf_{B∈B} S_γ^log(P, B) such that B(F) > 0 = P(F), and let H ⊆ Ω be, for this B and with respect to subset inclusion, one such largest subset of Ω.
Now, define B′ : P_Ω → [0, 1] by B′(G) := 0 for all G ⊆ H and B′(F) := B(F) otherwise. From B′(Ω) = 1 and B′(∅) = 0, we see that B′ ∈ B; thus, S_γ^log(P, B′) is well defined. Since P ∈ P, we have, for all G ⊆ H, that P(H) = P(G) = 0. Thus, S_γ^log(P, B′) = S_γ^log(P, B).
Note that, since B ∈ B, we have 1 ≥ B(H̄) + B(H) > B(H̄) = B′(H̄). Now, define a function B″ ∈ B by:
B″(H̄) := 1,  B″(F) := B′(F) for all F ≠ H̄
Since, for all F ⊆ Ω, B″(F) ≥ B′(F), B′(H̄) < B″(H̄) = 1 and P(H̄) · γ(H̄) = 1 · γ(H̄) > 0, we have:
S_γ^log(P, B) = S_γ^log(P, B′) > S_γ^log(P, B″)
We assumed that B minimises S_γ^log(P, ·) over B. Hence, we have a contradiction. We have thus proven that, for every B ∈ arg inf_{B∈B} S_γ^log(P, B), B(F) = 0 if and only if P(F) = 0. Hence, for all P ∈ P:
arg inf_{B∈B} S_γ^log(P, B) = arg inf_{{B∈B : P(F)=0 ⇒ B(F)=0}} S_γ^log(P, B)   (111)
By Proposition 24, we can assume that the right-hand side of Equation (111) is a strictly convex optimisation problem on a convex set, which, hence, has a unique infimum.  ■
Corollary 10. S γ log is strictly B -proper if and only if S γ log is strictly B -proper.
Proof: Assume that S γ log is strictly B -proper. Then for all P P , we have P = arg inf B B S γ log ( P , B ) . Since P B B , we hence have P = arg inf B B S γ log ( P , B ) .
For the converse, suppose that S γ log is strictly B -proper, i.e., for all P P we have P = arg inf B B S γ log ( P , B ) . Note that strict propriety implies that γ is inclusive. Corollary 9 implies then that no B B B can minimise S γ log ( P , B ) .  ■
Definition 31 (Symmetric weighting of propositions). A weighting of propositions, γ, is symmetric, if and only if, whenever F′ can be obtained from F by permuting the ω_i in F, then γ(F) = γ(F′).
Note that γ is symmetric, if and only if |F| = |F′| entails γ(F) = γ(F′). For symmetric γ, we will sometimes write γ(n) for γ(F), if |F| = n.
Proposition 25. For inclusive and symmetric γ, S γ log is strictly P -proper.
Proof: We have that, for all ω ∈ Ω, |{F ⊆ Ω : |F| = n, ω ∈ F}| = |{G ⊆ Ω \ {ω} : |G| = n − 1}| = \binom{|Ω|−1}{n−1}.
We recall from Example 1 that, with ν_n := \binom{|Ω|−1}{n−1}, we have:
∑_{F⊆Ω, |F|=n} P(F) = ν_n · ∑_{ω∈Ω} P(ω) = ν_n
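This identity is a counting fact: each state ω lies in exactly \binom{|Ω|−1}{n−1} of the n-element propositions. A small numerical sketch (with an assumed random toy distribution) confirms it:

```python
from itertools import combinations
from math import comb
import random

# Verify, for a random probability function P on a 5-state Omega, that the
# P-values of the n-element propositions sum to nu_n = C(|Omega| - 1, n - 1):
# each state lies in exactly that many n-element subsets.
random.seed(0)
n_states = 5
w = [random.random() for _ in range(n_states)]
p = [x / sum(w) for x in w]                      # a random P on the states

for n in range(1, n_states + 1):
    total = sum(sum(p[i] for i in F) for F in combinations(range(n_states), n))
    assert abs(total - comb(n_states - 1, n - 1)) < 1e-9
```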
Multiplying the objective function in an optimisation problem by some positive constant does not change where optima obtain. Thus:
arg inf_{Q∈P} −∑_{F⊆Ω, |F|=n} γ(n) P(F) log Q(F) = arg inf_{Q∈P} −∑_{F⊆Ω, |F|=n} (P(F)/ν_n) log Q(F) = arg inf_{Q∈P} −∑_{F⊆Ω, |F|=n} (P(F)/ν_n) log((Q(F)/ν_n) · ν_n) = arg inf_{Q∈P} −∑_{F⊆Ω, |F|=n} (P(F)/ν_n)(log(Q(F)/ν_n) + log ν_n) = arg inf_{Q∈P} −∑_{F⊆Ω, |F|=n} (P(F)/ν_n) log(Q(F)/ν_n)
Now, note that, since Q, P ∈ P, we have that ∑_{F⊆Ω, |F|=n} P(F) = ν_n = ∑_{F⊆Ω, |F|=n} Q(F), and hence, ∑_{F⊆Ω, |F|=n} P(F)/ν_n = 1 = ∑_{F⊆Ω, |F|=n} Q(F)/ν_n. Put Ψ := {F ⊆ Ω : |F| = n}, and let us understand P(·)/ν_n and Q(·)/ν_n as functions Ψ → [0, 1] with ∑_{G∈Ψ} P(G)/ν_n = 1 = ∑_{G∈Ψ} Q(G)/ν_n. It follows that P(·)/ν_n and Q(·)/ν_n are formally probability functions on Ψ, satisfying certain further conditions which are not relevant in the following. Let P_Ψ denote the set of probability functions on Ψ, and let P_Ω ⊆ P_Ψ be the set of probability functions of the above form, P(·)/ν_n and Q(·)/ν_n, where P, Q ∈ P.
Consider a scoring rule, S(P, B), in the standard sense, i.e., one in which expectations over losses are taken with respect to the members x of some set X. (At the beginning of Section 2.4, we considered states ω ∈ Ω.) Let P_X denote the set of probability functions on the set X. Suppose that S is strictly P_X-proper. Then, for any fixed set Y ⊆ P_X, it holds that arg inf_{B∈Y} S(P, B) = P for all P ∈ Y. It is well known that the standard logarithmic scoring rule on a given universal set is strictly P_X-proper. Taking X = Ψ, so that P_X = P_Ψ, and Y = P_Ω, we obtain, for all P(·)/ν_n ∈ P_Ω, that:
P(·)/ν_n = arg inf_{Q(·)/ν_n ∈ P_Ω} −∑_{G∈Ψ} (P(G)/ν_n) log(Q(G)/ν_n) = arg inf_{Q∈P} −∑_{G∈Ψ} (P(G)/ν_n) log(Q(G)/ν_n)
We thus find:
P = arg inf_{Q∈P} −∑_{F⊆Ω, |F|=n} γ(n) P(F) log Q(F)   (115)
Since P minimises Equation (115) for every n, it also minimises the sum over all n, and hence:
P = arg inf_{Q∈P} −∑_{1≤n≤|Ω|} ∑_{F⊆Ω, |F|=n} γ(F) P(F) log Q(F) = arg inf_{Q∈P} S_γ^log(P, Q)
          ■
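The external fact invoked above, the strict propriety of the standard logarithmic scoring rule (Gibbs' inequality: −∑_x P(x) log Q(x) is minimised over Q precisely at Q = P), can be spot-checked on a toy distribution; the candidate list below is an assumed illustrative choice:

```python
from math import log

# Gibbs' inequality: the cross-entropy -sum_x P(x) log Q(x) is minimised
# over probability functions Q exactly at Q = P.
P = [0.2, 0.3, 0.5]

def score(Q):
    return -sum(p * log(q) for p, q in zip(P, Q))

candidates = [P, [0.3, 0.3, 0.4], [0.25, 0.25, 0.5], [1/3, 1/3, 1/3]]
assert min(candidates, key=score) == P
assert all(score(Q) > score(P) for Q in candidates[1:])
```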
Lemma 10. If γ is an inclusive weighting of propositions that is equivalent to a weighting of partitions, then S γ log is strictly B -proper.
Proof: While this result follows directly from Corollary 3, we shall give another proof that will provide the groundwork for the proof of the next result, Theorem 10.
First, we shall fix a P ∈ P and observe that the first part of Corollary 9, up to and including Equation (111), still holds with B substituted for B. We shall thus concentrate on propositions F ⊆ Ω with P(F) > 0, since it follows from Corollary 9 that, whenever P(F) = 0, we must have B(F) = 0 and B(Ω) = 1, if S_γ^log(P, B) is to be minimised. We thus let P_Ω^+ := {F ⊆ Ω : P(F) > 0} and:
B^+ := {B ∈ B : 0 < B(F) ≤ 1 for all F ∈ P_Ω^+, B(Ω) = 1 and B(F) = 0 for all other F ∈ P_Ω \ P_Ω^+}
In the following optimisation problem, we will thus only consider B(F) to be a variable if F ∈ P_Ω^+.
We now investigate:
arg inf_{B∈B^+} S_γ^log(P, B)   (117)
To this end, we shall first find, for all fixed 0 < t ≤ 1/2:
arg inf_{{B∈B^+ : B(F) ≥ P(F)t for all F∈P_Ω^+}} S_γ^log(P, B)
Making this restriction on the B(F) allows us to evade any problems that arise from taking the derivative of log B(F) at B(F) = 0, which inevitably arise when we directly apply Karush-Kuhn-Tucker techniques to Equation (117).
With Π′ := {π ∈ Π : π ≠ {Ω}, π ≠ {Ω, ∅}}, we thus need to solve the following optimisation problem:
minimise S_γ^log(P, B) subject to:
  B(F) ≥ P(F)t > 0 for fixed 0 < t ≤ 1/2 and all F ∈ P_Ω^+
  ∑_{G∈π, G∈P_Ω^+} B(G) ≤ 1 for all π ∈ Π′
  B(Ω) = 1 and B(F) = 0 for all other F ∈ P_Ω \ P_Ω^+   (118)
Note that the first and second constraints imply that 0 < B(F) ≤ 1 for all F ∈ P_Ω^+.
Observe that, for π ∈ Π′ with G ∈ π, |G| ≥ 2 and P(G) = 0, there is another partition in Π′ that subdivides G and agrees with π everywhere else. These two partitions, π and π′, will give rise to the exact same constraint on the F ∈ P_Ω^+. Including the same constraint multiple times does not affect the applicability of the Karush-Kuhn-Tucker techniques. Thus, the solutions of this optimisation problem are the solutions of Equation (118).
With Karush-Kuhn-Tucker techniques in mind, we shall define the following function for B ∈ B^+ (the first sum is S_γ^log(P, B); the remaining sums encode the constraints):
Lag(B) = −∑_{F∈P_Ω^+} γ(F) P(F) log B(F) + ∑_{π∈Π′} λ_π · (−1 + ∑_{G∈π, G∈P_Ω^+} B(G)) + ∑_{F∈P_Ω^+} μ_F (P(F)t − B(F))
First, recall that B(F) = 0 iff P(F) = 0; thus, the first sum is always finite here. Since B(F) > 0 for all F ∈ P_Ω^+, we can take derivatives with respect to the variables B(F). Recalling that γ(F) > 0 for all F ⊆ Ω, we now find:
∂Lag(B)/∂B(F) = −γ(F)P(F)/B(F) + ∑_{π∈Π′ : F∈π} λ_π − μ_F for all F ∈ P_Ω^+
Equating these derivatives with zero, we obtain:
γ(F)P(F)/B(F) = ∑_{π∈Π′ : F∈π} λ_π − μ_F for all F ∈ P_Ω^+   (121)
γ is, by our assumption, equivalent to a weighting of partitions: γ(F) = ∑_{π∈Π : F∈π} g(π). Letting λ_π := g(π), μ_F := 0 and B(F) = P(F) for F ∈ P_Ω^+ solves the set of equations in Equation (121). For B(F) = P(F) with F ∈ P_Ω^+, we trivially have ∑_{G∈π, G∈P_Ω^+} B(G) = 1, and hence, λ_π(∑_{G∈π, G∈P_Ω^+} B(G) − 1) = 0. Furthermore, μ_F(P(F)t − B(F)) = 0 for F ∈ P_Ω^+.
Thus, by the Karush-Kuhn-Tucker Theorem, B(F) = P(F) for F ∈ P_Ω^+ is a critical point of the optimisation problem in Equation (118) for all t and all P ∈ P, since all constraints are linear.
Note that the constraints B(Ω) = 1, B(∅) = 0 and 0 ≤ ∑_{F∈π} B(F) ≤ 1 for π ∈ Π′ ensure that B is a member of B, regardless of the actual values of the remaining B(F). Thus, B ∈ B^+, if and only if B(Ω) = 1, B(∅) = 0, 0 ≤ ∑_{F∈π} B(F) ≤ 1 for π ∈ Π′ and B(F) = 0 iff P(F) = 0. Thus, B^+ is convex. It follows that B_t^+ := {B ∈ B^+ : B(F) ≥ P(F)t for all F ∈ P_Ω^+} is convex for all 0 < t ≤ 1/2. Since B_t^+ is the feasible region of Equation (118), the critical point of the convex minimisation problem is the unique minimum.
Letting t > 0 tend to zero, we see that B(F) = P(F) for F ∈ P_Ω^+ is the unique solution of Equation (117).
Thus, any function B ∈ B minimising S_γ^log(P, ·) has to agree with P on the F ∈ P_Ω^+. By our introductory remarks, it has to hold that B(Ω) = 1 and B(G) = 0 for all other G ⊆ Ω. Thus, B(F) = P(F) for all F ⊆ Ω.
We have thus shown that S γ log is strictly proper.  ■
Theorem 10. For inclusive γ with γ(Ω) ≥ γ(∅), S_γ^log is strictly proper if and only if γ is equivalent to a weighting of partitions.
Proof: From Lemma 10, we have that the existence of the λ_π ensures propriety.
For the converse, suppose that S_γ^log is strictly B-proper (equivalently, by Corollary 10, strictly proper). By our assumptions, we have γ(Ω) ≥ γ(∅) > 0. We can thus put g({Ω, ∅}) := γ(∅) and g({Ω}) := γ(Ω) − γ(∅). Then, γ(Ω) = g({Ω, ∅}) + g({Ω}) > 0 and γ(∅) = g({Ω, ∅}) > 0.
Observe that, for all P ∈ P, for any infimum of the minimisation problem arg inf_{B∈B} S_γ^log(P, B), there have to exist multipliers, λ_π ≥ 0 and μ_F ≥ 0, that solve Equation (121) and satisfy μ_F(P(F)t − B(F)) = 0. Now, fix a P ∈ P such that P(F) > 0 for all non-empty F ⊆ Ω. If S_γ^log is strictly B-proper, then the minimisation problem arg inf_{B∈B} S_γ^log(P, B) for this P has to be solved uniquely by B = P. Thus, strict B-propriety implies that:
0 < γ(F) = ∑_{π∈Π′ : F∈π} λ_π − μ_F for all F ⊆ Ω and μ_F(t − 1)P(F) = 0 for all F ∈ P_Ω^+
The latter conditions can only be satisfied if all μ_F vanish. Hence, we obtain the following conditions, which necessarily have to hold if S_γ^log(P, ·) is to be uniquely minimised by B = P:
0 < γ(F) = ∑_{π∈Π′ : F∈π} λ_π for all F ⊆ Ω
Since all the constraints are inequalities, the corresponding multipliers, λ_π, have to be greater than or equal to zero.
Thus, strict propriety of S_γ^log implies the existence of these λ_π ≥ 0. This, in turn, implies that γ is equivalent to a weighting of partitions.
Note that, for the purposes of this proof, we do not need to investigate what happens if P ∈ P is such that there exists a proposition, F ⊆ Ω, with P(F) = 0.  ■
Note that γ(Ω) ≥ γ(∅) is not a real restriction. The first component in S_γ^log(·, ·) is a probability function in the above proof; thus, P(∅) = 0. Hence, γ(∅)P(∅) log B(∅) = 0, regardless of γ(∅). The particular value of γ(∅) is thus irrelevant for strict propriety. Therefore, setting γ(∅) = γ(Ω) fulfills the conditions of the Theorem, but does not change the value of the γ-score. (The condition is required, because if γ(∅) > γ(Ω), then, while S_γ^log may be strictly proper, γ cannot be equivalent to a weighting of partitions.)
The importance of the condition in Theorem 10 that γ should be equivalent to a weighting of partitions is highlighted in the following:
Example 4. Let Ω = {ω1, ω2, ω3}, γ(1) = γ(3) = 1 and γ(2) = 10. Now, consider B ∈ B, defined as B(∅) := 0, B(F) := 0.2 if |F| = 1, B(F) := 0.8 if |F| = 2, and B(Ω) := 1. Then:
S_γ^log(P_=, P_=) = −∑_{ω∈Ω} P_=(ω) log P_=(ω) − 10 · ∑_{F⊆Ω, |F|=2} P_=(F) log P_=(F) − P_=(Ω) · log P_=(Ω) = −3 · (1/3) log(1/3) − 3 · 10 · (2/3) log(2/3) ≈ 9.2079
S_γ^log(P_=, B) = −∑_{ω∈Ω} P_=(ω) log B(ω) − 10 · ∑_{F⊆Ω, |F|=2} P_=(F) log B(F) − P_=(Ω) · log B(Ω) = −3 · (1/3) log 0.2 − 3 · 10 · (2/3) log 0.8 ≈ 6.0723
Thus, S_γ^log(P_=, B) < S_γ^log(P_=, P_=). Hence, S_γ^log is not strictly B-proper, even though γ is inclusive and symmetric. Compare this with Proposition 25, where we proved that positivity and symmetry of γ were enough to ensure that S_γ^log is strictly P-proper.
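The two numerical values in Example 4 can be reproduced directly (natural logarithms throughout):

```python
from math import log

# Recompute the two gamma-scores of Example 4: Omega has three states,
# gamma(1) = gamma(3) = 1, gamma(2) = 10, and P= is the equivocator,
# so P=(F) = |F|/3. The log B(Omega) = log 1 terms vanish.
s_self = -3 * (1/3) * log(1/3) - 3 * 10 * (2/3) * log(2/3)   # S(P=, P=)
s_b    = -3 * (1/3) * log(0.2) - 3 * 10 * (2/3) * log(0.8)   # S(P=, B)

print(round(s_self, 4), round(s_b, 4))  # 9.2079 6.0723
assert s_b < s_self   # B beats P= in gamma-expectation: not strictly proper
```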
Note that strict propriety is exactly what is needed in order to derive Theorem 2, as is apparent from its proof (see, also, the discussion at the start of Section 2.5). By Theorem 10, only a weighting of propositions that is equivalent to a weighting of partitions can be strictly proper (up to an inconsequential value for γ ( ) ); hence, the generalisation of standard entropy and score in the main text, which focusses on weightings of partitions, is essentially the right one for our purposes.
Indeed, adopting a non-strictly proper scoring rule S γ log may result in Theorem 2 not holding:
Proposition 26. If S_γ^log is not strictly X-proper (with P ⊆ X), then worst-case γ-expected loss minimisation and γ-entropy maximisation are, in general, achieved by different functions.
Proof: If S_γ^log is not proper, then there is a P ∈ P such that S_γ^log(P, ·) is not minimised over X by P. In particular, there is some Q ∈ X such that S_γ^log(P, Q) < S_γ^log(P, P). Suppose that E = {P}. Trivially:
arg sup_{P′∈E} S_γ^log(P′, P′) = P
By construction:
arg inf_{Q′∈X} sup_{P′∈E} S_γ^log(P′, Q′) = arg inf_{Q′∈X} sup_{P′∈{P}} S_γ^log(P′, Q′) = arg inf_{Q′∈X} S_γ^log(P, Q′) ≠ P
Thus, the γ-entropy maximiser in E (here, P ) is not a function in X that minimises worst-case γ-expected loss.
Finally, consider the case in which S_γ^log is merely proper, i.e., there exists a P ∈ P such that S_γ^log(P, ·) is minimised by both P and the members of a non-empty subset, Q ⊆ B \ {P}. Then, with E = {P}:
arg inf_{Q′∈X} sup_{P′∈E} S_γ^log(P′, Q′) = arg inf_{Q′∈X} sup_{P′∈{P}} S_γ^log(P′, Q′) = arg inf_{Q′∈X} S_γ^log(P, Q′) = Q ∪ {P}
Thus, there is some function other than the γ-entropy maximiser that also minimises the γ-score.  ■

References

  1. Williamson, J. In Defence of Objective Bayesianism; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  2. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  3. Grünwald, P.; Dawid, A.P. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Ann. Stat. 2004, 32, 1367–1433. [Google Scholar]
  4. Topsøe, F. Information theoretical optimization techniques. Kybernetika 1979, 15, 1–27. [Google Scholar]
  5. Ramsey, F.P. Truth and Probability. In Studies in Subjective Probability; Kyburg, H.E., Smokler, H.E., Robert, E., Eds.; Krieger Publishing Company: Huntington, New York, NY, USA, 1926; pp. 23–52. [Google Scholar]
  6. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656, Reprinted with corrections. Available online: http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf (accessed on 1 June 2013). [Google Scholar] [CrossRef]
  7. Aczél, J.; Daróczy, Z. On Measures of Information and Their Characterizations; Academic Press: New York, NY, USA, 1975. [Google Scholar]
  8. Dawid, A.P. Probability Forecasting. In Encyclopedia of Statistical Sciences; Kotz, S., Johnson, N.L., Eds.; Wiley: New York, USA, 1986; Volume 7, pp. 210–218. [Google Scholar]
  9. Joyce, J.M. Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief. In Degrees of Belief; Huber, F., Schmidt-Petri, C., Eds.; Synthese Library 342; Springer: Dordrecht, The Netherlands, 2009. [Google Scholar]
  10. Pettigrew, R. Epistemic Utility Arguments for Probabilism. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; The Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University: Stanford, CA, USA, 2011. [Google Scholar]
  11. Predd, J.; Seiringer, R.; Lieb, E.; Osherson, D.; Poor, H.; Kulkarni, S. Probabilistic coherence and proper scoring rules. IEEE Trans. Inf. Theory 2009, 55, 4786–4792. [Google Scholar] [CrossRef]
  12. McCarthy, J. Measures of the value of information. Proc. Natl. Acad. Sci. USA 1956, 42, 654–655. [Google Scholar] [CrossRef] [PubMed]
  13. Shuford, E.H.; Albert, A.; Massengill, H.E. Admissible probability measurement procedures. Psychometrika 1966, 31, 125–145. [Google Scholar] [CrossRef] [PubMed]
  14. Aczel, J.; Pfanzagl, J. Remarks on the measurement of subjective probability and information. Metrika 1967, 11, 91–105. [Google Scholar] [CrossRef]
  15. Savage, L.J. Elicitation of personal probabilities and expectations. J. Am. Stat. Assoc. 1971, 66, 783–801. [Google Scholar] [CrossRef]
  16. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991. [Google Scholar]
  17. Ricceri, B. Recent Advances in Minimax Theory and Applications. In Pareto Optimality, Game Theory And Equilibria; Chinchuluun, A., Pardalos, P., Migdalas, A., Pitsoulis, L., Eds.; Optimization and Its Applications; Springer: New York, USA, 2008; Volume 17, pp. 23–52. [Google Scholar]
  18. König, H. A general minimax theorem based on connectedness. Arch. Math. 1992, 59, 55–64. [Google Scholar] [CrossRef]
  19. Keynes, J.M. A Treatise on Probability; Macmillan: London, UK, 1948. [Google Scholar]
  20. Williamson, J. From Bayesian epistemology to inductive logic. J. Appl. Logic 2013, in press. [Google Scholar] [CrossRef]
  21. Paris, J.B. The Uncertain Reasoner’s Companion; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  22. Seidenfeld, T. Entropy and uncertainty. Philos. Sci. 1986, 53, 467–491. [Google Scholar] [CrossRef]
  23. Williamson, J. An Objective Bayesian Account of Confirmation. In Explanation, Prediction, and Confirmation: New Trends and Old Ones Reconsidered; Dieks, D., Gonzalez, W.J., Hartmann, S., Uebel, T., Weber, M., Eds.; Springer: Dordrecht, The Netherlands, 2011; pp. 53–81. [Google Scholar]
  24. Kyburg, H.E., Jr. Are there degrees of belief? J. Appl. Logic 2003, 1, 139–149. [Google Scholar] [CrossRef]
  25. Smets, P.; Kennes, R. The transferable belief model. Artif. Intell. 1994, 66, 191–234. [Google Scholar] [CrossRef]
  26. Keynes, J.M. The general theory of employment. Q. J. Econ. 1937, 51, 209–223. [Google Scholar] [CrossRef]
  27. Csiszár, I. Axiomatic characterizations of information measures. Entropy 2008, 10, 261–273. [Google Scholar] [CrossRef]
  28. Paris, J.B. Common sense and maximum entropy. Synthese 1998, 117, 75–93. [Google Scholar] [CrossRef]
  29. Paris, J.B.; Vencovská, A. A note on the inevitability of maximum entropy. Int. J. Approx. Reason. 1990, 4, 183–223. [Google Scholar] [CrossRef]
  30. Paris, J.B.; Vencovská, A. In defense of the maximum entropy inference process. Int. J. Approx. Reason. 1997, 17, 77–103. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Landes, J.; Williamson, J. Objective Bayesianism and the Maximum Entropy Principle. Entropy 2013, 15, 3528-3591. https://doi.org/10.3390/e15093528
