Open Access Article (Editor’s Choice)

Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty

Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
* Author to whom correspondence should be addressed.
Entropy 2019, 21(4), 375; https://doi.org/10.3390/e21040375
Received: 19 February 2019 / Revised: 2 April 2019 / Accepted: 4 April 2019 / Published: 6 April 2019

Abstract

In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.
Keywords: uncertainty; entropy; divergence; majorization; decision-making; bounded rationality; limited resources; Bayesian inference

1. Introduction

In rational decision theory, uncertainty may have multiple sources that ultimately share the commonality that they reflect a lack of knowledge on the part of the decision-maker about the environment. A paramount example is the perfectly rational decision-maker [1] that has a probabilistic model of the environment and chooses its actions to maximize the expected utility entailed by the different choices. When we consider bounded rational decision-makers [2], we may add another source of uncertainty arising from the decision-maker’s limited processing capabilities, since the decision-maker will not only accept a single best choice, but will accept any satisficing option. Today, bounded rationality is an active research topic that crosses multiple scientific fields such as economics, political science, decision theory, game theory, computer science, and neuroscience [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21], where uncertainty is one of the most important common denominators.
Uncertainty is often equated with Shannon entropy in information theory [22], measuring the average number of yes/no questions that have to be answered to resolve the uncertainty. Even though Shannon entropy has many desirable properties, there are plenty of alternative suggestions for entropy measures in the literature, known as generalized entropies, such as Rényi entropy [23] or Tsallis entropy [24]. Closely related to entropies are divergence measures, which express how probability distributions differ from a given reference distribution. If the reference distribution is uniform, then divergence measures can be expressed in terms of entropy measures, which is why divergences can be viewed as generalizations of entropy, for example the Kullback-Leibler divergence [25] generalizing Shannon entropy.
Here, we introduce the concept of elementary computation based on a slightly stronger notion of uncertainty than is expressed by Shannon entropy, or any other generalized entropy alone, but is equivalent to all of them combined. Equating decision-making with uncertainty reduction, this leads to a new comprehensive view of decision-making with limited resources. Our main contributions can be summarized as follows:
(i) 
Based on a fundamental concept of probability transfers related to the Pigou–Dalton principle of welfare economics [26], we promote a generalized notion of uncertainty reduction of a probability distribution that we call elementary computation. This leads to a natural definition of cost functions that quantify the resource costs for uncertainty reduction necessary for decision-making. We generalize these concepts to arbitrary reference distributions. In particular, we define Pigou–Dalton-type transfers for probability distributions relative to a reference or prior distribution, which induce a preorder that is slightly stronger than Kullback-Leibler divergence, but is equivalent to the notion of divergence given by all f-divergences combined. We prove several new characterizations of the underlying concept, known as relative majorization.
(ii) 
An interesting property of cost functions is their behavior under coarse-graining, which plays an important role in decision-making and formalizes the general notion of making abstractions. More precisely, if a decision in a set $\Omega$ is split up into two steps by partitioning $\Omega = \bigcup_i A_i$ and first deciding in the set of (coarse-grained) partitions $\{A_i\}_i$ and secondly choosing a fine-grained option inside the selected partition $A_i$, then it is an important question how the cost for the total decision-making process differs from the sum of the costs in each step. We show that f-divergences are superadditive with respect to coarse-graining, which means that decision-making costs can potentially be reduced by splitting up the decision into multiple steps. In this regard, we find evidence that the well-known property of Kullback-Leibler divergence of being additive under coarse-graining might be viewed as describing the minimal amount of processing cost that cannot be reduced by a more intelligent decision-making strategy.
(iii) 
We define bounded rational decision-makers as decision-making processes that are optimizing a given utility function under a constraint on the cost function, or minimizing the cost function under a minimal requirement on expected utility. As a special case for Shannon-type information costs, we arrive at information-theoretic bounded rationality, which may form a normative baseline for bounded-optimal decision-making in the absence of process-dependent constraints. We show that bounded-optimal posteriors with informational costs trace a path through probability space that can itself be seen as an anytime decision-making process, where each step optimally trades off utility and processing costs.
(iv) 
We show that Bayesian inference can be seen as a decision-making process with limited resources given by the number of available datapoints.
Section 2 deals with Items (i) and (ii), aiming at a general characterization of decision-making in terms of uncertainty reduction. Item (iii) is covered in Section 3, deriving information-theoretic bounded rationality as a special case. Section 4 illustrates the concepts with an example including Item (iv). Section 5 and Section 6 contain a general discussion and concluding remarks, respectively.

Notation

Let $\mathbb{R}$ denote the real numbers, $\mathbb{R}_+ := [0, \infty)$ the set of non-negative real numbers, and $\mathbb{Q}$ the rational numbers. We write $|A|$ for the number of elements contained in a countable set $A$, and $B \setminus A$ for the set difference, that is, the set of elements in $B$ that are not in $A$. $\mathbb{P}_\Omega$ denotes the set of probability distributions on a set $\Omega$; in particular, any $p \in \mathbb{P}_\Omega$ is normalized so that $p(\Omega) = \mathbb{E}_p[1] = 1$. Random variables are denoted by capital letters $X, Y, Z$, while their explicit values are denoted by small letters $x, y, z$. For the probability distribution of a random variable $X$ we write $p(X)$, and $p(x)$ for the values of $p(X)$. Correspondingly, the expectation $\mathbb{E}[f(X)]$ is also written as $\mathbb{E}_{p(X)}[f(X)]$, $\mathbb{E}_{p(X)}[f]$, or $\mathbb{E}_p[f]$. We also write $\langle f \rangle_p := \frac{1}{n} \sum_{i=1}^{n} f(x_i)$ to denote the approximation of $\mathbb{E}_p[f]$ by an average over samples $\{x_1, \ldots, x_n\}$ from $p \in \mathbb{P}_\Omega$.

2. Decision-Making with Limited Resources

In this section, we develop the notion of a decision-making process with limited resources following the simple assumption that any decision-making process
(i) 
reduces uncertainty
(ii) 
by spending resources.
Starting from an intuitive interpretation of uncertainty and resource costs, these concepts are refined incrementally until a precise definition of a decision-making process is given at the end of this section (Definition 7) in terms of elementary computations. Here, decision-making process is a comprehensive term that describes all kinds of biological as well as artificial systems that search for solutions to given problems, for example a human decision-maker that burns calories while thinking, or a computer that uses electric energy to run an algorithm. However, resources do not necessarily refer to a real consumable quantity; they can also be measured by more explicit quantities (e.g., time) serving as a proxy, for example the number of binary comparisons in a search algorithm, the number of forward simulations in a reinforcement learning algorithm, or the number of samples in a Monte Carlo algorithm. Even more abstractly, resources can express the limited availability of some source of information, for example the number of data points that are available to an inference algorithm (see Section 4).

2.1. Uncertainty Reduction by Eliminating Options

In its most basic form, the concept of decision-making can be formalized as the process of looking for a decision $x^\ast \in \Omega$ in a discrete set of options $\Omega = \{x_1, \ldots, x_N\}$. We say that a decision $x \in \Omega$ is certain if repeated queries of the decision-maker result in the same decision, and uncertain if repeated queries can result in different decisions. Uncertainty reduction then corresponds to reducing the number of uncertain options. Hence, a decision-making process that transitions from a space $\Omega$ of options to a strictly smaller subset $A \subset \Omega$ reduces the number of uncertain options from $N = |\Omega|$ to $N_A := |A| < N$, with the possible goal of eventually finding a single certain decision $x^\ast$. Such a process is generally costly: the more uncertainty is reduced, the more resources it costs (Figure 1). The explicit mapping between uncertainty reduction and resource cost depends on the details of the underlying process and on which explicit quantity is taken as the resource. For example, if the resource is given by time (or any monotone function of time), then a search algorithm that eliminates options sequentially until the target value is found (linear search) is less cost efficient than an algorithm that takes a sorted list and in each step removes half of the options by comparing the midpoint to the target (logarithmic search). Abstractly, any real-valued function $C$ on the power set of $\Omega$ that satisfies $C(A') < C(A)$ whenever $A \subsetneq A'$ might be used as a cost function, in the sense that $C(A)$ quantifies the expenses of reducing the uncertainty from $\Omega$ to $A \subset \Omega$.
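The cost asymmetry between the two search strategies mentioned above can be made concrete by counting comparisons as the resource. The following sketch is our own illustration (function names are ours, not from the paper); it contrasts the worst-case elimination count of a linear scan with the halving steps of a binary search:

```python
def linear_search_steps(options, target):
    """Eliminate options one by one until the target is found."""
    steps = 0
    for x in options:
        steps += 1
        if x == target:
            return steps
    raise ValueError("target not in options")

def binary_search_steps(sorted_options, target):
    """Halve the set of remaining options with each comparison."""
    lo, hi, steps = 0, len(sorted_options) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_options[mid] == target:
            return steps
        elif sorted_options[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    raise ValueError("target not in options")

options = list(range(1024))
print(linear_search_steps(options, 1000))  # on the order of N comparisons
print(binary_search_steps(options, 1000))  # on the order of log2(N) comparisons
```

With comparisons as the resource, both algorithms reduce the same amount of uncertainty (down to a single certain option), but at very different cost.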
In utility theory, decision-making is modeled as an optimization process that maximizes a so-called utility function $U : \Omega \to \mathbb{R}$ (which can itself be an expected utility with respect to a probabilistic model of the environment, in the sense of von Neumann and Morgenstern [1]). A decision-maker that is optimizing a given utility function $U$ obtains a utility of $\frac{1}{N_A} \sum_{x \in A} U(x) \geq \frac{1}{N} \sum_{x \in \Omega} U(x)$ on average after reducing the number of uncertain options from $N$ to $N_A < N$ (see Figure 2). A decision-maker that completely reduces uncertainty by finding the optimum $x^\ast = \mathrm{argmax}_{x \in \Omega} U(x)$ is called rational (without loss of generality, we can assume that $x^\ast$ is unique, by redefining $\Omega$ in the case when it is not). Since uncertainty reduction generally comes with a cost, a utility-optimizing decision-maker with limited resources, correspondingly called bounded rational (see Section 3), in contrast obtains only uncertain decisions from a subset $A \subset \Omega$. Such decision-makers seek satisfactory rather than optimal solutions, for example by taking the first option that satisfies a minimal utility requirement, which Herbert A. Simon calls a satisficing solution [2].
Summarizing, we conclude that a decision-making process with decision space Ω that successively eliminates options can be represented by a mapping ϕ between subsets of Ω , together with a cost function C that quantifies the total expenses of arriving at a given subset,
$\Omega \supsetneq A \supsetneq \phi(A) \supsetneq \phi(\phi(A)) \supsetneq \cdots$   (1)
such that
$0 = C(\Omega) < C(A) < C(\phi(A)) < C(\phi(\phi(A))) < \cdots.$   (2)
For example, a rational decision-maker can afford $C(\{x^\ast\})$, whereas a decision-maker with limited resources can typically only afford uncertainty reduction with cost $C(A) < C(\{x^\ast\})$.
From a probabilistic perspective, a decision-making process as described above is a transition from a uniform probability distribution over $N$ options to a uniform probability distribution over $N' < N$ options, which converges to the Dirac measure $\delta_{x^\ast}$ centered at $x^\ast$ in the fully rational limit. From this point of view, the restriction to uniform distributions is artificial. A decision-maker that is uncertain about the optimal decision $x^\ast$ might indeed have a bias towards a subset $A$ without completely excluding the other options (the ones in $A^c = \Omega \setminus A$), so that the behavior must be properly described by a probability distribution $p \in \mathbb{P}_\Omega$. Therefore, in the following section, we extend Equations (1) and (2) to transitions between probability distributions. In particular, we must replace the power set of $\Omega$ by the space of probability distributions on $\Omega$, denoted by $\mathbb{P}_\Omega$.

2.2. Probabilistic Decision-Making

Let $\Omega$ be a discrete decision space of $N = |\Omega| < \infty$ options, so that $\mathbb{P}_\Omega$ consists of discrete distributions $p$, often represented by probability vectors $p = (p_1, \ldots, p_N)$. However, many of the concepts presented in this and the following section can be generalized to the continuous case [27,28].
Intuitively, the uncertainty contained in a distribution $p \in \mathbb{P}_\Omega$ is related to the relative inequality of its entries: the more similar the entries are, the higher the uncertainty. This means that uncertainty is increased by moving some probability weight from a more likely option to a less likely option. It turns out that this simple idea leads to a concept widely known as majorization [27,29,30,31,32,33], which has roots in the economic literature of the early 20th century [26,34,35], where it was introduced to describe income inequality, later known as the Pigou–Dalton Principle of Transfers. Here, the operation of moving weight from a more likely to a less likely option corresponds to the transfer of income from one individual of a population to a relatively poorer individual (also known as a Robin Hood operation [30]). Since a decision-making process can be viewed as a sequence of uncertainty-reducing computations, we call the inverse of such a Pigou–Dalton transfer an elementary computation.
Definition 1
(Elementary computation). A transformation on $\mathbb{P}_\Omega$ of the form
$T_\varepsilon : p \mapsto (p_1, \ldots, p_m + \varepsilon, \ldots, p_n - \varepsilon, \ldots, p_N),$   (3)
where $m, n$ are such that $p_m \leq p_n$, and $0 < \varepsilon \leq \frac{p_n - p_m}{2}$, is called a Pigou–Dalton transfer (see Figure 3). We call its inverse $T_\varepsilon^{-1}$ an elementary computation.
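As a small illustration (our own sketch, with hypothetical function names), an elementary computation moves a probability mass $\varepsilon$ away from a less likely entry towards a more likely one; any uncertainty measure such as Shannon entropy then registers the reduction:

```python
from math import log

def elementary_computation(p, m, n, eps):
    """Inverse Pigou-Dalton transfer: move probability eps away from the less
    likely option m towards the more likely option n, making the two entries
    more dissimilar (uncertainty decreases)."""
    assert p[m] <= p[n] and 0 < eps <= p[m]  # keep the result a probability vector
    q = list(p)
    q[m] -= eps
    q[n] += eps
    return q

def shannon_entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

p = [0.2, 0.3, 0.5]
q = elementary_computation(p, 0, 2, 0.1)   # roughly [0.1, 0.3, 0.6]
assert shannon_entropy(q) < shannon_entropy(p)   # uncertainty was reduced
```

The constraint $\varepsilon \leq p_m$ in the assertion is our own bookkeeping choice to keep all entries non-negative; the resulting pair can always be mapped back by a Pigou–Dalton transfer of the same size.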
Since making two probability values more similar or more dissimilar are the only two possibilities to minimally transform a probability distribution, elementary computations are the most basic principle of how uncertainty is reduced. Hence, we conclude that a distribution $p$ has more uncertainty than a distribution $p'$ if and only if $p'$ can be obtained from $p$ by finitely many elementary computations (and permutations, which are not considered elementary computations due to the choice of $\varepsilon$).
Definition 2
(Uncertainty). We say that $p \in \mathbb{P}_\Omega$ contains more uncertainty than $p' \in \mathbb{P}_\Omega$, denoted by
$p \prec p',$   (4)
if and only if $p'$ can be obtained from $p$ by a finite number of elementary computations and permutations.
Note that, mathematically, this defines a preorder on $\mathbb{P}_\Omega$, i.e., a reflexive ($p \prec p$ for all $p \in \mathbb{P}_\Omega$) and transitive (if $p \prec p'$ and $p' \prec p''$, then $p \prec p''$ for all $p, p', p'' \in \mathbb{P}_\Omega$) binary relation.
In the literature, there are different names for the relation between $p$ and $p'$ expressed by Definition 2; for example, $p$ is called more mixed than $p'$ [36], more disordered than $p'$ [37], more chaotic than $p'$ [32], or an average of $p'$ [29]. Most commonly, however, $p'$ is said to majorize $p$, which started with the early influences of Muirhead [38] and Hardy, Littlewood, and Pólya [29] and was developed by many authors into the field of majorization theory (a standard reference was published by Marshall, Olkin, and Arnold [27]), with far-reaching applications until today, especially in non-equilibrium thermodynamics and quantum information theory [39,40,41].
There are plenty of equivalent (arguably less intuitive) characterizations of $p \prec p'$, some of which are summarized below. However, one characterization makes use of a concept very closely related to Pigou–Dalton transfers, known as T-transforms [27,32], which expresses the fact that moving some weight from a more likely option to a less likely option is equivalent to taking (weighted) averages of the two probability values. More precisely, a T-transform is a linear operator on $\mathbb{P}_\Omega$ with a matrix of the form $T = (1-\lambda) I + \lambda \Pi$, where $I$ denotes the identity matrix on $\mathbb{R}^N$, $\Pi$ denotes a permutation matrix of two elements, and $0 \leq \lambda \leq 1$. If $\Pi$ permutes $p_m$ and $p_n$, then $(Tp)_k = p_k$ for all $k \notin \{m, n\}$, and
$(Tp)_m = (1-\lambda)\, p_m + \lambda\, p_n, \qquad (Tp)_n = \lambda\, p_m + (1-\lambda)\, p_n.$   (5)
Hence, a T-transform takes any two probability values $p_m$ and $p_n$ of a given $p \in \mathbb{P}_\Omega$, calculates their weighted averages with weights $(1-\lambda, \lambda)$ and $(\lambda, 1-\lambda)$, and replaces the original values with these averages. From Equation (5), it follows immediately that a T-transform with parameter $0 < \lambda \leq \frac{1}{2}$ and a permutation $\Pi$ of $p_m, p_n$ with $p_m \leq p_n$ is a Pigou–Dalton transfer with $\varepsilon = (p_n - p_m)\lambda$. In addition, allowing $\frac{1}{2} \leq \lambda \leq 1$ means that T-transforms include permutations; in particular, $p \prec p'$ if and only if $p$ can be derived from $p'$ by successive applications of finitely many T-transforms.
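The correspondence between T-transforms and Pigou–Dalton transfers can be checked numerically. The sketch below (illustrative code of our own) applies $T = (1-\lambda)I + \lambda\Pi$ to two entries and confirms that the result agrees with a Pigou–Dalton transfer of size $\varepsilon = (p_n - p_m)\lambda$:

```python
def t_transform(p, m, n, lam):
    """Apply T = (1-lam)*I + lam*Pi, where Pi swaps entries m and n:
    the two entries are replaced by their weighted averages (Equation (5))."""
    q = list(p)
    q[m] = (1 - lam) * p[m] + lam * p[n]
    q[n] = lam * p[m] + (1 - lam) * p[n]
    return q

p = [0.1, 0.3, 0.6]
lam = 0.25                              # 0 < lam <= 1/2
q = t_transform(p, 0, 2, lam)           # averages p_0 and p_2
eps = (p[2] - p[0]) * lam               # equivalent Pigou-Dalton transfer size
assert abs(q[0] - (p[0] + eps)) < 1e-12
assert abs(q[2] - (p[2] - eps)) < 1e-12
```

For $\lambda = \frac{1}{2}$ the two entries become equal, and for $\lambda = 1$ the T-transform reduces to the permutation $\Pi$ itself.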
Due to a classic result by Hardy, Littlewood, and Pólya ([29] (p. 49)), this characterization can be stated in an even simpler form by using doubly stochastic matrices, i.e., matrices $A = (A_{ij})_{i,j}$ with $A_{ij} \geq 0$ and $\sum_i A_{ij} = 1 = \sum_j A_{ij}$ for all $i, j$. By writing $xA := A^T x$ for all $x \in \mathbb{R}^N$, and $e := (1, \ldots, 1)$, these conditions are often stated as
$A_{ij} \geq 0, \qquad Ae = e, \qquad eA = e.$   (6)
Note that doubly stochastic matrices can be viewed as generalizations of T-transforms in the sense that a T-transform takes an average of two entries, whereas if $p = p'A$ with a doubly stochastic matrix $A$, then $p_j = \sum_i A_{ij} p'_i$ is a convex combination, or weighted average, of $p'$ with coefficients $(A_{ij})_i$ for each $j$. This is also why $p$ is then called more mixed than $p'$ [36]. Therefore, similar to T-transforms, we might expect that, if $p$ is the result of an application of a doubly stochastic matrix, $p = p'A$, then $p$ is an average of $p'$ and therefore contains more uncertainty than $p'$. This is exactly what is expressed by Characterization (iii) in the following theorem. A similar characterization of $p \prec p'$ is that $p$ must be given by a convex combination of permutations of the elements of $p'$ (see property (iv) below).
Without having the concept of majorization, Schur proved that functions of the form $p \mapsto \sum_i f(p_i)$ with a convex function $f$ are monotone with respect to the application of a doubly stochastic matrix [42] (see property (v) below). Functions of this form are an important class of cost functions for probabilistic decision-makers, as we discuss in Example 1.
Theorem 1
(Characterizations of $p \prec p'$ [27]). For $p, p' \in \mathbb{P}_\Omega$, the following are equivalent:
(i) 
$p \prec p'$, i.e., $p$ contains more uncertainty than $p'$ (Definition 2)
(ii) 
$p$ is the result of finitely many T-transforms applied to $p'$
(iii) 
$p = p'A$ for a doubly stochastic matrix $A$
(iv) 
$p = \sum_{k=1}^K \theta_k \Pi_k(p')$, where $K \in \mathbb{N}$, $\sum_{k=1}^K \theta_k = 1$, $\theta_k \geq 0$, and $\Pi_k$ is a permutation for all $k \in \{1, \ldots, K\}$
(v) 
$\sum_{i=1}^N f(p_i) \leq \sum_{i=1}^N f(p'_i)$ for all continuous convex functions $f$
(vi) 
$\sum_{i=1}^k p_i^\downarrow \leq \sum_{i=1}^k p_i'^\downarrow$ for all $k \in \{1, \ldots, N-1\}$, where $p^\downarrow$ denotes the decreasing rearrangement of $p$
As argued above, the equivalence between (i) and (ii) is straightforward. The equivalences among (ii), (iii), and (vi) are due to Muirhead [38] and Hardy, Littlewood, and Pólya [29]. The implication (v) $\Rightarrow$ (iii) is due to Karamata [43] and Hardy, Littlewood, and Pólya [44], whereas (iii) $\Rightarrow$ (v) goes back to Schur [42]. Mathematically, (iv) means that $p$ belongs to the convex hull of all permutations of the entries of $p'$, and the equivalence (iii) $\Leftrightarrow$ (iv) is known as the Birkhoff–von Neumann theorem. Here, we state all relations for probability vectors $p \in \mathbb{P}_\Omega$, even though they are usually stated for all $p, p' \in \mathbb{R}^N$ with the additional requirement that $\sum_{i=1}^N p_i = \sum_{i=1}^N p'_i$.
Condition (vi) is the classical and most commonly used definition of majorization [27,29,34], since it is often the easiest to check in practical examples. For example, from (vi), it immediately follows that uniform distributions over $N$ options contain more uncertainty than uniform distributions over $N' < N$ options, since $\sum_{i=1}^k \frac{1}{N} = \frac{k}{N} \leq \frac{k}{N'} = \sum_{i=1}^k \frac{1}{N'}$ for all $k \leq N'$, i.e., for $N \geq 3$ we have
$\left(\frac{1}{N}, \ldots, \frac{1}{N}\right) \prec \left(\frac{1}{N-1}, \ldots, \frac{1}{N-1}, 0\right) \prec \cdots \prec \left(\frac{1}{2}, \frac{1}{2}, 0, \ldots, 0\right) \prec (1, 0, \ldots, 0).$   (7)
In particular, if $A \subsetneq A' \subset \Omega$, then the uniform distribution over $A$ contains less uncertainty than the uniform distribution over $A'$, which shows that the notion of uncertainty introduced in Definition 2 is indeed a generalization of the notion of uncertainty given by the number of uncertain options introduced in the previous section.
Note that, since ≺ is only a preorder on $\mathbb{P}_\Omega$, in general two distributions $p, p' \in \mathbb{P}_\Omega$ are not necessarily comparable, i.e., we can have both $p \not\prec p'$ and $p' \not\prec p$. In Figure 4, we visualize the regions of all comparable distributions for two exemplary distributions on a three-dimensional decision space ($N = 3$), represented on the two-dimensional simplex of probability vectors $p = (p_1, p_2, p_3)$. For example, $p = (\frac{1}{2}, \frac{1}{4}, \frac{1}{4})$ and $p' = (\frac{2}{5}, \frac{2}{5}, \frac{1}{5})$ cannot be compared under ≺, since $\frac{1}{2} > \frac{2}{5}$, but $\frac{3}{4} < \frac{4}{5}$.
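Characterization (vi) gives a direct algorithmic test for ≺. The following sketch (our own helper, with a small tolerance for floating-point sums) checks the partial-sum criterion and reproduces both the uniform chain of Equation (7) and the incomparable pair discussed in the text:

```python
def more_uncertain(p, p_prime):
    """Check p ≺ p' via the partial-sum criterion (vi) of Theorem 1:
    the sum of the k largest entries of p never exceeds that of p'."""
    ps = sorted(p, reverse=True)
    qs = sorted(p_prime, reverse=True)
    return all(sum(ps[:k]) <= sum(qs[:k]) + 1e-12 for k in range(1, len(p)))

# the uniform chain from Equation (7), for N = 3
assert more_uncertain([1/3, 1/3, 1/3], [1/2, 1/2, 0])
assert more_uncertain([1/2, 1/2, 0], [1, 0, 0])

# the incomparable pair from the text: neither relation holds
p, pp = [1/2, 1/4, 1/4], [2/5, 2/5, 1/5]
assert not more_uncertain(p, pp) and not more_uncertain(pp, p)
```

The largest entries already disagree in opposite directions ($\frac{1}{2} > \frac{2}{5}$ but $\frac{3}{4} < \frac{4}{5}$), which is exactly what the two failed checks detect.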
Cost functions can now be generalized to probabilistic decision-making by noting that the property $C(A') < C(A)$ whenever $A \subsetneq A'$ in Equation (2) means that $C$ is strictly monotonic with respect to the preorder given by set inclusion.
Definition 3
(Cost functions on $\mathbb{P}_\Omega$). We say that a function $C : \mathbb{P}_\Omega \to \mathbb{R}_+$ is a cost function if it is strictly monotonically increasing with respect to the preorder ≺, i.e., if
$p \prec p' \;\Rightarrow\; C(p) \leq C(p'),$   (8)
with equality only if $p$ and $p'$ are equivalent, $p \sim p'$, which is defined as $p \prec p'$ and $p' \prec p$. Moreover, for a parameterized family of posteriors $(p_r)_{r \in I}$, we say that $r$ is a resource parameter with respect to a cost function $C$ if the mapping $I \to \mathbb{R}_+$, $r \mapsto C(p_r)$, is strictly monotonically increasing.
Since monotonic functions with respect to majorization were first studied by Schur [42], functions with this property are usually called (strictly) Schur-convex ([27] (Ch. 3)).
Example 1
(Generalized entropies). From (v) in Theorem 1, it follows that functions of the form
$C(p) = \sum_{i=1}^N f(p_i),$   (9)
where $f$ is strictly convex, are examples of cost functions. Since many entropy measures used in the literature can be seen to be special cases of Equation (9) (with a concave $f$), functions of this form are often called generalized entropies [45]. In particular, for the choice $f(t) = t \log t$, we have $C(p) = -H(p)$, where $H(p)$ denotes the Shannon entropy of $p$. Thus, if $p$ contains more uncertainty than $p'$ in the sense of Definition 2 ($p \prec p'$), then the Shannon entropy of $p$ is larger than the Shannon entropy of $p'$, and therefore $p$ also contains more uncertainty than $p'$ in the sense of classical information theory. Similarly, for $f(t) = -\log(t)$ we obtain the (negative) Burg entropy, and for functions of the form $f(t) = \pm t^\alpha$ with $\alpha \in \mathbb{R} \setminus \{0, 1\}$ we get the (negative) Tsallis entropy, where the sign is chosen depending on $\alpha$ such that $f$ is convex (see, e.g., [46] for more examples). Moreover, the composition of any (strictly) monotonically increasing function $g$ with Equation (9) generates another class of cost functions, which contains, for example, the (negative) Rényi entropy [23]. Note also that entropies of the form of Equation (9) are special cases of Csiszár's f-divergences [47] for uniform reference distributions (see Example 3 below). In Figure 5, several examples of cost functions are shown for $N = 3$. In this case, the two-dimensional probability simplex $\mathbb{P}_\Omega$ is given by the triangle in $\mathbb{R}^3$ with vertices $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. Cost functions are visualized in terms of their level sets.
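To see Definition 3 at work, one can evaluate several of these generalized-entropy costs on the uniform distribution (which is minimal with respect to ≺) and on a less uncertain distribution; every Schur-convex cost must increase. A minimal sketch (function and variable names are ours):

```python
from math import log

def cost(p, f):
    """Generalized-entropy cost C(p) = sum_i f(p_i), Equation (9)."""
    return sum(f(x) for x in p if x > 0)

neg_shannon = lambda t: t * log(t)   # C = -H (negative Shannon entropy)
neg_burg    = lambda t: -log(t)      # negative Burg entropy
neg_tsallis = lambda t: t ** 2       # negative Tsallis entropy, alpha = 2

uniform = [1/3, 1/3, 1/3]            # maximal uncertainty
biased  = [0.5, 0.3, 0.2]            # uniform ≺ biased, so every cost is larger
for f in (neg_shannon, neg_burg, neg_tsallis):
    assert cost(uniform, f) < cost(biased, f)
```

Since the uniform distribution is majorized by every distribution, the inequality holds simultaneously for all convex choices of $f$, illustrating why ≺ is a stronger notion than any single entropy measure.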
We prove in Proposition A1 in Appendix A that all cost functions of the form of Equation (9) are superadditive with respect to coarse-graining. This seems to be a new result and an improvement upon the fact that generalized entropies (and f-divergences) satisfy information monotonicity [48]. More precisely, if a decision in $\Omega$, represented by a random variable $Z$, is split up into two steps by partitioning $\Omega = \bigcup_{i \in I} A_i$ and first deciding about the partition $i \in I$, correspondingly described by a random variable $X$ with values in $I$, and then choosing an option inside of the selected partition $A_i$, represented by a random variable $Y$, i.e., $Z = (X, Y)$, then
$C(Z) \geq C(X) + C(Y|X),$   (10)
where $C(X) := C(p(X))$ and $C(Y|X) := \mathbb{E}_{p(X)}[C(p(Y|X))]$. For symmetric cost functions (such as Equation (9)), this is equivalent to
$C(p_1, \ldots, p_N) \geq C(p_1 + p_2, p_3, \ldots, p_N) + (p_1 + p_2)\, C\!\left(\frac{p_1}{p_1 + p_2}, \frac{p_2}{p_1 + p_2}\right).$   (11)
The case of equality in Equations (10) and (11) (see Figure 6) is sometimes called separability [49], strong additivity [50], or recursivity [51], and it is often used to characterize Shannon entropy [23,52,53,54,55,56]. In fact, we also show in Appendix A (Proposition A2) that cost functions $C$ that are additive under coarse-graining are proportional to the negative Shannon entropy $-H$. See also Example 3 in the next section, where we discuss the generalization to arbitrary reference distributions.
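The coarse-graining behavior can be checked numerically for the merge of two options. In the sketch below (our own code), the Shannon cost $C = -H$ satisfies Equation (11) with equality (its chain rule), while the negative Burg entropy is strictly superadditive in this example; a single numerical instance of course illustrates, but does not prove, Propositions A1 and A2:

```python
from math import log

def shannon_cost(p):
    """C(p) = -H(p) = sum_i p_i log p_i."""
    return sum(x * log(x) for x in p if x > 0)

def burg_cost(p):
    """Negative Burg entropy, C(p) = -sum_i log p_i."""
    return -sum(log(x) for x in p if x > 0)

def split_cost(C, p):
    """Right-hand side of Equation (11): merge options 1 and 2, pay the
    coarse-grained cost plus the probability-weighted fine-grained cost."""
    w = p[0] + p[1]
    return C([w] + list(p[2:])) + w * C([p[0] / w, p[1] / w])

p = [0.2, 0.3, 0.5]
# Shannon cost: equality in Equation (11) (additivity under coarse-graining)
assert abs(shannon_cost(p) - split_cost(shannon_cost, p)) < 1e-12
# Burg cost: strict superadditivity in this example
assert burg_cost(p) > split_cost(burg_cost, p)
```

For the Shannon case, the equality follows algebraically from $w \log w$ cancelling against the weighted logarithms of the ratios, which is the familiar grouping property of Shannon entropy.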
We can now refine the notion of a decision-making process introduced in the previous section as a mapping $\phi$ together with a cost function $C$ satisfying Equation (2). Instead of simply mapping from sets $A$ to smaller subsets $A' \subset A$ by successively eliminating options, we now allow $\phi$ to be a mapping between probability distributions such that $\phi(p)$ can be obtained from $p$ by a finite number of elementary computations (without permutations), and we require $C$ to be a cost function on $\mathbb{P}_\Omega$, so that
$p \precneq \phi(p), \qquad C(p) < C(\phi(p)) \qquad \forall p \in \mathbb{P}_\Omega.$   (12)
Here, $C(p)$ quantifies the total costs of arriving at a distribution $p$, and $p \precneq p'$ means that $p \prec p'$ and $p \not\sim p'$. In other words, a decision-making process can be viewed as traversing probability space by moving pieces of probability from one option to another option such that uncertainty is reduced.
Up to now, we have ignored one important property of a decision-making process: the distribution $q$ with minimal cost, i.e., satisfying $C(q) \leq C(p)$ for all $p$, which must be identified with the initial distribution of a decision-making process with cost function $C$. As one might expect (see Figure 5), it turns out that all cost functions according to Definition 3 have the same minimal element.
Proposition 1
(Uniform distributions are minimal). The uniform distribution $(\frac{1}{N}, \ldots, \frac{1}{N})$ is the unique minimal element in $\mathbb{P}_\Omega$ with respect to ≺, i.e.,
$\left(\frac{1}{N}, \ldots, \frac{1}{N}\right) \prec p \qquad \forall p \in \mathbb{P}_\Omega.$   (13)
Once Equation (13) is established, it follows from Equation (8) that $C((\frac{1}{N}, \ldots, \frac{1}{N})) \leq C(p)$ for all $p$; in particular, the uniform distribution corresponds to the initial state of all decision-making processes with cost function $C$ satisfying Equation (12). In particular, it contains the maximum amount of uncertainty with respect to any entropy measure of the form of Equation (9), known as the second Khinchin axiom [49], e.g., for Shannon entropy $0 \leq H(p) \leq \log N$. Proposition 1 follows from Characterization (iv) in Theorem 1 after noticing that every $p \in \mathbb{P}_\Omega$ can be transformed into a uniform distribution by permuting its elements cyclically (see Proposition A3 in Appendix A for a detailed proof).
Regarding the possibility that a decision-maker may have prior information, for example originating from the experience of previous comparable decision-making tasks, the assumption of a uniform initial distribution seems to be artificial. Therefore, in the following section, we arrive at the final notion of a decision-making process by extending the results of this section to allow for arbitrary initial distributions.

2.3. Decision-Making with Prior Knowledge

From the discussion at the end of the previous section, we conclude that, in full generality, a decision-maker transitions from an initial probability distribution $q \in \mathbb{P}_\Omega$, called the prior, to a terminal distribution $p \in \mathbb{P}_\Omega$, called the posterior. Note that, since once-eliminated options are excluded from the rest of the decision-making process, a posterior $p$ must be absolutely continuous with respect to the prior $q$, denoted by $p \ll q$, i.e., $p(x)$ can be non-zero for a given $x \in \Omega$ only if $q(x)$ is non-zero.
The notion of uncertainty (Definition 2) can be generalized with respect to a non-uniform prior $q \in \mathbb{P}_\Omega$ by viewing the probabilities $q_i$ as the probabilities $Q(A_i)$ of partitions $A_i$ of an underlying elementary probability space $\tilde{\Omega} = \bigcup_i A_i$ of equally likely elements under $Q$; in particular, $Q$ represents $q$ as the uniform distribution on $\tilde{\Omega}$ (see Figure 7). The similarity of the entries of the corresponding representation $P \in \mathbb{P}_{\tilde{\Omega}}$ of any $p \in \mathbb{P}_\Omega$ (its uncertainty) then contains information about how close $p$ is to $q$, which we call the relative uncertainty of $p$ with respect to $q$ (Definition 4 below).
The formal construction is as follows: Let $p, q \in \mathbb{P}_\Omega$ be such that $p \ll q$ and $q_i \in \mathbb{Q}$. The case $q_i \in \mathbb{R}$ then follows from a simple approximation of each entry by a rational number. Let $\alpha \in \mathbb{N}$ be such that $\alpha q_i \in \mathbb{N}$ for all $i \in \{1, \ldots, N\}$; for example, $\alpha$ could be chosen as the least common multiple of the denominators of the $q_i$. The underlying elementary probability space $\tilde{\Omega}$ then consists of $\alpha$ elements, and there exists a partitioning $\{A_i\}_{i=1,\ldots,N}$ of $\tilde{\Omega}$ such that
$|A_i| = \alpha q_i \qquad \forall i \in \{1, \ldots, N\},$   (14)
where $Q$ denotes the uniform distribution on $\tilde{\Omega}$. In particular, it follows that
$Q(A_i) = \sum_{j=1}^{|A_i|} \frac{1}{\alpha} = q_i \qquad \forall i \in \{1, \ldots, N\},$   (15)
i.e., $Q$ represents $q$ in $\tilde{\Omega}$ with respect to the partitioning $\{A_i\}_i$. Similarly, any $p \in \mathbb{P}_\Omega$ can be represented as a distribution on $\tilde{\Omega}$ by requiring that $P(A_i) = p_i$ for all $i \in \{1, \ldots, N\}$ and letting $P$ be constant inside each partition, i.e., similar to Equation (15), we have $P(A_i) = |A_i|\, P(\omega) = p_i$ for all $\omega \in A_i$, and therefore by Equation (14)
$P(\omega) = \frac{1}{\alpha} \frac{p_i}{q_i} \qquad \forall \omega \in A_i.$   (16)
Note that, if $q_i = 0$, then $p_i = 0$ by absolute continuity ($p \ll q$), in which case we can either exclude option $i$ from $\Omega$ or set $P(\omega) = 0$.
Example 2.
For a prior $q = (\frac{1}{6}, \frac{1}{2}, \frac{1}{3})$ we put $\alpha = 6$, so that $\tilde{\Omega} = \{\omega_1, \ldots, \omega_6\}$ is partitioned as $\tilde{\Omega} = \{\omega_1\} \cup \{\omega_2, \omega_3, \omega_4\} \cup \{\omega_5, \omega_6\}$. Then, $q_i$ corresponds to the probability of the $i$th partition under the uniform distribution $Q = \frac{1}{6}(1, \ldots, 1)$, while the distribution $p = (\frac{1}{6}, \frac{3}{4}, \frac{1}{12})$ is represented on $\tilde{\Omega}$ by the distribution $P = (\frac{1}{6}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{24}, \frac{1}{24})$ (see Figure 7).
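The construction of Example 2 can be reproduced with exact rational arithmetic. A small sketch (the function name `lift` is ours) that builds $P$ cell by cell according to Equations (14)-(16):

```python
from fractions import Fraction as F
from math import lcm

def lift(p, q):
    """Lambda_q p: split option i into alpha*q_i equally weighted cells,
    each carrying probability p_i / (alpha*q_i), per Equation (16)."""
    alpha = lcm(*(qi.denominator for qi in q))   # makes every alpha*q_i an integer
    P = []
    for pi, qi in zip(p, q):
        cells = int(alpha * qi)                  # |A_i| = alpha * q_i, Equation (14)
        P.extend([pi / cells] * cells)
    return P

q = [F(1, 6), F(1, 2), F(1, 3)]
p = [F(1, 6), F(3, 4), F(1, 12)]

assert lift(p, q) == [F(1, 6), F(1, 4), F(1, 4), F(1, 4), F(1, 24), F(1, 24)]
assert lift(q, q) == [F(1, 6)] * 6   # the prior lifts to the uniform distribution
```

The second assertion reflects the defining property of the construction: $Q = \Lambda_q q$ is uniform on $\tilde{\Omega}$.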
Importantly, if the components of the representation $\Lambda_q p := P \in P_{\tilde\Omega}$ given by Equation (16) are similar to each other, i.e., if $P$ is close to uniform, then the components of $p$ must be very similar to the components of $q$; we express this by the concept of relative uncertainty.
Definition 4
(Uncertainty relative to q). We say that $p \in P_\Omega$ contains more uncertainty with respect to a prior $q \in P_\Omega$ than $p' \in P_\Omega$, denoted by $p \prec_q p'$, if and only if $\Lambda_q p$ contains more uncertainty than $\Lambda_q p'$, i.e.,
$$p \prec_q p' \;:\Longleftrightarrow\; \Lambda_q p \prec \Lambda_q p',$$
where $\Lambda_q : P_\Omega \to P_{\tilde\Omega},\; p \mapsto P$ is given by Equation (16).
As we show in Theorem 2 below, it turns out that $\prec_q$ coincides with a known concept called $q$-majorization [57], majorization relative to $q$ [27,28], or the mixing distance [58]. Due to the lack of a characterization by partial sums, it is usually defined as a generalization of Characterization (iii) in Theorem 1, that is, $p$ is $q$-majorized by $p'$ iff $p = p'A$, where $A$ is a so-called $q$-stochastic matrix, i.e., a stochastic matrix ($Ae = e$) with $qA = q$. In particular, $\prec_q$ does not depend on the choice of $\alpha$ in the definition of $\Lambda_q$. Here, we provide two new characterizations of $q$-majorization: the one given by Definition 4, and one using partial sums that generalizes the original definition of majorization.
Theorem 2
(Characterizations of $p \prec_q p'$). The following are equivalent:
(i) 
$p \prec_q p'$, i.e., $p$ contains more uncertainty relative to $q$ than $p'$ (Definition 4).
(ii) 
$\Lambda_q p'$ can be obtained from $\Lambda_q p$ by a finite number of elementary computations and permutations on $P_{\tilde\Omega}$.
(iii) 
$p = p' A$ for a $q$-stochastic matrix $A$, i.e., $Ae = e$ and $qA = q$.
(iv) 
$\sum_{i=1}^N q_i f\!\left(\frac{p_i}{q_i}\right) \leq \sum_{i=1}^N q_i f\!\left(\frac{p'_i}{q_i}\right)$ for all continuous convex functions $f$.
(v) 
$\sum_{i=1}^{l-1} p_i^{\downarrow} + a_q(k,l)\, p_l^{\downarrow} \;\leq\; \sum_{i=1}^{l-1} p_i'^{\downarrow} + a_q(k,l)\, p_l'^{\downarrow}$ for all $\alpha \sum_{i=1}^{l-1} q_i \leq k \leq \alpha \sum_{i=1}^{l} q_i$ and $1 \leq l \leq N$, where $a_q(k,l) := \bigl(\tfrac{k}{\alpha} - \sum_{i=1}^{l-1} q_i\bigr)/q_l$, and the arrows indicate that $(p_i/q_i)_i$ is ordered decreasingly.
To prove that (i), (iii), and (v) are equivalent (see Proposition A4 in Appendix A), we make use of the fact that $\Lambda_q : P_\Omega \to P_{\tilde\Omega}$ has a left inverse $\Lambda_q^{-1} : \Lambda_q(P_\Omega) \to P_\Omega$. This can be verified by simply multiplying the corresponding matrices given in the proof of Proposition A4. The equivalence between (iii) and (iv) is shown in [28] (see also [27,58]). Characterization (ii) follows immediately from Definitions 2 and 4.
As required by the discussion at the end of the previous section, $q$ is indeed minimal with respect to $\prec_q$, which means that it contains the largest amount of uncertainty with respect to itself.
Proposition 2
(The prior is minimal). The prior $q \in P_\Omega$ is the unique minimal element in $P_\Omega$ with respect to $\prec_q$, that is,
$$q \prec_q p \quad \forall p \in P_\Omega.$$
This follows more or less directly from Proposition 1 and the equivalence of ( i ) and ( i i i ) in Theorem 2 (see Proposition A5 in Appendix A for a detailed proof).
Order-preserving functions with respect to $\prec_q$ generalize the cost functions introduced in the previous section (Definition 3). According to Proposition 2, such functions have a unique minimum given by the prior $q$. Since cost functions are used in Definition 7 below to quantify the expenses of a decision-making process, we require their minimum to be zero, which can always be achieved by shifting a given cost function by an additive constant.
Definition 5
(Cost functions relative to q). We say that a function $C_q : P_\Omega \to \mathbb{R}_+$ is a cost function relative to $q$ if $C_q(q) = 0$, if it is invariant under relabeling of $(q_i, p_i)_i$, and if it is strictly monotonically increasing with respect to the preorder $\prec_q$, that is, if
$$p \prec_q p' \;\Longrightarrow\; C_q(p) \leq C_q(p'),$$
with equality only if $p \sim_q p'$, i.e., if $p \prec_q p'$ and $p' \prec_q p$. Moreover, for a parameterized family of posteriors $(p_r)_{r \in I}$, we say that $r$ is a resource parameter with respect to a cost function $C_q$ if the mapping $I \to \mathbb{R}_+,\; r \mapsto C_q(p_r)$ is strictly monotonically increasing.
Similar to the generalized entropy functions discussed in Example 1, the literature contains many examples of relative cost functions, usually called divergences or divergence measures.
Example 3
(f-divergences). From (iv) in Theorem 2, it follows that functions of the form
$$C_q(p) := \sum_{i=1}^N q_i f\!\left(\frac{p_i}{q_i}\right),$$
where $f$ is continuous and strictly convex with $f(1) = 0$, are examples of cost functions relative to $q$. Many well-known divergence measures belong to this class of relative cost functions, also known as Csiszár's f-divergences [47]: the Kullback-Leibler divergence (or relative entropy), the squared $\ell_2$ distance, the Hartley entropy, the Burg entropy, the Tsallis entropy, and many more [46,50] (see Figure 8 for visualizations of some of them in $N = 3$ relative to a non-uniform prior).
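As an illustration (a hedged sketch of ours, not from the paper), the monotonicity of f-divergences under $\prec_q$ can be probed numerically: by Characterization (iii) of Theorem 2, mixing $p$ toward the prior via the $q$-stochastic matrix $A = (1-t)I + t\,eq$ yields $p' = pA = (1-t)p + tq$ with more uncertainty relative to $q$, so any f-divergence of the form of Equation (20) must decrease:

```python
import math

def f_div(p, q, f):
    """Csiszar f-divergence C_q(p) = sum_i q_i * f(p_i / q_i), cf. Eq. (20)."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

kl = lambda s: s * math.log(s) if s > 0 else 0.0   # f for Kullback-Leibler
chi2 = lambda s: (s - 1) ** 2                       # f for chi-squared

q = [1/6, 1/2, 1/3]
p = [1/6, 3/4, 1/12]
t = 0.4
# p' = p A with A = (1-t)*I + t*(e q): a q-stochastic mixing toward the prior
p_mix = [(1 - t) * pi + t * qi for pi, qi in zip(p, q)]
```

Both divergences drop under the mixing, and both vanish at the prior itself, consistent with Definition 5.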
As a generalization of Proposition A1 (superadditivity of generalized entropies), we prove in Proposition A6 in Appendix A that f-divergences are superadditive under coarse-graining, that is,
$$C_q(Z) \;\geq\; C_q(X) + C_q(Y|X)$$
whenever $Z = (X, Y)$, where $C_q(X) := C_{q(X)}(p(X))$ and $C_q(Y|X) := \mathbb{E}_{p(X)}\bigl[C_{q(Y|X)}(p(Y|X))\bigr]$.
This generalizes Equation (10) to the case of a non-uniform prior. Similar to entropies, the case of equality in Equation (21) is sometimes called composition rule [59], chain rule [60], or recursivity [50], and is often used to characterize Kullback-Leibler divergence [8,50,59,60].
Indeed, we also show in Appendix A (Proposition A7) that all additive cost functions with respect to $q$ are proportional to the Kullback-Leibler divergence (relative entropy). This goes back to Hobson's modification [59] of Shannon's original proof [22], after establishing the following monotonicity property for uniform distributions: If $f(M,N)$ denotes the cost $C_{u_N}(u_M)$ of a uniform distribution $u_M$ over $M$ elements relative to a uniform distribution $u_N$ over $N \geq M$ elements, then (see Figure 9)
$$f(M,N) \geq f(M',N) \quad \forall\, M \leq M' \leq N, \qquad f(M,N) \leq f(M,N') \quad \forall\, M \leq N \leq N'.$$
Note that, even though our proof of Proposition A7 uses additivity under coarse-graining to show the monotonicity property in Equation (22), it is easy to see that any relative cost function of the form of Equation (20) also satisfies Equation (22) by using the convexity of $f$ as $f(t) \leq \tfrac{t}{s} f(s) + (1 - \tfrac{t}{s}) f(0)$ with $t = \tfrac{N}{M'} < \tfrac{N}{M} = s$.
In terms of decision-making, superadditivity under coarse-graining means that decision-making costs can potentially be reduced by splitting up the decision into multiple steps, for example by a more intelligent search strategy. For instance, if $N = 2^k$ for some $k \in \mathbb{N}$ and $C_q$ is superadditive, then the cost for reducing uncertainty to a single option, i.e., $p = (1, 0, \dots, 0)$, when starting from a uniform distribution $q$, satisfies
$$C_q(p) \;\geq\; C_{q_2}(1,0) + C_{q_{N/2}}(1, 0, \dots, 0) \;\geq\; \cdots \;\geq\; \log N = D_{KL}(p\|q),$$
where $q_n := (\tfrac{1}{n}, \dots, \tfrac{1}{n})$, and we have set $C_{q_2}(1,0) = 1$ as unit cost (corresponding to 1 bit in the case of the Kullback-Leibler divergence). Thus, intuitively, the additivity of the Kullback-Leibler divergence under coarse-graining may be viewed as describing the minimal amount of processing costs that must be contained in any cost function, because it cannot be reduced by changing the decision-making process. Therefore, in the following, we call cost functions that are proportional to the Kullback-Leibler divergence simply informational costs.
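The chain of inequalities above can be made concrete for informational costs; the following sketch (Python, our own illustration) checks that for $N = 2^k$ and a uniform prior, the one-shot Kullback-Leibler cost of a Dirac posterior equals exactly $k$ binary half-splits at 1 bit each:

```python
import math

def kl_bits(p, q):
    """Kullback-Leibler divergence measured in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

N = 8                                    # N = 2**3 options
uniform = [1 / N] * N                    # prior q
dirac = [1.0] + [0.0] * (N - 1)          # posterior p = (1, 0, ..., 0)

one_shot = kl_bits(dirac, uniform)       # deciding in a single step
per_halving = kl_bits([1.0, 0.0], [0.5, 0.5])   # one binary split
```

Splitting the decision into $k = 3$ halvings incurs the same total informational cost as the one-shot decision, reflecting that KL costs cannot be reduced by restructuring the search.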
In contrast to the previous section, in the definition of $\prec_q$ and its characterizations, we never use elementary computations on $P_\Omega$ directly. This is because permutations interact with the uncertainty relative to $q$, and therefore $\prec_q$ cannot be characterized by a finite number of elementary computations and permutations on $P_\Omega$. However, we can still define elementary computations relative to $q$ as the inverses of Pigou–Dalton transfers $T_\varepsilon$ of the form of Equation (3) such that $T_\varepsilon p \prec_q p$ for $\varepsilon > 0$, which is arguably the most basic way of generating uncertainty with respect to $q$.
Even for small $\varepsilon$, a regular Pigou–Dalton transfer does not necessarily increase uncertainty relative to $q$, because the similarity of the components now needs to be considered with respect to $q$. Instead, we compare the components of the representation $P = \Lambda_q p$ of $p \in P_\Omega$, and move some probability weight $\varepsilon \geq 0$ from $P(A_n)$ to $P(A_m)$ whenever $P(\omega) \leq P(\omega')$ for $\omega \in A_m$ and $\omega' \in A_n$, by distributing $\varepsilon$ evenly among the elements in $A_m$ (see Figure 10); we denote this transformation by $\tilde{T}_\varepsilon$. Here, $\varepsilon$ must be small enough that the inequality $\tfrac{1}{\alpha}\tfrac{p_m}{q_m} = P(\omega) \leq P(\omega') = \tfrac{1}{\alpha}\tfrac{p_n}{q_n}$ is invariant under $\tilde{T}_\varepsilon$, which means that
$$(\tilde{T}_\varepsilon P)(\omega) \leq (\tilde{T}_\varepsilon P)(\omega') \;\Longleftrightarrow\; \frac{1}{\alpha}\frac{p_m}{q_m} + \frac{\varepsilon}{|A_m|} \leq \frac{1}{\alpha}\frac{p_n}{q_n} - \frac{\varepsilon}{|A_n|} \;\overset{(14)}{\Longleftrightarrow}\; \varepsilon \leq \frac{\frac{p_n}{q_n} - \frac{p_m}{q_m}}{\frac{1}{q_m} + \frac{1}{q_n}}.$$
By construction, $\tilde{T}_\varepsilon$ minimally increases uncertainty in $P_{\tilde\Omega}$ while staying in the image of $P_\Omega$ under $\Lambda_q$, by keeping the values of $P$ constant in each partition, and therefore $T_\varepsilon := \Lambda_q^{-1} \circ \tilde{T}_\varepsilon \circ \Lambda_q$ can be considered the most basic way of increasing uncertainty relative to $q$.
Definition 6
(Elementary computation relative to q). We call a transformation on $P_\Omega$ of the form
$$T_\varepsilon : p \mapsto (p_1, \dots, p_m + \varepsilon, \dots, p_n - \varepsilon, \dots, p_N),$$
with $m, n$ such that $\tfrac{p_m}{q_m} \leq \tfrac{p_n}{q_n}$ and $\varepsilon$ satisfying Equation (23), a Pigou–Dalton transfer relative to $q$, and its inverse $T_\varepsilon^{-1}$ an elementary computation relative to $q$.
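A relative Pigou–Dalton transfer is easy to implement directly; the sketch below (Python, our own illustration with the running example's numbers) moves weight $\varepsilon$ from an option with a large ratio $p_n/q_n$ to one with a small ratio $p_m/q_m$, enforces the bound of Equation (23), and confirms that the Kullback-Leibler cost relative to $q$ decreases:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pigou_dalton_rel(p, q, m, n, eps):
    """T_eps: move eps from option n to option m, assuming p_m/q_m <= p_n/q_n."""
    assert p[m] / q[m] <= p[n] / q[n]
    # Eq. (23): eps must keep the ratio ordering intact
    eps_max = (p[n] / q[n] - p[m] / q[m]) / (1 / q[m] + 1 / q[n])
    assert 0 <= eps <= eps_max
    out = list(p)
    out[m] += eps
    out[n] -= eps
    return out

q = [1/6, 1/2, 1/3]
p = [1/6, 3/4, 1/12]                  # ratios p_i/q_i = (1, 1.5, 0.25)
p_new = pigou_dalton_rel(p, q, m=2, n=1, eps=0.1)
```

Since the transfer increases uncertainty relative to $q$, any relative cost function must decrease along it; the KL divergence serves as the witness here.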
We are now in a position to state our final definition of a decision-making process.
Definition 7
(Decision-making process). A decision-making process is a gradual transformation
$$q \;\to\; \cdots \;\to\; p' \;\to\; \phi(p') \;\to\; \cdots \;\to\; p$$
of a prior $q \in P_\Omega$ to a posterior $p \in P_\Omega$, such that each step decreases uncertainty relative to $q$. This means that $p$ is obtained from $q$ by successive application of a mapping $\phi$ between probability distributions on $\Omega$, such that $\phi(p')$ can be obtained from $p'$ by finitely many elementary computations relative to $q$; in particular,
$$q \prec_q p' \prec_q \phi(p'), \qquad 0 = C_q(q) < C_q(p') < C_q(\phi(p')),$$
where $C_q(p')$ quantifies the total cost of a distribution $p'$, and the orderings are strict in the sense that $p \prec_q p'$ holds but $p' \prec_q p$ does not (i.e., $p \not\sim_q p'$).
In other words, a decision-making process can be viewed as traversing probability space from prior q to posterior p by moving pieces of probability from one option to another option such that uncertainty is reduced relative to q, while expending a certain amount of resources determined by the cost function C q .

3. Bounded Rationality

3.1. Bounded Rational Decision-Making

In this section, we consider decision-making processes that trade off utility against costs. Such decision-makers either maximize a utility function subject to a constraint on the cost function, for example an author of a scientific article who optimizes the article's quality until a deadline is reached, or minimize the cost function subject to a utility constraint, for example a high-school student who minimizes effort subject to the requirement of passing a certain class. In both cases, the decision-makers are called bounded rational, since in the limit of no resource constraints they coincide with rational decision-makers.
In general, depending on the underlying system, such an optimization process might have additional process-dependent constraints that are not directly given by resource limitations, for example in cases where the optimization takes place in a parameter space that has fewer degrees of freedom than the full probability space $P_\Omega$. Abstractly, this is expressed by allowing the optimization process to search only in a subset $\Gamma \subseteq P_\Omega$.
Definition 8
(Bounded rational decision-making process). Let $U : \Omega \to \mathbb{R}$ be a given utility function, and $\Gamma \subseteq P_\Omega$. A decision-making process with prior $q$, posterior $p \in \Gamma$, and cost function $C_q$ is called bounded rational if its posterior satisfies
$$p = \operatorname*{argmax}_{p' \in \Gamma} \bigl\{\, \mathbb{E}_{p'}[U] \;\big|\; C_q(p') \leq C_0 \,\bigr\}$$
for a given upper bound $C_0 \geq 0$, or equivalently
$$p = \operatorname*{argmin}_{p' \in \Gamma} \bigl\{\, C_q(p') \;\big|\; \mathbb{E}_{p'}[U] \geq U_0 \,\bigr\}$$
for a given lower bound $U_0 \in \mathbb{R}$. In the case when the process constraints disappear, i.e., if $\Gamma = P_\Omega$, a bounded rational decision-maker is called bounded-optimal.
The equivalence between Equation (26) and Equation (27) is easily seen from the equivalent optimization problem given by the formalism of Lagrange multipliers [61],
$$p_\beta := \operatorname*{argmin}_{p \in \Gamma} \Bigl( C_q(p) - \beta\, \mathbb{E}_p[U] \Bigr) = \operatorname*{argmax}_{p \in \Gamma} \Bigl( \mathbb{E}_p[U] - \tfrac{1}{\beta}\, C_q(p) \Bigr),$$
where the cost or utility constraint is expressed by a trade-off between utility and cost, with a trade-off parameter given by the Lagrange multiplier $\beta$, which is chosen such that the constraint given by $C_0$ or $U_0$ is satisfied. It is easily seen from the maximization problem on the right-hand side of Equation (28) that a larger value of $\beta$ decreases the weight of the cost term and thus allows for higher values of the cost function. Hence, $\beta$ parameterizes the amount of resources the decision-maker can afford with respect to the cost function $C_q$, and, at least in non-trivial cases (non-constant utilities), it is therefore a resource parameter with respect to $C_q$ in the sense of Definition 5. In particular, for $\beta \to 0$, the decision-maker minimizes its cost function irrespective of the expected utility, and therefore stays at the prior, $p_0 = q$, whereas $\beta \to \infty$ makes the cost term disappear, so that the decision-maker becomes purely rational with a Dirac posterior centered on the optima $x^*$ of the utility function $U$.
For example, Figure 11 shows how the posteriors $(p_\beta)_{\beta \geq 0}$ of bounded-optimal decision-makers with different cost functions for $N = 3$ and with utility $U = (0.8, 1.0, 0.4)$ leave a trace in probability space, moving away from an exemplary prior $q = (\tfrac{1}{3}, \tfrac{1}{2}, \tfrac{1}{6})$ and eventually arriving at the rational solution $\delta = (0, 1, 0)$.
For informational costs (i.e., proportional to Kullback-Leibler divergence), β is a resource parameter with respect to any cost function.
Proposition 3.
If $(p_\beta)_{\beta \geq 0}$ is a family of bounded-optimal posteriors given by Equation (28) with $C_q(p) = D_{KL}(p\|q)$, then $\beta$ is a resource parameter with respect to any cost function; in particular,
$$q = p_0 \prec_q p_{\beta} \prec_q p_{\beta'} \quad \forall\, \beta, \beta' \ \text{with}\ \beta < \beta'.$$
This generalizes a result in [37] to the case of non-uniform priors, by making use of our new Characterization (v) of $\prec_q$, by which it suffices to show that $\beta \mapsto \sum_{i=1}^{l-1} p_{\beta,i}^{\downarrow} + a_q(k,l)\, p_{\beta,l}^{\downarrow}$ is an increasing function of $\beta$ for all $k, l$ specified in Theorem 2 (see Proposition A8 in Appendix A for details). For simplicity, we restrict ourselves to the case of the Kullback-Leibler divergence; however, the proof is analogous for cost functions of the form of Equation (20) with $f$ differentiable and strictly convex on $[0,1]$ (so that $f'$ is strictly monotonically increasing and thus invertible on $[0,1]$; see [37] for the case of uniform priors).
Hence, for any $\beta' > 0$, the posteriors $(p_\beta)_{\beta < \beta'}$ of a bounded-optimal decision-making process with the Kullback-Leibler divergence as cost function can be regarded as the steps of a decision-making process (i.e., satisfying Equation (25)) with posterior $p_{\beta'}$, where each step optimally trades off utility against informational cost. This means that, with increasing $\beta$, the posteriors $p_\beta$ decrease uncertainty not only in the sense of the Kullback-Leibler divergence, but also in the sense of any other cost function.
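Proposition 3 can be probed numerically on the running example; the following sketch (Python, using the prior and utility assumed from Figure 11) verifies that the informational cost $D_{KL}(p_\beta \| q)$ grows monotonically along $\beta$, as required of a resource parameter:

```python
import math

def p_beta(q, U, beta):
    """Bounded-optimal posterior p_beta(x) = q(x) exp(beta U(x)) / Z_beta."""
    w = [qi * math.exp(beta * ui) for qi, ui in zip(q, U)]
    Z = sum(w)
    return [wi / Z for wi in w]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

q = [1/3, 1/2, 1/6]                   # prior from Figure 11
U = [0.8, 1.0, 0.4]                   # utility from Figure 11
betas = [0.0, 0.5, 1.0, 2.0, 4.0]
costs = [kl(p_beta(q, U, b), q) for b in betas]
```

At $\beta = 0$ the posterior stays at the prior (zero cost), and the cost increases strictly with each larger $\beta$.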
The important case of bounded-optimal decision-makers with informational costs is termed information-theoretic bounded rationality [14,18,62] and is studied more closely in the following sections.

3.2. Information-Theoretic Bounded Rationality

For bounded-optimal decision-making processes with informational costs, the unconstrained optimization problem in Equation (28) takes the form $\max_{p \in P_\Omega} F[p]$, where
$$F[p] := \mathbb{E}_p[U] - \frac{1}{\beta}\, D_{KL}(p\|q),$$
which has a unique maximum $p_\beta$, the bounded-optimal posterior, given by
$$p_\beta(x) = \frac{1}{Z_\beta}\, q(x)\, e^{\beta U(x)}$$
with normalization constant $Z_\beta$. This form can easily be derived by finding the zeros of the functional derivative of the objective functional in Equation (30) with respect to $p$ (with an additional normalization constraint), whereas the uniqueness follows from the strict convexity of the mapping $p \mapsto D_{KL}(p\|q)$. For the actual maximum of $F$, we obtain
$$F_\beta := \max_{p \in P_\Omega} F[p] = F[p_\beta] = \frac{1}{\beta} \log Z_\beta,$$
so that $p_\beta(x) = q(x)\, e^{\beta (U(x) - F_\beta)}$.
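The closed forms above can be verified directly; a small sketch (Python, with prior and utility values chosen only for illustration) confirms that the posterior of Equation (31) attains $F_\beta = \tfrac{1}{\beta}\log Z_\beta$ and that other candidate distributions achieve a lower Free Energy:

```python
import math

def free_energy(p, q, U, beta):
    """F[p] = E_p[U] - (1/beta) * D_KL(p || q), cf. Eq. (30)."""
    eu = sum(pi * ui for pi, ui in zip(p, U))
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return eu - kl / beta

beta = 5.0
q = [1/3, 1/2, 1/6]
U = [0.8, 1.0, 0.4]

w = [qi * math.exp(beta * ui) for qi, ui in zip(q, U)]
Z = sum(w)
p_opt = [wi / Z for wi in w]          # bounded-optimal posterior, Eq. (31)
F_max = math.log(Z) / beta            # closed-form maximum (1/beta) log Z_beta
```

Staying at the prior, or picking an arbitrary distribution, yields strictly less Free Energy than the bounded-optimal posterior.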
Due to its analogy with physics, in particular thermodynamics (see, e.g., [18]), the maximization of Equation (30) is known as the Free Energy principle of information-theoretic bounded rationality, pioneered in [14,18,62], further developed in [63,64], and applied in recent studies of artificial systems, such as generative neural networks trained by Markov chain Monte Carlo methods [65], or in reinforcement learning as an adaptive regularization strategy [66,67], as well as in recent experimental studies on human behavior [68,69]. Note that there is a formal connection between Equation (30) and the Free Energy principle of active inference [70]; however, as discussed in Section 6.3 of [64], the two Free Energy principles have conceptually different interpretations.
Example 4
(Bayes rule as a bounded-optimal posterior). In Bayesian inference, the parameter $\theta$ of the distribution $p_\theta$ of a random variable $Y$ is inferred from a given dataset $d = \{y_1, \dots, y_N\}$ of observations of $Y$ by treating the parameter itself as a random variable $\Theta$ with a prior distribution $q(\Theta)$. The parameterized distribution of $Y$ evaluated at an observation $y_i \in d$ given a certain value of $\Theta$, i.e., $p(y_i | \Theta = \theta)$, is then understood as a function of $\theta$, known as the likelihood of the datapoint $y_i$ under the assumption $\Theta = \theta$. After seeing the dataset $d$, the belief about $\Theta$ is updated by using Bayes rule
$$p(\theta) = \frac{q(\theta)\, p(d|\theta)}{\mathbb{E}_{q(\Theta)}[p(d|\Theta)]}.$$
This takes the form of a bounded-optimal posterior in Equation (31) with $\beta = N$ and utility function given by the average log-likelihood per datapoint,
$$U(\theta) := \frac{1}{N} \log p(d|\theta) = \frac{1}{N} \sum_{i=1}^N \log p(y_i|\theta),$$
since then Bayes rule reads
$$p(\theta) = \frac{1}{Z}\, q(\theta)\, e^{\beta U(\theta)}.$$
The corresponding Free Energy in Equation (30), which is maximized by Equation (32),
$$F[p(\Theta)] = \mathbb{E}_{p(\Theta)}[U(\Theta)] - \tfrac{1}{\beta}\, D_{KL}\bigl(p(\Theta)\|q(\Theta)\bigr) = \tfrac{1}{N}\, \mathbb{E}_{p(\Theta)}\Bigl[ \log p(d|\Theta) - \log \tfrac{p(\Theta)}{q(\Theta)} \Bigr] = -\tfrac{1}{N}\, D_{KL}\bigl(p(\Theta)\,\big\|\, q(\Theta)\, p(d|\Theta)\bigr),$$
coincides with the variational Free Energy $F_{var}$ from Bayesian statistics. Indeed, from Equation (33) it is easy to see that $F$ assumes its maximum when $p(\Theta)$ is proportional to $q(\Theta)\, p(d|\Theta)$, that is, when $p(\Theta)$ is given by Equation (32). In the literature, $F_{var}$ is used in the variational characterization of Bayes rule, in cases when the form of Equation (32) cannot be achieved exactly but instead is approximated by optimizing Equation (33) over the parameters $\vartheta$ of a parameterized distribution $p_\vartheta(\Theta)$ [71,72].
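The equivalence of Bayes rule and the bounded-optimal form can be checked on a toy problem (a sketch under assumed values of our own, a coin with two candidate biases; not from the paper): with $\beta = N$ and $U(\theta)$ the average log-likelihood, the exponential-family posterior of Equation (31) reproduces Bayes rule exactly:

```python
import math

thetas = [0.3, 0.7]                  # two candidate coin biases (assumed)
prior = [0.5, 0.5]                   # q(theta)
data = [1, 1, 0, 1]                  # observed flips, 1 = heads (assumed)
N = len(data)

def lik(theta, y):
    return theta if y == 1 else 1 - theta

# Standard Bayes rule: posterior proportional to prior times likelihood
w_bayes = [pr * math.prod(lik(th, y) for y in data)
           for th, pr in zip(thetas, prior)]
bayes = [v / sum(w_bayes) for v in w_bayes]

# Bounded-optimal form, Eq. (31): beta = N, U = average log-likelihood
U = [sum(math.log(lik(th, y)) for y in data) / N for th in thetas]
w_soft = [pr * math.exp(N * u) for pr, u in zip(prior, U)]
softmax = [v / sum(w_soft) for v in w_soft]
```

Since $e^{N U(\theta)} = p(d|\theta)$ by construction, the two computations agree up to floating-point rounding.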
In the following section, we show that the Free Energy F of a bounded rational decision-making process satisfies a recursivity property, which allows the interpretation of F as a certainty-equivalent.

3.3. The Recursivity of F and the Value of a Decision Problem

Consider a bounded-optimal decision-maker with an informational cost function deciding about a random variable $Z$ with values in $\Omega$ that is decomposed into the random variables $X$ and $Y$, i.e., $Z = (X, Y)$. This decomposition can be understood as a two-step process, where first a decision about a partition $A_i$ of the full search space $\Omega = \bigcup_{i \in I} A_i$ is made, represented by a random variable $X$ with values in $I$, followed by a decision about $Y$ inside the partition selected by $X$ (see Figure 6).
Since $p(Z) = p(X)\, p(Y|X)$, by the additivity of the Kullback-Leibler divergence (Proposition A7), we have
$$F[p(Z)] = \mathbb{E}_{p(Z)}[U(Z)] - \tfrac{1}{\beta} D_{KL}(p(Z)\|q(Z)) = \mathbb{E}_{p(X)}\Bigl[ \mathbb{E}_{p(Y|X)}[U(X,Y)] - \tfrac{1}{\beta} D_{KL}\bigl(p(Y|X)\,\|\,q(Y|X)\bigr) \Bigr] - \tfrac{1}{\beta} D_{KL}(p(X)\|q(X)),$$
and therefore, if $F_\beta[p(Y|X)] := \mathbb{E}_{p(Y|X)}[U(X,Y)] - \tfrac{1}{\beta} D_{KL}(p(Y|X)\|q(Y|X))$ denotes the Free Energy of the second step,
$$F[p(X)\,p(Y|X)] = \mathbb{E}_{p(X)}\bigl[ F_\beta[p(Y|X)] \bigr] - \tfrac{1}{\beta}\, D_{KL}(p(X)\|q(X)).$$
In particular, the Free Energy $F_\beta[p(Y|X)]$ of the second decision-step plays the role of the utility function of the first decision-step (see Figure 12). In Equation (34), the two decision-steps share the same resource parameter $\beta$, controlling the strength of the constraint on the total informational cost
$$D_{KL}(p(Z)\|q(Z)) = D_{KL}(p(X)\|q(X)) + \mathbb{E}_{p(X)}\bigl[ D_{KL}\bigl(p(Y|X)\,\|\,q(Y|X)\bigr) \bigr].$$
More generally, each step might have a separate information-processing constraint, which requires two resource parameters $\beta_1$ and $\beta_2$, and results in the total Free Energy
$$F[p(X), p(Y|X)] = \mathbb{E}_{p(X)}\bigl[ F_{\beta_2}[p(Y|X)] \bigr] - \tfrac{1}{\beta_1}\, D_{KL}(p(X)\|q(X)).$$
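The recursion can be verified numerically; the sketch below (Python, with arbitrary toy numbers of our choosing) computes the Free Energy once over the flat space $Z$ and once in the nested two-step form of Equation (34), using a single $\beta$:

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

beta = 2.0
pX, qX = [0.7, 0.3], [0.5, 0.5]                 # first step: choice of partition
pYX = [[0.9, 0.1], [0.4, 0.6]]                  # second step: p(Y | X = x)
qYX = [[0.5, 0.5], [0.5, 0.5]]
U = [[1.0, 0.2], [0.5, 0.8]]                    # U(x, y), toy values

# Flat Free Energy over Z = (X, Y)
pZ = [pX[x] * pYX[x][y] for x in range(2) for y in range(2)]
qZ = [qX[x] * qYX[x][y] for x in range(2) for y in range(2)]
UZ = [U[x][y] for x in range(2) for y in range(2)]
F_flat = sum(p * u for p, u in zip(pZ, UZ)) - kl(pZ, qZ) / beta

# Nested form: the inner Free Energies act as utilities of the first step
F_inner = [sum(pYX[x][y] * U[x][y] for y in range(2)) - kl(pYX[x], qYX[x]) / beta
           for x in range(2)]
F_nested = sum(pX[x] * F_inner[x] for x in range(2)) - kl(pX, qX) / beta
```

The agreement of the two computations is exactly the chain rule of the Kullback-Leibler divergence at work.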
Example 5.
Consider a bounded-rational decision-maker with informational cost function and a utility function $U$ defined on a set $\Omega = \{z_1, \dots, z_4\}$ with values as given in Figure 13, and an information-processing bound of 0.2 bits ($\beta \approx 0.9$). If we partition $\Omega$ into the disjoint subsets $\{z_1, z_2\}$ and $\{z_3, z_4\}$, then the decision about $Z$ can be decomposed into two steps, $Z = (X, Y)$, with the decision about $X$ corresponding to the choice of the partition and the decision about $Y$ given $X$ corresponding to the choice of $z_i$ inside the partition determined by $X$. According to Equation (34), the partition $X = x_i$ is chosen not according to the expected utility achieved inside each partition, but according to its Free Energy (see Figure 13).
Therefore, a bounded rational decision-maker that has the choice among several decision problems should ideally base its decision not on the expected utility that might be achieved, but on the Free Energy of the subordinate problems. In other words, the Free Energy quantifies the value of a decision problem that, besides the achieved average utility, also takes the information-processing costs into account.

3.4. Multi-Task Decision-Making and the Optimal Prior

Thus far, we have considered decision-making problems with utility functions defined on $\Omega$ only, modeling a single decision-making task. This is extended to multi-task decision-making problems by utility functions of the form $U : W \times \Omega \to \mathbb{R},\; (w, x) \mapsto U(w, x)$, where the additional variable $w \in W$ represents the current state of the world. Different world states $w$ in general lead to different optimal decisions $x^*(w) := \operatorname{argmax}_{x \in \Omega} U(w, x)$. For example, in a chess game, the optimal moves depend on the current board configuration the players are faced with.
The prior $q$ for a bounded-rational multi-task decision-making problem may or may not depend on the world state $w \in W$. In the first case, the multi-task decision-making problem is just a collection of single-task problems, i.e., for each $w \in W$, $q(X|W{=}w)$ and $p(X|W{=}w)$ are the prior and posterior of a bounded rational decision-making process with utility function $x \mapsto U(w, x)$, as described in the previous sections. In the case when there is a single prior for all world states $w \in W$, the Free Energy is
$$F[p(X|W)] = \mathbb{E}_{p(W)}\Bigl[ \mathbb{E}_{p(X|W)}[U(W,X)] - \tfrac{1}{\beta}\, D_{KL}\bigl(p(X|W)\,\|\,q(X)\bigr) \Bigr],$$
where $p(W)$ is a given world state distribution. Note that, for simplicity, we assume that $\beta$ is independent of $w \in W$, which means that only the average information-processing is constrained, in contrast to constraining the information-processing for each world state, which in general would result in $\beta$ being a function of $w$. Similar to single-task decision-making (Equation (31)), the maximum of Equation (35) is achieved by
$$p_\beta(x|w) = \frac{1}{Z_\beta(w)}\, q(x)\, e^{\beta U(w,x)}$$
with normalization constant $Z_\beta(w)$. Interestingly, the deliberation cost in Equation (35) depends on how well the prior was chosen to reach all posteriors without violating the processing constraint. In fact, viewing the Free Energy in Equation (35) as a function of both posterior and prior, $F[p(X|W)] = F[p(X|W), q(X)]$, and optimizing for the prior yields the marginal of the joint distribution $p(W,X) = p(W)\,p(X|W)$, i.e., the mean of the posteriors for the different world states,
$$q^*(X) := \operatorname*{argmax}_{q(X)} F[p(X|W), q(X)] = \mathbb{E}_{p(W)}[p(X|W)].$$
Similar to Equation (31), Equation (37) follows from finding the zeros of the functional derivative of the Free Energy with respect to $q(X)$ (modified by an additional term for the normalization constraint).
Optimizing the Free Energy $F[p(X|W), q(X)]$ with respect to both prior and posterior can be achieved by iterating Equations (36) and (37). This results in an alternating optimization algorithm, originally developed independently by Blahut and Arimoto to calculate the capacity of a memoryless channel [73,74] (see [75] for a convergence proof by Csiszár and Tusnády). Note that
$$F[p(X|W), q^*(X)] = \mathbb{E}_{p(W)\,p(X|W)}[U(W,X)] - \tfrac{1}{\beta}\, I(W;X),$$
in particular, the information-processing cost is now given by the mutual information $I(W;X)$ between the random variables $W$ and $X$. In this form, we can see that the Free Energy optimization with respect to prior and posterior is equivalent to the optimization problem of classical rate distortion theory [76], where $U$ is given by the negative of the distortion measure.
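The alternating updates can be sketched in a few lines (Python; the task matrix and world distribution are made-up stand-ins resembling the identification task of Section 4, not the paper's data): iterating the posterior step of Equation (36) and the marginal step of Equation (37) gives the Blahut–Arimoto recursion, and for large $\beta$ the resulting prior approaches the world state distribution:

```python
import math

def blahut_arimoto(U, pW, beta, iters=100):
    """Alternate posterior updates (Eq. (36)) and prior updates (Eq. (37))."""
    nW, nX = len(U), len(U[0])
    q = [1.0 / nX] * nX                        # start from a uniform prior
    for _ in range(iters):
        p = []
        for w in range(nW):                    # p(x|w) prop. to q(x) e^{beta U(w,x)}
            row = [q[x] * math.exp(beta * U[w][x]) for x in range(nX)]
            Z = sum(row)
            p.append([r / Z for r in row])
        # prior update: marginal of the joint p(W, X)
        q = [sum(pW[w] * p[w][x] for w in range(nW)) for x in range(nX)]
    return p, q

# Identification task: U(w, x) is non-zero only for the matching choice x = w
U = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
pW = [0.6, 0.3, 0.1]
p_post, q_opt = blahut_arimoto(U, pW, beta=50.0)
```

With abundant resources (large $\beta$), the posteriors become nearly deterministic and the optimal prior converges to the marginal $\mathbb{E}_{p(W)}[p(X|W)] \approx p(W)$, matching the discussion in Section 4.1.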
As in rate-distortion theory, where compression algorithms are analyzed with respect to the rate-distortion function, any decision-making system can now be analyzed with respect to informational bounded-optimality. More precisely, when plotting the achieved expected utility against the information-processing resources of a bounded-rational decision-maker with optimal prior, we obtain a Pareto-optimality curve that forms an efficiency frontier which cannot be surpassed by any decision-making process (see Figure 14c).

3.5. Multi-Task Decision-Making with Unknown World State Distribution

A bounded rational decision-making process with informational cost and utility $U : W \times \Omega \to \mathbb{R}$ whose optimal prior $q^*(X)$ is given by the marginal in Equation (37) requires perfect knowledge of the world state distribution $p(W)$. In contrast, here we consider the case when the exact shape of the world state distribution is unknown to the decision-maker and therefore has to be inferred from the already seen world states. More precisely, we assume that the world state distribution is parameterized by a parameter $\theta \in \mathbb{R}$, i.e., $p(W) = p_{\theta_{true}}(W)$ for a given parameterized distribution $p_\theta(W)$. Since the true parameter $\theta_{true}$ is unknown, $\theta$ is treated as a random variable itself, so that $p_\theta(W) = p(W|\Theta{=}\theta)$. After a dataset $d = \{w_1, \dots, w_N\} \in W^N$ of samples from $p(W|\Theta{=}\theta_{true})$ has been observed, the joint distribution of all involved random variables can be written as
$$p(\Theta, D, W, X) = p(\Theta)\, p(D|\Theta)\, p(W|\Theta)\, p(X|D,W),$$
where $p(\Theta)$ denotes the decision-maker's prior belief about $\Theta$, and $p(D{=}d|\Theta) = \prod_{i=1}^N p(w_i|\Theta)$ is the likelihood of the previously observed world states. Therefore, the resulting (multi-task) Free Energy (see Equation (35)) is given by
$$\mathbb{E}_{p(\Theta)\,p(D|\Theta)\,p(W|\Theta)}\Bigl[ \mathbb{E}_{p(X|D,W)}[U(W,X)] - \tfrac{1}{\beta}\, D_{KL}\bigl(p(X|D,W)\,\|\,q(X|D)\bigr) \Bigr].$$
It turns out that we obtain Bayesian inference as a byproduct of optimizing Equation (38) with respect to the prior $q(X|D)$. Indeed, by calculating the functional derivative with respect to $q(X|D)$ of the Free Energy in Equation (38), plus an additional term for the normalization constraint of $q(X|D)$ (with Lagrange multiplier $\lambda$), we see that any distribution $q(X|D)$ that optimizes Equation (38) must satisfy
$$\frac{1}{\beta}\, \mathbb{E}_{p(\Theta)\,p(W|\Theta)}\Bigl[ p(D|\Theta)\, \frac{p(X|D,W)}{q(X|D)} \Bigr] + \lambda = 0,$$
where $\lambda \in \mathbb{R}$ is chosen such that $q(X|D{=}d) \in P_\Omega$ for any $d \in W^N$. This is equivalent to
$$q(X|D) = \frac{1}{Z_D}\, \mathbb{E}_{p(\Theta)}\Bigl[ p(D|\Theta)\, \mathbb{E}_{p(W|\Theta)}\bigl[ p(X|D,W) \bigr] \Bigr],$$
where $Z_D$ denotes the normalization constant of $q(X|D)$, given by $Z_D = \mathbb{E}_{p(\Theta)}[p(D|\Theta)]$, since $\mathbb{E}_{p(X|D,W)}[1] = 1$ as well as $\mathbb{E}_{p(W|\Theta)}[1] = 1$. Therefore, we obtain
$$q(X|D) = \mathbb{E}_{p(\Theta|D)}\Bigl[ \mathbb{E}_{p(W|\Theta)}\bigl[ p(X|D,W) \bigr] \Bigr]$$
with p ( Θ | D ) as defined in Equation (39). Hence, we have shown
Proposition 4
(Optimality of Bayesian inference). The optimal prior $q(X|D)$ that maximizes Equation (38) is given by $q(X|D) = \mathbb{E}_{p(\Theta|D)\,p(W|\Theta)}\bigl[p(X|D,W)\bigr]$, where $p(\Theta|D)$ is the Bayes posterior
$$p(\Theta|D) := \frac{p(\Theta)\, p(D|\Theta)}{\mathbb{E}_{p(\Theta)}[p(D|\Theta)]}.$$

4. Example: Absolute Identification Task with Known and Unknown Stimulus Distribution

Consider a bounded rational decision-maker with a multi-task utility function $U$ such that, for each $w \in W$, $U(w, x)$ is non-zero for only one choice $x \in \Omega$, as shown in Figure 14. Here, the decision and world spaces are both finite sets of $N = 20$ elements. The world state distribution $p(W)$ is given by a mixture of two Gaussian distributions, as shown in Figure 14b. As some world states $w \in W$ are more likely than others, some choices $x \in \Omega$ are less likely to be optimal.

4.1. Known Stimulus Distribution

As can be seen in Figure 14c (dashed line), it is not ideal here to have a uniform prior distribution, $q(x) = \tfrac{1}{N}$ for all $x \in \Omega$. Instead, if the world state distribution is known perfectly and the prior has the form suggested by Equation (37), i.e., $q^*(x) = \sum_w p(w)\, p_\beta(x|w)$, then, as can be seen in Figure 14c (solid line), achieving the same expected utility as with a uniform prior requires fewer informational resources. In particular, the explicit form of $q^*$ depends on the resource parameter $\beta$; see Figure 14d. For low resource availability (small $\beta$), only the choices that correspond to the most probable world states are considered. However, for $\beta \to \infty$, we have
$$q^*(x) = \sum_w p(W{=}w)\, \delta_{w,x} = p(W{=}x),$$
because here $\lim_{\beta \to \infty} p_\beta(x|w) = \delta_{w,x}$ is the posterior of a rational decision-maker, where $\delta_{w,x}$ denotes the Kronecker delta (non-zero only if $w = x$). Hence, for decision-makers with abundant information-processing resources (large $\beta$), the optimal prior $q^*(X)$ approaches the world state distribution $p(W)$ (since here $W = \Omega$).

4.2. Unknown Stimulus Distribution

In the case when the decision-maker has to infer its knowledge about $p(W)$ from a set of samples $d = \{w_1, \dots, w_N\}$, we know from Section 3.5 that this is optimally done via Bayesian inference. Here, we assume a mixture of two Gaussians as the parameterization of $p(W)$, so that $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2)$, where $\mu_i$ and $\sigma_i$ denote the mean and standard deviation of the $i$th component, respectively (for simplicity, with fixed equal weights for the two mixture components).
In Figure 15a, we can see how different values of $N$ affect the belief about the world state distribution, $p(W|D) = \mathbb{E}_{p(\Theta|D)}[p(W|\Theta)]$, when $p(\Theta|D)$ is given by the Bayes posterior (39) with a uniform prior belief $p(\Theta)$. The resulting expected utilities (averaged over samples from $p(D|\theta_{true})$), as functions of the available information-processing resources, are displayed in Figure 15b, which shows how the performance of a bounded-rational decision-maker with optimal prior and perfect knowledge about the true world state distribution is approached by bounded rational decision-makers with limited but increasing knowledge given by the sample size $N$.
Abstractly, we can view Equation (39) as the bounded-optimal solution to the decision-making problem that starts with a prior $p(\Theta)$ and arrives at a posterior $p(\Theta|D{=}d)$ after processing the samples in $d = \{w_1, \dots, w_N\}$ (see also Example 4). In fact, the posteriors shown in Figure 15a satisfy the requirements for a decision-making process with resource given by the number of data points $N$, when averaged over $p(D)$. In particular, by increasing $N$, the posteriors contain less and less uncertainty with respect to the preorder $\prec$ given by majorization. Accordingly, if we plot the achieved expected utility against the number of samples, we obtain optimality curves similar to Figure 14c and Figure 15b. In Figure 16, we can see how Bayesian inference outperforms maximum likelihood when evaluated with respect to the average expected utility of a bounded-rational decision-maker with 2 bits of information-processing resources.

5. Discussion

In this work, we have developed a generalized notion of decision-making in terms of uncertainty reduction. Based on the simple idea of transferring pieces of probability between the elements of a probability distribution, which we call elementary computations, we have promoted a notion of uncertainty which is known in the literature as majorization, a preorder ≺ on $P_\Omega$. Taking non-uniform initial distributions into account, we extended the concept to the notion of relative uncertainty, which corresponds to relative majorization $\prec_q$. Even though a large amount of research has been done on majorization theory, from the early works [29,34,38] through further developments [27,30,31,32,36,77,78] to modern applications [39,40,41], there is a lack of results on the more general concept of relative majorization. This does not seem to be due to a lack of interest, as can be seen from the results in [28,57,58,79], but mostly because relative majorization loses some of the appealing properties of majorization, which makes it harder to deal with; for example, permutations no longer leave the ordering $\prec_q$ invariant, in contrast to the case of a uniform prior. This restriction does not, however, affect our application of the concept to decision-making, as permutations are not considered elementary computations, since they do not diminish uncertainty. By reducing the non-uniform to the uniform case, we managed to prove new results on relative majorization (Theorem 2), which then enabled new results in other parts of the paper (Example 3 and Propositions A6 and A8), and allowed an intuitive interpretation of our final definition of a decision-making process (Definition 7) in terms of elementary computations with respect to non-uniform priors (Definition 6).
More precisely, starting from stepwise elimination of uncertain options (Section 2.3), we have argued that decision-making can be formalized by transitions between probability distributions (Section 2.2), and arrived at the concept of decision-making processes traversing probability space from prior to posterior by successively moving pieces of probability between options such that uncertainty relative to the prior is reduced (Section 2.1). Such transformations can be quantified by cost functions, which we define as order-preserving functions with respect to $\prec_q$ and which capture the resource costs that must be expended by the process. We have shown (Propositions A1 and A6) that many known generalized entropies and divergences, which are examples of such cost functions (Examples 1 and 3), satisfy superadditivity with respect to coarse-graining. This means that under such cost functions, decision-making costs can potentially be reduced by a more intelligent search strategy, in contrast to Kullback-Leibler divergence, which was characterized as the only additive cost function (Proposition A7). There are plenty of open questions for further investigation in that regard. First, it is not clear under which assumptions on the cost functions $C_q$ superadditivity could be strengthened to $C_q(p) = \alpha D_{KL}(p\|q) + r(p,q)$ with $\alpha > 0$ and $r(p,q) \ge 0$. Additionally, it would be an interesting challenge to find sufficient conditions implying superadditivity that include more cost functions than f-divergences. The field of information geometry might give further insights on the topic, since there are studies in similar directions, in particular characterizations of divergence measures in terms of information monotonicity and the data-processing inequality [48,80,81,82]. One interesting result is the characterization of Kullback-Leibler divergence as the only divergence measure that is both an f-divergence and a Bregman divergence.
In Section 3, bounded-rational decision-makers were defined as decision-making processes that maximize utility under constraints on the cost function or, equivalently, minimize resource costs under a minimal utility requirement. In the important case of additive cost functions (i.e., proportional to Kullback-Leibler divergence), this leads to information-theoretic bounded rationality [14,18,62,63,64], which has precursors in the economic and game-theoretic literature [4,8,11,12,13,14,15,16,19,83,84,85]. We have shown that the posteriors of a bounded-rational decision-maker with increasing informational constraints trace a path in probability space that can itself be considered an anytime decision-making process, in each step perfectly trading off utility against processing costs (Proposition 3). In particular, this means that the path of a bounded-rational decision-maker with informational costs decreases uncertainty with respect to all cost functions, not just Kullback-Leibler divergence. We have also studied the role of the prior in bounded-rational multi-task decision-making, where we have seen that imperfect knowledge about the world state distribution leads to Bayesian inference as a byproduct, which is in line with the characterization of Bayesian inference as minimizing prediction surprise [86], but also demonstrates the wide applicability of the developed theory of decision-making with limited resources.
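In the additive case, the bounded-optimal posteriors take the Boltzmann form $p_\beta(x) \propto q(x)\, e^{\beta U(x)}$ (cf. Equation (31)). A small sketch (hypothetical prior and utilities) illustrates the perfect trade-off along the path: as β grows, expected utility and Kullback-Leibler cost both increase monotonically, i.e., additional utility is bought with additional information.

```python
import math

def boltzmann(q, U, beta):
    """Bounded-optimal posterior: p_beta(x) proportional to q(x) exp(beta U(x))."""
    w = [qi * math.exp(beta * u) for qi, u in zip(q, U)]
    z = sum(w)
    return [x / z for x in w]

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

q = [0.4, 0.3, 0.2, 0.1]   # hypothetical prior over four options
U = [1.0, 0.5, 0.2, 0.0]   # hypothetical utilities

utils, costs = [], []
for beta in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]:
    p = boltzmann(q, U, beta)
    utils.append(sum(pi * u for pi, u in zip(p, U)))
    costs.append(kl(p, q))

# beta = 0 recovers the prior (zero cost); both curves increase along the path
assert abs(costs[0]) < 1e-12
assert utils == sorted(utils) and costs == sorted(costs)
```

The monotone pair (utility, cost) traced out here is the optimality curve discussed around Figure 14c: each β marks a point where no more utility can be gained without spending more information.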
Finally, in Section 4, we have presented the results of a simulated bounded rational decision-maker solving an absolute identification task with and without knowledge about the world state distribution. Additionally, we have seen that Bayesian inference can be considered a decision-making process with limited resources by itself, where the resource is given by the number of available data points.

6. Conclusions

To our knowledge, this is the first principled approach to decision-making based on the intuitive idea of Pigou–Dalton-type probability transfers (elementary computations). Information-theoretic bounded rationality has been introduced by other axiomatic approaches before [8,62]. For example, in [62], a precise relation between rewards and information value is derived by postulating that systems will choose those states with high probability that are desirable for them. This leads to a direct coupling of probabilities and utility, where utility and information inherit the same structure, and only differ with respect to normalization (see [87] for similar ideas). In contrast, we assume utility and probability to be independent objects a priori that only have a strict relationship in the case of bounded-optimal posteriors. The approach in [8] introduces Kullback-Leibler divergence as disutility for decision control. Based on Hobson’s characterization [59], the authors argued that cost functions should be monotonic with respect to uniform distributions (the property in Equation (22)) and invariant under decomposition, which coincides with additivity under coarse-graining (see Examples 1 and 3). Both assumptions are special cases of our more general treatment, where cost functions must be monotonic with respect to elementary computations and are generally not restricted to being additive.
In the literature, there are many mechanistic models of decision-making that instantiate decision-making processes with limited resources. Examples include reinforcement learning algorithms with variable depth [88,89], Markov chain Monte Carlo (MCMC) models where only a certain number of samples can be evaluated [65,85,90], and evidence accumulation models that accumulate noisy evidence until either a fixed threshold is reached [91,92,93,94,95] or thresholds are determined dynamically by explicit cost functions depending on the number of allowed evidence accumulation steps [96,97]. Many of these concrete models may be described abstractly by resource parameterizations (Definition 5). More precisely, in such cases, the posteriors $\{p_r\}_{r \in I} \subseteq \Gamma \subseteq P_\Omega$ are generated by an explicit process with process constraints $\Gamma$ and resource parameter r. For example, in diffusion processes r may correspond to the amount of time allowed for evidence accumulation, in Monte Carlo algorithms r may reflect the number of MCMC steps, and in a reinforcement learning agent r may represent the number of forward simulations. If the resource restriction is described by a monotonic cost function $r \mapsto c_r$ [96,97], then the process can be optimized by a maximization problem of the form
$$\max_{r \in I,\; p \in \Gamma_r} \Big\{ \mathbb{E}_p[U] - c_r \Big\} \;=\; \max_{r \in I,\; p \in \Gamma_r} \Big\{ \mathbb{E}_p[U] \;\Big|\; c_r \le M \Big\} \;=\; \max_{p \in \Gamma} \Big\{ \mathbb{E}_p[U] \;\Big|\; C_q(p) \le M' \Big\},$$
where $M, M'$ are non-negative constants, $\Gamma_r \subseteq \Gamma$ denotes the subset of probability distributions with resource r, and $C_q$ denotes a cost function such that $r \mapsto C_q(p)$ for $p \in \Gamma_r$ is strictly monotonically increasing. In particular, such cases can also be regarded as bounded-rational decision-making problems of the form of Equation (26).
Bounded rationality models in the literature come in a variety of flavors. In the heuristics and biases paradigm, the notion of optimization is often dismissed in its entirety [7], even though decision-makers still must have a notion of options being better or worse, for example to adapt their aspiration levels in a satisficing scheme [98]. We have argued that, from an abstract normative perspective, we can formulate satisficing in probabilistic terms, such that one could investigate the efficiency of heuristics within this framework. Another prominent approach to bounded rationality is given by systems capable of decision-making about decision-making, i.e., meta-decision-making. Explicit decision-making processes composed of two decision steps have been studied, for example, in the reinforcement learning literature [88,89,99,100], where the first step is represented by a meta decision about whether a cheap model-free or a more expensive model-based learning algorithm is used in the second step. The meta step trades off the estimated utility of the second decision step against its decision-making costs. In the information-theoretic framework of bounded rationality, this could be seen as a natural property of multi-step decision-making and the recursivity property in Equation (34), from which it follows that the value of a decision-making problem is given by its free energy, which, besides the achieved utility, also takes the corresponding processing costs into account. A further prominent approach to bounded rationality is computational rationality [19], where the focus lies on finding bounded-optimal programs that solve constrained optimization problems presented by the decision-maker's architecture and the task environment. As described above, such architectural constraints could be represented by a process-dependent subset $\Gamma \subseteq P_\Omega$, and in fact our resource costs could be included in such a subset $\Gamma_r$ as well.
From this point of view, both frameworks would look for bounded-optimal solutions in that the search space is first restricted and then the best solution in the restricted search space is found. However, our search space would consist of distributions describing probabilistic input-output maps, whereas the search space of programs would be far more detailed.
The notion of decision-making presented in this work was developed from the basic concept of uncertainty reduction given by elementary computations, motivated by the simple idea of progressively eliminating options. On the one hand, it provides a promising theoretical playground that is open for further investigation (e.g., superadditivity of cost functions and minimality of relative entropy), potentially yielding new insights into the connection between the fields of rationality theory and information theory; on the other hand, it serves as a general framework to describe and analyze all kinds of decision-making processes (e.g., in terms of bounded-optimality).

Author Contributions

Conceptualization, S.G. and D.A.B.; Formal analysis, S.G.; Funding acquisition, D.A.B.; Methodology, S.G. and D.A.B.; Software, S.G.; Supervision, D.A.B.; Visualization, S.G.; Writing—original draft, S.G.; and Writing—review and editing, S.G. and D.A.B.

Funding

This research was funded by the European Research Council, grant number ERC-StG-2015-ERC, Project ID: 678082, “BRISC: Bounded Rationality in Sensorimotor Coordination”.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Proofs of Technical Results from Section 2 and Section 3

Proposition A1
(Superadditivity of generalized entropies under coarse-graining, Example 1). All cost functions of the form
$$C(p) = \sum_{i=1}^{N} f(p_i),$$
with f (strictly) convex and differentiable on [ 0 , 1 ] , and f ( 1 ) = 0 , are superadditive with respect to coarse-graining, that is
$$C(Z) \ge C(X) + C(Y|X)$$
whenever $Z = (X, Y)$, where $C(X) := C(p(X))$ and $C(Y|X) := \mathbb{E}_{p(X)}[C(p(Y|X))]$.
Proof. 
As shown in the following proof, strict convexity is not needed for superadditivity, but it is required for the definition of a cost function. First, since $\sum_i p_i = 1$, notice that we can always redefine the convex function f in Equation (A1) by $f_c(t) := f(t) - c(t-1)$ for an arbitrary constant $c \in \mathbb{R}$ without changing $C(p)$ for any $p \in P_\Omega$. Hence, without loss of generality, we may assume $f'(1) = 0$, so that $t = 1$ is a global minimum of f (since $f(t) \ge f(1) + (t-1)f'(1) = f(1)$ due to convexity). Since C is symmetric, superadditivity under coarse-graining is equivalent to
$$C(p_1, \dots, p_N) \;\ge\; C(p_1 + p_2, p_3, \dots, p_N) + (p_1 + p_2)\, C\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right).$$
This simply follows by induction, since Equation (A2) corresponds to the partitioning $\Omega = \bigcup_{j=1}^{N-1} A_j$ with $A_1 = \{x_1, x_2\}$ and $A_j = \{x_{j+1}\}$ for all $j = 2, \dots, N-1$ (see also [101], Proposition 2.3.5). In terms of f, Equation (A2) reads
$$f(p_1) + f(p_2) \;\ge\; f(p_1 + p_2) + (p_1 + p_2)\left[ f\!\left(\frac{p_1}{p_1+p_2}\right) + f\!\left(\frac{p_2}{p_1+p_2}\right) \right].$$
By setting $u = p_1 + p_2$ and $v = \frac{p_1}{p_1+p_2}$, this is equivalent to
$$f(uv) + f(u(1-v)) \;\ge\; f(u) + u\left[ f(v) + f(1-v) \right]$$
for all $u, v \in [0,1]$. Writing $F_v(u) := f(uv) + f(u(1-v)) - f(u) - u\,(f(v) + f(1-v))$ and noting that $F_v(0) \ge 0$ and $F_v(1) = 0$, it suffices to show that $F_v'(u) \le 0$: this implies that $F_v$ is monotonically decreasing from $F_v(0) \ge 0$ to $F_v(1) = 0$, and thus $F_v(u) \ge 0$ for all $u \in [0,1]$. We have, for all $u, v \in [0,1]$,
$$F_v'(u) = v f'(uv) + (1-v) f'(u(1-v)) - f'(u) - f(v) - f(1-v).$$
By the symmetry of $F_v$ under the replacement of v by $1-v$, without loss of generality we may assume that $v \le \frac{1}{2}$, so that $uv \le u(1-v) \le u$. Since f is convex, $f'$ is monotonically increasing on $[0,1]$, and thus $f'(uv) \le f'(u(1-v)) \le f'(u)$. In particular,
$$f'(u) = v f'(u) + (1-v) f'(u) \;\ge\; v f'(uv) + (1-v) f'(u(1-v)),$$
and thus, since $f(v) + f(1-v) \ge 0$, it follows that $F_v'(u) \le 0$, which completes the proof. □
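The key inequality (A2) can also be confirmed numerically; the sketch below uses the convex function $f(t) = t^2 - t$ (differentiable, with $f(1) = 0$; a Tsallis-type entropy up to scaling) and randomly drawn distributions:

```python
import random

f = lambda t: t * t - t  # convex, differentiable, f(1) = 0

def C(p):
    """Generalized entropy-type cost function C(p) = sum_i f(p_i), Eq. (A1)."""
    return sum(f(x) for x in p)

rng = random.Random(1)
for _ in range(500):
    xs = [rng.random() + 0.01 for _ in range(5)]
    s = sum(xs)
    p = [x / s for x in xs]
    u = p[0] + p[1]
    lhs = C(p)                                             # fine-grained cost
    rhs = C([u] + p[2:]) + u * C([p[0] / u, p[1] / u])     # two-step cost, Eq. (A2)
    assert lhs >= rhs - 1e-12
```

Equality is attained when the merged block carries all the probability mass ($u = 1$), consistent with the boundary case $F_v(1) = 0$ in the proof.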
Proposition A2
(Characterization of Shannon entropy, Example 1). If a cost function C is additive under coarse-graining, that is if C ( Z ) = C ( X ) + C ( Y | X ) with the notation from Proposition A1, then
$$C = -\alpha H$$
for some $\alpha \ge 0$, i.e., C is proportional to the negative Shannon entropy H.
Proof. 
Since uniform distributions over N options are majorized by uniform distributions over $N' < N$ options (see Equation (7)), it follows for any cost function C that the function f defined by $f(N) := -C\big(\frac{1}{N}, \dots, \frac{1}{N}\big)$ is monotonically increasing. Therefore, the claim follows directly from Shannon's proof [22], who showed that this monotonicity, additivity under coarse-graining, and continuity determine Shannon entropy up to a constant factor. □
Proposition A3
(Proposition 1). The uniform distribution $\big(\frac{1}{N}, \dots, \frac{1}{N}\big)$ is the unique minimal element in $P_\Omega$ with respect to ≺, i.e.,
$$\left(\frac{1}{N}, \dots, \frac{1}{N}\right) \prec p \qquad \forall\, p \in P_\Omega.$$
Proof. 
For the proof of Equation (A4), let $(\Pi_i)_{i=1}^N$ be the family of all cyclic permutations of the N entries of a probability vector p, such that
$$\Pi_1(p) = p, \quad \Pi_2(p) = (p_N, p_1, \dots, p_{N-1}), \quad \dots, \quad \Pi_N(p) = (p_2, \dots, p_N, p_1).$$
It follows that $\sum_{i=1}^N \Pi_i(p) = e$ for all $p \in P_\Omega$, where $e = (1, \dots, 1)$ as above, and therefore the uniform distribution $\frac{1}{N}e$ is given by a convex combination of permutations of p, so that (iv) in Theorem 1 implies $\frac{1}{N}e \prec p$. There are many different ways to prove uniqueness. A direct way is to assume that there exists another minimal element $q \in P_\Omega$, i.e., $q \prec p$ for all $p \in P_\Omega$, so that in particular $q \prec \frac{1}{N}e$, and hence by (iii) in Theorem 1 there exists a doubly stochastic matrix A with $\frac{1}{N}eA = q$. However, since $eA = e$ (A being doubly stochastic), it follows that $q = \frac{1}{N}e$. An indirect way would be to use that, if $q \prec p$ for all $p \in P_\Omega$, then from Example 1 we know that this implies $H(q) \ge H(p)$ for all $p \in P_\Omega$, where H denotes the Shannon entropy, simply because $-H$ is a cost function. In particular, q maximizes H and therefore coincides with the uniform distribution $\frac{1}{N}e$. □
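Minimality of the uniform distribution is also easy to verify directly with the standard partial-sum criterion for majorization (characterization (vi) of Theorem 1), as in this sketch with random distributions:

```python
import random

def majorized_by(a, b):
    """True if a ≺ b: partial sums of sorted(b) dominate those of sorted(a)."""
    sa = sorted(a, reverse=True)
    sb = sorted(b, reverse=True)
    ca = cb = 0.0
    for x, y in zip(sa, sb):
        ca += x
        cb += y
        if cb < ca - 1e-12:
            return False
    return True

n = 6
uniform = [1.0 / n] * n
rng = random.Random(2)
for _ in range(500):
    xs = [rng.random() for _ in range(n)]
    s = sum(xs)
    p = [x / s for x in xs]
    assert majorized_by(uniform, p)   # (1/N, ..., 1/N) ≺ p, Eq. (A4)
    assert majorized_by(p, p)         # ≺ is reflexive
```

The check works because the top-k entries of any sorted distribution sum to at least $k/N$, which is exactly the partial sum of the uniform distribution.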
Proposition A4
(Equivalence of ( i ) , ( i i i ) , ( v ) in Theorem 2). The following are equivalent
(i) 
$p' \prec_q p$, i.e., $p'$ contains more uncertainty relative to q than p (Definition 4).
(iii) 
$p' = pA$ for a q-stochastic matrix A, i.e., $Ae = e$ and $qA = q$.
(v) 
$\sum_{i=1}^{l-1} p_i'^{\downarrow} + a_q(k,l)\, p_l'^{\downarrow} \;\le\; \sum_{i=1}^{l-1} p_i^{\downarrow} + a_q(k,l)\, p_l^{\downarrow}$ for all $\alpha \sum_{i=1}^{l-1} q_i \le k \le \alpha \sum_{i=1}^{l} q_i$ and $1 \le l \le N$, where $a_q(k,l) := \big(k/\alpha - \sum_{i=1}^{l-1} q_i\big)/q_l$, and the arrows indicate that the entries are reordered such that $(p_i/q_i)_i$ is decreasing.
Proof. 
We use the fact that $\Lambda_q : P_\Omega \to P_{\tilde\Omega}$ has a left inverse $\Lambda_q^{-1} : \Lambda_q(P_\Omega) \to P_\Omega$ satisfying $\Lambda_q^{-1}\Lambda_q = I$, where I denotes the identity on $P_\Omega$. This can be verified by simply multiplying the corresponding matrices, given by
$$\Lambda_q^{-1} = \begin{pmatrix} 1 \cdots 1 & & \\ & \ddots & \\ & & 1 \cdots 1 \end{pmatrix}, \qquad \Lambda_q = \frac{1}{\alpha}\begin{pmatrix} \tfrac{1}{q_1} & & \\ \;\vdots & & \\ \tfrac{1}{q_1} & & \\ & \ddots & \\ & & \tfrac{1}{q_N} \\ & & \;\vdots \\ & & \tfrac{1}{q_N} \end{pmatrix},$$
where the $i$th row of the $N \times \alpha$ matrix $\Lambda_q^{-1}$ contains $|A_i|$ ones (in the columns belonging to the block $A_i$) and zeros elsewhere, the $i$th column of the $\alpha \times N$ matrix $\Lambda_q$ contains the value $\frac{1}{\alpha q_i}$ in the $|A_i|$ rows belonging to $A_i$ and zeros elsewhere, and $|A_1| + \dots + |A_N| = \alpha$,
and noting that, by definition, $\alpha q_i = |A_i|$.
Characterization (v) follows from (vi) of Theorem 1 and Definition 4, since $p' \prec_q p$ if and only if
$$\sum_{i=1}^{k} (\Lambda_q p')_i^{\downarrow} \;\le\; \sum_{i=1}^{k} (\Lambda_q p)_i^{\downarrow}$$
for all $1 \le k \le \alpha - 1$, from which (v) is an immediate consequence.
(i) ⇒ (iii): Assuming that $p' \prec_q p$, we have $\Lambda_q p' \prec \Lambda_q p$ and therefore, by (iii) in Theorem 1, there exists a doubly stochastic matrix B such that $\Lambda_q p' = B^T \Lambda_q p$. With $A^T := \Lambda_q^{-1} B^T \Lambda_q$, it follows that
$$Ae = \Lambda_q^T B (\Lambda_q^{-1})^T e = \Lambda_q^T B e = \Lambda_q^T e = e,$$
where we use that $(\Lambda_q^{-1})^T e = e$ and $\Lambda_q^T e = e$, which is easy to check, and $Be = e$ from B being a stochastic matrix. Note that, slightly abusing notation, here e is always the constant vector $(1, \dots, 1)$, irrespective of the number of its entries (N or α, depending on the space on which the operator is defined). Moreover, we have
$$A^T q = \Lambda_q^{-1} B^T \Lambda_q q = \frac{1}{\alpha} \Lambda_q^{-1} B^T e = \frac{1}{\alpha} \Lambda_q^{-1} e = q,$$
where we have used that $\Lambda_q q$ is by definition the uniform distribution on $P_{\tilde\Omega}$, i.e., $\Lambda_q q = \frac{1}{\alpha} e$, and therefore also $\Lambda_q^{-1} e = \alpha q$. In particular, since also $A_{ij} \ge 0$ ($B$, $\Lambda_q$, and $\Lambda_q^{-1}$ have non-negative entries), it follows that A is a q-stochastic matrix such that $p' = A^T p = pA$.
(iii) ⇒ (i): Similarly, if A is a q-stochastic matrix with $p' = pA$, then $\Lambda_q p' = B^T \Lambda_q p$, where $B := (\Lambda_q^{-1})^T A \Lambda_q^T$ satisfies $B^T e = \alpha \Lambda_q A^T q = \alpha \Lambda_q q = e$ and $Be = (\Lambda_q^{-1})^T A e = (\Lambda_q^{-1})^T e = e$, where we have used that $\Lambda_q^{-1} e = \alpha q$, $\Lambda_q q = \frac{1}{\alpha} e$, $\Lambda_q^T e = e$, and $(\Lambda_q^{-1})^T e = e$. In particular, since also $B_{ij} \ge 0$, B is doubly stochastic and therefore $\Lambda_q p' \prec \Lambda_q p$, i.e., $p' \prec_q p$. □
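For rational priors, the representation map $\Lambda_q$ used in the proof can be made concrete: each option i is split into $\alpha q_i$ copies of equal weight, the prior becomes uniform, and relative majorization reduces to ordinary majorization of the embedded vectors. A sketch (assuming the prior weights are exact fractions; requires Python 3.9+ for `math.lcm`):

```python
from fractions import Fraction
from functools import reduce
from math import lcm

q = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]   # rational prior
alpha = reduce(lcm, [x.denominator for x in q])        # |Omega~| = alpha = 4

def embed(p):
    """Lambda_q: split option i into alpha*q_i copies of weight p_i/(alpha*q_i)."""
    out = []
    for pi, qi in zip(p, q):
        copies = int(alpha * qi)
        out.extend([pi / copies] * copies)
    return out

# the prior itself embeds to the uniform distribution on alpha elements
assert embed([float(x) for x in q]) == [1.0 / alpha] * alpha

def majorized_by(a, b):
    """Partial-sum criterion: True if a ≺ b."""
    sa, sb = sorted(a, reverse=True), sorted(b, reverse=True)
    ca = cb = 0.0
    for x, y in zip(sa, sb):
        ca += x
        cb += y
        if cb < ca - 1e-12:
            return False
    return True

# q is minimal with respect to the relative order (cf. Proposition A5):
p = [0.7, 0.1, 0.2]
assert majorized_by(embed([float(x) for x in q]), embed(p))
```

Since the embedded prior is uniform, the minimality of q under $\prec_q$ is inherited from the minimality of the uniform distribution under ≺.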
Proposition A5
(Proposition 2). The prior $q \in P_\Omega$ is the unique minimal element in $P_\Omega$ with respect to $\prec_q$, that is, $q \prec_q p$ for all $p \in P_\Omega$.
Proof. 
Let $p \in P_\Omega$, and let $P := \Lambda_q p$ denote its representation in $P_{\tilde\Omega}$. Then, $Q := \Lambda_q q \prec P$ by Proposition A3 (uniform distributions are minimal), and therefore $q = \Lambda_q^{-1} Q \prec_q p$; in particular, q is minimal with respect to $\prec_q$. For uniqueness, let $p'$ be possibly another minimal element. Then, $p' \prec_q q$ and therefore, by (iii) in Theorem 2, there exists a q-stochastic matrix A with $p' = qA$. However, since A is q-stochastic, $qA = q$, and thus $p' = q$. □
Proposition A6
(Example 3: Superadditivity of f-divergences under coarse-graining). All relative cost functions of the form
$$C_q(p) = \sum_{i=1}^{N} q_i\, f\!\left(\frac{p_i}{q_i}\right),$$
with f (strictly) convex and differentiable on [ 0 , 1 ] , and f ( 1 ) = 0 , are superadditive with respect to coarse-graining, that is
$$C_q(Z) \ge C_q(X) + C_q(Y|X)$$
whenever $Z = (X, Y)$, where $C_q(X) := C_{q(X)}(p(X))$ and $C_q(Y|X) := \mathbb{E}_{p(X)}\big[C_{q(Y|X)}(p(Y|X))\big]$, which for cost functions that are symmetric with respect to permutations of $((p_i, q_i))_{i=1,\dots,N}$ (such as Equation (20)) is equivalent to
$$C_q(p) \;\ge\; C_{(q_1+q_2,\, q_3, \dots, q_N)}(p_1+p_2,\, p_3, \dots, p_N) + (p_1+p_2)\, C_{\left(\frac{q_1}{q_1+q_2}, \frac{q_2}{q_1+q_2}\right)}\!\left(\frac{p_1}{p_1+p_2}, \frac{p_2}{p_1+p_2}\right).$$
Proof. 
This is a simple corollary to Proposition A1, after establishing the following interesting property of cost functions of the form of Equation (A5):
$$C_q(p) = C_{\Lambda_q q}(\Lambda_q p),$$
where $\Lambda_q$ denotes the representation mapping defined in Section 2.3 that maps q to a uniform distribution on an elementary decision space $\tilde\Omega$ of $|\tilde\Omega| = \alpha$ elements, given by $(\Lambda_q p)(\omega) = \frac{1}{\alpha}\frac{p_i}{q_i}$ whenever $\omega \in A_i$, where $\{A_i\}_{i=1}^N$ is a disjoint partition of $\tilde\Omega$ such that $|A_i| = \alpha q_i$. Equation (A7) then follows from
$$C_{\Lambda_q q}(\Lambda_q p) = \sum_{\omega \in \tilde\Omega} (\Lambda_q q)(\omega)\, f\!\left(\frac{(\Lambda_q p)(\omega)}{(\Lambda_q q)(\omega)}\right) = \frac{1}{\alpha} \sum_{i=1}^{N} \sum_{\omega \in A_i} f\!\left(\frac{p_i}{q_i}\right) = \sum_{i=1}^{N} q_i\, f\!\left(\frac{p_i}{q_i}\right),$$
where we use that $\sum_{\omega \in A_i} 1 = |A_i| = \alpha q_i$. Hence, the case of a non-uniform prior reduces to the case of a uniform prior, which is covered by Proposition A1. □
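Property (A7), i.e., that an f-divergence is invariant under the $\Lambda_q$ embedding, can be verified directly for a rational prior (a sketch using the chi-square generator $f(t) = (t-1)^2$ as the example divergence):

```python
# Verify C_q(p) = C_{Lambda_q q}(Lambda_q p) for f(t) = (t - 1)^2, Eq. (A7).
f = lambda t: (t - 1.0) ** 2

def f_div(p, q):
    """f-divergence-type relative cost: sum_i q_i f(p_i / q_i)."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

q = [0.5, 0.25, 0.25]   # rational prior, alpha = 4
p = [0.6, 0.3, 0.1]
copies = [2, 1, 1]      # |A_i| = alpha * q_i copies per option

def embed(v):
    """Lambda_q: option i becomes copies[i] equal parts of total weight v_i."""
    out = []
    for vi, c in zip(v, copies):
        out.extend([vi / c] * c)
    return out

lhs = f_div(p, q)                    # non-uniform prior
rhs = f_div(embed(p), embed(q))      # uniform prior on the embedded space
assert abs(lhs - rhs) < 1e-12
```

Since the embedded prior is uniform, this invariance is exactly what reduces the non-uniform case of Proposition A6 to the uniform case of Proposition A1.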
Proposition A7
(Example 3: Characterization of Kullback-Leibler divergence). If $C_q$ is a continuous cost function relative to q that is additive under coarse-graining, that is, $C_q(Z) = C_q(X) + C_q(Y|X)$ in the notation of Proposition A6, then
$$C_q(p) = \alpha\, D_{KL}(p \| q)$$
for some $\alpha \ge 0$, where $D_{KL}(p \| q)$ denotes the Kullback-Leibler divergence, $D_{KL}(p \| q) = \sum_i p_i \log(p_i / q_i)$.
Proof. 
First, we show that any relative cost function that is additive under coarse-graining satisfies Equation (22), the monotonicity property for uniform distributions, where $f(M,N)$ denotes the cost $C_{u_N}(u_M)$ of a uniform distribution $u_M$ over M elements relative to a uniform distribution $u_N$ over $N \ge M$ elements. Once Equation (22) has been established, Equation (A8) goes back to a result by Hobson [59] (see also [8]), whose proof is a modification of Shannon's axiomatic characterization [22].
The first property in Equation (22) actually holds for all relative cost functions: For $q = u_N$ with $N = |\Omega|$, we have $p' \prec_q p$ if and only if $p' \prec p$, and thus the first property follows from Equation (7); the same is true in the case $N < |\Omega|$, since we always assume that $p, p'$ are absolutely continuous with respect to q, which allows to redefine Ω to contain only the N options covered by q.
For the proof of the second property in Equation (22), we let the random variable X index the partition $E_1, E_2$ of Ω, where $E_1$ denotes the support of $u_{N'}$ and $E_2 = \Omega \setminus E_1$ its complement, and Y represents the choice inside the selected partition $E_i$ given $X = i$. Letting $q(Z) = u_N$ and $p(Z) = u_{N'}$, it follows from additivity under coarse-graining that $C_{q(X)}(p(X)) = C_{u_N}(u_{N'}) = f(N', N)$, and letting $p(Z) = u_M$, we obtain
$$f(M, N) = C_{q(X)}(p(X)) + C(Y|X=1) = f(N', N) + f(M, N'),$$
since $p(X=1) = 1$, $p(X=2) = 0$, and $C(Y|X=1) = C_{u_{N'}}(u_M)$, and thus $f(M, N) \ge f(M, N')$. □
Proposition A8
(Proposition 3). If $(p_\beta)_{\beta \ge 0}$ is a family of bounded-optimal posteriors given by Equation (28) with cost function $C_q(p) = D_{KL}(p \| q)$, then β is a resource parameter with respect to any cost function; in particular,
$$q = p_0 \prec_q p_\beta \prec_q p_{\beta'} \qquad \forall\, \beta, \beta' \text{ with } \beta < \beta'.$$
Proof. 
Part of the proof generalizes a result in [37] to the case of a non-uniform prior q, by making use of our new characterization (v) of $\prec_q$, by which it suffices to show that $\beta \mapsto \sum_{i=1}^{l-1} p_{\beta,i}^{\downarrow} + a_q(k,l)\, p_{\beta,l}^{\downarrow}$ is an increasing function of β for all $k, l$ specified in Theorem 2. By Equation (31), we have
$$\partial_\beta\, p_{\beta,i} = \partial_\beta \left( \frac{1}{Z_\beta}\, q_i\, e^{\beta U_i} \right) = p_{\beta,i}\left( U_i - \mathbb{E}_{p_\beta}[U] \right),$$
where the $U_i$ are the decreasingly arranged utility values $U(x)$, $x \in \Omega$, so that $p_{\beta,i}/q_i$ is also arranged decreasingly. From the ordering of the $U_i$, it is easy to see that
$$\frac{\sum_{i=1}^{k} p_i U_i + p_{k+1} U_{k+1}}{\sum_{j=1}^{k} p_j + p_{k+1}} \;\le\; \frac{\sum_{i=1}^{k} p_i U_i}{\sum_{j=1}^{k} p_j}$$
with the notation $p_k := p_{\beta,k}$, from which it follows that $S_k := \sum_{i=1}^{k} \hat p_i U_i$, with $\hat p_i := p_i / \sum_{j=1}^{k} p_j$, is monotonically decreasing in k (with $S_N = \mathbb{E}_p[U]$), and therefore
$$\sum_{i=1}^{k} p_i \left( U_i - \mathbb{E}_p[U] \right) = \left( \sum_{i=1}^{k} \hat p_i U_i - \mathbb{E}_p[U] \right) \sum_{j=1}^{k} p_j = (S_k - S_N) \sum_{j=1}^{k} p_j \;\ge\; 0$$
for all $k \le N$. Hence, it suffices to show that $\sum_{i=1}^{l-1} x_i + t x_l \ge 0$ whenever $\sum_{i=1}^{k} x_i \ge 0$ for all k and $t \in [0,1]$. If $x_l \ge 0$, there is nothing to show, and if $x_l < 0$, we have $\sum_{i=1}^{l-1} x_i + t x_l \ge \sum_{i=1}^{l} x_i \ge 0$, which completes the proof of $p_\beta \prec_q p_{\beta'}$.
It remains to be shown that $p_{\beta'} \not\prec_q p_\beta$ for $\beta < \beta'$. This follows again from (v) in Theorem 2, more precisely from the requirement that $p_{\beta',1} \le p_{\beta,1}$ whenever $p_{\beta'} \prec_q p_\beta$, after establishing that $\beta \mapsto Z_\beta^{-1} e^{\beta U_1}$ is monotonically increasing. For the latter, note that, since the $U_i$ are ordered decreasingly, $\sum_{i=1}^{N} U_i\, q_i\, e^{\beta U_i} \le U_1 \sum_{i=1}^{N} q_i\, e^{\beta U_i}$, from which it follows that $\partial_\beta \big( Z_\beta^{-1} e^{\beta U_1} \big) \ge 0$, which completes the proof. □
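Proposition 3 can likewise be checked numerically: for a rational prior, embedding the Boltzmann posteriors via $\Lambda_q$ and comparing sorted partial sums confirms $q = p_0 \prec_q p_\beta \prec_q p_{\beta'}$ along increasing β (a sketch with hypothetical utilities):

```python
import math

q = [0.5, 0.25, 0.25]     # rational prior, alpha = 4
U = [1.0, 0.4, 0.0]       # hypothetical utilities, decreasingly arranged
copies = [2, 1, 1]        # |A_i| = alpha * q_i

def boltzmann(beta):
    """Bounded-optimal posterior p_beta,i = q_i exp(beta U_i) / Z_beta."""
    w = [qi * math.exp(beta * u) for qi, u in zip(q, U)]
    z = sum(w)
    return [x / z for x in w]

def embed(p):
    """Lambda_q: option i becomes copies[i] equal parts."""
    out = []
    for pi, c in zip(p, copies):
        out.extend([pi / c] * c)
    return out

def majorized_by(a, b):
    """Partial-sum criterion: True if a ≺ b."""
    sa, sb = sorted(a, reverse=True), sorted(b, reverse=True)
    ca = cb = 0.0
    for x, y in zip(sa, sb):
        ca += x
        cb += y
        if cb < ca - 1e-12:
            return False
    return True

prev = q  # beta = 0 recovers the prior
for beta in [0.5, 1.0, 2.0, 4.0]:
    p = boltzmann(beta)
    assert majorized_by(embed(prev), embed(p))  # p_prev ≺_q p_beta
    prev = p
```

Each step of the path is thus an uncertainty reduction with respect to the prior, so β indeed acts as a resource parameter for any order-preserving cost function.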

References

  1. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
  2. Simon, H.A. A Behavioral Model of Rational Choice. Q. J. Econ. 1955, 69, 99–118. [Google Scholar] [CrossRef]
  3. Russell, S.J.; Subramanian, D. Provably Bounded-optimal Agents. J. Artif. Intell. Res. 1995, 2, 575–609. [Google Scholar] [CrossRef]
  4. Ochs, J. Games with Unique, Mixed Strategy Equilibria: An Experimental Study. Games Econ. Behav. 1995, 10, 202–217. [Google Scholar] [CrossRef]
  5. Lipman, B.L. Information Processing and Bounded Rationality: A Survey. Can. J. Econ. 1995, 28, 42–67. [Google Scholar] [CrossRef]
  6. Aumann, R.J. Rationality and Bounded Rationality. Games Econ. Behav. 1997, 21, 2–14. [Google Scholar] [CrossRef]
  7. Gigerenzer, G.; Selten, R. Bounded Rationality: The Adaptive Toolbox; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  8. Mattsson, L.G.; Weibull, J.W. Probabilistic choice and procedurally bounded rationality. Games Econ. Behav. 2002, 41, 61–78. [Google Scholar] [CrossRef]
  9. Jones, B.D. Bounded Rationality and Political Science: Lessons from Public Administration and Public Policy. J. Public Adm. Res. Theory 2003, 13, 395–412. [Google Scholar] [CrossRef]
  10. Sims, C.A. Implications of rational inattention. J. Monet. Econ. 2003, 50, 665–690. [Google Scholar] [CrossRef]
  11. Wolpert, D.H. Information Theory—The Bridge Connecting Bounded Rational Game Theory and Statistical Physics. In Complex Engineered Systems: Science Meets Technology; Braha, D., Minai, A.A., Bar-Yam, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 262–290. [Google Scholar]
  12. Howes, A.; Lewis, R.L.; Vera, A. Rational adaptation under task and processing constraints: Implications for testing theories of cognition and action. Psychol. Rev. 2009, 116, 717–751. [Google Scholar] [CrossRef] [PubMed]
  13. Still, S. Information-theoretic approach to interactive learning. Europhys. Lett. 2009, 85, 28005. [Google Scholar] [CrossRef]
  14. Tishby, N.; Polani, D. Information Theory of Decisions and Actions. In Perception-Action Cycle: Models, Architectures, and Hardware; Cutsuridis, V., Hussain, A., Taylor, J.G., Eds.; Springer: New York, NY, USA, 2011; pp. 601–636. [Google Scholar]
  15. Spiegler, R. Bounded Rationality and Industrial Organization; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
  16. Kappen, H.J.; Gómez, V.; Opper, M. Optimal control as a graphical model inference problem. Mach. Learn. 2012, 87, 159–182. [Google Scholar] [CrossRef]
  17. Burns, E.; Ruml, W.; Do, M.B. Heuristic Search when Time Matters. J. Artif. Intell. Res. 2013, 47, 697–740. [Google Scholar] [CrossRef]
  18. Ortega, P.A.; Braun, D.A. Thermodynamics as a theory of decision-making with information-processing costs. Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 2013, 469, 20120683. [Google Scholar] [CrossRef]
  19. Lewis, R.L.; Howes, A.; Singh, S. Computational Rationality: Linking Mechanism and Behavior through Bounded Utility Maximization. Top. Cogn. Sci. 2014, 6, 279–311. [Google Scholar] [CrossRef]
  20. Acerbi, L.; Vijayakumar, S.; Wolpert, D.M. On the Origins of Suboptimality in Human Probabilistic Inference. PLoS Comput. Biol. 2014, 10, e1003661. [Google Scholar] [CrossRef]
  21. Gershman, S.J.; Horvitz, E.J.; Tenenbaum, J.B. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 2015, 349, 273–278. [Google Scholar] [CrossRef]
  22. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  23. Rényi, A. On Measures of Entropy and Information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561. [Google Scholar]
  24. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  25. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  26. Dalton, H. The Measurement of the Inequality of Incomes. Econ. J. 1920, 30, 348–361. [Google Scholar] [CrossRef]
  27. Marshall, A.W.; Olkin, I.; Arnold, B.C. Inequalities: Theory of Majorization and Its Applications, 2nd ed.; Springer: New York, NY, USA, 2011. [Google Scholar]
  28. Joe, H. Majorization and divergence. J. Math. Anal. Appl. 1990, 148, 287–305. [Google Scholar] [CrossRef]
Figure 1. Decision-making as search in a set of options. At the expense of more and more resources, the number of uncertain options is progressively reduced until x is the only remaining option.
Figure 2. Decision-making as utility optimization process.
Figure 3. A Pigou–Dalton transfer as given by Equation (3). The transfer of probability from a more likely to a less likely option increases uncertainty.
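The transfer in Figure 3 can be illustrated numerically. The following is a minimal sketch (the helper names are ours, not from the paper): moving probability ε from a more likely to a less likely option yields a new distribution with strictly higher Shannon entropy.

```python
import numpy as np

def pigou_dalton_transfer(p, i, j, eps):
    """Move probability eps from a more likely option i to a less
    likely option j (p[i] >= p[j]), without reversing their order."""
    assert p[i] >= p[j] and 0.0 <= eps <= (p[i] - p[j]) / 2.0
    q = np.array(p, dtype=float)
    q[i] -= eps
    q[j] += eps
    return q

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log 0 = 0 by convention
    return -np.sum(p * np.log(p))

p = np.array([0.5, 0.3, 0.2])
q = pigou_dalton_transfer(p, 0, 2, 0.1)   # -> [0.4, 0.3, 0.3]
assert shannon_entropy(q) > shannon_entropy(p)
```

The bound ε ≤ (p[i] − p[j])/2 keeps the two options' ordering intact, which is what makes the transfer uncertainty-increasing rather than a mere permutation.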
Figure 4. Comparability of probability distributions for N = 3. The region in the center consists of all p′ that are majorized by p, i.e., p′ ≺ p, whereas the outer region consists of all p′ that majorize p, p′ ≻ p. The bright regions are not comparable to p. (a) p = (1/3, 1/2, 1/6); (b) p = (1/2, 1/4, 1/4).
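The comparability regions in Figure 4 follow from the standard partial-sum criterion: p majorizes p′ iff every partial sum of the decreasingly sorted p dominates the corresponding partial sum of p′. A minimal sketch (function names are ours):

```python
import numpy as np

def majorizes(p, q):
    """True if p majorizes q (p is at least as 'concentrated' as q):
    partial sums of the decreasingly sorted p dominate those of q."""
    ps = np.sort(p)[::-1].cumsum()
    qs = np.sort(q)[::-1].cumsum()
    return bool(np.all(ps >= qs - 1e-12))

p = [1/2, 1/4, 1/4]
u = [1/3, 1/3, 1/3]   # uniform: majorized by every distribution
d = [1.0, 0.0, 0.0]   # degenerate: majorizes every distribution
assert majorizes(p, u) and majorizes(d, p)

# (1/2, 1/4, 1/4) vs. (0.45, 0.45, 0.10): neither majorizes the other,
# mirroring the bright "not comparable" regions in the figure.
r = [0.45, 0.45, 0.10]
assert not majorizes(p, r) and not majorizes(r, p)
```

Majorization is only a preorder, so incomparable pairs such as p and r are the rule rather than the exception once N ≥ 3.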
Figure 5. Examples of cost functions for decision spaces with three elements ( N = 3 ): (a) Shannon entropy; (b) Tsallis entropy of order α = 4 ; and (c) Rényi entropy of order α = 3.5 .
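The three cost functions of Figure 5 can be computed from their textbook definitions; a short sketch with the figure's parameters (function names are ours). All three are Schur-concave, so each ranks the uniform distribution as most uncertain.

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis(p, alpha):
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def renyi(p, alpha):
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

uniform = np.ones(3) / 3
peaked = np.array([0.9, 0.05, 0.05])
for H in (shannon, lambda p: tsallis(p, 4.0), lambda p: renyi(p, 3.5)):
    assert H(uniform) > H(peaked)
```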
Figure 6. Additivity under coarse-graining. If the cost for Z = ( X , Y ) is the sum of the costs for X and the cost for Y given X, then the cost function is proportional to Shannon entropy.
Figure 7. Representation of q and p by Q and P on Ω̃ (Example 2), such that the probabilities q_i and p_i are given by the probabilities of the partition sets A_i with respect to Q and P, respectively.
Entropy 21 00375 g007
Figure 8. Examples of cost functions for N = 3 relative to q = (1/3, 1/2, 1/6): (a) Kullback–Leibler divergence; (b) squared ℓ2 distance; and (c) Tsallis relative entropy of order α = 3.0.
Entropy 21 00375 g008
Figure 9. Monotonicity property in Equation (22): (a) the cost is higher when more uncertainty has been reduced; and (b) if the posterior is the same, then it is cheaper to start from a prior with fewer options.
Entropy 21 00375 g009
Figure 10. Pigou–Dalton transfer relative to q. A distribution p ∈ P_Ω is transformed relative to q by first moving some amount of weight ε ≥ 0 from P(A_n) to P(A_m), where n, m are such that P(ω) ≤ P(ω′) whenever ω ∈ A_m and ω′ ∈ A_n, with ε small enough that this relation remains true after the transformation, and then mapping the transformed distribution back to P_Ω by Λ_q^{−1} (see Definition 6).
Entropy 21 00375 g010
Figure 11. Paths of bounded-optimal decision-makers in P(Ω) for N = 3. The straight lines in the background denote level sets of expected utility, the solid lines are level sets of the cost functions, and the dashed curves represent the paths (p_β)_{β ≥ 0} of a bounded-optimal decision-maker given by Equation (28) with utility U = (0.8, 1.0, 0.4), prior q = (1/3, 1/2, 1/6), and cost functions given by: (a) Kullback–Leibler divergence; (b) Tsallis relative entropy of order α = 3; and (c) Burg relative entropy.
Entropy 21 00375 g011
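For the Kullback–Leibler cost in panel (a), the bounded-optimal trade-off between expected utility and information cost has a well-known closed form, p_β(x) ∝ q(x) exp(βU(x)). A sketch with the caption's U and q (assuming Equation (28) reduces to this softmax form in the KL case):

```python
import numpy as np

def bounded_optimal(q, U, beta):
    """Maximizer of E_p[U] - (1/beta) KL(p || q):
    p_beta(x) proportional to q(x) * exp(beta * U(x))."""
    w = q * np.exp(beta * U)
    return w / w.sum()

q = np.array([1/3, 1/2, 1/6])   # prior from the caption
U = np.array([0.8, 1.0, 0.4])   # utility from the caption

# beta = 0 reproduces the prior; large beta concentrates on argmax U.
assert np.allclose(bounded_optimal(q, U, 0.0), q)
assert bounded_optimal(q, U, 100.0).argmax() == U.argmax()
```

Sweeping β from 0 upward traces out a path from the prior toward the maximum-utility vertex, which is what the dashed curves in the figure depict.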
Figure 12. Recursivity of the Free Energy under coarse-graining. The decision about Z = ( X , Y ) is equivalent to a two-step process consisting of the decision about X and the decision about Y given X. The objective function for the first step is the Free Energy of the second step.
Figure 13. The Free Energy as certainty-equivalent (Example 5). (a) Utility function U as a function of z_i (top) and expected utilities for the coarse-grained partitions {z_1, z_2} and {z_3, z_4} corresponding to the choices x_1 and x_2, respectively, for a bounded-rational decision-maker with β = 0.9 (bottom). (b) The bounded-optimal probability distribution over x_i (top) does not correspond to the expected utilities in (a) but to the Free Energy of the second decision step, i.e., the decision about Y given X (bottom).
Entropy 21 00375 g013
Figure 14. Absolute identification task with known world state distribution: (a) utility function; (b) world state distribution (a mixture of two Gaussians); (c) expected utility as a function of information-processing resources for a bounded-optimal decision-maker with a uniform and with an optimal prior (the shaded region cannot be reached by any decision-making process); and (d) exemplary optimal priors q(X) for different information-processing bounds.
Entropy 21 00375 g014
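The optimal priors in panel (d) can be obtained by a Blahut–Arimoto-style alternation between the bounded-optimal posterior for each world state and the marginal over world states. This is an illustrative sketch under our own toy setup (the function name, the identity utility, and the world state distribution below are assumptions, not the paper's task):

```python
import numpy as np

def optimal_prior(U, rho, beta, iters=200):
    """Alternate between
       posterior  p(x|w) ∝ q(x) exp(beta * U[x, w])
       prior      q(x)   = sum_w rho(w) p(x|w)
    starting from a uniform q."""
    n_x, _ = U.shape
    q = np.ones(n_x) / n_x
    for _ in range(iters):
        p = q[:, None] * np.exp(beta * U)   # shape (n_x, n_w)
        p /= p.sum(axis=0, keepdims=True)   # normalize over actions x
        q = p @ rho                          # marginal over world states
    return q, p

# Toy identification task: unit reward for naming the world state.
U = np.eye(3)
rho = np.array([0.6, 0.3, 0.1])   # skewed world state distribution
q, p = optimal_prior(U, rho, beta=2.0)
assert abs(q.sum() - 1.0) < 1e-9
```

The resulting prior allocates more mass to frequently occurring world states, which is the qualitative behavior shown in panel (d).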
Figure 15. Absolute identification task with unknown world state distribution: (a) average of inferred world state distributions for different dataset sizes N (standard deviations across datasets indicated by error bars); and (b) resulting utility-information curves of a bounded-rational decision-maker with optimal prior that has to infer the world state distribution from datasets of different sizes N.
Entropy 21 00375 g015
Figure 16. Optimality curve given by Bayesian inference. The average expected utility as a function of N achieved by a bounded-rational decision-maker that infers the world state distribution with Bayes' rule in Equation (39) forms an efficiency frontier that cannot be surpassed by any other inference scheme, such as maximum likelihood, when starting from the same prior belief about the world.
Entropy 21 00375 g016