Article

Posteriors in Limited Time

by Ayan Bhattacharya 1,2
1 Booth School of Business, University of Chicago, Chicago, IL 60637, USA
2 Arrow Markets, Subharmonic Technologies, Ft Lauderdale, FL 33304, USA
AppliedMath 2022, 2(4), 700-710; https://doi.org/10.3390/appliedmath2040041
Submission received: 27 September 2022 / Revised: 17 November 2022 / Accepted: 5 December 2022 / Published: 12 December 2022

Abstract
This paper obtains a measure-theoretic restriction that must be satisfied by a prior probability measure for posteriors to be computed in limited time. Specifically, it is shown that the prior must be factorizable. Factorizability is a set of independence conditions for events in the sample space that allows agents to calculate posteriors using only a subset of the dataset. The result has important implications for models in mathematical economics and finance that rely on a common prior. If one introduces the limited time restriction to Aumann’s famous Agreeing to Disagree setup, one sees that checking for factorizability requires agents to have access to every event in the measure space, thus severely limiting the scope of the agreement result.

1. Introduction

In most economic models that study learning and interactive behavior, the time needed by agents for their computation plays no formal role. Increasingly, however, such models are applied in situations where the constraint on time is critical. Arguably, for a high-frequency trader placing orders in microseconds, or an online advertiser bidding in a Google pay-per-click auction, the most important constraint is the limitation on time. The main results in this paper derive a measure-theoretic condition, called factorizability, on the prior probability measure that enables the computation of posteriors despite limited time, and discuss the counterintuitive implications of the condition for interactive, multi-agent decision-making. When the prior is factorizable, a dataset can be divided up into smaller blocks such that Bayesian inference in one block is independent of outcomes in the other blocks, allowing for quick computation. However, an agent needs access to the finest-grained partition of the dataset to ascertain factorizability, and this constraint limits the scope of many results in interactive decision-making that rely on different agents using different coarsenings.
Factorizability formalizes the restrictions that come with limited time and is especially relevant for models that rely on common priors and learning. In a celebrated result, Aumann [1] showed that agents with a common prior cannot agree to disagree. Suppose Alice and Bob are Bayesians who have the same prior probabilities over states of the world to begin with, and then they obtain (possibly different) information about an event. Aumann showed that if the posteriors calculated by Alice and Bob for the event are common knowledge, they must be equal, even if the posteriors are based on different information.
An implicit assumption in results such as Aumann’s is that all calculations are instantaneous; that is to say, the time needed by agents for their calculations has no formal role in the model. Yet, typical practical applications of such results are to situations such as trading, where time plays a critical role. Even when the situation is less acute, limitations on time constitute an important constraint in real-world economic interactions. When a realtor shows a home to Alice and Bob, he expects a reply within a stipulated amount of time. If Alice needs longer than the given time to calculate her price, the agreement result automatically becomes irrelevant for the transaction. How should one formally account for limited time in agreement-type results, and what are the game-theoretic implications when agents have limited time? It is these inter-related questions that provide a practical motivation for the paper.
Limitations on time do not matter in traditional models for two reasons. First, the probability measures are taken to be mappings from the space of all measurable events. This means that equipping an agent with a prior is tantamount to handing her a giant look-up table that lists the probability for every conceivable event—no computations needed. This is seldom true in the real world. Particularly in the case of objective probabilities derived from historical databases, the starting point is usually probabilities of only elementary events. In keeping with this reality, we assume here that probability measures are mappings from the sample space, and agents need to actually compute the probabilities of composite events. The second reason is that all computations are implicitly assumed to be instantaneous. Once again, this is unrealistic because—as computer science repeatedly reminds us—all computations need the manipulation of a physical medium, and that takes time. Computer scientists have developed elaborate models of computation over the years, but for the purposes of the current analysis, it will be enough to assume that each elementary computation (addition, subtraction, multiplication, division) takes a non-zero, finite time. Limited time means that agents do not have sufficient time to complete the posterior computation in the regular manner.
In the case of the agreement theorem, all agents are Bayesian and use the Bayes formula for arriving at the posterior. Thus, limited time implies that agents do not have enough time to complete the Bayesian posterior computation using the full, available dataset. However, is it always necessary to use the complete dataset to arrive at the posterior? Going back to our home sale example, the ubiquity of the internet might mean that the prospective home buyers Alice and Bob have access to data about every major home sale in every major city of the world for the last decade. Going through all this data is indeed a daunting task. Yet, most transactions in such a dataset will have no bearing on the particular home deal that Alice and Bob are prospecting. In the language of probability theory, the particular home deal they are prospecting is independent of most other transactions in the dataset, and this is what makes a probability calculation feasible.
In technical terms, the condition that is needed to make the calculation of Bayesian posteriors feasible in limited time is the factorizability of the prior measure. Factorizability is a condition that generalizes the notion of independence from single random variables to sets of random variables. Without factorizability, every random variable might be dependent on every other random variable in the data set; thus, one has to work with the complete dataset for calculating posteriors. This is not feasible in limited time, especially when datasets are large. When the prior is factorizable, on the other hand, we can divide up the dataset into smaller blocks such that Bayesian inference in one block is independent of outcomes in the other blocks. In the home sale example, despite the reams of sales data available on the internet, the only block that might be relevant for Alice and Bob could be the sales price for homes in the last six months in the particular neighborhood they are prospecting. Thus, factorizability of the prior is a necessary condition for the agreement result to hold up in limited time.
The key question that arises in light of the above discussion concerns the conditions under which agents can determine the factorizability of the common prior. Recall that two random variables are independent only when all the events associated with the variables are independent. A similar condition needs to be verified for factorizability. In other words, agents must have access to the outcome of every possible event in the measure space. In a partitional information structure, this means that agents must be endowed with the finest possible partition of the sample space to verify factorizability. This brings us to the central predicament with results such as the agreement theorem when there is limited time. Aumann’s theorem acquires its surprising power only when agents can be endowed with arbitrary partitions; different partitions mean that agents obtain different information, and a condition that guarantees the equality of the posteriors despite different information is both surprising and powerful. With limited time, however, agreement-type results no longer have this strength. Verifying factorizability requires all agents to have the finest possible partition; thus, they all obtain the same, perfect information, and the equality of the posteriors is then a trivial consequence of Bayesian updating. The conclusion is that the agreement theorem, in its original form, is rather narrow in scope when there is limited time.
This paper does not explore approximate Bayesian inference. There is a vast body of literature in statistics and econometrics on approximate Bayesian inference, but this paper does not speak directly to that literature. The focus in this paper is on deriving a condition on the prior probability measure and sample space that renders exact Bayesian updates feasible, and furthermore, the form of the condition derived is tailored towards applications in game theory and finance such as Aumann’s agreement theorem.
The rest of the paper is organized as follows. Section 2 provides a literature review. Section 3 introduces the model, and Section 4 discusses the significance of factorization and independence for posterior calculation. Section 5 recasts the agreement theorem under limited time, pointing out its limitations, and Section 6 concludes.

2. Literature Review

This paper contributes to two strands of literature in mathematical economics. The first strand is the theory of bounded rationality. In the bounded rationality literature, one starts by relaxing some feature of rationality, and the model then generates conclusions that are at variance with a rational model. In contrast, this paper explores the limits of Bayesian rationality, and our model shows theoretically when rationality becomes ineffective. In this sense, analysis of the kind presented in this paper serves as a precursor to a mathematical study of bounded rationality. Questions such as what happens when there is limited time and agents have arbitrary partitional information structures are now clearly understood as subject matter for bounded rationality models. Depending on the situation, one could have agents misperceive correlations in data (see, for example, Spiegler [2]) and deploy a probabilistic correlation representation (Ellis and Piccione [3]), use model-based inference (Mailath and Samuelson [4]) where the models are subjective (Esponda and Pouzo [5]), use incomplete theories (Eyster and Piccione [6], Eyster et al. [7]), misspecify the information of other agents (Bohren [8]), use analogies (Jehiel [9], Jehiel and Koessler [10], Jehiel [11]) or employ other forms of bounded rationality, and then check for agreement (for broad surveys of the theoretical bounded rationality literature, see Rubinstein [12] and Spiegler [13]). Traditionally, the justification for such bounded rationality models has come only from experiments (where one verifies the particular features of rationality that have been relaxed) or empirics (where one verifies the conclusions that are at variance with the rational model). Therefore, a methodological contribution of this paper is that it provides a technique to obtain a theoretical boundary for rationality, beyond which one has to operate with some form of boundedness or other.
The paper also contributes to the literature on agreement-type results (Aumann [1], Milgrom and Stokey [14], Geanakoplos and Polemarchakis [15], McKelvey and Page [16], Samet [17], Heifetz [18], Samet [19], Lehrer and Samet [20], Chen et al. [21], Shaiderman [22], Di Tillio et al. [23]) in game theory, finance and mathematical economics. Our study shows that the scope of such agreement results is quite restricted when one operates under limited time.
Data reduction techniques have a long history in statistics and computer science, and the notion of data reduction that we develop is related to the notions of conditional independence (Dawid [24]) and sufficiency statistics with nuisance parameters (Fraser [25]). The connections are explored in Section 4, and more references to this literature are provided in that section.

3. Model

Let $(\Omega, \mathcal{F})$ be a measurable space. $\Omega$ is assumed to be a finite product set. That is, $\Omega = \times_{u \in U} \Omega_u$, where $U \subset \mathbb{N}$ is a finite index set, and each $\Omega_u$ is finite. Finiteness implies $\mathcal{F} = \wp(\Omega)$, the set of all subsets of $\Omega$, and we use $\wp(\Omega)$ to denote the powerset of $\Omega$. Each $(\Omega_u, \wp(\Omega_u))$ comes equipped with a set of random variables $\mathcal{X}_u$, and we let $X_u$ denote a representative member of this set. Each $X_u$ is a measurable map from $\Omega_u$ to $\mathbb{R}$. The interpretation is that $(\Omega, \mathcal{F})$ is the space of possible outcomes of an experiment, and the realizations of the random variables $\{X_u\}_{u \in U}$ are potential observables about (parts of) the experiment.
The agents who are reasoning about the experiment have a common prior $\pi$. In contrast to the traditional approach, we shall assume that the probability measure $\pi$ is a mapping from $\Omega$ to $[0,1]$. Since $\Omega$ is finite, this definition of the measure is technically equivalent to the traditional definition.
Assumption 1.
The probability measure $\pi$, defined on the finite measurable space $(\Omega, \mathcal{F})$, is a mapping from $\Omega$ to $[0,1]$.
Another implicit assumption often made in the traditional approach is that computations do not take tangible time; thus one deems two separate arguments involving different lengths of computations equivalent as long as they reach the same logical conclusion. In the real world, every computation needs a physical medium and takes time, even if that time is small, and we assume that every elementary operation (addition, subtraction, multiplication, division) takes a non-zero, finite time. The precise time needed for an elementary operation will not be important for the arguments of the paper, but we will need the total time for a calculation to be an increasing function of the number of elementary operations involved.
Assumption 2.
Every elementary operation—addition, subtraction, multiplication, and division—takes a non-zero, finite time, and the total time for a calculation involving the elementary operations is an increasing function of the number of elementary computations.
All agents in the model are Bayesian rational in the usual sense. In other words, agents use the Bayes formula to calculate their posteriors, and they calculate under the correct probability measure.

4. Factorizability

Suppose an agent obtains information $X_b = \beta$ for some $b \in U$. To obtain the posterior of an event, say $X_a = \alpha$ for some $a \in U$, given this information, the agent needs to compute

$$P_\pi(X_a = \alpha \mid X_b = \beta) = \frac{\sum_{\omega \in \Omega_{U \setminus \{a,b\}}} P_\pi\big(X_a^{-1}(\alpha),\, X_b^{-1}(\beta),\, \omega\big)}{\sum_{\omega \in \Omega_{U \setminus \{b\}}} P_\pi\big(X_b^{-1}(\beta),\, \omega\big)}, \qquad (1)$$

where

$$\Omega_{U \setminus \{a,b\}} = \times_{v \in U \setminus \{a,b\}} \Omega_v, \qquad \Omega_{U \setminus \{b\}} = \times_{w \in U \setminus \{b\}} \Omega_w.$$
It is easy to see that the number of elementary operations in Equation (1) grows exponentially in the cardinality of the set $U$. When $U$ has cardinality $m$ and each $\Omega_w$ is binary, the number of additions in the denominator is of the order of $2^{m-1}$. Furthermore, there are more additions to be performed in the numerator, and then a further division must be performed to arrive at the final result. Thus, when the time available for computation is limited, one needs restrictions on the probability space to make the calculation in (1) feasible for a Bayesian rational agent. Intuitively, the reason the calculation in (1) takes a long time is that it requires us to track the states of all the random variables defined on the space. This, however, becomes unnecessary when the variables are independent. The propositions in this section obtain the minimal independence condition that has to be imposed on the variables for the calculation in (1) to be feasible, given limited time.
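To make the cost concrete, here is a minimal Python sketch of the computation in Equation (1). The setup (a uniform prior over twelve binary coordinates, with $X_u$ taken to be the $u$-th coordinate of the sample point) is an illustrative assumption, not part of the paper's model; the point is that the sums enumerate the full product space, whose size is exponential in $|U|$.

```python
from itertools import product

# Illustrative setup (assumed): |U| = 12 binary coordinates, and a joint
# prior pi represented as a dictionary mapping each sample point (a tuple)
# to its probability. Here the prior is uniform for simplicity.
U = list(range(12))
prior = {omega: 1 / 2 ** len(U) for omega in product([0, 1], repeat=len(U))}

def posterior_bruteforce(prior, a, alpha, b, beta):
    """P(X_a = alpha | X_b = beta) by full enumeration, as in Equation (1).

    X_u is simply the u-th coordinate, so the events X_a = alpha and
    X_b = beta fix coordinates a and b; both sums run over all 2**12 states.
    """
    numerator = sum(p for omega, p in prior.items()
                    if omega[a] == alpha and omega[b] == beta)
    denominator = sum(p for omega, p in prior.items() if omega[b] == beta)
    return numerator / denominator

print(posterior_bruteforce(prior, a=0, alpha=1, b=1, beta=0))  # 0.5 here
```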
To formalize ideas, say it takes $T$ units of time for the complete calculation in Equation (1). Denote by $|U|$ the cardinality of the set $U$. As we saw above, $T$ is exponential in $|U|$ when each elementary operation takes finite time. When we say limited time, we mean that an agent has $\tau < T$ units of time for the calculation, where $\tau$ is polynomial in the size of $U$. Namely,

$$\tau = c \cdot |U|^k < T, \qquad (2)$$

where $c$ and $k$ are positive constants. The objective is to find the minimal restrictions that have to be imposed on the probability space such that, for any $U$, the calculation in (1) always completes within $\tau$.
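For a sense of scale (an illustrative calculation, not from the paper): with $|U| = 40$ and binary $\Omega_u$, the denominator of Equation (1) alone sums $2^{39} \approx 5.5 \times 10^{11}$ terms, whereas a budget of the form (2) with, say, $c = 1$ and $k = 2$ allows only $40^2 = 1600$ elementary operations.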
As indicated earlier, these restrictions have to do with the independence relations among random variables on the sample space. Intuitively, if random variables are independent, one can ignore irrelevant parts of the sample space, thus speeding up calculations such as Equation (1). For example, with two independent coin tosses, one does not need to bother with the outcome of the first coin toss to calculate the probabilities for the second. Before getting to the main result of this section on the factorization of measures, we prove a preparatory proposition below. The proposition says that a necessary condition for the Bayesian posterior calculation, given limited time, is that the random variables needed for inference are independent of a non-empty subset of the variables in $U$. This is a version of conditional independence among the variables; recall that random variables $Y$ and $Z$ are conditionally independent given the random variable $W$ if $P(Y, Z \mid W) = P(Y \mid W) \cdot P(Z \mid W)$, equivalently expressed as $P(Y \mid Z, W) = P(Y \mid W)$. (See Dawid [24] and Pearl [26] for more on conditional independence.) It is not too hard to see the reasoning behind the result of the proposition: if there were no such subset of variables, all $|U|$ variables would be needed in the calculation of Equation (1), which would then take $T > \tau$.
Given a set $D$, for any two random variables $X_f$ and $X_g$, the notation $X_f \perp_\pi X_g \mid \{X_d\}_{d \in D}$ means that $X_f$ is independent of $X_g$ under the probability measure $\pi$, conditional on knowledge of the random variables $\{X_d\}_{d \in D}$. The symbol $\subseteq$ is used for subset or equality, while $\subset$ is reserved for the strict subset.
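As a quick numerical check of conditional independence (the joint distribution below is a made-up example, not from the paper), the following Python snippet builds variables $Y$ and $Z$ that are dependent marginally but conditionally independent given $W$, and verifies $P(Y \mid Z, W) = P(Y \mid W)$:

```python
# Assumed example: W is a fair bit; given W, the bits Y and Z are drawn
# independently with P(1) = 0.9 if W = 1 and P(1) = 0.2 if W = 0.
p1 = {1: 0.9, 0: 0.2}
joint = {(y, z, w): 0.5 * (p1[w] if y else 1 - p1[w])
                        * (p1[w] if z else 1 - p1[w])
         for y in (0, 1) for z in (0, 1) for w in (0, 1)}

def cond(event, given):
    """P(event | given) computed from the joint distribution."""
    num = sum(p for o, p in joint.items() if event(o) and given(o))
    den = sum(p for o, p in joint.items() if given(o))
    return num / den

# P(Y=1 | Z=1, W=1) equals P(Y=1 | W=1), as the definition requires.
print(cond(lambda o: o[0] == 1, lambda o: o[1] == 1 and o[2] == 1))  # 0.9
print(cond(lambda o: o[0] == 1, lambda o: o[2] == 1))                # 0.9
```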
Proposition 1.
Given limited time, the posterior $P_\pi(X_a = \alpha \mid X_b = \beta)$ can be calculated by the Bayes formula only if there are sets $D_a, D_b$, with $D_a \subseteq D_b \subset U$, such that
(i)
$X_a \perp_\pi X_g \mid \{X_i\}_{i \in D_a}$, for all $\{X_f\}_{f \in U \setminus D_a}$,
(ii)
$X_b \perp_\pi X_g \mid \{X_j\}_{j \in D_b}$, for all $\{X_g\}_{g \in U \setminus D_b}$.
Proof. 
Note that the posterior is calculated by the Bayes formula only if the summation in the numerator of Equation (1) is over some $R_1 \subseteq S_1 = \times_{i \in U \setminus \{a,b\}} \Omega_i$; the summation in the denominator is over some $R_2 \subseteq S_2 = \times_{j \in U \setminus \{b\}} \Omega_j$; and $R_2 \subseteq R_1 \times \Omega_a$.
Next, observe that each additional variable $X_n$ increases the number of terms that need to be summed for the calculation by a factor of $|\Omega_n|$, and this renders the overall calculation time exponential. The calculation is rendered non-exponential, and polynomial, only if, beyond some point, we are indifferent to the realized value of $X_n$, in which case $|\Omega_n|$ becomes effectively 1 for the purposes of the calculation. Thus, we need

$$R_1 = \times_{i \in M \subset U} \Omega_i, \qquad R_2 = \times_{j \in L \supseteq M} \Omega_j, \qquad R_2 \subseteq R_1 \times \Omega_a, \qquad |L| \leq |M| + 1, \qquad |M| < |U| - 1. \qquad (3)$$

In the last inequality above, we have $|U| - 1$ on the right-hand side because the value of $X_b$ is fixed in the denominator of Equation (1), while in the previous inequality, we have $|M| + 1$ on the right-hand side because the values of both $X_a$ and $X_b$ are fixed in the numerator of the equation. Designating $D_b = L$ and $D_a \cap D_b = M$ gives us the result. □
The result in the proposition provides a conditional independence condition among the set of random variables that must be satisfied whenever the Bayes formula is deployed successfully for the calculation of posteriors in limited time. The condition can be sharpened, and made more intuitive, when expressed in terms of the factorizability of the prior probability measure.
Roughly, factorizability means that one can divide up a sample space into smaller blocks, such that probabilistic inference in one block is not influenced by the outcomes in other blocks. That is to say, a probability measure $\lambda$ over a finite sample space $\Omega$ is factorizable if there is a decomposition of the space such that the product of the marginalizations of $\lambda$ over the decomposed spaces gives back $\lambda$ over $\Omega$. For example, for two coin tosses, if $P_\lambda(H,H) = 1/4$, $P_\lambda(H,T) = 1/4$, $P_\lambda(T,H) = 1/4$, $P_\lambda(T,T) = 1/4$, then $\lambda$ is factorizable, because $P_\lambda(H,H) = (P_\lambda(H,H) + P_\lambda(H,T)) \cdot (P_\lambda(H,H) + P_\lambda(T,H))$, and similarly for the other three outcomes. However, if $P_\lambda(H,H) = 1/6$, $P_\lambda(H,T) = 1/2$, $P_\lambda(T,H) = 1/4$, $P_\lambda(T,T) = 1/12$, then $\lambda$ is no longer factorizable.
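These two numerical examples can be verified mechanically. The following minimal Python sketch (illustrative code, not from the paper) checks whether a two-coin joint distribution equals the product of its marginals:

```python
from itertools import product

def factorizes(joint):
    """Check whether a two-coin joint measure equals the product of its
    marginals, i.e., P(x, y) == P_1(x) * P_2(y) for every outcome."""
    outcomes = ['H', 'T']
    marg1 = {x: sum(joint[(x, y)] for y in outcomes) for x in outcomes}
    marg2 = {y: sum(joint[(x, y)] for x in outcomes) for y in outcomes}
    return all(abs(joint[(x, y)] - marg1[x] * marg2[y]) < 1e-12
               for x, y in product(outcomes, outcomes))

uniform = {('H','H'): 1/4, ('H','T'): 1/4, ('T','H'): 1/4, ('T','T'): 1/4}
skewed  = {('H','H'): 1/6, ('H','T'): 1/2, ('T','H'): 1/4, ('T','T'): 1/12}
print(factorizes(uniform))  # True: the uniform measure factorizes
print(factorizes(skewed))   # False: no product of marginals reproduces it
```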
The formal definition of factorizability is provided below.
Definition 1.
A probability measure $\pi$ over a finite sample space $\Omega = \times_{u \in U} \Omega_u$ is said to be factorizable if there exists a strict subset $M \subset U$ such that

$$P_\pi(A \times B) = P_\pi\big(A \times (\times_{j \in U \setminus M} \Omega_j)\big) \cdot P_\pi\big((\times_{i \in M} \Omega_i) \times B\big), \qquad (4)$$

for any $A \in \wp(\times_{i \in M} \Omega_i)$ and $B \in \wp(\times_{j \in U \setminus M} \Omega_j)$. In this case, the marginals $P_\pi(A \times (\times_{j \in U \setminus M} \Omega_j))$ and $P_\pi((\times_{i \in M} \Omega_i) \times B)$ define the factor measures of $\pi$ on the measurable spaces $(\times_{i \in M} \Omega_i, \wp(\times_{i \in M} \Omega_i))$ and $(\times_{j \in U \setminus M} \Omega_j, \wp(\times_{j \in U \setminus M} \Omega_j))$, respectively.
Factorizability is related to the notion of independence, but the two concepts are not identical. Independence is a property of individual events (and of functions defined on the events), whereas factorizability is a property of the measure and the sample space. A factorizable measure defines a set of independent events on the sample space in such a way that the resulting sets of (potentially) dependent events form a partition of $\wp(\Omega)$. Events that belong to a particular member of such a partition are conditionally independent of other events in the sample space. Using the notation of Definition 1, for events $A_1, A_2 \in \wp(\times_{i \in M} \Omega_i)$ and $B \in \wp(\times_{j \in U \setminus M} \Omega_j)$,

$$P_\pi(A_1 \mid A_2, B) = P_\pi(A_1 \mid A_2). \qquad (5)$$
Conditional independence underlies the notion of sufficiency in statistics, and although we do not use sufficient statistics in our exposition in this paper, it is possible to devise an alternative formulation of factorizability using the notion of sufficiency. What conditions might such a characterization entail? Recall that if $S_\theta$ denotes a partially sufficient statistic for $\theta \in \times_{j \in M} \Omega_j$, then for a random variable $X$ on $\Omega$:

$$P_\pi(X \mid \theta, \phi) = g(S_\theta \mid \theta)\, h(X \mid S_\theta, \phi), \qquad (6)$$

where $\phi \in \times_{j \in U \setminus M} \Omega_j$, and $g$ and $h$ denote, respectively, the marginal probability measure of $S_\theta$ and the conditional probability measure of $X$ given $S_\theta$. Since the events in $\wp(\times_{i \in M} \Omega_i)$ are independent of the events in $\wp(\times_{j \in U \setminus M} \Omega_j)$ when $\pi$ is factorizable, for statistics $S_\theta$ and $S_\phi$ that are partially sufficient for $\theta$ and $\phi$, respectively, the condition

$$P_\pi(X \mid \theta, \phi) = g(S_\theta \mid \theta)\, h(X \mid S_\theta)\, g(S_\phi \mid \phi)\, h(X \mid S_\phi) \qquad (7)$$

must hold when $\pi$ is factorizable. (For a more detailed discussion of partial sufficiency, see Fraser [25], Dawid [27] and the review in Basu [28].)
As the following theorem shows, the ability to divide up and ignore parts of the sample space—that factorizability permits—is essential if posteriors have to be computed in limited time.
Theorem 1.
Given limited time, computing the posterior $P_\pi(X_a = \alpha \mid X_b = \beta)$, $a, b \in U$, is feasible for a Bayesian rational agent if and only if the probability measure $\pi$ is factorizable, such that under one of the factor measures, say $\mu$, the calculation of $P_\mu(X_a = \alpha \mid X_b = \beta)$ completes within $\tau$ units of time.
Proof. 
To show sufficiency, we must start with a factor measure $\mu$, under which the computation of $P_\mu(X_a = \alpha \mid X_b = \beta)$ completes within $\tau$ units of time, and then show that this is equivalent to computing Equation (1). Suppose the factor measure $\mu$ is defined over a subset $M \subset U$ (with $a, b \in M$). This means we are given that the computation

$$P_\mu(X_a = \alpha \mid X_b = \beta) = \frac{\sum_{\omega \in \times_{v \in M \setminus \{a,b\}} \Omega_v} P_\mu\big(X_a^{-1}(\alpha),\, X_b^{-1}(\beta),\, \omega\big)}{\sum_{\omega \in \times_{w \in M \setminus \{b\}} \Omega_w} P_\mu\big(X_b^{-1}(\beta),\, \omega\big)} \qquad (8)$$

completes within $\tau$ units of time. Write $\overline{M} = U \setminus M$, and let $\lambda$ be the factor measure on $\times_{k \in \overline{M}} \Omega_k$. If the computation in (8) completes within $\tau$ units of time, then so does the computation

$$\frac{\sum_{\omega' \in \times_{k \in \overline{M}} \Omega_k} P_\lambda(\omega') \cdot \sum_{\omega \in \times_{v \in M \setminus \{a,b\}} \Omega_v} P_\mu\big(X_a^{-1}(\alpha),\, X_b^{-1}(\beta),\, \omega\big)}{\sum_{\omega' \in \times_{k \in \overline{M}} \Omega_k} P_\lambda(\omega') \cdot \sum_{\omega \in \times_{w \in M \setminus \{b\}} \Omega_w} P_\mu\big(X_b^{-1}(\beta),\, \omega\big)}, \qquad (9)$$

because the additional terms in the numerator and denominator cancel (each sums to one). Since $\mu$ and $\lambda$ are factor measures of $\pi$, the term (9) is equal to Equation (1), which means that the computation of $P_\pi(X_a = \alpha \mid X_b = \beta)$ is completed within $\tau$ units of time. This completes the proof of sufficiency.
To show necessity, we have to start with Equation (1) and show that if the computation is completed in limited time, $\pi$ must be factorizable, and the posterior must be calculated under a factor measure $\mu$. Let us start by noting that limited time, as defined in inequality (2), implies that the computation of $P_\pi(X_a = \alpha \mid X_b = \beta)$ in Equation (1) is not feasible when the sums run over the full product spaces

$$\Omega_{U \setminus \{a,b\}} = \times_{v \in U \setminus \{a,b\}} \Omega_v, \qquad \Omega_{U \setminus \{b\}} = \times_{w \in U \setminus \{b\}} \Omega_w. \qquad (10)$$

For each additional element in the index set $U$, the number of terms that need to be summed in the numerator and denominator of Equation (1) increases by a factor of $|\Omega_u|$, $u \in U$. This renders the overall computation time of Equation (1) exponential. The calculation can be undertaken in polynomial time $\tau$ only if there is a fixed, strict subset $Z \subset U$ such that, for $u \notin Z$, $|\Omega_u|$ is effectively 1 for the purposes of calculating Equation (1). This implies that for any $A \in \wp(\times_{i \in Z} \Omega_i)$ and $B \in \wp(\times_{j \in U \setminus Z} \Omega_j)$, we must have

$$P_\pi(A \mid B) = P_\pi(A), \qquad (11)$$

which means $\pi$ satisfies Equation (4), the condition for factorizability.
Define $\mu$ to be the factor measure on the space $(\times_{i \in Z} \Omega_i, \wp(\times_{i \in Z} \Omega_i))$. Summing over the sets $\Omega_{U \setminus \{a,b\}}$ and $\Omega_{U \setminus \{b\}}$ in Equation (1) under the measure $\pi$ is equivalent to summing over the sets $\Omega_{Z \setminus \{a,b\}}$ and $\Omega_{Z \setminus \{b\}}$, respectively, with $\Omega_{Z \setminus \{a,b\}} = \times_{v \in Z \setminus \{a,b\}} \Omega_v$ and $\Omega_{Z \setminus \{b\}} = \times_{w \in Z \setminus \{b\}} \Omega_w$, if we use the measure $\mu$. In other words,

$$P_\pi(X_a = \alpha \mid X_b = \beta) = \frac{\sum_{\omega \in \Omega_{Z \setminus \{a,b\}}} P_\mu\big(X_a^{-1}(\alpha),\, X_b^{-1}(\beta),\, \omega\big)}{\sum_{\omega \in \Omega_{Z \setminus \{b\}}} P_\mu\big(X_b^{-1}(\beta),\, \omega\big)}. \qquad (12)$$

This completes the proof of necessity. □
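As a rough illustration of what Theorem 1 buys computationally, the following Python sketch builds a factorizable prior (the particular construction, with a block $Z$ of three binary coordinates, is an assumption made for illustration) and confirms that the posterior computed under the small factor measure equals the brute-force posterior under the full prior, while summing $2^3$ rather than $2^{12}$ terms:

```python
from itertools import product
import random

# Assumed construction: a factorizable prior on 12 binary coordinates,
# the product of a measure mu on the block Z = {0, 1, 2} (containing the
# indices a and b of interest) and a measure lam on the remaining 9.
random.seed(0)
def random_measure(n):
    w = [random.random() for _ in range(2 ** n)]
    s = sum(w)
    return {omega: wi / s for omega, wi in zip(product([0, 1], repeat=n), w)}

mu, lam = random_measure(3), random_measure(9)
prior = {oz + orest: mu[oz] * lam[orest] for oz in mu for orest in lam}

def posterior(measure, a, alpha, b, beta):
    num = sum(p for o, p in measure.items() if o[a] == alpha and o[b] == beta)
    den = sum(p for o, p in measure.items() if o[b] == beta)
    return num / den

# The full calculation sums over 2**12 = 4096 states; the factor-measure
# calculation sums over only 2**3 = 8 states, yet the answers agree
# (up to floating-point rounding), exactly as in Equation (12).
print(posterior(prior, 0, 1, 1, 0))  # under pi
print(posterior(mu,    0, 1, 1, 0))  # under mu
```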
Theorem 1 sheds fresh light on the conditions under which posteriors may be calculated in limited time. Although factorizability enables computation of the posterior, it comes with a number of caveats that have received relatively little attention in the mathematical economics literature. We look at some of the issues below in the context of Aumann’s agreement theorem.

5. Recasting the Agreement Theorem

A number of important results in economic theory make use of the assumption of common priors. For instance, in a celebrated result, Aumann [1] showed that agents with a common prior cannot agree to disagree. Subsequent papers, such as Milgrom and Stokey [14], extended this result to the case of trading. With limited time, a prerequisite for such results is that agents must first correctly verify that Theorem 1 holds and then settle on a common factor measure for calculating posteriors. Thus, to transport results that depend on a common prior to limited-time settings, much more stringent conditions must be fulfilled. Especially when the prior has multiple factorizations, it can be hard to justify the use of a common factor measure by everyone. Agreement-type results therefore potentially have little bite in settings with limited time, such as high-frequency trading environments.
Let us illustrate this point in the context of the agreement theorem in Aumann [1]. As in the original paper, we shall insist throughout that agents in the interaction use the Bayes formula and calculate with the correct probability distribution. Such an analysis is valuable because it gives us the theoretical boundaries of agreement with Bayesian rationality.
A precondition for the Aumann [1] agreement theorem is that agents must be able to calculate their posteriors. As described in the previous section, with limited time, this translates to the factorizability of the prior. Let $q_i = P_\pi(X_a = \alpha \mid \mathcal{P}_i)$ denote the posterior probability of some event $X_a = \alpha$ given agent $i$'s information, with $\mathcal{P}_i$ a partition of $\Omega$ representing $i$'s information.
Corollary 1 (Theorem 1).
Given limited time for computation, common knowledge at $\omega \in \Omega$ that $q_i = q_i^\ast$ and $q_j = q_j^\ast$ implies $q_i^\ast = q_j^\ast$ for Bayesian rational agents $i$ and $j$ only if the prior $\pi$ is factorizable.
Proof. 
If $\pi$ is not factorizable, then by Theorem 1, the posteriors $q_i$ and $q_j$ cannot be calculated by the agents in limited time, and the conditions for the agreement theorem are no longer met. □
Thus, a prerequisite for the agreement theorem is that agents must be able to decide whether or not the prior is factorizable. If they conclude that the prior is factorizable and then choose the same factor measure for calculating posteriors, the theorem holds sway. This brings us to the main predicament with agreement-type results under limited time. Factorizability requires that condition (4) hold for all possible events. Therefore, unless an agent has access to all possible events, he cannot verify factorizability. However, having access to all possible events is equivalent to being perfectly informed about the state of the world.
Recall that when the true state of the world is $\omega \in \Omega$, an agent's information is the cell of the partition that contains this true state; that is, $p_i(\omega) \in \mathcal{P}_i$ for agent $i$, where $p_i(\omega)$ is the member of $\mathcal{P}_i$ that contains $\omega$.
Proposition 2.
An agent, say $i$, cannot verify whether the prior $\pi$ is factorizable unless $p_i(\omega) = \{\omega\}$ for all $\omega \in \Omega$.
Proof. 
Say $p_i(\omega^\ast) \neq \{\omega^\ast\}$ for some $\omega^\ast \in \Omega$. Suppose $\Omega_{\mathrm{I}} = \times_{m \in M} \Omega_m$ for some $M \subset U$, $\Omega_{\mathrm{II}} = \times_{k \in U \setminus M} \Omega_k$, and $\lambda_{\mathrm{I}}$ and $\lambda_{\mathrm{II}}$ are the marginal probability measures corresponding to the prior $\pi$ on the two spaces. In other words,

$$P_{\lambda_{\mathrm{I}}}(E_{\mathrm{I}}) := P_\pi(E_{\mathrm{I}} \times \Omega_{\mathrm{II}}) \ \text{ for } E_{\mathrm{I}} \in \wp(\Omega_{\mathrm{I}}); \qquad P_{\lambda_{\mathrm{II}}}(E_{\mathrm{II}}) := P_\pi(\Omega_{\mathrm{I}} \times E_{\mathrm{II}}) \ \text{ for } E_{\mathrm{II}} \in \wp(\Omega_{\mathrm{II}}). \qquad (13)$$

If $\pi$ is factorizable into $\lambda_{\mathrm{I}}$ and $\lambda_{\mathrm{II}}$, we have $P_\pi(E_{\mathrm{I}} \times E_{\mathrm{II}}) = P_{\lambda_{\mathrm{I}}}(E_{\mathrm{I}}) \cdot P_{\lambda_{\mathrm{II}}}(E_{\mathrm{II}})$.
Now, if $p_i(\omega^\ast) \neq \{\omega^\ast\}$, either $p_i(\omega_{\mathrm{I}}^\ast) \neq \{\omega_{\mathrm{I}}^\ast\}$, or $p_i(\omega_{\mathrm{II}}^\ast) \neq \{\omega_{\mathrm{II}}^\ast\}$, or both, where $\omega_{\mathrm{I}}^\ast \in \Omega_{\mathrm{I}}$ and $\omega_{\mathrm{II}}^\ast \in \Omega_{\mathrm{II}}$ denote the components of $\omega^\ast$. Suppose $p_i(\omega_{\mathrm{I}}^\ast) \neq \{\omega_{\mathrm{I}}^\ast\}$. In this case, if $\lambda_{\mathrm{I}}$ is consistent with agent $i$'s information partition, then so is the probability measure $\lambda_{\mathrm{I}}'$, where

$$P_{\lambda_{\mathrm{I}}'}(\omega_{\mathrm{I}}^\ast) = P_{\lambda_{\mathrm{I}}}(\omega_{\mathrm{I}}^\ast) + \epsilon, \quad \epsilon \leq 1 - P_{\lambda_{\mathrm{I}}}(\omega_{\mathrm{I}}^\ast); \qquad P_{\lambda_{\mathrm{I}}'}(\omega_{\mathrm{I}}) = P_{\lambda_{\mathrm{I}}}(\omega_{\mathrm{I}}) - \frac{\epsilon}{k}, \ \text{ with } k = |p_i(\omega_{\mathrm{I}}^\ast)| - 1, \ \text{ for all } \omega_{\mathrm{I}} \in p_i(\omega_{\mathrm{I}}^\ast) \setminus \{\omega_{\mathrm{I}}^\ast\}, \qquad (14)$$

and, for $E_{\mathrm{I}} \in \wp(\Omega_{\mathrm{I}})$ with $E_{\mathrm{I}} \cap p_i(\omega_{\mathrm{I}}^\ast) = \emptyset$, $P_{\lambda_{\mathrm{I}}'}(E_{\mathrm{I}}) = P_{\lambda_{\mathrm{I}}}(E_{\mathrm{I}})$. However, now $P_\pi(\omega^\ast) \neq P_{\lambda_{\mathrm{I}}'}(\omega_{\mathrm{I}}^\ast) \cdot P_{\lambda_{\mathrm{II}}}(\omega_{\mathrm{II}}^\ast)$, so the prior is not factorized, despite $\lambda_{\mathrm{I}}'$ being consistent with $i$'s information partition. The reasoning for the case with $p_i(\omega_{\mathrm{II}}^\ast) \neq \{\omega_{\mathrm{II}}^\ast\}$ is similar.
The above arguments go through for any factorization of the prior unless $p_i(\omega) = \{\omega\}$ for all $\omega \in \Omega$, and this gives the result. □
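The mechanics of the proof can be illustrated with made-up numbers: two block-I measures that agree on an agent's coarse partition cell can disagree pointwise, so the pointwise factorization test of condition (4) cannot be settled at the coarse resolution. A hedged Python sketch (all values are illustrative assumptions):

```python
# Two candidate block-I measures that agree on the agent's coarse cell
# {0, 1} (both assign it total probability 1) but differ pointwise.
lam_I  = {0: 0.5, 1: 0.5}
lam_Ip = {0: 0.7, 1: 0.3}            # the epsilon-perturbed measure
lam_II = {0: 0.6, 1: 0.4}

# The true prior pi on Omega_I x Omega_II, factorized via lam_I.
pi = {(x, y): lam_I[x] * lam_II[y] for x in lam_I for y in lam_II}

# At the coarse resolution the agent observes, both candidates look alike:
cell = {0, 1}
print(sum(lam_I[x] for x in cell), sum(lam_Ip[x] for x in cell))  # 1.0 1.0

# Yet the pointwise factorization test gives different answers, so the
# agent cannot verify condition (4) without the finest partition.
print(pi[(0, 0)] == lam_I[0]  * lam_II[0])   # True
print(pi[(0, 0)] == lam_Ip[0] * lam_II[0])   # False
```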
As a consequence of Proposition 2, under limited time, the agreement theorem is valid only when the agents' information partitions are the finest partitions. This special case, however, is trivial and obvious. If agents have perfect information about the state of the world, their posteriors coincide as a simple consequence of the Bayes formula when the prior is factorizable. The surprising power of the Aumann [1] agreement result in applications comes from promising equal posteriors even when agents have coarser, and possibly different, information partitions. Unfortunately, that power is missing when there is limited time.
What happens when time is limited and agent information partitions are not the finest? The analysis above shows that we can no longer offer an answer within the realm of Bayesian rationality. As discussed in the introduction and literature review sections, there are a number of bounded rationality and behavioral approaches in economics, each with its own specialized assumptions, and one could choose among those models according to the specifics of the situation at hand. This brings us to a methodological point. In a way, bounded rationality methods are approximation methods where one relaxes certain rationality constraints to reach a solution. In computer science, whether or not one needs approximations is decided in the first place by checking theoretically whether it is computationally impossible to find an exact solution, whereas in economics, researchers have traditionally relied on experiments and empirics to make the argument. The analysis in this paper shows that, even in economics, one could make the case for approximations and boundedness purely theoretically.

6. Conclusions

Though the limitations imposed by time play a very important role in real-world decision making, given the way economic theory has developed, such considerations have seldom merited a first-order role in the theoretical models. This paper provides an example where limited time makes an important difference: the Aumann [1] agreement theorem, which lies at the root of many significant results in areas ranging from game theory to finance, loses its bite when agents have limited time. On the other hand, theoretical computer scientists have long been interested in the computational properties of algorithms such as the Bayes formula, and increasingly in applied areas such as high-frequency trading and market design, economists are engaging with computer scientists for robust, practical implementations of their models (see, for example, Leyton-Brown, Milgrom and Segal [29], and Lopez de Prado [30]). This paper furthers such engagement.
A substantive difference in the limited-time setting is that the use of factor measures, necessitated by Theorem 1, can restrict agents from learning the correlation structure in the data correctly. Once the prior has been factorized, agents continue to update only the factor measure and not the original prior measure. Thus, there is no way for agents to update the independence structure in the data derived from the prior measure. A related issue is that factorizability requires that for all possible events in the sample space, condition (4) must hold. However, results such as the agreement or no-trade theorem rely on different agents having access to different coarsenings of the sample space. In such cases, agents cannot verify the factorizability of their updated prior measure even if they wish to, because they do not have access to all events in the space. Thus, accounting for limited time can lead to substantial revisions in the canonical models of interactive behavior. It is hoped that future research will build on the results in this paper to uncover new facets of such settings.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Aumann, R.J. Agreeing to Disagree. Ann. Stat. 1976, 4, 1236–1239.
2. Spiegler, R. Bayesian Networks and Boundedly Rational Expectations. Q. J. Econ. 2016, 131, 1243–1290.
3. Ellis, A.; Piccione, M. Correlation Misperception in Choice. Am. Econ. Rev. 2017, 107, 1264–1292.
4. Mailath, G.J.; Samuelson, L. Learning under Diverse World Views: Model-Based Inference. Am. Econ. Rev. 2020, 110, 1464–1501.
5. Esponda, I.; Pouzo, D. Berk–Nash Equilibrium: A Framework for Modeling Agents with Misspecified Models. Econometrica 2016, 84, 1093–1130.
6. Eyster, E.; Piccione, M. An Approach to Asset Pricing under Incomplete and Diverse Perceptions. Econometrica 2013, 81, 1483–1506.
7. Eyster, E.; Rabin, M.; Vayanos, D. Financial Markets Where Traders Neglect the Informational Content of Prices. J. Financ. 2019, 74, 371–399.
8. Bohren, J.A. Informational herding with model misspecification. J. Econ. Theory 2016, 163, 222–247.
9. Jehiel, P. Analogy-based Expectation Equilibrium. J. Econ. Theory 2005, 123, 81–104.
10. Jehiel, P.; Koessler, F. Revisiting Games of Incomplete Information with Analogy-based Expectations. Games Econ. Behav. 2008, 62, 533–557.
11. Jehiel, P. Analogy-Based Expectation Equilibrium and Related Concepts: Theory, Applications, and Beyond. World Congress of the Econometric Society. 2020. Available online: https://philippe-jehiel.enpc.fr/wp-content/uploads/sites/2/2020/10/SurveyABEE.pdf (accessed on 1 September 2022).
12. Rubinstein, A. Modeling Bounded Rationality; The MIT Press: Cambridge, MA, USA, 1998.
13. Spiegler, R. Bounded Rationality and Industrial Organization; Oxford University Press: New York, NY, USA, 2011.
14. Milgrom, P.; Stokey, N. Information, Trade and Common Knowledge. J. Econ. Theory 1982, 26, 17–27.
15. Geanakoplos, J.D.; Polemarchakis, H.M. We can't disagree forever. J. Econ. Theory 1982, 26, 363–390.
16. McKelvey, R.; Page, T. Common knowledge, consensus and aggregate information. Econometrica 1986, 54, 109–127.
17. Samet, D. Ignoring Ignorance and Agreeing to Disagree. J. Econ. Theory 1990, 52, 190–207.
18. Heifetz, A. Comment on consensus without common knowledge. J. Econ. Theory 1996, 70, 273–277.
19. Samet, D. Agreeing to disagree: The non-probabilistic case. Games Econ. Behav. 2010, 69, 169–174.
20. Lehrer, E.; Samet, D. Agreeing to agree. Theor. Econ. 2011, 6, 269–287.
21. Chen, Y.-C.; Lehrer, E.; Li, J.; Samet, D.; Shmaya, E. Agreeing to agree and Dutch books. Games Econ. Behav. 2015, 93, 108–116.
22. Shaiderman, D. An upper bound for the 1-variation along the road to agreement. Int. J. Game Theory 2021, 50, 1053–1067.
23. Di Tillio, A.; Lehrer, E.; Samet, D. Monologues, dialogues, and common priors. Theor. Econ. 2022, 17, 587–615.
24. Dawid, A.P. Conditional Independence in Statistical Theory. J. R. Stat. Soc. Ser. B 1979, 41, 1–31.
25. Fraser, D.A.S. Sufficient statistics with nuisance parameters. Ann. Math. Stat. 1956, 27, 838–842.
26. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann Publishers: San Francisco, CA, USA, 1988.
27. Dawid, A.P. On the concepts of sufficiency and ancillarity in the presence of nuisance parameters. J. R. Stat. Soc. Ser. B 1975, 37, 248–258.
28. Basu, D. On Partial Sufficiency: A Review. J. Stat. Plan. Inference 1978, 2, 1–13.
29. Leyton-Brown, K.; Milgrom, P.; Segal, I. Economics and Computer Science of a Radio Spectrum Reallocation. Proc. Natl. Acad. Sci. USA 2017, 114, 7202–7209.
30. López de Prado, M. Advances in Financial Machine Learning; John Wiley & Sons: Hoboken, NJ, USA, 2018.