Games 2016, 7(4), 38; doi:10.3390/g7040038

Article
Probabilistic Unawareness
Université Paris-Est Créteil (EA LIS), Institut Universitaire de France & IHPST (UMR 8590); 61, avenue du général de Gaulle, 94000 Créteil, France
Academic Editor: Paul Weirich
Received: 9 September 2016 / Accepted: 10 November 2016 / Published: 30 November 2016

Abstract:
The modeling of awareness and unawareness is a significant topic in the doxastic logic literature, where it is usually tackled in terms of full belief operators. The present paper aims at a treatment in terms of partial belief operators. It draws upon the modal probabilistic logic that was introduced by Aumann (1999) at the semantic level, and then axiomatized by Heifetz and Mongin (2001). The paper embodies in this framework those properties of unawareness that have been highlighted in the seminal paper by Modica and Rustichini (1999). Their paper deals with full belief, but we argue that the properties in question also apply to partial belief. Our main result is a (soundness and) completeness theorem that reunites the two strands—modal and probabilistic—of doxastic logic.
Keywords:
unawareness; epistemic logic; probabilistic logic
JEL Classification:
C70; C72; D80

1. Introduction

Full (or categorical) beliefs are doxastic attitudes, like those ascribed when one says:
Pierre believes that  ϕ .
Modal logic provides a way of modeling full beliefs. It is well known that it suffers from two main cognitive idealizations. The first one is logical omniscience: a family of properties such as the closure of beliefs under logical consequence (from the premise that ϕ implies ψ, infer that B ϕ implies B ψ , also known as the rule of monotonicity) or substitutability of logically equivalent formulas (from the premise that ϕ is equivalent to ψ, infer that B ϕ is equivalent to B ψ , also known as the rule of equivalence). The second cognitive idealization is full awareness, which is more difficult to characterize precisely. As a first approximation, let us say that, according to this assumption, the agent is supposed to have a full understanding of the underlying space of possibilities and of the propositions that can be built upon them.
Logicians and computer scientists have devoted much attention to the weakening of logical omniscience. This literature is surveyed in [1], and in particular its two main extant solutions: structures with subjective, logically-impossible states and the awareness structures introduced by R. Fagin and J. Halpern [4]. The very same awareness structures are used to weaken the full awareness assumption. More recently, game theorists have become interested in weakening awareness in epistemic and doxastic logic and in related formalisms ([5,6,7,8,9,10]). For a recent and detailed survey of models of unawareness, see [14].
Doxastic logic is a rather coarse-grained model of doxastic attitudes because it excludes partial beliefs, i.e., the fact that an agent believes that it is unlikely that ϕ or very likely that ψ. The main formalism for partial beliefs uses probabilities in their subjective or epistemic interpretation, where probability values stand for degrees of belief. There is a noteworthy contrast between modal doxastic logic and those probabilistic models: whereas the former makes beliefs explicit (part of the formal language), the latter leaves them implicit. However, one may enrich the syntax with explicit partial belief operators. For instance, R. Aumann introduced in [15] an operator L α ϕ interpretable as
the agent believes at least to degree α that ϕ
A possible-world semantics is given for these operators (which is inspired by [16]). This semantics has been axiomatized by [19] under the form of a weak (soundness and) completeness theorem. This probabilistic logic is the true counterpart of Kripkean epistemic logic for degrees of belief, and it is the framework of this paper.
This probabilistic logic suffers from the same cognitive idealizations as doxastic logic: logical omniscience and full awareness. In a preceding paper [20], we dealt with the problem of logical omniscience in probabilistic logic. Our proposal was mainly based on the use of so-called impossible states, i.e., subjective states where the logical connectives can have a non-classical behavior. The aim of the present paper is to enrich probabilistic logic with a modal logic of unawareness. Our main proposal is a generalization of Aumann’s semantics that uses impossible states like those of [5] and provably satisfies a list of intuitive requirements. Our main result is a weak completeness theorem like the one demonstrated by [19], but adapted to the richer framework that includes awareness. To our knowledge, [21] is the closest work to ours: in this paper, the state-space model with “interactive unawareness” introduced in [8] is extended to probabilistic beliefs in order to deal with the issue of speculative trade. One of the differences with our paper is that their framework is purely set-theoretical, whereas we rely on a formal language.
The remainder of the paper proceeds as follows. In Section 2, we try to provide some intuitions about the target attitudes, awareness and unawareness. Section 3 briefly presents probabilistic logic, and notably the axiom system of [19] (which will be called ‘system H M ’). In Section 4, we vindicate a slightly modified version of the Generalized Standard Structures of [5]. Section 5 contains the main contribution of the paper: a logic for dealing with unawareness in probabilistic logic. Our axiom system (named ‘system H M U ’) enriches the probabilistic logic with an awareness operator and accompanying axioms. Section 6 concludes.

2. Awareness and Unawareness

2.1. Some Intuitions

Unawareness is a more elusive concept than logical omniscience. This section gives insights on the target phenomena and puts forward properties that a satisfactory logic of (un)awareness should embody. Following the lead of [5], we may say that there is unawareness when
  • there is “ignorance about the state space”
  • “some of the facts that determine which state of nature occurs are not present in the subject’s mind”
  • “the agent does not know, does not know that she does not know, does not know that she does not know that she does not know, and so on...”
Here is an illustrative example. Pierre plans to rent a house for the next holiday, and from the observer’s point of view, there are three main factors relevant to his choice:
  • p: the house is no more than 1 km from the sea
  • q: the house is no more than 1 km from a bar
  • r: the house is no more than 1 km from an airport
There is an intuitive distinction between the two following doxastic states:
  • State (i): Pierre is undecided about r’s truth: he neither believes that r, nor believes that ¬ r ; there are both r-states and ¬ r -states that are epistemically accessible to him.
  • State (ii): the possibility that r does not come to Pierre’s mind. Pierre does not ask himself: “Is there an airport no more than 1 km from the house?”.
The contrast between the two epistemic states can be rendered in terms of a state space with either a fine or a coarse grain. The observer’s set of possible states is:
S = { p q r , p q ¬ r , p ¬ q r , p ¬ q ¬ r , ¬ p q r , ¬ p q ¬ r , ¬ p ¬ q r , ¬ p ¬ q ¬ r }
where each state is labeled by the sequence of literals that are true in it. This state space is also Pierre’s in doxastic State (i). The doxastic State (ii), on the other hand, is:
S′ = { p q , p ¬ q , ¬ p q , ¬ p ¬ q }
Some states in the initial state space have been fused with each other, namely those that differ only in the truth value they assign to the formula the agent is unaware of, r.
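The fusion of states just described can be sketched computationally. The following Python snippet is our own illustration (the function names are ours, not the paper's): it builds the observer's eight-state space over {p, q, r} and the four-state subjective space obtained by fusing states that agree on the atoms Pierre is aware of.

```python
from itertools import product

# Our own illustration (not the paper's formalism): build the observer's
# state space over At = {p, q, r}, then fuse states that agree on the
# atoms the agent is aware of.

def state_space(atoms):
    """All truth assignments over the given atoms, as dicts."""
    return [dict(zip(atoms, values))
            for values in product([True, False], repeat=len(atoms))]

def coarsen(states, aware_atoms):
    """Fuse states that agree on every atom the agent is aware of."""
    seen, fused = set(), []
    for s in states:
        restricted = tuple(sorted((a, s[a]) for a in aware_atoms))
        if restricted not in seen:
            seen.add(restricted)
            fused.append(dict(restricted))
    return fused

objective = state_space(["p", "q", "r"])     # 8 states: pqr, pq¬r, ...
subjective = coarsen(objective, ["p", "q"])  # 4 states: pq, p¬q, ¬pq, ¬p¬q
```

Each pair of objective states differing only on r collapses into a single subjective state, exactly as in the move from doxastic State (i) to State (ii).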

2.2. Some Principles in Epistemic Logic

More theoretically, what properties should one expect awareness to satisfy? In what follows:
  • B ϕ means “the agent believes that ϕ”,
  • A ϕ means “the agent is aware that ϕ”.
Here is a list of plausible properties for the operators B and A:
A ϕ ↔ A ¬ ϕ (symmetry)
A ( ϕ ∧ ψ ) ↔ A ϕ ∧ A ψ (distributivity over ∧)
A ϕ → A A ϕ (self-reflection)
¬ A ϕ → ¬ A ¬ A ϕ (U-introspection)
¬ A ϕ → ¬ B ϕ ∧ ¬ B ¬ B ϕ (plausibility)
¬ A ϕ → ( ¬ B ) n ϕ for all n ∈ ℕ (strong plausibility)
¬ B ¬ A ϕ (BU-introspection)
Natural as they are, these properties cannot be jointly satisfied in Kripkean doxastic logic. This has been recognized by [6], who show that it is impossible to have both:
(i)
a non-trivial awareness operator that satisfies plausibility, U-introspection and BU-introspection and
(ii)
a belief operator that satisfies either necessitation or the rule of monotonicity.
Of course, the standard belief operator of epistemic logic does satisfy both necessitation and the rule of monotonicity. The main challenge is therefore to build a logic of belief and awareness that supports the above intuitive principles. Since necessitation and the rule of monotonicity are nothing but forms of logical omniscience, it becomes a major prerequisite to weaken the latter. Indeed, both the generalized standard structures of [5] and the awareness © structures of [4] do weaken logical omniscience.

2.3. Some Principles in Probabilistic Logic

Probabilistic logics are both less well known and less unified than modal doxastic logics. The syntactic framework, in particular, varies from one logic to another. The logic on which this paper is based relies on a language quite similar to that of doxastic logic, and can therefore be seen as constituting a probabilistic modal logic: its primary doxastic operators are L a , where a is a rational number between zero and one (“the agent believes at least to degree a that...”). We can express the relevant intuitive principles for L a as:
A ϕ ↔ A ¬ ϕ (symmetry)
A ( ϕ ∧ ψ ) ↔ A ϕ ∧ A ψ (distributivity over ∧)
A ϕ → A A ϕ (self-reflection)
¬ A ϕ → ¬ A ¬ A ϕ (U-introspection)
¬ A ϕ → ¬ L a ϕ ∧ ¬ L a ¬ L a ϕ (plausibility)
¬ A ϕ → ( ¬ L a ) n ϕ for all n ∈ ℕ (strong plausibility)
¬ L a ¬ A ϕ ( L a -U-introspection)
L 0 ϕ ↔ A ϕ (minimality)
Seven of these eight principles are direct counterparts of those put forward for modal doxastic logic, minimality being the exception. On the one hand, if an agent believes to some degree (however small) that ϕ, then he or she is aware of ϕ. This conditional is intuitive for a judgmental, rather than a purely behavioral, conception of partial beliefs; on the latter conception, degrees of belief are causal determinants of behavior that may or may not be consciously grasped by the agent. On the other hand, the reverse conditional roughly means that an agent aware of ϕ has some degree of belief toward ϕ. This directly echoes Bayesian epistemology. These eight principles may be seen as a set of requirements for a satisfactory probabilistic logic.

3. Probabilistic (Modal) Logic

This section briefly reviews the main concepts of probabilistic logic, following [15,19]. Probabilistic logic is of course related to the familiar models of beliefs where the doxastic states are represented by a probability distribution on a state space (or on the formulas of a propositional language), but the doxastic operator is made explicit here. Syntactically, as we have already said, this means that the language is endowed with a family of operators L a . Semantically, there are sets of possible states (or “events”) corresponding to the fact that an agent believes (or does not believe) a given formula at least to a given degree.

3.1. Language

Definition 1 (probabilistic language).
The set of formulas of a probabilistic language L L ( A t ) based on a set A t of propositional variables is defined by:
ϕ : : = p | ⊤ | ⊥ | ¬ ϕ | ϕ ∧ ϕ | L a ϕ
where p ∈ A t and a ∈ [ 0 , 1 ] ∩ Q .
From this, one may define two derived belief operators:
  • M a ϕ := L 1 − a ¬ ϕ (the agent believes at most to degree a that ϕ)
  • E a ϕ := M a ϕ ∧ L a ϕ (the agent believes exactly to degree a that ϕ)
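The two derived operators can be checked against their definitions with a few lines of Python (our own illustration, not the paper's code; exact rational arithmetic via `Fraction` avoids rounding issues).

```python
from fractions import Fraction

# Our own illustration: given the probability the agent assigns to [[phi]],
# the operators L_a, M_a and E_a as defined above.

def L(a, prob_phi):
    """L_a phi: the agent believes at least to degree a that phi."""
    return prob_phi >= a

def M(a, prob_phi):
    """M_a phi := L_{1-a} ¬phi: the agent believes at most to degree a that phi."""
    return L(1 - a, 1 - prob_phi)

def E(a, prob_phi):
    """E_a phi := M_a phi ∧ L_a phi: believes exactly to degree a that phi."""
    return M(a, prob_phi) and L(a, prob_phi)
```

With `prob_phi = Fraction(1, 2)`, both `L(Fraction(1, 2), ...)` and `M(Fraction(1, 2), ...)` hold, so `E` picks out exactly the degree 1/2.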

3.2. Semantics

Probabilistic structures (PS), as introduced by [15], aim at interpreting the formal language we just defined. They are the true probabilistic counterpart of Kripke structures for epistemic logic. In particular, iterated beliefs are allowed because a probability distribution is attributed to each possible state, very much like a Kripkean accessibility relation. We follow the definition of [19]:
Definition 2 (probabilistic structures).
A probabilistic structure for L L ( A t ) is a four-tuple M = ( S , Σ , π , P ) where:
(i) 
S is a state space
(ii) 
Σ is a σ-field of subsets of S
(iii) 
π : S × A t → { 0 , 1 } is a valuation for S s.t. π ( . , p ) is measurable for every p ∈ A t
(iv) 
P : S → Δ ( S , Σ ) is a measurable mapping from S to the set of probability measures on Σ endowed with the σ-field generated by the sets
{ μ ∈ Δ ( S , Σ ) : μ ( E ) ≥ a } , E ∈ Σ , a ∈ [ 0 , 1 ] .
Definition 3.
The satisfaction relation, labeled ⊨, extends π to every formula of the language according to the following conditions:
(i) 
M , s ⊨ p iff π ( s , p ) = 1
(ii) 
M , s ⊨ ϕ ∧ ψ iff M , s ⊨ ϕ and M , s ⊨ ψ
(iii) 
M , s ⊨ ¬ ϕ iff M , s ⊭ ϕ
(iv) 
M , s ⊨ L a ϕ iff P ( s ) ( [ [ ϕ ] ] ) ≥ a
As usual, [ [ ϕ ] ] denotes the set of states where ϕ is true, or the proposition expressed by ϕ. From a logical point of view, one of the most striking features of probabilistic structures is that compactness does not hold. Let Γ = { L 1/2 − 1/n ϕ : n ≥ 2 , n ∈ ℕ } and ψ = ¬ L 1/2 ϕ . For each finite Γ′ ⊆ Γ , Γ′ ∪ { ψ } is satisfiable, but Γ ∪ { ψ } is not. As a consequence, an axiomatization of probabilistic structures will provide at best a weak completeness theorem.
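On finite structures, the clauses of Definition 3 can be turned directly into a model checker. The sketch below is our own illustration (the formula encoding and the `model` layout are ours, not the paper's): clause (iv) evaluates L a ϕ by summing the mass that P(s) puts on [[ϕ]].

```python
# Finite probabilistic structure: states, valuation pi, and a map P
# assigning to each state a probability distribution over states.
# Formulas: ("atom", p), ("not", f), ("and", f, g), ("L", a, f).

def holds(model, s, formula):
    tag = formula[0]
    if tag == "atom":
        return model["pi"][s][formula[1]]
    if tag == "not":
        return not holds(model, s, formula[1])
    if tag == "and":
        return holds(model, s, formula[1]) and holds(model, s, formula[2])
    if tag == "L":  # clause (iv): L_a phi iff P(s)([[phi]]) >= a
        a, phi = formula[1], formula[2]
        extension = [t for t in model["states"] if holds(model, t, phi)]
        return sum(model["P"][s][t] for t in extension) >= a
    raise ValueError(tag)

# A two-state example: the agent assigns 1/2 to each state, and p is true
# only at s0, so L_{1/2} p holds but L_{3/4} p does not.
model = {
    "states": ["s0", "s1"],
    "pi": {"s0": {"p": True}, "s1": {"p": False}},
    "P": {s: {"s0": 0.5, "s1": 0.5} for s in ["s0", "s1"]},
}
```

Because P is attached to every state, iterated beliefs such as L a L b ϕ are evaluated by the same recursion, mirroring the Kripkean accessibility analogy above.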

3.3. Axiomatization

Explicit probabilistic structures were not axiomatized in [15]. To deal with this issue, an axiom system was proposed in [19] that is (weakly) complete for these structures. We call it the system H M .
        System H M
 
(PROP) Instances of propositional tautologies
(MP) From ϕ and ϕ ψ , infer ψ
 
(L1) L 0 ϕ
(L2) L a ⊤
(L3) L a ϕ → ¬ L b ¬ ϕ ( a + b > 1 )
(L4) ¬ L a ϕ → M a ϕ
(DefM) M a ϕ ↔ L 1 − a ¬ ϕ
(RE) From ϕ ↔ ψ infer L a ϕ ↔ L a ψ
 
(B) From ( ( ϕ 1 , . . . , ϕ m ) ↔ ( ψ 1 , . . . , ψ n ) ) infer
( ( ⋀ i = 1 m L a i ϕ i ∧ ⋀ j = 2 n M b j ψ j ) → L ( a 1 + . . . + a m ) − ( b 2 + . . . + b n ) ψ 1 )
The inference rule (B) deserves attention. The content and origin of (B) are explained in [19], so we can be brief. The pseudo-formula ( ( ϕ 1 , . . . , ϕ m ) ↔ ( ψ 1 , . . . , ψ n ) ) is an abbreviation for:
⋀ k = 1 max ( m , n ) ( ϕ ( k ) ↔ ψ ( k ) )
where:
ϕ ( k ) = ⋁ 1 ≤ l 1 < . . . < l k ≤ m ( ϕ l 1 ∧ . . . ∧ ϕ l k )
(if k > m , by convention ϕ ( k ) = ⊥ ). Intuitively, ϕ ( k ) “says” that at least k of the formulas ϕ i are true. The meaning of (B) is simpler to grasp when it is interpreted set-theoretically. Associate two sequences of events ( E 1 , . . . , E m ) and ( F 1 , . . . , F n ) with the sequences of formulas ( ϕ 1 , . . . , ϕ m ) and ( ψ 1 , . . . , ψ n ) . The premise of (B) is then a syntactical rendering of the idea that the sums of the characteristic functions are equal, i.e., ∑ i = 1 m I E i = ∑ j = 1 n I F j . If P ( E i ) ≥ α i for i = 1 , . . . , m and P ( F j ) ≤ β j for j = 2 , . . . , n , then P ( F 1 ) has to “compensate”, i.e.,
P ( F 1 ) ≥ ( α 1 + . . . + α m ) − ( β 2 + . . . + β n )
The conclusion of (B) is a translation of this “compensation”. It is very powerful from the probabilistic point of view and plays a crucial role in the (sophisticated) completeness proof. In comparison with modal doxastic logic, one of the issues is that it is not easy to adapt the usual proof method, i.e., that of canonical models. More precisely, with Kripke logics, there is a natural accessibility relation on the canonical state space. Here, we need to prove the existence of a canonical probability distribution from relevant mathematical principles. This step is linked to a difficulty that is well known in the axiomatic study of quantitative and qualitative probability: how to ensure a (numerical) probabilistic representation for a finite structure of qualitative probability. A principle similar to (B) has been introduced in a set-theoretical axiomatic framework by [32] and imported into a probabilistic modal logic (with a qualitative binary operator) by [33].
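The compensation inequality can be verified on a small numerical example. The sketch below is our own illustration (the particular states, events, and probabilities are ours), using exact rationals: we pick events whose indicator functions sum to the same function and check the lower bound on P(F 1).

```python
from fractions import Fraction

# Our own numerical illustration of the "compensation" reading of (B),
# using exact rationals to avoid rounding artifacts.
states = ["w1", "w2", "w3", "w4"]
prob = {"w1": Fraction(4, 10), "w2": Fraction(3, 10),
        "w3": Fraction(2, 10), "w4": Fraction(1, 10)}

# Two sequences of events whose indicator functions sum to the same function:
E = [{"w1", "w2"}, {"w3", "w4"}]   # plays the role of (E_1, ..., E_m)
F = [{"w1", "w3"}, {"w2", "w4"}]   # plays the role of (F_1, ..., F_n)

def indicator_sum(events, w):
    return sum(w in e for e in events)

# Premise of (B): the indicator sums agree at every state.
assert all(indicator_sum(E, w) == indicator_sum(F, w) for w in states)

def P(event):
    return sum(prob[w] for w in event)

# With tight bounds P(E_i) >= a_i and P(F_j) <= b_j for j >= 2,
# P(F_1) must be at least (a_1 + ... + a_m) - (b_2 + ... + b_n).
a = [P(e) for e in E]
b = [P(f) for f in F[1:]]
bound = sum(a) - sum(b)
assert P(F[0]) >= bound   # here both sides equal 3/5
```

With the tight choices above the inequality holds with equality, which is the sense in which F 1 "compensates" for the other bounds.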

4. A Detour by Doxastic Logic without Full Awareness

Our probabilistic logic without full awareness is largely an adaptation of the generalized standard structures (GSS) introduced by [5] to deal with unawareness in doxastic logic. Actually, we will slightly modify the semantics of [5] and obtain a partial semantics for unawareness. Before giving our own logical system, we remind the reader how GSSs work in the case of doxastic logic.

4.1. Basic Heuristics

Going back to the motivating example, suppose that the “objective” state space is based on the set of atomic formulas A t = { p , q , r } as in Figure 1. Suppose furthermore that the actual state is s = p q r and that in this state, Pierre believes that p, is undecided about q and is unaware of r. In the language used in [5] , one would say that the actual state is “projected” to a subjective state ρ ( s ) = p q of a “subjective” state space based on the set of atomic formulas that the agent is aware of, i.e., p and q (Figure 2). In Kripkean doxastic logic, the agent’s accessibility relation selects, for each possible state s, the set of states R ( s ) that are epistemically possible for him or her. GSSs define an accessibility relation on this subjective state space. In Figure 3, projection is represented by a dotted arrow and accessibility by solid arrows.
This picture does not represent all projections between objective states and subjective states, and it corresponds to only one subjective state space. Generally, there are as many subjective state spaces S X as there are subsets X of the set A t on which the objective state space is based. It is crucial to specify in the right way the conditions on the projection ρ ( . ) from objective to subjective states. Suppose that another objective state s′ is projected to p q as well; then, two conditions should obtain. First, s = p q r and s′ should agree on the atomic formulas the agent is aware of; so, for instance, s′ could not be ¬ p q r , since the agent is aware of p. Second, the states accessible from s and s′ should be the same. Another natural assumption is that all of the states accessible from a given state are located in the same “subjective” state space (see (v)(2) in Definition 4 below). Figure 4 pictures a GSS more faithfully than the preceding one.

4.2. Generalized Standard Structures

The next definition is a precise rendering of the intuitive ideas. We have followed [7] rather than [5], notably because he makes clear that GSSs can be seen as structures with impossible states.
Definition 4.
A GSS is a 5-tuple M = ( S , S′ , π , R , ρ ) :
(i) 
S is a state space
(ii) 
S′ = ⋃ X ⊆ A t S X (where the S X are disjoint) is a (non-standard) state space
(iii) 
π : S × A t → { 0 , 1 } is a valuation for S
(iv) 
R : S → ℘ ( S′ ) is an accessibility relation for S
(v) 
ρ : S → S′ is an onto map s.t.
  • (1) if ρ ( s ) = ρ ( t ) ∈ S X , then (a) for each atomic formula p ∈ X , π ( s , p ) = π ( t , p ) and (b) R ( s ) = R ( t ) and
  • (2) if ρ ( s ) ∈ S X , then R ( s ) ⊆ S X
One can extend R and π to the whole state space:
(vi) 
π′ : S′ × A t → { 0 , 1 } is a valuation for S′ s.t. for all X ⊆ A t , for all s′ ∈ S X , π′ ( s′ , p ) = 1 iff (a) p ∈ X and (b) for all s ∈ ρ − 1 ( s′ ) , π ( s , p ) = 1 . We note π * = π ∪ π′ .
(vii) 
R′ : S′ → ℘ ( S′ ) is an accessibility relation for S′ s.t. for all X ⊆ A t , for all s′ ∈ S X , R′ ( s′ ) = R ( s ) for some s ∈ ρ − 1 ( s′ ) . We note R * = R ∪ R′ .
In comparison with Kripke structures, a modification is introduced as regards negation. In a subjective state s′ ∈ S X , for a negated formula ¬ ϕ to be true, it must not only be that ϕ is not true, but also that ϕ belongs to the sub-language induced by X. Semantic partiality follows: it may be the case that in some s′ , neither ϕ nor ¬ ϕ is true (this is why subjective states are impossible states). The main reason why, following [7], we introduce this semantics is that it is a very simple way of inducing the “right” kind of partiality. This will be shown in the next subsection. In the sequel, L B A ( X ) denotes the language containing the operators B (full belief) and A (awareness) and based on the set X of propositional variables.
Definition 5.
The satisfaction relation for GSS is defined for each s * ∈ S * = S ∪ S′ :
(i) 
M , s * ⊨ p iff π * ( s * , p ) = 1
(ii) 
M , s * ⊨ ϕ ∧ ψ iff M , s * ⊨ ϕ and M , s * ⊨ ψ
(iii) 
M , s * ⊨ ¬ ϕ iff M , s * ⊭ ϕ and either s * ∈ S , or s * ∈ S X and ϕ ∈ L B A ( X )
(iv) 
M , s * ⊨ B ϕ iff for each t * ∈ R * ( s * ) , M , t * ⊨ ϕ
(v) 
M , s * ⊨ A ϕ iff M , s * ⊨ B ϕ ∨ B ¬ B ϕ
Example 1.
In Figure 3, let us consider s = p q r . M , ρ ( s ) ⊭ r given Clause (vi) of the definition of GSS. However, M , ρ ( s ) ⊭ ¬ r either, given Clause (iii) of the definition of the satisfaction relation. Clause (iv) of the satisfaction relation and Clause (v)(2) of the GSS imply that M , ρ ( s ) ⊭ B r and that M , s ⊭ B r . Moreover, Clause (iii) of the satisfaction relation again implies that M , ρ ( s ) ⊭ ¬ B r . The same holds in the states accessible from ρ ( s ) . Therefore, M , s ⊭ B ¬ B r . By Clause (v) of the satisfaction relation, this implies that M , s ⊨ ¬ A r .
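Example 1 can be replayed mechanically. The following sketch is our own encoding of a miniature GSS for the house example (all names are ours); it implements Clauses (iii)–(v) of Definition 5 for this one model and confirms that A r fails at the actual state.

```python
# A miniature GSS for the house example (our own encoding; all names ours).
# The objective state s = pqr is projected to the subjective state "pq",
# whose space S_{p,q} interprets only the atoms p and q.

AWARE = {"pq": {"p", "q"}}          # atoms interpreted at each subjective state
VAL = {"s": {"p": True, "q": True, "r": True},
       "pq": {"p": True, "q": True}}
R = {"s": ["pq"], "pq": ["pq"]}     # accessibility: all arrows land in S_{p,q}

def atoms(f):
    if f[0] == "atom":
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def in_sublanguage(state, f):
    # Objective states interpret the full language; subjective ones only L_BA(X).
    return state == "s" or atoms(f) <= AWARE[state]

def sat(state, f):
    if f[0] == "atom":
        return VAL[state].get(f[1], False)
    if f[0] == "not":   # Clause (iii): ¬phi also requires phi in the sub-language
        return not sat(state, f[1]) and in_sublanguage(state, f[1])
    if f[0] == "B":     # Clause (iv)
        return all(sat(t, f[1]) for t in R[state])
    if f[0] == "A":     # Clause (v): A phi iff B phi ∨ B ¬B phi
        return sat(state, ("B", f[1])) or sat(state, ("B", ("not", ("B", f[1]))))
    raise ValueError(f)

r, p = ("atom", "r"), ("atom", "p")
```

As in Example 1, neither r nor ¬r holds at the subjective state, B r and B ¬B r fail, and therefore `sat("s", ("A", r))` is False while `sat("s", ("A", p))` is True.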

4.3. Partial Generalized Standard Structures

The preceding definition characterizes awareness in terms of beliefs: Pierre is unaware of ϕ if, and only if, he does not believe that ϕ and does not believe that he does not believe that ϕ. This is unproblematic when one studies, as [5] do, partitional structures, i.e., structures where the accessibility relation is an equivalence relation and thus induces a partition of the state space. Game theorists rely extensively on this special case, which has convenient properties. In this particular case, the fact that an agent does not believe that ϕ and does not believe that she/he does not believe that ϕ implies that she/he does not believe that she/he does not believe...that she/he does not believe that ϕ, at all levels of iteration. However, without such an implication, the equivalence between the fact that an agent is unaware of ϕ and the fact that she/he does not believe that ϕ and does not believe that she/he does not believe that ϕ is dubious, at least in one of the two directions.
We therefore need a more general characterization of awareness and unawareness. Our proposal proceeds from the following observation: in the definition of satisfaction for GSS, the truth-conditions for negated formulas introduce (semantic) partiality. If p ∉ X and s * ∈ S X , then neither M , s * ⊨ p nor M , s * ⊨ ¬ p obtains. Let us indicate by M , s * ↑ ϕ that the formula ϕ is undefined at s * and by M , s * ↓ ϕ that it is defined. The following is true:
Fact 1.
Let M be a GSS and s * ∈ S X for some X ⊆ A t . Then:
M , s * ↓ ϕ   iff   ϕ ∈ L B A ( X )
Proof. 
see Appendix A.2. ☐
We suggest keeping the underlying GSS but changing the definition of (un)awareness. Semantic partiality, stressed above, is an attractive guide. In our introductory example, one would like to say that the possible states that Pierre conceives of do not “answer” the question “Is it true that r?”, whereas they do answer the questions “Is it true that p?” and “Is it true that q?”. In other words, the possible states that Pierre conceives of make neither r nor ¬ r true. Awareness can be defined semantically in terms of partiality:
M , s ⊨ A ϕ   iff   M , ρ ( s ) ↓ ϕ
Of course, the appeal of this characterization depends on the already given condition: if ρ ( s ) ∈ S X , then R ( s ) ⊆ S X . Let us call a partial GSS a GSS where the truth conditions of the (un)awareness operator are given in terms of partiality:
Fact 2.
Symmetry, distributivity over ∧, self-reflection, U-introspection, plausibility and strong plausibility are valid under partial GSSs. Furthermore, BU-introspection is valid under serial partial GSSs.
Proof. 
This is left to the reader. ☐

5. Probabilistic Logic without Full Awareness

5.1. Language

Definition 6 (Probabilistic language with awareness).
The set of formulas of a probabilistic language with awareness L L A ( A t ) based on a set A t of propositional variables is defined by:
ϕ : : = p | ⊤ | ⊥ | ¬ ϕ | ϕ ∧ ϕ | L a ϕ | A ϕ
where p ∈ A t and a ∈ [ 0 , 1 ] ∩ Q .

5.2. Generalized Standard Probabilistic Structures

Probabilistic structures make a full awareness assumption, exactly as Kripke structures do. An obvious way to weaken this assumption is to introduce into the probabilistic setting the same kind of modification as the one investigated in the previous section. The probabilistic counterpart of generalized standard structures is the following notion of a generalized standard probabilistic structure (GSPS):
Definition 7 (Generalized standard probabilistic structure).
A generalized standard probabilistic structure for L L A ( A t ) is a tuple
M = ( S , ( S X ) X ⊆ A t , ( Σ X ) X ⊆ A t , π , ( P X ) X ⊆ A t , ρ , π′ ) where:
(i) 
S is a state space.
(ii) 
The S X , where X ⊆ A t , are disjoint “subjective” state spaces; let S′ = ⋃ X ⊆ A t S X . For each X ⊆ A t , Σ X is a σ-field of subsets of S X .
(iii) 
π : S × A t → { 0 , 1 } is a valuation.
(iv) 
P X : S X → Δ ( S X , Σ X ) is a measurable mapping from S X to the set of probability measures on Σ X endowed with the σ-field generated by the sets { μ ∈ Δ ( S X , Σ X ) : μ ( E ) ≥ a } for all E ∈ Σ X , a ∈ [ 0 , 1 ] .
(v) 
ρ : S → S′ is an onto map s.t. if ρ ( s ) = ρ ( t ) ∈ S X , then for each atomic formula p ∈ X , π ( s , p ) = π ( t , p ) . By definition, P * ( s * ) = P * ( ρ ( s * ) ) if s * ∈ S and P * ( s * ) = P X ( s * ) if s * ∈ S X .
(vi) 
π′ : S′ × A t → { 0 , 1 } extends π to S′ as follows: for all s′ ∈ S X , π′ ( s′ , p ) = 1 iff p ∈ X and for all s ∈ ρ − 1 ( s′ ) , π ( s , p ) = 1 . For every p ∈ A t , π′ ( . , p ) is measurable w.r.t. ( S X , Σ X ) .
Two comments are in order. First, Clause (iv) does not introduce any special measurability condition for the newly-introduced awareness operator (by contrast, there is still a condition for the doxastic operators). The reason is that in a given subjective state space S X , for any formula ϕ, either there is awareness of ϕ everywhere, in which case [ [ A ϕ ] ] ∩ S X = S X , or there is never awareness of ϕ, in which case [ [ A ϕ ] ] ∩ S X = ∅ . These two events are of course already in any Σ X . Second, Clause (v) imposes conditions on the projection ρ. With respect to GSSs, the only change is that we do not require something like: if ρ ( s ) ∈ S X , then R ( s ) ⊆ S X . The counterpart would be that if ρ ( s ) ∈ S X , then S u p p ( P ( s ) ) ⊆ S X . However, this is automatically satisfied by the definition.
Example 2.
In Figure 5, the support of the probability distribution associated with s = p q r is { ρ ( s ) = p q , p ¬ q } , with P { p , q } ( ρ ( s ) ) ( { p q } ) = a and P { p , q } ( ρ ( s ) ) ( { p ¬ q } ) = 1 − a .
Definition 8 (Satisfaction relation for GSPS).
The satisfaction relation for GSPS is defined for each s * ∈ S * = S ∪ S′ :
(i) 
M , s * ⊨ p iff π ( s * , p ) = 1
(ii) 
M , s * ⊨ ϕ ∧ ψ iff M , s * ⊨ ϕ and M , s * ⊨ ψ
(iii) 
M , s * ⊨ ¬ ϕ iff M , s * ⊭ ϕ and either s * ∈ S , or s * ∈ S X and ϕ ∈ L L A ( X )
(iv) 
M , s * ⊨ L a ϕ iff P * ( s * ) ( [ [ ϕ ] ] ) ≥ a and M , ρ ( s * ) ↓ ϕ
(v) 
M , s * ⊨ A ϕ iff M , ρ ( s * ) ↓ ϕ
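Clause (iv) is the distinctive one: a probabilistic belief in ϕ requires both enough probability mass on [[ϕ]] and definedness of ϕ at the subjective image. A minimal sketch of this clause (our own encoding, with a single subjective space over {p, q}; all names are ours):

```python
# A minimal GSPS fragment (our own encoding): one subjective space S_{p,q}
# with two states, a probability measure over it, and Clause (iv) for L_a.

SUBJ = {"pq": {"p": True, "q": True}, "pnq": {"p": True, "q": False}}
AWARE = {"p", "q"}                      # the set X the agent is aware of
PROB = {"pq": 0.6, "pnq": 0.4}          # P(s) at the relevant state

def atoms(f):
    if f[0] == "atom":
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def sat(state, f):
    if f[0] == "atom":
        return SUBJ[state].get(f[1], False)
    if f[0] == "not":
        return not sat(state, f[1]) and atoms(f[1]) <= AWARE
    if f[0] == "and":
        return sat(state, f[1]) and sat(state, f[2])
    raise ValueError(f)

def L(a, f):
    """Clause (iv): L_a phi holds iff phi is defined (its atoms lie in X)
    AND the probability mass on [[phi]] is at least a."""
    defined = atoms(f) <= AWARE
    mass = sum(PROB[t] for t in SUBJ if sat(t, f))
    return defined and mass >= a

q, r = ("atom", "q"), ("atom", "r")
```

Here `L(0.6, q)` holds and `L(0.7, q)` fails; more interestingly, `L(0, q)` holds while `L(0, r)` fails, which is the minimality principle L 0 ϕ ↔ A ϕ at work.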
The following fact is the counterpart for GSPS of what was proven above for GSS.
Fact 3.
Let M be a GSPS and s′ = ρ ( s ) ∈ S X for some s ∈ S . Then:
M , s′ ↓ ϕ   iff   ϕ ∈ L L A ( X )
Proof. 
The proof is analogous to the one provided for Fact 1 above.  ☐
One can show that all of the properties mentioned in Section 2.3 are valid under GSPSs.
Proposition 1.
For all GSPSs M and all standard states s ∈ S , the following formulas are satisfied:
A ϕ ↔ A ¬ ϕ (symmetry)
A ( ϕ ∧ ψ ) ↔ A ϕ ∧ A ψ (distributivity over ∧)
A ϕ → A A ϕ (self-reflection)
¬ A ϕ → ¬ A ¬ A ϕ (U-introspection)
¬ A ϕ → ¬ L a ϕ ∧ ¬ L a ¬ L a ϕ (plausibility)
¬ A ϕ → ( ¬ L a ) n ϕ for all n ∈ ℕ (strong plausibility)
¬ L a ¬ A ϕ ( L a -U-introspection)
L 0 ϕ ↔ A ϕ (minimality)
Proof. 
This is left to the reader. ☐

5.3. Axiomatization

Proposition 1 suggests that GSPSs provide a plausible analysis of awareness and unawareness in a probabilistic setting. To gain a more comprehensive understanding of this model, we need to investigate its logical properties. It turns out that an axiom system can be given that is weakly complete with respect to GSPSs. We call it system H M U .
           System H M U
 
(PROP) Instances of propositional tautologies
(MP) From ϕ and ϕ ψ , infer ψ
 
(A1) A ϕ ↔ A ¬ ϕ
(A2) A ( ϕ ∧ ψ ) ↔ A ϕ ∧ A ψ
(A3) A ϕ → A A ϕ
(A4 L ) A ϕ ↔ A L a ϕ
(A5 L ) A ϕ → L 1 A ϕ
 
(L1 U ) A ϕ ↔ L 0 ϕ
(L2 U ) A ϕ → L a ( ϕ ∨ ¬ ϕ )
(L3) L a ϕ → ¬ L b ¬ ϕ ( a + b > 1 )
(L4 U ) ( ¬ L a ϕ ∧ A ϕ ) → M a ϕ
 
(RE U ) From ϕ ↔ ψ and V a r ( ϕ ) = V a r ( ψ ) , infer ( L a ϕ ↔ L a ψ )
 
(B U ) From ( ( ϕ 1 , . . . , ϕ m ) ↔ ( ψ 1 , . . . , ψ n ) ) , infer:
( ( ⋀ i = 1 m L a i ϕ i ∧ ⋀ j = 2 n M b j ψ j ) → ( A ψ 1 → L ( a 1 + . . . + a m ) − ( b 2 + . . . + b n ) ψ 1 ) )
Some comments are in order. (a) Axioms (A1)–(A5 L ) concern the awareness operator and its relationship with the doxastic operators. The subscript “ L ” indicates an axiom that involves a probabilistic doxastic operator, to be distinguished from its epistemic counterpart indicated by “ B ” in the axiom system for doxastic logic reproduced in Appendix A.3. The other axioms and inference rules were already part of probabilistic logic, without the awareness operator appearing in them. The subscript “ U ” indicates a modification with respect to the system H M due to the presence of the awareness operator. (b) Axiom (L1 U ) substitutes A ϕ ↔ L 0 ϕ for L 0 ϕ . This means that in our system, the awareness operator can be defined in terms of the probabilistic operator. However, this does not imply that a non-standard semantics is not needed: if one defines A ϕ as L 0 ϕ , A ϕ is valid for any ϕ in standard semantics. (c) The relationship between logical omniscience and full awareness comes out more clearly in the restriction we have to impose on the rule of equivalence (RE). It is easy to see in semantical terms why the rule of equivalence no longer holds universally. Suppose that L 1/2 p holds at some state of some model. From propositional logic, we know that p ↔ ( p ∧ q ) ∨ ( p ∧ ¬ q ) . However, if the agent is not aware of q, it is not true that L 1/2 ( ( p ∧ q ) ∨ ( p ∧ ¬ q ) ) . (d) For the same kind of reason, the inference rule (B) no longer holds universally. Consider for instance ϕ 1 = ( p ∨ ¬ p ) , ϕ 2 = ( r ∨ ¬ r ) , ψ 1 = ( q ∨ ¬ q ) and ψ 2 = ( p ∨ ¬ p ) . Additionally, suppose that the agent is aware of p and r, but not of q. Clearly, the premise of (B), i.e., ( ( ϕ 1 , ϕ 2 ) ↔ ( ψ 1 , ψ 2 ) ) , is satisfied. Furthermore, the antecedent of (B)’s conclusion, i.e., L 1 ϕ 1 ∧ L 1 ϕ 2 ∧ M 1 ψ 2 , is satisfied as well. However, since the agent is unaware of q, we cannot conclude what we should conclude were (B) valid, i.e., that L 1 ψ 1 .
We are now ready to formulate our main result.
Theorem 1
(Soundness and completeness of H M U ). Let ϕ ∈ L L A ( A t ) . Then:
⊨ G S P S ϕ   iff   ⊢ H M U ϕ
Proof. 
See the Appendix B.  ☐

6. Conclusions

This study of unawareness in probabilistic logic could be extended in several directions. First, we did not deal with the extension to a multi-agent framework, an issue tackled recently by [8]; second, we did not investigate applications to decision theory or game theory. However, we would like to end by stressing another issue that is less often evoked and nonetheless conceptually very challenging: the dynamics of awareness. The current framework, like much of the existing work, is static, i.e., it captures the awareness and doxastic states at a given time. It says nothing about the fact that, during an inquiry, an agent may become aware of some new possibilities.
Let us consider our initial example where Pierre is aware of p and q, but not of r, and let us suppose that Pierre’s partial beliefs are represented by some probability distribution on a subjective state space S { p , q } . Assume that at some time Pierre becomes aware of r; for instance, someone has asked him whether he thinks that r is likely or not. It seems that our framework can be extended to accommodate the situation: Pierre’s new doxastic state will be represented on a state space S { p , q , r } appropriately connected to the initial one, S { p , q } (see Figure 6). Typically, a state s = p q will be split into two fine-grained states s 1 = p q r and s 2 = p q ¬ r . However, how should Pierre’s partial beliefs evolve? Obviously, a naive Laplacian rule according to which the probability assigned to s is equally allocated to s 1 and s 2 will not be satisfactory. Are there rationality constraints capable of determining a new probability distribution on S { p , q , r } ? Or should we represent the new doxastic state of the agent by a set of probability distributions? We leave the answers to these questions for future investigation.
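For concreteness, the naive Laplacian rule just dismissed can be written out explicitly (our own illustration; the function name and state encoding are ours). The open question raised above is precisely what principled rule should replace the equal split.

```python
# Our own illustration of the naive Laplacian rule discussed above: every
# coarse state is split over the newly-noticed atom, and its probability
# is allocated equally to the two refinements.

def laplacian_refinement(dist, new_atom):
    refined = {}
    for state, p in dist.items():
        refined[state + (new_atom,)] = p / 2
        refined[state + ("not " + new_atom,)] = p / 2
    return refined

# Pierre's beliefs over S_{p,q}, then the refinement once he becomes aware of r.
before = {("p", "q"): 0.6, ("p", "not q"): 0.4}
after = laplacian_refinement(before, "r")
# Total probability mass is preserved; each coarse state's mass is halved.
```

The rule preserves total mass and the marginals over {p, q}, but it imposes a uniform prior over r without any doxastic justification, which is why the text rejects it as unsatisfactory.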

Acknowledgments

I would like to thank for their comments or advice two anonymous referees, C. Dégremont, A. Heifetz, B. Hill, M. Meier, T. Sadzik, B. Schipper and P. Weirich and audiences at the Institut d’Histoire et de Philosophie des Sciences et des Techniques (Paris), the London School of Economics (workshop “Decision, Games and Logic”, London), the Institute for Logic, Language and Computation (seminar “Logic and Games”, Amsterdam), and the Laboratoire d’Algorithmique, Complexité et Logique (Université Paris-Est Créteil). Special thanks are due to Philippe Mongin, who helped me a lot in improving the penultimate version. This work was supported by the Institut Universitaire de France, the ANR-10-LABX-0087 IEC and ANR-10-IDEX-001-02PSL* grants.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Doxastic Logic: Proofs and Illustrations

Appendix A.1. Illustration

Let us consider a GSS M where:
  • the actual state is s ∈ S
  • s is projected into s 1 ∈ S X for some X ⊆ A t
  • R ( s 1 ) = { s 2 , s 3 } , R ( s 2 ) = { s 2 } and R ( s 3 ) = { s 3 }
  • M , s 2 ⊨ p and M , s 3 ⊨ ¬ p
The relevant part of the model is represented in Figure A1.
Figure A1. Unawareness without partitions.
It is easy to check that M , s ⊨ U p and M , s ⊨ B ( B ¬ p ∨ B p ) , since M , s 2 ⊨ B p and M , s 3 ⊨ B ¬ p .
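The belief part of the check can be replayed mechanically. The following Python sketch is an illustrative assumption of ours (state names, formula encoding and the `holds` helper); it deliberately leaves out the awareness machinery, since unawareness of p at the objective state s comes from p being absent from the subjective sub-language, which plain model checking cannot see:

```python
# Subjective state space of the example: accessibility relation R
# and the valuation of p (p is simply undefined at s1).
R = {'s1': {'s2', 's3'}, 's2': {'s2'}, 's3': {'s3'}}
val = {'s2': True, 's3': False}   # truth value of p

def holds(state, formula):
    """Evaluate a formula encoded as a nested tuple:
    ('p',), ('not', f), ('or', f, g), ('B', f)."""
    op = formula[0]
    if op == 'p':
        return val[state]
    if op == 'not':
        return not holds(state, formula[1])
    if op == 'or':
        return holds(state, formula[1]) or holds(state, formula[2])
    if op == 'B':
        return all(holds(t, formula[1]) for t in R[state])
    raise ValueError(op)

Bp, Bnotp = ('B', ('p',)), ('B', ('not', ('p',)))
assert holds('s2', Bp) and holds('s3', Bnotp)
assert holds('s1', ('B', ('or', Bnotp, Bp)))   # B(B¬p ∨ Bp) at s1
assert not holds('s1', Bp) and not holds('s1', Bnotp)
```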

Appendix A.2. Proof of Fact 1

For greater convenience, we give the proof for partial GSSs, but Fact 1 holds for original GSSs, as well. We have to show that if M is a partial GSS and s * ∈ S X for some X ⊆ A t , then ( M , s * ⊨ ϕ or M , s * ⊨ ¬ ϕ ) iff ϕ ∈ L B A ( X ) . The proof is by induction on the complexity of formulas:
  • if ϕ : = p , then M , s * ⊨ p or M , s * ⊨ ¬ p iff ( p ∈ L B A ( X ) and π ( ρ - 1 ( s * ) , p ) = 1 ) or ( p ∈ L B A ( X ) and not π ( ρ - 1 ( s * ) , p ) = 1 ) iff p ∈ L B A ( X ) .
  • if ϕ : = ¬ ψ , then M , s * ⊨ ϕ or M , s * ⊨ ¬ ϕ iff M , s * ⊨ ¬ ψ or M , s * ⊨ ¬ ¬ ψ iff ( ψ ∈ L B A ( X ) and M , s * ⊭ ψ ) or ( ¬ ψ ∈ L B A ( X ) and M , s * ⊭ ¬ ψ ) iff ( ψ ∈ L B A ( X ) and M , s * ⊭ ψ ) or ( ψ ∈ L B A ( X ) and M , s * ⊨ ψ ) iff ψ ∈ L B A ( X ) iff ¬ ψ ∈ L B A ( X )
  • if ϕ : = ψ 1 ∧ ψ 2 , then M , s * ⊨ ϕ or M , s * ⊨ ¬ ϕ iff ( M , s * ⊨ ψ 1 and M , s * ⊨ ψ 2 ) or ( ψ 1 ∧ ψ 2 ∈ L B A ( X ) and ( M , s * ⊭ ψ 1 or M , s * ⊭ ψ 2 )) iff, by IH, ( ψ 1 ∧ ψ 2 ∈ L B A ( X ) and M , s * ⊨ ψ 1 and M , s * ⊨ ψ 2 ) or ( ψ 1 ∧ ψ 2 ∈ L B A ( X ) and not ( M , s * ⊨ ψ 1 and M , s * ⊨ ψ 2 )) iff ψ 1 ∧ ψ 2 ∈ L B A ( X )
  • if ϕ : = B ψ , then M , s * ⊨ ϕ or M , s * ⊨ ¬ ϕ iff (for each t * ∈ R * ( s * ) , M , t * ⊨ ψ ) or ( B ψ ∈ L B A ( X ) and M , s * ⊭ B ψ ) iff, by the induction hypothesis and since each t * ∈ R * ( s * ) belongs to S X , ( B ψ ∈ L B A ( X ) and M , s * ⊨ B ψ ) or ( B ψ ∈ L B A ( X ) and M , s * ⊭ B ψ ) iff B ψ ∈ L B A ( X )
  • if ϕ : = A ψ , then M , s * ⊨ ϕ or M , s * ⊨ ¬ ϕ iff M , s * ⊨ ψ or ( A ψ ∈ L B A ( X ) and M , s * ⊭ A ψ ) iff (by the induction hypothesis) ψ ∈ L B A ( X ) or ( A ψ ∈ L B A ( X ) and M , s * ⊭ ψ ) iff (by the induction hypothesis) ψ ∈ L B A ( X ) or ( A ψ ∈ L B A ( X ) and ψ ∉ L B A ( X ) ) iff ψ ∈ L B A ( X ) iff A ψ ∈ L B A ( X ) .

Appendix A.3. An Axiom System for Partial GSSs

We may obtain a complete axiom system for serial partial GSSs thanks to [37], which relates GSSs to awareness © structures. Actually, one obtains a still closer connection with serial partial GSSs. Let us first restate the definition of awareness © structures.
Definition A1.
An awareness © structure is a 4-tuple ( S , π , R , A ) where
(i) 
S is a state space,
(ii) 
π : A t × S { 0 , 1 } is a valuation,
(iii) 
R S × S is an accessibility relation,
(iv) 
A : S → F o r m ( L B A ( A t ) ) is a function that maps every state to a set of formulas (its “awareness © set”).
The new condition on the satisfaction relation is the following:
M , s ⊨ B ϕ iff for all t ∈ R ( s ) , M , t ⊨ ϕ , and ϕ ∈ A ( s )
Let us say that an awareness © structure M = ( S , R , A , π ) is propositionally determined (p.d.) if (1) for each state s, A ( s ) is generated by some atomic formulas X A t , i.e., A ( s ) = L B A ( X ) , and (2) if t R ( s ) , then A ( s ) = A ( t ) .
Proposition A1 (Adapted from Halpern 2001 Theorem 4.1).
1.
For every serial p.d. awareness © structure M , there exists a serial partial GSS M ′ based on the same state space S and the same valuation π s.t. for all formulas ϕ ∈ L B A ( A t ) and each possible state s
M , s ⊨ a © ϕ   iff   M ′ , s ⊨ p G S S ϕ
2.
For every serial partial GSS M , there exists a serial p.d. awareness © structure M ′ based on the same state space S and the same valuation π s.t. for all formulas ϕ ∈ L B A ( A t ) and each possible state s
M , s ⊨ p G S S ϕ   iff   M ′ , s ⊨ a © ϕ
An axiom system has been devised in [37] that is (sound and) complete with respect to p.d. awareness © structures. An axiom system for serial p.d. awareness © structures can be devised by enriching this axiom system with:
( D U ) B ϕ → ( ¬ B ¬ ϕ ∧ A ϕ )
The resulting axiom system, coined K D U , is this one:
         System K D U
 
(PROP) Instances of propositional tautologies
(MP) From ϕ and ϕ ψ , infer ψ
 
(K) B ϕ ∧ B ( ϕ → ψ ) → B ψ
(Gen) From ϕ, infer A ϕ → B ϕ
 
(D U ) B ϕ → ( ¬ B ¬ ϕ ∧ A ϕ )
(A1) A ϕ ↔ A ¬ ϕ
(A2) A ( ϕ ∧ ψ ) ↔ ( A ϕ ∧ A ψ )
(A3) A ϕ ↔ A A ϕ
(A4 B ) A B ϕ ↔ A ϕ
(A5 B ) A ϕ → B A ϕ
(Irr) If no atomic formula of ϕ appears in ψ, from ⊢ U ϕ → ψ , infer ⊢ ψ
The following derives straightforwardly from Proposition A1.
Proposition A2 (Soundness and completeness theorem).
Let ϕ L B A ( A t ) . Then:
⊨ s p G S S ϕ   iff   ⊢ K D U ϕ

Appendix B. Probabilistic Logic: Proof of the Completeness Theorem for HMU

Proof. 
(⇐) Soundness is easily checked and is left to the reader. (⇒) We have to show that if ⊨ G S P S ϕ , then ⊢ H M U ϕ . The proof relies on the well-known method of filtration. First, we define a restricted language L [ ϕ ] L A as in [19]: L [ ϕ ] L A contains:
  • as atomic formulas, only V a r ( ϕ ) , i.e., the atomic formulas occurring in ϕ,
  • only probabilistic operators L a with a belonging to the finite set Q ( ϕ ) of rational numbers of the form p / q , where q is the smallest common denominator of the indexes occurring in ϕ and
  • only formulas of epistemic depth smaller than or equal to that of ϕ (an important point is that we stipulate that the awareness operator A does not add any epistemic depth to a formula: d p ( A ψ ) = d p ( ψ ) ).
As we will show, the resulting language L [ ϕ ] L A is finitely generated: there is a finite subset B of L [ ϕ ] L A , called a base, such that for every ψ ∈ L [ ϕ ] L A , there is a formula ψ ′ in the base such that ⊢ H M U ψ ↔ ψ ′ . In probabilistic structures, it is easy to construct such a base23. The basic idea is this:
(1)
consider D 0 , the set of all the disjunctive normal forms built from B 0 = V a r ( ϕ ) , the set of propositional variables occurring in ϕ.
(2)
B k is the set of formulas L a ψ for all a ∈ Q ( ϕ ) , where ψ is a disjunctive normal form built with “atoms” coming from B 0 to B k - 1 .
(3)
the construction has to be iterated up to the epistemic depth n of ϕ, hence to B n . The base B is D n , i.e., the set of disjunctive normal forms built with “atoms” from B 0 to B n .
Obviously, B is finite. It can be shown by induction that L [ ϕ ] L A is finitely generated by B . For formulas with a Boolean connective as top connective, this is obvious. For formulas of the form L a ψ , it comes from the substitutability of logically equivalent formulas: by the induction hypothesis, there is a formula ψ ′ equivalent to ψ in B . Therefore, there is in B a formula equivalent to L a ψ ′ . However, since ⊢ H M ψ ↔ ψ ′ , it follows that ⊢ H M L a ψ ↔ L a ψ ′ . We will now show how to unfold these ideas formally.
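The finiteness claim can be checked mechanically on a toy instance. The following Python sketch enumerates the depth-0 DNFs and the depth-1 atomic components for Var(ϕ) = {p}; the string encoding of literals, the restriction to non-empty disjunctions and the choice Q(ϕ) = {0, 1/2, 1} are illustrative assumptions of ours:

```python
from itertools import combinations, product

def conjunctions(atoms):
    """All conjunctions with exactly one literal per atomic component."""
    return [tuple(('' if sign else '¬') + a for a, sign in zip(atoms, signs))
            for signs in product([True, False], repeat=len(atoms))]

def dnfs(atomic_components):
    """All non-empty disjunctive normal forms over the components,
    a DNF being encoded as the set of its conjunctions."""
    conjs = conjunctions(atomic_components)
    return [frozenset(c)
            for r in range(1, len(conjs) + 1)
            for c in combinations(conjs, r)]

D0 = dnfs(['p'])            # depth-0 layer for Var(ϕ) = {p}
assert len(D0) == 3         # p, ¬p, p ∨ ¬p

Q = ['0', '1/2', '1']       # a toy Q(ϕ)
B1 = [(a, psi) for a in Q for psi in D0]   # stands for L_a ψ
assert len(B1) == 9         # the depth-1 components form a finite set
```

Each layer is finite, which is the whole point; the sizes explode quickly (|D_k| is doubly exponential in the number of components), but finiteness is all the filtration argument needs.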
Definition B1.
Let X = { ψ 1 , . . . , ψ m } be a finite set of formulas. D N F ( X ) is the set of disjunctive normal forms that can be built from X , i.e., the set of all possible disjunctions of conjunctions of the form e 1 ψ 1 ∧ . . . ∧ e m ψ m , where each e i is a blank or ¬. The members of X are called the atomic components of D N F ( X ) .
Definition B2.
The base B for a language L [ ϕ ] L where d p ( ϕ ) = n is defined as D n in the following doubly-inductive construction:
(i) 
B 0 = V a r ( ϕ ) ( B 0 is the set of atomic components of epistemic depth 0)
(i’) 
D 0 = D N F ( B 0 ) ( D 0 is the set of disjunctive normal forms based on B 0 )
(ii) 
B k = { L a ψ : ψ ∈ D k - 1 }
(ii’) 
D k = D N F ( ⋃ l = 0 k B l ) .
Notation B1.
Let ψ ∈ D N F ( X ) and X ⊆ Y . The expansion of ψ in D N F ( Y ) is the formula obtained by the replacement of each conjunction e 1 ψ 1 ∧ . . . ∧ e m ψ m occurring in ψ by a disjunction of all possible conjunctions built from e 1 ψ 1 ∧ . . . ∧ e m ψ m by adding literals of atomic components in Y - X .
For instance, consider X = { p } and Y = { p , q } : the DNF p is expanded into ( p ∧ q ) ∨ ( p ∧ ¬ q ) .
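Under the same illustrative frozenset encoding (a DNF as a set of conjunctions, a conjunction as a set of literal strings, both assumptions of ours), the expansion operation of Notation B1 can be sketched as:

```python
from itertools import product

def expand(dnf, new_atoms):
    """Replace each conjunction of the DNF by the disjunction of its
    completions with one literal per new atomic component."""
    completions = [frozenset(('' if sign else '¬') + a
                             for a, sign in zip(new_atoms, signs))
                   for signs in product([True, False], repeat=len(new_atoms))]
    return frozenset(conj | extra for conj in dnf for extra in completions)

# Expanding the DNF "p" from X = {p} to Y = {p, q}:
p = frozenset([frozenset({'p'})])
assert expand(p, ['q']) == frozenset([frozenset({'p', 'q'}),
                                      frozenset({'p', '¬q'})])
```

The result is propositionally equivalent to the original DNF, which is exactly what Fact B1(ii) exploits.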
Fact B1.
(i) 
For all k and all ψ ∈ D k ∪ B k , d p ( ψ ) = k .
(ii) 
For each ψ ∈ D k and each D l with l > k , there is a formula ψ ′ ∈ D l , which is equivalent to ψ.
Proof. 
(i) is obvious; (ii) follows from the fact that any formula ψ ∈ D k can be expanded into ψ ′ ∈ D l , l > k , and that (by propositional reasoning) ψ and ψ ′ are equivalent.  ☐
It can be proven that B = D n is a finite base for L [ ϕ ] L . First, for each ψ ∈ L [ ϕ ] L , there is a formula ψ ′ in D l s.t. ⊢ H M ψ ↔ ψ ′ , where d p ( ψ ) = l . Since ψ ′ can be expanded into a logically equivalent formula ψ ″ ∈ D n , it is sufficient to conclude that for each ψ ∈ L [ ϕ ] L , there is an equivalent formula in the base.
(i)
ψ : = p : ψ is obviously equivalent to some DNF in D 0 and d p ( p ) = 0 .
(ii)
ψ : = ( χ 1 ∧ χ 2 ) : by the induction hypothesis, there is χ 1 ′ equivalent to χ 1 in D d p ( χ 1 ) and χ 2 ′ equivalent to χ 2 in D d p ( χ 2 ) . Suppose w.l.o.g. that d p ( χ 2 ) > d p ( χ 1 ) and, therefore, that d p ( ψ ) = d p ( χ 2 ) . Then, χ 1 ′ can be expanded into χ 1 ″ ∈ D d p ( χ 2 ) . Obviously, the disjunction of the conjunctions occurring both in χ 1 ″ and χ 2 ′ is in D d p ( χ 2 ) and equivalent to ψ.
(iii)
ψ : = L a χ : by IH, there is χ ′ equivalent to χ in D d p ( χ ) . Note that d p ( χ ) < n = d p ( ϕ ) . By construction, L a ( χ ′ ) ∈ B d p ( χ ) + 1 . Consequently, there will be in D d p ( χ ) + 1 a DNF ψ ′ equivalent to L a ( χ ′ ) . Since d p ( χ ) + 1 ≤ n , this DNF can be associated by expansion to a DNF in the base D n . Furthermore, since ⊢ H M χ ↔ χ ′ and ⊢ H M L a χ ′ ↔ ψ ′ , it follows by the rule of equivalence that ⊢ H M L a χ ↔ ψ ′ .
Two changes are needed to deal with unawareness. (1) First, the awareness operator A has to be included. This is not problematic, given that for any formula ψ, ⊢ H M U A ψ ↔ ⋀ m A p m where V a r ( ψ ) = { p 1 , . . . , p m , . . . , p M } . Consequently, the only modification is to include every formula A p with p ∈ V a r ( ϕ ) in B 0 . (2) With unawareness, it is no longer true that if ⊢ H M U ψ ↔ χ , then ⊢ H M U L a ψ ↔ L a χ . For instance, it is not true that ⊢ H M U L a p ↔ L a ( ( p ∧ q ) ∨ ( p ∧ ¬ q ) ) : the agent may be unaware of q. Nevertheless, the rule holds in a restricted form: under the assumption that V a r ( ψ ) = V a r ( χ ) , if ⊢ H M U ψ ↔ χ , then ⊢ H M U L a ψ ↔ L a χ (RE U ). We can use this fact to make another change to the base: instead of considering only the disjunctive normal forms built from the whole set V a r ( ϕ ) , we consider the disjunctive normal forms built from any non-empty subset X ⊆ V a r ( ϕ ) .
Definition B3.
Let X V a r ( ϕ ) ;
(i) 
B 0 X = X ∪ { A p : p ∈ X }
(i’) 
D 0 X = D N F ( B 0 X )
(ii) 
B k X = { L a ψ : ψ ∈ D k - 1 X } and
(ii’) 
D k X = D N F ( ⋃ l = 0 k B l X ) .
Fact B2.
(i) 
For all k ≤ n and all ψ ∈ D k X , where X ⊆ V a r ( ϕ ) , d p ( ψ ) = k .
(ii) 
For all X ⊆ V a r ( ϕ ) , ψ ∈ D k X and D l X with l > k , there is a formula ψ ′ ∈ D l X , which is equivalent to ψ.
(iii) 
For all X ⊆ Y ⊆ V a r ( ϕ ) , if ψ ∈ D k X , then there is a formula ψ ′ , which is equivalent to ψ, in D k Y .
(iv) 
For all X ⊆ V a r ( ϕ ) and ψ ∈ D k X , V a r ( ψ ) = X .
Proof. 
(i)–(ii) are similar to the classical case; (iii) is straightforwardly implied by Clause (ii’) of Definition B3; (iv) is obvious.  ☐
We are now ready to prove that B = ⋃ X ⊆ V a r ( ϕ ) D n X is a base for L [ ϕ ] L A . We will actually show that for any ψ ∈ L [ ϕ ] L A with d p ( ψ ) = k , there are X ⊆ V a r ( ϕ ) and ψ ′ ∈ D k X s.t. ⊢ H M U ψ ↔ ψ ′ and d p ( ψ ′ ) = d p ( ψ ) = k and V a r ( ψ ′ ) = V a r ( ψ ) (Induction Hypothesis, IH).
(i)
ψ : = p : ψ is obviously equivalent to some DNF ψ ′ in D 0 { p } . Clearly, d p ( ψ ′ ) = d p ( ψ ) and V a r ( ψ ′ ) = V a r ( ψ ) .
(ii)
ψ : = ( χ 1 ∧ χ 2 ) : by IH,
- there is χ 1 ′ s.t. ⊢ H M U χ 1 ↔ χ 1 ′ and V a r ( χ 1 ) = V a r ( χ 1 ′ ) = X 1 and χ 1 ′ ∈ D d p ( χ 1 ) X 1
- there is χ 2 ′ s.t. ⊢ H M U χ 2 ↔ χ 2 ′ and V a r ( χ 2 ) = V a r ( χ 2 ′ ) = X 2 and χ 2 ′ ∈ D d p ( χ 2 ) X 2
Let us consider X = X 1 ∪ X 2 and suppose without loss of generality that d p ( χ 2 ) > d p ( χ 1 ) . One may expand χ 1 ′ from D d p ( χ 1 ) X 1 to D d p ( χ 1 ) X and expand the resulting DNF to χ 1 ″ ∈ D d p ( χ 2 ) X . On the other hand, χ 2 ′ may be expanded to χ 2 ″ ∈ D d p ( χ 2 ) X . ψ ′ is the disjunction of the conjunctions common to χ 1 ″ and χ 2 ″ . Obviously, d p ( ψ ′ ) = d p ( ψ ) and V a r ( ψ ′ ) = V a r ( ψ ) .
(iii)
ψ : = A χ : by IH, there is χ ′ equivalent to χ in D d p ( χ ) X with V a r ( χ ′ ) = V a r ( χ ) . A χ is equivalent to ⋀ m A p m where V a r ( χ ) = { p 1 , . . . , p m , . . . , p M } . Each A p m is in B 0 X , so by expansion in D d p ( χ ) X , there is a DNF equivalent to it and, therefore, a DNF equivalent to ⋀ m A p m .
(iv)
ψ : = L a χ : by IH, there is χ ′ equivalent to χ in D d p ( χ ) X with d p ( χ ′ ) = d p ( χ ) and V a r ( χ ′ ) = V a r ( χ ) . Note that d p ( χ ) < n = d p ( ϕ ) . By construction, L a ( χ ′ ) ∈ B d p ( χ ) + 1 X . Consequently, there will be in D d p ( χ ) + 1 X a DNF ψ ′ logically equivalent to L a ( χ ′ ) . Since d p ( χ ) + 1 ≤ n , there will be in the base a formula ψ ″ logically equivalent to ψ ′ . Furthermore, since ⊢ H M U χ ↔ χ ′ and V a r ( χ ) = V a r ( χ ′ ) and ⊢ H M U L a χ ′ ↔ ψ ′ , it follows that ⊢ H M U L a χ ↔ ψ ″ .
We will now build: (1) the objective state space; (2) the subjective states spaces and the projection ρ; and (3) the probability distributions.
(1) The objective state space:
The objective states of the ϕ-canonical structure are the intersections of the maximally-consistent sets of formulas of the language L L A ( A t ) with the restricted language L [ ϕ ] L A :
S ϕ = { Γ ∩ L [ ϕ ] L A : Γ   is   a   maximal   H M U -consistent   set }
First, let us notice that the system H M U is a “modal logic” in the sense of ([38], p. 191): a set of formulas (1) that contains every propositional tautology and (2) for which the Lindenbaum lemma holds.
Definition B4.
(i) 
A formula ϕ is deducible from a set of formulas Γ, symbolized Γ ⊢ H M U ϕ , if there exist formulas ψ 1 , . . . , ψ n in Γ s.t. ⊢ H M U ( ψ 1 ∧ . . . ∧ ψ n ) → ϕ .
(ii) 
A set of formulas Γ is H M U -consistent if it is false that Γ ⊢ H M U ⊥
(iii) 
A set of formulas Γ is maximally H M U -consistent if (1) it is H M U -consistent and (2) it is not properly included in any H M U -consistent set of formulas.
Lemma B1 (Lindenbaum Lemma).
If Γ is a set of H M U -consistent formulas, then there exists an extension Γ + of Γ that is maximally H M U -consistent.
Proof. 
See, for instance, [38] (p.199).  ☐
Notation B2.
For each formula ψ ∈ L [ ϕ ] L A , let us note [ ψ ] = { s ∈ S ϕ : ψ ∈ s }
Lemma B2.
The set S ϕ is finite.
Proof. 
This Lemma is a consequence of the fact that L [ ϕ ] L A is finitely generated.
(a)
Let us say that two sets of formulas are Δ-equivalent if they agree on each formula that belongs to Δ. S ϕ results from identifying the maximal H M U -consistent sets of formulas that are L [ ϕ ] L A -equivalent. S ϕ is infinite iff there are infinitely many maximal H M U -consistent sets of formulas that are not pairwise L [ ϕ ] L A -equivalent.
(b)
If B is a base for L [ ϕ ] L A , then two sets of formulas are L [ ϕ ] L A -equivalent iff they are B -equivalent. Suppose that Δ 1 and Δ 2 are not L [ ϕ ] L A -equivalent. This means w.l.o.g. that there is a formula ψ s.t. (i) ψ ∈ Δ 1 , (ii) ψ ∉ Δ 2 and (iii) ψ ∈ L [ ϕ ] L A . Let ψ ′ ∈ B be a formula s.t. ⊢ H M U ψ ↔ ψ ′ . Clearly, ψ ′ ∈ Δ 1 and ψ ′ ∈ L [ ϕ ] L A and ¬ ψ ′ ∈ Δ 2 . Therefore, Δ 1 and Δ 2 are not B -equivalent. The other direction is obvious.
(c)
Since B is finite, there are only finitely many maximal H M U -consistent sets of formulas that are not pairwise B -equivalent. Therefore, S ϕ is finite.
(2) The subjective state spaces and the projection ρ ( . ) :
As it might be expected, the subjective state associated with an objective state Γ S ϕ will be determined by the formulas that the agent is aware of in Γ.
Definition B5.
For any set of formulas Γ, let V a r ( Γ ) be the set of atomic formulas that occur in the formulas that belong to Γ. For any Γ S ϕ , let:
(i) 
A + ( Γ ) = { ψ : A ψ ∈ Γ } and A - ( Γ ) = { ψ : ¬ A ψ ∈ Γ }
(ii) 
a + ( Γ ) = { p ∈ V a r ( L [ ϕ ] L A ) : A p ∈ Γ } and a - ( Γ ) = { p ∈ V a r ( L [ ϕ ] L A ) : ¬ A p ∈ Γ } .
Lemma B3.
Let Γ S ϕ .
(i) 
A + ( Γ ) = L [ ϕ ] L A ( a + ( Γ ) )
(ii) 
A + ( Γ ) ∪ A - ( Γ ) = L [ ϕ ] L A
Proof. 
(i) follows from (A1)-(A4 L ); (ii) follows from (i) and the fact that, since Γ comes from a maximal consistent set, ¬ ψ ∈ Γ iff ψ ∉ Γ .  ☐
One may group the sets that have the same awareness profile into equivalence classes: | Γ | a = { Δ ∈ S ϕ : a + ( Δ ) = a + ( Γ ) } . The sets that belong to the same equivalence class | Γ | a will be mapped into the same subjective state space S | Γ | a . We are now ready to define the projection ρ and these subjective state spaces.
Definition B6.
The projection ρ : S ϕ → ⋃ Γ ∈ S ϕ S | Γ | a is defined by:
ρ ( Γ ) = Γ ∩ A + ( Γ )
where S | Γ | a = { Δ ∩ A + ( Γ ) ∩ L [ ϕ ] L A : Δ is a maximal H M U -consistent set and a + ( Δ ) = a + ( Γ ) } .
Note that in the particular case where the agent is unaware of every formula, A + ( Γ ) = ∅ . Therefore, each objective state where the agent is unaware of every formula will be projected into the same subjective state space S ∅ = { ∅ } . More importantly, one can check that ρ is an onto map: suppose that Λ ∈ S | Γ | a where Γ ∈ S ϕ . By definition, for some Δ (a maximal H M U -consistent set), Λ = Δ ∩ A + ( Γ ) ∩ L [ ϕ ] L A and a + ( Δ ) = a + ( Γ ) . As a consequence, A + ( Δ ) = A + ( Γ ) , and therefore, Λ = Δ ∩ A + ( Δ ) ∩ L [ ϕ ] L A . Hence, Λ = ρ ( Δ ∩ L [ ϕ ] L A ) . One can show also the following lemma.
Lemma B4.
(i) 
For each Γ ∈ S ϕ , S | Γ | a is finite.
(ii) 
For each subset E ⊆ S | Γ | a , there is ψ ∈ A + ( Γ ) ∩ L [ ϕ ] L A s.t. E = [ ψ ] S | Γ | a , where [ ψ ] S | Γ | a denotes the set of states of S | Γ | a to which ψ belongs.
(iii) 
For all ψ 1 , ψ 2 ∈ A + ( Γ ) ∩ L [ ϕ ] L A , [ ψ 1 ] S | Γ | a ⊆ [ ψ 2 ] S | Γ | a iff ⊢ H M U ψ 1 → ψ 2
Proof. 
(i) follows trivially, since the objective state space is already finite; (ii) let us pick a finite base B Γ for A + ( Γ ) ∩ L [ ϕ ] L A . For each element β of this base and each Δ ∈ S | Γ | a , either β ∈ Δ or ¬ β ∈ Δ . Two distinct sets Δ and Δ ′ ∈ S | Γ | a differ at least by one such formula of B Γ . Let C ( Δ ) = ⋀ m e m β m where β m ∈ B Γ and e m is a blank if β m ∈ Δ and ¬ if β m ∉ Δ . For two distinct sets Δ and Δ ′ , C ( Δ ) ≠ C ( Δ ′ ) . For each event E ⊆ S | Γ | a , one can therefore consider the disjunction ⋁ k C ( Δ k ) for each Δ k ∈ E . Such a formula belongs to each Δ k and only to these Δ k . (iii) (⇒) For each formula ψ ∈ A + ( Γ ) ∩ L [ ϕ ] L A and each Δ ∈ S | Γ | a , ¬ ψ ∈ Δ iff ψ ∉ Δ . Therefore, there are two possibilities for any Δ: either ψ ∈ Δ or ¬ ψ ∈ Δ . (a) If ψ 1 ∈ Δ , then by hypothesis ψ 2 ∈ Δ and, given the construction of the language, ¬ ψ 1 ∨ ψ 2 ∈ Δ , hence ψ 1 → ψ 2 ∈ Δ . (b) If ψ 1 ∉ Δ , then ¬ ψ 1 ∈ Δ , hence ψ 1 → ψ 2 ∈ Δ . This implies that for any Δ, ψ 1 → ψ 2 ∈ Δ . Given the definition of S | Γ | a and the properties of maximal consistent sets, this implies that ⊢ H M U ψ 1 → ψ 2 . (⇐) Given the construction of the language, if ψ 1 , ψ 2 ∈ A + ( Γ ) ∩ L [ ϕ ] L A , then ψ 1 → ψ 2 ∈ A + ( Γ ) ∩ L [ ϕ ] L A . Since ⊢ H M U ψ 1 → ψ 2 , for each Δ, ψ 1 → ψ 2 ∈ Δ . If ψ 1 ∈ Δ , clearly ψ 2 ∈ Δ , as well. Therefore, [ ψ 1 ] S | Γ | a ⊆ [ ψ 2 ] S | Γ | a .  ☐
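The characteristic-formula construction used in part (ii) can be sketched in Python. The two-element base and the encoding of a state by the set of base formulas it contains are illustrative assumptions of ours:

```python
BASE = ['b1', 'b2']   # a toy finite base for the relevant sub-language

def char(state):
    """C(Δ): fixes, for each base formula, whether it belongs to Δ."""
    return frozenset(b if b in state else '¬' + b for b in BASE)

def satisfies(state, disjunction):
    """A state satisfies a disjunction of characteristic conjunctions
    iff its own characteristic conjunction is one of the disjuncts."""
    return char(state) in disjunction

# States encoded by the set of base formulas they contain.
states = [frozenset(), frozenset({'b1'}), frozenset({'b1', 'b2'})]
E = states[1:]                       # an arbitrary event
formula_E = {char(s) for s in E}     # the disjunction of the C(Δ_k)

# The formula holds exactly at the states of E.
assert all(satisfies(s, formula_E) == (s in E) for s in states)
```

This is the reason why every event of the finite subjective space is definable by a formula the agent is aware of.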
(3) The probability distributions:
Definition B7.
For Γ S ϕ and ψ L [ ϕ ] L A , let:
  • a ˜ = max { a : L a ψ ∈ Γ }
  • b ˜ = min { b : M b ψ ∈ Γ }
In the classical case [19], a ˜ and b ˜ are always defined. This is not so in our structure with unawareness: if the agent is not aware of ψ, no formula L a ψ will be true, because of (A0 U ): A ψ ↔ L 0 ψ . Given (A1) and (DefM), one can derive:
⊢ H M U A ψ ↔ M 1 ψ
The construction of the language implies that for any Γ, A ψ ∈ Γ iff L 0 ψ ∈ Γ iff M 1 ψ ∈ Γ . Therefore, a ˜ and b ˜ are defined iff A ψ ∈ Γ .
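The definitions of a˜ and b˜ can be made concrete with a small Python sketch; the encoding of Γ as a set of (operator, index, formula) triples is an illustrative assumption of ours:

```python
from fractions import Fraction

def tilde_bounds(gamma, psi):
    """ã = max{a : L_a ψ ∈ Γ} and b̃ = min{b : M_b ψ ∈ Γ};
    both are undefined exactly when A ψ ∉ Γ, since then no L_a ψ
    and no M_b ψ belongs to Γ."""
    lower = [a for (op, a, f) in gamma if op == 'L' and f == psi]
    upper = [b for (op, b, f) in gamma if op == 'M' and f == psi]
    if not lower or not upper:
        return None
    return max(lower), min(upper)

# Γ contains L_0 ψ, L_{1/4} ψ, M_{1/2} ψ and M_1 ψ:
gamma = {('L', Fraction(0), 'psi'), ('L', Fraction(1, 4), 'psi'),
         ('M', Fraction(1, 2), 'psi'), ('M', Fraction(1), 'psi')}

bounds = tilde_bounds(gamma, 'psi')
assert bounds == (Fraction(1, 4), Fraction(1, 2))
# Here ã < b̃, so the interval of admissible probabilities is open.
```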
Lemma B5.
Let us suppose that A ψ Γ .
(i) 
For all c ∈ Q ( ϕ ) , c ≤ a ˜ implies L c ψ ∈ Γ , and c ≥ b ˜ implies M c ψ ∈ Γ
(ii) 
There are only two cases: (a) either a ˜ = b ˜ and E a ˜ ψ ∈ Γ while E c ψ ∉ Γ for c ≠ a ˜ , (b) or a ˜ < b ˜ and E c ψ ∉ Γ for any c ∈ Q ( ϕ ) .
(iii) 
b ˜ - a ˜ ≤ 1 / q (where q is the common denominator of the indexes)
Proof. 
See [19]; the modifications are obvious.  ☐
Definition B8.
Given Γ S ϕ and ψ L [ ϕ ] L A , if A ψ Γ , let
I ψ Γ be either { a ˜ } if a ˜ = b ˜ or the open interval ( a ˜ , b ˜ ) if a ˜ < b ˜ .
Lemma A.5 in [19] can be adapted to show that for each S | Γ | a and Γ ∈ S | Γ | a , there is a probability distribution P | Γ | a ( Γ ) on 2 S | Γ | a , such that
(C) for all ψ ∈ L [ ϕ ] L A , if A ψ ∈ Γ , then P | Γ | a ( Γ ) ( [ ψ ] S | Γ | a ) ∈ I ψ Γ .
The proof in [19] relies on a theorem by Rockafellar that can be used because of the inference rule (B). It would be tedious to adapt the proof here. One comment is nonetheless important. In our axiom system H M U , the inference rule holds only under a restricted form (B U ). Therefore, one could wonder whether this precludes adapting the original proof, which relies on the unrestricted version (B). It turns out that it does not. The reason is that the formulas involved in the application of (B) are only representatives for each subset of the state space. We have previously shown how to build these formulas in our case, and they are such that the agent is necessarily aware of them. Therefore, the restriction present in (B U ) does not play any role, and we may define the ϕ-canonical structure as follows.
Definition B9.
The ϕ-canonical structure is the GSPS M ϕ = ( S ϕ , S ϕ , ( 2 S | Γ | a ) Γ ∈ S ϕ , π ϕ , ( P | Γ | a ϕ ) Γ ∈ S ϕ ) where:
(i) 
S ϕ = { Γ ∩ L [ ϕ ] L A : Γ is a maximal H M U -consistent set}
(ii) 
S ϕ = ⋃ Γ ∈ S ϕ S | Γ | a , where S | Γ | a = { Δ ∩ A + ( Γ ) ∩ L [ ϕ ] L A : Δ is a maximal H M U -consistent set and a + ( Δ ) = a + ( Γ ) }
(iii) 
for each Γ ∈ S ϕ , ρ ( Γ ) = Γ ∩ A + ( Γ )
(iv) 
for every state Γ (objective or subjective) and every atomic formula p ∈ A t , π ϕ ( p , Γ ) = 1 iff p ∈ Γ
(v) 
for Γ ∈ S ϕ , P | Γ | a ϕ is a probability distribution on 2 S | Γ | a satisfying Condition (C)24.
We are now ready to state the crucial truth lemma.
Lemma B6 (Truth lemma).
For every Γ ∈ S ϕ and every ψ ∈ L [ ϕ ] L A ,
M ϕ , Γ ⊨ ψ   iff   ψ ∈ Γ
Proof. 
The proof proceeds by induction on the complexity of the formula.
  • ψ : = p : this follows directly from the definition of π ϕ .
  • ψ : = ¬ χ . Since Γ is a standard state, M ϕ , Γ ⊨ ¬ χ iff M ϕ , Γ ⊭ χ iff (by IH) χ ∉ Γ . We shall show that χ ∉ Γ iff ¬ χ ∈ Γ . (⇒) Let us suppose that χ ∉ Γ ; χ is in L [ ϕ ] L A ; hence, given the properties of maximally-consistent sets, ¬ χ ∈ Γ + , where Γ + is the extension of Γ to L L A ( A t ) (the whole language). Additionally, since Γ = Γ + ∩ L [ ϕ ] L A , ¬ χ ∈ Γ . (⇐) Let us suppose that ¬ χ ∈ Γ . Γ is consistent; therefore, χ ∉ Γ .
  • ψ : = ψ 1 ∧ ψ 2 . (⇒) Let us assume that M ϕ , Γ ⊨ ψ 1 ∧ ψ 2 . Then, M ϕ , Γ ⊨ ψ 1 and M ϕ , Γ ⊨ ψ 2 . By IH, this implies that ψ 1 ∈ Γ and ψ 2 ∈ Γ . Given the properties of maximally-consistent sets, this implies in turn that ψ 1 ∧ ψ 2 ∈ Γ . (⇐) Let us assume that ψ 1 ∧ ψ 2 ∈ Γ . Given the properties of maximally-consistent sets, this implies that ψ 1 ∈ Γ and ψ 2 ∈ Γ and, therefore, by IH, that M ϕ , Γ ⊨ ψ 1 and M ϕ , Γ ⊨ ψ 2 .
  • ψ : = A χ . We know that in any GSPS M , if ρ ( s ) ∈ S X for some s ∈ S , then M , s ⊨ A χ iff χ ∈ L L A ( X ) . In our case, s = Γ and X = a + ( Γ ) . Therefore, M ϕ , Γ ⊨ A χ iff χ ∈ L L A ( a + ( Γ ) ) . However, given that A χ ∈ L [ ϕ ] L A , χ ∈ L L A ( a + ( Γ ) ) iff A χ ∈ Γ .
  • ψ : = L a χ . By definition, M ϕ , Γ ⊨ L a χ iff P | ρ ( Γ ) | a ( Γ ) ( [ [ χ ] ] ) ≥ a and M ϕ , ρ ( Γ ) ⊨ χ . (⇐) Let us suppose that P | ρ ( Γ ) | a ( Γ ) ( [ [ χ ] ] ) ≥ a and M ϕ , ρ ( Γ ) ⊨ χ . Hence, a ˜ is well defined. It is clear that a ˜ ≥ a , given our definition of P | ρ ( Γ ) | a ( Γ ) . It is easy to see that ⊢ H M U L a χ → L b χ for b ≤ a . As a consequence, L a χ ∈ Γ . (⇒) Let us suppose that L a χ ∈ Γ . This implies that A χ ∈ Γ and, therefore, that M ϕ , ρ ( Γ ) ⊨ χ . By construction, a ≤ a ˜ , and therefore, P | ρ ( Γ ) | a ( Γ ) ( [ [ χ ] ] ) ≥ a . Hence, M ϕ , Γ ⊨ L a χ .
 ☐
Proof. 
  • If ϕ : = p , then M , s ⊨ p or M , s ⊨ ¬ p
    iff ( p ∈ L B A ( X ) and π * ( s , p ) = 1 ) or ( p ∈ L B A ( X ) and π * ( s , p ) = 0 )
    iff p ∈ L B A ( X )
  • If ϕ : = ¬ ψ , then M , s ⊨ ϕ or M , s ⊨ ¬ ϕ
    iff M , s ⊨ ¬ ψ or M , s ⊨ ¬ ¬ ψ
    iff ( ψ ∈ L B A ( X ) and M , s ⊭ ψ ) or ( ¬ ψ ∈ L B A ( X ) and M , s ⊭ ¬ ψ )
    iff ( ψ ∈ L B A ( X ) and M , s ⊭ ψ ) or ( ψ ∈ L B A ( X ) and M , s ⊨ ψ )
    iff ψ ∈ L B A ( X )
    iff ¬ ψ ∈ L B A ( X )
  • If ϕ : = ψ 1 ∧ ψ 2 , then M , s ⊨ ϕ or M , s ⊨ ¬ ϕ
    iff ( M , s ⊨ ψ 1 and M , s ⊨ ψ 2 ) or ( ψ 1 ∧ ψ 2 ∈ L B A ( X ) and ( M , s ⊭ ψ 1 or M , s ⊭ ψ 2 ))
    iff, by the induction hypothesis, ( ψ 1 ∧ ψ 2 ∈ L B A ( X ) and M , s ⊨ ψ 1 and M , s ⊨ ψ 2 ) or ( ψ 1 ∧ ψ 2 ∈ L B A ( X ) and not ( M , s ⊨ ψ 1 and M , s ⊨ ψ 2 ))
    iff ψ 1 ∧ ψ 2 ∈ L B A ( X )
  • If ϕ : = B ψ , then M , s ⊨ ϕ or M , s ⊨ ¬ ϕ
    iff (for each t * ∈ R * ( s ) , M , t * ⊨ ψ ) or ( B ψ ∈ L B A ( X ) and M , s ⊭ B ψ )
    iff, by the induction hypothesis and since each t * ∈ R * ( s ) belongs to S X , ( B ψ ∈ L B A ( X ) and M , s ⊨ B ψ ) or ( B ψ ∈ L B A ( X ) and M , s ⊭ B ψ )
    iff B ψ ∈ L B A ( X )
  • If ϕ : = A ψ , then M , s ⊨ ϕ or M , s ⊨ ¬ ϕ
    iff ( M , s ⊨ ψ ) or ( A ψ ∈ L B A ( X ) and not M , s ⊨ ψ , impossible given the induction hypothesis)
    iff ( M , s ⊨ ψ )
    iff ψ ∈ L B A ( X ) (by the induction hypothesis)
    iff A ψ ∈ L B A ( X )
 ☐

References

  1. Fagin, R.; Halpern, J.; Moses, Y.; Vardi, M. Reasoning about Knowledge; MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  2. Hintikka, J. Impossible Worlds Vindicated. J. Philos. Log. 1975, 4, 475–484. [Google Scholar] [CrossRef]
  3. Wansing, H. A General Possible Worlds Framework for Reasoning about Knowledge and Belief. Stud. Log. 1990, 49, 523–539. [Google Scholar] [CrossRef]
  4. Fagin, R.; Halpern, J. Belief, Awareness, and Limited Reasoning. Artif. Intell. 1988, 34, 39–76. [Google Scholar] [CrossRef]
  5. Modica, S.; Rustichini, A. Unawareness and Partitional Information Structures. Games Econ. Behav. 1999, 27, 265–298. [Google Scholar] [CrossRef]
  6. Dekel, E.; Lipman, B.; Rustichini, A. Standard State-Space Models Preclude Unawareness. Econometrica 1998, 66, 159–173. [Google Scholar] [CrossRef]
  7. Halpern, J. Plausibility Measures: A General Approach for Representing Uncertainty. In Proceedings of the 17th International Joint Conference on AI, Seattle, WA, USA, 4–10 August 2001; pp. 1474–1483.
  8. Heifetz, A.; Meier, M.; Schipper, B. Interactive Unawareness. J. Econ. Theory 2006, 130, 78–94. [Google Scholar] [CrossRef]
  9. Li, J. Information Structures with Unawareness. J. Econ. Theory 2009, 144, 977–993. [Google Scholar] [CrossRef]
  10. Galanis, S. Unawareness of theorems. Econ. Theory 2013, 52, 41–73. [Google Scholar] [CrossRef]
  11. Feinberg, Y. Games with Unawareness; Working paper; Stanford Graduate School of Business: Stanford, CA, USA, 2009. [Google Scholar]
  12. Heifetz, A.; Meier, M.; Schipper, B. Dynamic Unawareness and Rationalizable Behavior. Games Econ. Behav. 2013, 81, 50–68. [Google Scholar] [CrossRef]
  13. Rêgo, L.; Halpern, J. Generalized Solution Concepts in Games with Possibly Unaware Players. Int. J. Game Theory 2012, 41, 131–155. [Google Scholar] [CrossRef]
  14. Schipper, B. Awareness. In Handbook of Epistemic Logic; van Ditmarsch, H., Halpern, J.Y., van der Hoek, W., Kooi, B., Eds.; College Publications: London, UK, 2015; pp. 147–201. [Google Scholar]
  15. Aumann, R. Interactive Knowledge. Int. J. Game Theory 1999, 28, 263–300. [Google Scholar] [CrossRef]
  16. Harsanyi, J. Games with Incomplete Information Played by ‘Bayesian’ Players. Manag. Sci. 1967, 14, 159–182. [Google Scholar] [CrossRef]
  17. Fagin, R.; Halpern, J.; Megiddo, N. A Logic for Reasoning About Probabilities. Inf. Comput. 1990, 87, 78–128. [Google Scholar] [CrossRef]
  18. Lorenz, D.; Kooi, B. Logic and probabilistic update. In Johan van Benthem on Logic and Information Dynamics; Baltag, A., Smets, S., Eds.; Springer: Cham, Switzerland, 2014; pp. 381–404. [Google Scholar]
  19. Heifetz, A.; Mongin, P. Probability Logic for Type Spaces. Games Econ. Behav. 2001, 35, 34–53. [Google Scholar] [CrossRef]
  20. Cozic, M. Logical Omniscience and Rational Choice. In Cognitive Economics; Topol, R., Walliser, B., Eds.; Elsevier/North Holland: Amsterdam, Holland, 2007; pp. 47–68. [Google Scholar]
  21. Heifetz, A.; Meier, M.; Schipper, B. Unawareness, Beliefs and Speculative Trade. Games Econ. Behav. 2013, 77, 100–121. [Google Scholar] [CrossRef]
  22. Schipper, B.; University of California, Davis. Impossible Worlds Vindicated. Personal communication, 2013. [Google Scholar]
  23. Sadzik, T. Knowledge, Awareness and Probabilistic Beliefs; Stanford Graduate School of Business: Stanford, CA, USA, 2005. [Google Scholar]
  24. Savage, L. The Foundations of Statistics, 2nd ed.; Dover: New York, NY, USA, 1954. [Google Scholar]
  25. Board, O.; Chung, K. Object-Based Unawareness; Working Paper; University of Minnesota: Minneapolis, MN, USA, 2009. [Google Scholar]
  26. Board, O.; Chung, K.; Schipper, B. Two Models of Unawareness: Comparing the object-based and the subjective-state-space approaches. Synthese 2011, 179, 13–34. [Google Scholar] [CrossRef]
  27. Chen, Y.C.; Ely, J.; Luo, X. Note on Unawareness: Negative Introspection versus AU Introspection (and KU Introspection). Int. J. Game Theory 2012, 41, 325–329. [Google Scholar] [CrossRef]
  28. Fagin, R.; Halpern, J. Uncertainty, Belief and Probability. Comput. Intell. 1991, 7, 160–173. [Google Scholar] [CrossRef]
  29. Halpern, J. Reasoning about Uncertainty; MIT Press: Cambridge, MA, USA, 2003. [Google Scholar]
  30. Vickers, J.M. Belief and Probability; Synthese Library, Reidel: Dordrecht, Holland, 1976; Volume 104. [Google Scholar]
  31. Aumann, R.; Heifetz, A. Incomplete Information. In Handbook of Game Theory; Aumann, R., Hart, S., Eds.; Elsevier/North Holland: New York, NY, USA, 2002; Volume 3, pp. 1665–1686. [Google Scholar]
  32. Kraft, C.; Pratt, J.; Seidenberg, A. Intuitive Probability on Finite Sets. Ann. Math. Stat. 1959, 30, 408–419. [Google Scholar] [CrossRef]
  33. Gärdenfors, P. Qualitative Probability as an Intensional Logic. J. Philos. Log. 1975, 4, 171–185. [Google Scholar] [CrossRef]
  34. Van Benthem, J.; Velazquez-Quesada, F. The Dynamics of Awareness. Synthese 2010, 177, 5–27. [Google Scholar] [CrossRef]
  35. Velazquez-Quesada, F. Dynamic Epistemic Logic for Implicit and Explicit Beliefs. J. Log. Lang. Inf. 2014, 23, 107–140. [Google Scholar] [CrossRef]
  36. Hill, B. Awareness Dynamics. J. Philos. Log. 2010, 39, 113–137. [Google Scholar] [CrossRef]
  37. Halpern, J. Alternative Semantics for Unawareness. Games Econ. Behav. 2001, 37, 321–339. [Google Scholar] [CrossRef]
  38. Blackburn, P.; de Rijke, M.; Venema, Y. Modal Logic; Cambridge UP: Cambridge, UK, 2001. [Google Scholar]
  • 1.See [2,3].
  • 2.In the whole paper we use “awareness © ” to denote the model of [4], to be distinguished from the attitude of awareness.
  • 3.There is also a literature that studies games with unaware players. See, for instance, [11,12,13].
  • 4.This is not the only way to proceed. Fagin, Halpern and Megiddo introduced in [17] an operator w ( ϕ ) and formulas a 1 w ( ϕ 1 ) + . . . + a n w ( ϕ n ) ≥ c interpretable as “the sum of a 1 times the degree of belief in ϕ 1 and ... and a n times the degree of belief in ϕ n is at least as great as c”. For a recent survey of probabilistic logic, see [18].
5. B. Schipper in [22] pointed out that a similar result has been stated in an unpublished paper by T. Sadzik; see [23]. The framework is slightly different from ours.
6. This is close to the "small world" concept of [24]. In Savage's language, "world" means the state space, or set of possible worlds, itself.
7. Once again, the idea is already present in [24]: "...a smaller world is derived from a larger by neglecting some distinctions between states". The idea of capturing unawareness with the help of coarse-grained or subjective state spaces is widely shared in the literature; see, for instance, [8] or [9]. By contrast, in the framework of first-order epistemic logic, unawareness is construed as unawareness of some objects in the domain of interpretation by [25]. This approach is compared with those based on subjective state spaces in [26].
8. For a recent elaboration on the results of [6], see [27].
9. By contrast, in [28,29], one considers formulas like a_1 w(ϕ_1) + ... + a_n w(ϕ_n) ≥ b, where a_1, ..., a_n, b are integers, ϕ_1, ..., ϕ_n are propositional formulas and w(ϕ) is to be interpreted as the probability of ϕ.
10. For a philosophical elaboration on this distinction, see [30].
11. A similar list of properties is proven in [21].
12. Economists are leading contributors to the study of explicit probabilistic structures because they correspond to the so-called type spaces, which have been basic to games of incomplete information since [16]. See [31].
13. Since a is a rational number and the structures will typically include real-valued probability distributions, it may happen in some state that E^a ϕ is true for no a. This happens when the probability assigned to ϕ is a real but non-rational number.
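This mismatch between rational indices and real-valued probabilities can be illustrated as follows. The sketch below is not from the paper: the helper `E` is hypothetical, it reads E^a ϕ as "the probability of ϕ is exactly a", and the float `p_phi` only approximates the irrational value √2/2.

```python
import math
from fractions import Fraction

def E(a, p):
    """E^a(phi): the probability assigned to phi is exactly the rational a.
    Hypothetical helper for illustration only."""
    return Fraction(a) == p  # exact comparison; no rounding tolerance

p_phi = math.sqrt(2) / 2  # approximates an irrational probability for phi
# No rational a of the form k/100 witnesses E^a(phi) here:
print(any(E(Fraction(k, 100), p_phi) for k in range(101)))  # → False
```

With a genuinely irrational probability, no rational a at all would satisfy E^a ϕ, which is exactly the situation the footnote describes.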
14. The work in [19] calls this system Σ+.
15. In an objective state space, the set of states need not reflect the set of possible truth-value assignments to propositional variables (as is the case in our example).
16. Intuitively: if the agent does not believe that ϕ, then not every state of the relevant partition cell makes ϕ true. If ϕ belonged to the sub-language associated with the relevant subjective state space, then, since the accessibility relation is partitional, ¬Bϕ would be true in every state of the cell. However, by hypothesis, the agent does not believe that she/he does not believe that ϕ. We therefore have to conclude that ϕ does not belong to the sub-language of the subjective state space (see Fact 1 below). Hence, no formula B¬B¬B...¬Bϕ can be true.
17. It follows from Clause (v) above that this extension is well defined.
18. In what follows, Var(ϕ) denotes the set of propositional variables occurring in ϕ.
19. I thank an anonymous referee for suggesting that I clarify this point.
20. This restricted rule is reminiscent of the rule RE^sa in [5].
21. Note that he/she could become unaware of some possibilities as well, but we will not say anything about that.
22. The dynamics of awareness has been studied by [34] in epistemic logic and by [35] in doxastic logic. See also [36].
23. The work in [19] leaves the construction implicit.
24. In the particular case where A^+(Γ) = ∅, the probability assigns maximal weight to the only state of S_∅.
Figure 1. An objective state space.
Figure 2. A subjective state space.
Figure 3. Projection of an objective state in a subjective state space.
Figure 4. Partial picture of a generalized standard structure (GSS).
Figure 5. Projection of an objective state to a subjective state space in a probabilistic setting.
Figure 6. The issue of becoming aware.
Games EISSN 2073-4336. Published by MDPI AG, Basel, Switzerland.