
Current address: Ludwig-Maximilians-Universität München, Munich Center for Mathematical Philosophy, Ludwigstrasse 31, 80539 Munich, Germany.

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Logic and game theory have been in contact for a few decades by now, with the classical results of epistemic game theory as major highlights. In this paper, we emphasize a recent new perspective toward "logical dynamics", designing logical systems that focus on the actions that change information, preference, and other driving forces of agency. We show how this dynamic turn works out for games, drawing on some recent advances in the literature. Our key examples are the long-term dynamics of information exchange, as well as the much-discussed issue of extensive game rationality. Our paper also proposes a new broader interpretation of what is happening here. The combination of logic and game theory provides a fine-grained perspective on information and interaction dynamics, and we are witnessing the birth of something new which is not just logic, nor just game theory, but rather a Theory of Play.

For many contemporary logicians, games and social interaction are important objects of investigation.

In this paper we will take one step further, assuming that the reader knows the basics of logic and game theory. We are going to take a look at all these components from a dynamic logical perspective, emphasizing actions that make information flow, change beliefs, or modify preferences—in ways to be explained below. For us, understanding social situations as dynamic logical processes where the participants interactively revise their beliefs, change their preferences, and adapt their strategies is a step towards a more finely-structured theory of rational agency. In a simple phrase that sums it up, this joint offspring "in the making" of logic and game theory might be called a Theory of Play.

The paper starts by laying down the main components of such a theory, a logical take on the dynamics of actions, preferences, and information (Sections 1 and 2). We then show that this perspective has already shed new light on the long-term dynamics of information exchange (Section 3), as well as on the question of extensive game rationality (Section 4). We conclude with general remarks on the relation between logic and game theory, pleading for cross-fertilization instead of competition. This paper is introductory and programmatic throughout. Our treatment is heavily based on evidence from a number of recent publications demonstrating a variety of new developments.

A first immediate observation is that games as they stand are natural models for many existing logical languages: epistemic, doxastic and preference logics, as well as conditional logics and temporal logics of action. We do not aim at encyclopedic description of these systems—[

Even simple strategic games call for logical analysis, with new questions arising at once. To a logician, a game matrix is a semantic model of a rather special kind that invites the introduction of well-known languages. Recall the main components in the definition of a strategic game 〈N, {A_{i}}_{i∈N}, {⪰_{i}}_{i∈N}〉: a set of players N, a set of actions A_{i} for each player i, and preference relations ⪰_{i}. A strategy profile σ = (σ_{1}, …, σ_{n}) fixes a choice σ_{i} for each player, and σ_{−i} denotes the choices of all agents except agent i: σ_{−i} = (σ_{1}, …, σ_{i−1}, σ_{i+1}, …, σ_{n}).

Now, from a logical perspective, it is natural to treat the set of strategy profiles as a universe of possible worlds, structured by two relations for each player i:

σ ∼_{i} σ′ iff σ_{i} = σ′_{i}: this epistemic relation of "view of the game" relates the profiles that agree on player i's own choice, the one thing she is sure of at the ex interim stage;

σ ≈_{i} σ′ iff σ_{−i} = σ′_{−i}: this relation of "action freedom" gives the alternative choices for player i when the other players' choices are held fixed.

This can all be packaged in a relational structure ℳ = 〈Σ, {∼_{i}}_{i∈N}, {≈_{i}}_{i∈N}, {⪰_{i}}_{i∈N}, V〉, with Σ the set of strategy profiles and V a valuation interpreting atomic propositions.
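To make the two relations concrete, here is a small Python sketch on a hypothetical 2×2 matrix (action names and the game itself are invented): profiles play the role of worlds, ∼_1 groups the profiles agreeing on player 1's own choice, and ≈_1 collects her alternative choices with the opponent's choice held fixed.

```python
from itertools import product

# Profiles of a toy 2-player game as possible worlds (all names invented).
ACTIONS = {1: ["t", "b"], 2: ["l", "r"]}
PROFILES = [(a1, a2) for a1, a2 in product(ACTIONS[1], ACTIONS[2])]

def view(i, s, t):
    """Epistemic 'view of the game': s ~_i t iff s and t agree on i's own choice."""
    return s[i - 1] == t[i - 1]

def freedom(i, s, t):
    """'Action freedom': s ≈_i t iff all players other than i choose the same."""
    return all(s[j] == t[j] for j in range(len(s)) if j != i - 1)

# Each relation partitions the matrix: a ~_1 cell is a row, a ≈_1 cell a column.
rows = {t for t in PROFILES if view(1, ("t", "l"), t)}
col_alts = {t for t in PROFILES if freedom(1, ("t", "l"), t)}
print(sorted(rows), sorted(col_alts))
```

Both relations are equivalence relations, as the epistemic S5 reading in the text requires.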

The next question is: what is the "right" logical language to reason about these structures? The goal here is not simply to formalize standard game-theoretic reasoning. That could be done in a number of ways, often in the first-order language of these relational models. Rather, the logician will aim for a language that balances expressive power against perspicuity and computational complexity on the models at hand.

Our first key component—players' desires or preferences—is modeled by "betterness" structures 〈W, {≥_{i}}_{i∈N}〉, with ≥_{i} ⊆ W × W read as "at least as good for player i". Which properties ≥_{i} should have has been the subject of debate in philosophy: in this paper, we assume that the relation is reflexive and transitive. For each ≥_{i}, the corresponding strict relation is written >_{i}.

A modal language to describe betterness models uses modalities 〈≥_{i}〉φ ("φ holds in some world at least as good for i") and a strict variant 〈>_{i}〉φ:

ℳ, w ⊨ 〈≥_{i}〉φ iff there is a v with v ≥_{i} w and ℳ, v ⊨ φ

ℳ, w ⊨ 〈>_{i}〉φ iff there is a v with v >_{i} w and ℳ, v ⊨ φ

Standard techniques in modal model theory apply to definability and axiomatization in this modal preference language: we refer to ([
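As an illustration of the clause for 〈≥_{i}〉, the sketch below evaluates the modality on a three-world toy model; the worlds, the order, and the valuation are all invented for the example.

```python
# Toy betterness model: GEQ holds of (v, w) when v is at least as good as w;
# world w3 is best, w1 worst (the order is reflexive and transitive).
W = {"w1", "w2", "w3"}
GEQ = {(v, w) for v in W for w in W if int(v[1]) >= int(w[1])}
V = {"p": {"w3"}, "q": {"w1"}}

def diamond_geq(prop, w):
    """M, w |= <≥>prop iff some v with v ≥ w satisfies prop."""
    return any((v, w) in GEQ and v in V[prop] for v in W)

sat_p = [w for w in sorted(W) if diamond_geq("p", w)]  # p holds at the top world
sat_q = [w for w in sorted(W) if diamond_geq("q", w)]  # q holds only at the bottom
print(sat_p, sat_q)
```

Since p is true at the best world, 〈≥〉p holds everywhere, while 〈≥〉q holds only at the bottom world: the modality looks "upward" along betterness.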

Next, the full modal game language for the above models must also include modalities for the relations that we called the “view of the game” and the “action freedom”. But this is straightforward, as these are even closer to standard notions studied in epistemic and action logics.

Again, we start with a set At of atomic propositions that represent basic facts about the strategy profiles.

ℳ, σ ⊨ 〈∼_{i}〉φ iff there is a σ′ with σ ∼_{i} σ′ and ℳ, σ′ ⊨ φ

ℳ, σ ⊨ 〈≈_{i}〉φ iff there is a σ′ with σ ≈_{i} σ′ and ℳ, σ′ ⊨ φ

ℳ, σ ⊨ [∼_{i}]φ iff ℳ, σ′ ⊨ φ for all σ′ with σ ∼_{i} σ′

ℳ, σ ⊨ [≈_{i}]φ iff ℳ, σ′ ⊨ φ for all σ′ with σ ≈_{i} σ′

A language allows us to say things about structures. But what about a calculus of reasoning: what is the logic of our modal logic of strategic games? For convenience, we restrict attention to 2-player games. First, given the nature of our three relations, the separate logics are standard: modal S4 for preference, and modal S5 for epistemic outlook and action freedom. What is of greater interest, and logical delicacy, is the interplay of these modalities when combined.

Thus, the language also has a so-called “universal modality”. Moreover, this modality can be defined in two ways, since we also have that:

the equivalence [∼_{i}][≈_{i}]φ ↔ [≈_{i}][∼_{i}]φ is valid on full game matrices.

This validity depends on the geometrical "grid property" of game matrices: if one can go x ∼_{i} y ≈_{i} z, then there is a profile u with x ≈_{i} u ∼_{i} z.
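The grid property can be checked mechanically on a small matrix; the sketch below does so for a hypothetical 2×2 game (all labels invented), with relations defined exactly as in the text.

```python
from itertools import product

PROFILES = list(product(["t", "b"], ["l", "r"]))  # toy 2x2 matrix

def sim(s, t):   # ~_1: profiles agreeing on player 1's own choice
    return s[0] == t[0]

def free(s, t):  # ≈_1: player 2's choice held fixed
    return s[1] == t[1]

# Grid property: whenever x ~_1 y ≈_1 z, some u gives x ≈_1 u ~_1 z.
grid = all(any(free(x, u) and sim(u, z) for u in PROFILES)
           for x in PROFILES for y in PROFILES for z in PROFILES
           if sim(x, y) and free(y, z))
print(grid)
```

The witnessing profile u is simply the one combining x's column with z's row, which exists because a matrix contains every combination of choices.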

This may look like a pleasant structural feature of matrices, but its logical effects are delicate. It is well-known that the general logic of such a bi-modal language on grid models is not decidable, and not even axiomatizable: indeed, it is "Π^{1}_{1}-complete".

However, there are two ways in which these complexity results can be circumvented. One is that we have mainly looked at finite games, where additional validities hold

Here is another interesting point. It is known that the complexity of such logics may go down drastically when we allow more models, in particular, models where some strategy profiles have been ruled out. One motivation for this move has to do with

Against this background of available actions, information, and freedom, the preference structure of strategic games adds further interesting features. One benchmark for modal game logics has been the definition of the strategy profiles that are in Nash Equilibrium. And this requires defining the usual notion of best response.

Then the best response for player i can be defined by combining action freedom with strict preference: a profile σ is a best response for i when it has no ≈_{i}-alternative that i strictly prefers, i.e., when ¬〈≈_{i} ⋂ >_{i}〉⊤ holds at σ.

Questions of complexity and complete axiomatization then multiply. But we can also deal with preference structure in other ways. Introduce proposition letters “^{1} ⊨ ^{2} ⊨ ^{n}
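The best-response definition above can be run directly on a toy payoff table; in the sketch below the coordination game and its payoffs are invented, and Nash equilibria are the profiles that are best responses for both players.

```python
# Toy 2-player coordination game: profile -> (payoff player 1, payoff player 2).
PAYOFF = {("t", "l"): (2, 2), ("t", "r"): (0, 0),
          ("b", "l"): (0, 0), ("b", "r"): (1, 1)}
ACTIONS = {1: ["t", "b"], 2: ["l", "r"]}

def deviate(profile, i, a):
    """Profile where player i switches to action a, the other unchanged (≈_i step)."""
    return (a, profile[1]) if i == 1 else (profile[0], a)

def is_best_response(i, profile):
    """No unilateral deviation of player i does strictly better."""
    return all(PAYOFF[profile][i - 1] >= PAYOFF[deviate(profile, i, a)][i - 1]
               for a in ACTIONS[i])

nash = [p for p in PAYOFF if is_best_response(1, p) and is_best_response(2, p)]
print(nash)
```

The two coordinated profiles survive, matching the textbook analysis of such games.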

Our main point with this warm-up discussion for our logical Theory of Play is that the simple matrix pictures that one sees in a beginner's text on game theory already support a rich repertoire of logical structure.

Just like strategic games, interactive agency in the more finely-structured extensive games invites logical analysis.

The first thing to note is that the sequential structure of players' actions in an extensive game lends itself to logical analysis. A good system to use for this purpose is propositional dynamic logic (PDL), with modalities [a]φ ("after every execution of move a, φ holds") and the existential dual 〈a〉φ.

This syntax recursively defines complex relations in action models:

R_{α∪β} := R_{α} ∪ R_{β}

R_{α;β} := R_{α} ∘ R_{β} (relational composition)
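With relations represented as sets of pairs, the two clauses become one-liners; the move relations below are invented for illustration.

```python
def union(r, s):
    """R_{α∪β}: a transition of either program."""
    return r | s

def compose(r, s):
    """R_{α;β}: first an α-step, then a β-step."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

R_a = {(1, 2), (1, 3)}  # hypothetical move relations on numbered nodes
R_b = {(2, 4), (3, 5)}
print(sorted(union(R_a, R_b)))
print(sorted(compose(R_a, R_b)))
```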

The key dynamic modality [α]φ then says that φ holds after every successful execution of the program α.

As before, a complete logical picture must bring in players' preferences on top of this action structure. Consider the classical Backwards Induction algorithm, which computes values for the nodes of a finite game tree:

At end nodes, players already have their values marked. At further nodes, once all daughters are marked, the player to move gets her maximal value that occurs on a daughter, while the other, non-active player gets his value on that maximal node.

The resulting strategy for a player selects the successor node with the highest value. The resulting set of moves for all players (still a function on nodes, given our assumption on end nodes) is the "Backwards Induction solution" of the game.
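The value-marking procedure described above can be sketched as a short recursion; the tree and its payoffs below are made up, with a leaf being a pair (value for A, value for B) and an internal node naming the player to move.

```python
# Invented game tree: A moves first; going left hands the move to B.
TREE = ("A", [
    ("B", [(1, 0), (0, 3)]),  # left: B chooses between (1,0) and (0,3)
    (2, 1),                   # right: the game ends with payoffs (2,1)
])

def bi_value(node):
    """Backward induction: the mover picks a child maximizing her own coordinate;
    the non-active player receives his value at that chosen child."""
    if isinstance(node[0], str):                  # internal node: (player, children)
        player, children = node
        idx = 0 if player == "A" else 1
        return max((bi_value(c) for c in children), key=lambda v: v[idx])
    return node                                   # leaf: its payoff pair

print(bi_value(TREE))
```

In this toy tree B would pick (0, 3) in her subtree, so A avoids it and ends the game on the right, exactly the kind of reasoning the marking algorithm encodes.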

But to a logician, a strategy is best viewed as a subrelation of the total move relation of the game tree.

When the above algorithm is modified to a relational setting—we can now drop assumptions about unicity at end-points—we find an interesting new feature: special assumptions about players. For instance, it makes sense to take a minimum value for the passive player at a node over all highest-value moves for the active player. But this is a worst-case assumption: my counter-player does not care about my interests after her own are satisfied. But we might also assume that she does, choosing a maximal value for me among her maximum nodes. This highlights an important feature:

One interesting way of understanding the variety that arises here has to do with the earlier modal preference logic. We might say in general that the driving idea of rationality is this:

I do not play a move when I have another whose outcomes I prefer.

But preferences between moves that can lead to different sets of outcomes call for a notion of "lifting" the given preference on end-points of the game to sets of end-points. As we said before, this is a key topic in preference logic, and there are many options: the game-theoretic rationality behind Backwards Induction compares moves by the maximal values they can achieve.

This says that we choose a move with the highest maximal value that can be achieved. A more demanding notion of preference for a set Y over a set X is the ∀∀ sense: every outcome in Y is better than every outcome in X.

Here is what relational Backwards Induction looks like as an elimination procedure on moves:

First mark all moves as active.

At each stage, mark as passive those moves that are dominated, in the ∀∀ sense of preference over reachable endpoints, by another still-active move of the same player.

Here the "reachable endpoints" of a move are all those that can be reached via a sequence of moves that are still active at this stage.

We will analyze just this particular algorithm in our logics to follow, but our methods apply much more widely.

Many logical definitions for the

For each extensive game form, the strategy profile

Here _{i}_{a is an i-move} _{i}

The meaning of the crucial axiom follows by a modal frame correspondence ([

A game frame makes (_{i}_{i}_{i}

No alternative move for the current player

A typical picture to keep in mind here, and also later on in this paper, is this:

More formally,

Now, a simple inductive proof on the depth of finite game trees shows for our cautious algorithm that:

This result is not very deep, but it opens a door to a whole area of research.

We are now in the realm of a well-known logic of computation, viz.

The

Here is the explicit definition in

The crucial feature making this work is a typical logical point: the occurrences of the relation

Fixed-point formulas in computational logics like this express at the same time static definitions of the game solution and the recursive procedures that compute it.

This first analysis of the logic behind extensive games already reveals the fruitfulness of putting together logical and game-theoretical perspectives. But it still leaves untouched the dynamics of deliberation and information flow that determine players' expectations and actual play as a game unfolds, an aspect of game playing that both game theorists and logicians have extensively studied in the last decades. In what follows we make these features explicit, deploying the full potential of the fine-grained Theory of Play that we propose.

The background to the logical systems that follow is a move that has been called a "Dynamic Turn" in logic: making informational acts—inference, but also observation and questioning—into explicit first-class citizens of logical theory, with valid laws of their own that can be brought out in the same mathematical style that has served standard logic so well for so long. The program has been developed in great detail in [

Players' informational attitudes can be broadly divided into two categories: "hard" attitudes, like knowledge, that are fully reliable, and "soft" attitudes, like belief, that may be mistaken and revised.

Recall that

Rather than directly representing agents' information in terms of syntactic statements, in this paper we use standard epistemic models for "semantic information", encoded by epistemic "indistinguishability" relations.

[Epistemic Model] An epistemic model is a tuple ℳ = 〈W, {∼_{i}}_{i∈N}, V〉 where W is a nonempty set of states, each ∼_{i} ⊆ W × W is an equivalence relation, and V is a valuation assigning to each atomic proposition the set of states where it is true.

A simple modal language describes properties of these structures. Formally, ℒ_{EL} is generated by the grammar φ ::= p | ¬φ | φ ∧ ψ | K_{i}φ, where K_{i}φ is read "agent i knows that φ".

Let ℳ = 〈W, {∼_{i}}_{i∈N}, V〉 be an epistemic model and w ∈ W:

ℳ, w ⊨ p iff w ∈ V(p)

ℳ, w ⊨ ¬φ iff ℳ, w ⊭ φ

ℳ, w ⊨ φ ∧ ψ iff ℳ, w ⊨ φ and ℳ, w ⊨ ψ

ℳ, w ⊨ K_{i}φ iff ℳ, v ⊨ φ for all v with w ∼_{i} v

We call _{i}}_{i∈N},

Given the definition of the dual of _{i}

This says that “

Now comes a simple concrete instance of the above-mentioned "Dynamic Turn". Typically, hard information can be acquired through acts of observation or fully reliable communication.

The most basic type of information change is a public announcement of a formula φ ∈ ℒ_{EL}.

[Public Announcement.] Let ℳ = 〈W, {∼_{i}}_{i∈N}, V〉 be an epistemic model and φ a formula. The model ℳ^{φ} = 〈W^{φ}, {∼_{i}^{φ}}_{i∈N}, V^{φ}〉 is obtained by relativizing W, each ∼_{i}, and V to the set of worlds where φ is true.

Clearly, if ℳ is an epistemic model then so is ℳ^{φ}: announcing φ simply eliminates all worlds where φ fails.
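World elimination is easy to sketch in code. In the toy model below (worlds, valuation and the single relation are invented), the agent cannot distinguish the actual world from a ¬p-world; announcing p removes that world and turns the agent's uncertainty into knowledge.

```python
# Toy epistemic model for one agent.
W = {"w", "v", "u"}
V = {"p": {"w", "v"}}
R = {(x, x) for x in W} | {("w", "u"), ("u", "w")}  # agent confuses w with ¬p-world u

def K(rel, worlds, phi):
    """Worlds where the agent knows phi: all accessible worlds satisfy phi."""
    return {x for x in worlds
            if all(y in phi for (a, y) in rel if a == x and y in worlds)}

def announce(worlds, rel, phi):
    """Public announcement !phi: keep only phi-worlds, restrict the relation."""
    w2 = worlds & phi
    return w2, {(a, b) for (a, b) in rel if a in w2 and b in w2}

print("w" in K(R, W, V["p"]))     # before !p: u cannot be ruled out
W2, R2 = announce(W, R, V["p"])
print("w" in K(R2, W2, V["p"]))   # after !p: the ¬p-world u is gone
```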

Let ℒ_{PAL} extend ℒ_{EL} with dynamic modalities [φ]ψ, read "after the public announcement of φ, ψ holds": ℳ, w ⊨ [φ]ψ iff ℳ, w ⊨ φ implies ℳ^{φ}, w ⊨ ψ.

Now, in the earlier definition of public announcement, we can also allow formulas from the extended language ℒ_{PAL} itself to be announced, including epistemic assertions such as K_{i}ψ or ¬K_{i}ψ: the recursion involved remains well-defined.

While this is a broad extension of traditional conceptions of logic, standard methods still apply. A fundamental insight is that there is a strong logical relationship between what is true before and after an announcement, in the form of so-called

On top of the static epistemic base logic, the following reduction axioms completely axiomatize the dynamic logic of public announcement:
[φ]p ↔ (φ → p)

[φ]¬ψ ↔ (φ → ¬[φ]ψ)

[φ](ψ ∧ χ) ↔ ([φ]ψ ∧ [φ]χ)

[φ]K_{i}ψ ↔ (φ → K_{i}[φ]ψ)

Going from left to right, these axioms reduce syntactic complexity in a stepwise manner. This recursive style of analysis has set a model for the logical analysis of informational events generally. Thus, information dynamics and logic form a natural match.

Both game theorists and logicians have extensively studied a next phenomenon after the individual notions considered so far: common knowledge in a group.

Following [_{G}φ

In general, we need to add a new operator C_{G}φ to the language. On models ℳ = 〈W, {∼_{i}}_{i∈N}, V〉, its semantics quantifies over all worlds reachable via the reflexive-transitive closure of ⋃_{i∈G} ∼_{i}. As for valid laws of reasoning, the complete epistemic logic of common knowledge expresses principles of "reflective equilibrium", or mathematically, fixed-points:

Fixed-Point Axiom: C_{G}φ ↔ (φ ∧ E_{G}C_{G}φ), where E_{G}ψ says that everyone in G knows ψ

Induction Axiom: (φ ∧ C_{G}(φ → E_{G}φ)) → C_{G}φ
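The fixed-point character of common knowledge can be made concrete by computing it as reachability along chains of agents' uncertainty steps; the two-agent chain model below is invented for illustration.

```python
# Toy model: agent a confuses worlds 1 and 2, agent b confuses 2 and 3.
R = {"a": {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)},
     "b": {(1, 1), (2, 2), (3, 3), (2, 3), (3, 2)}}
W = {1, 2, 3}

def common_knowledge(worlds, rels, phi):
    """w |= C phi iff phi holds at every world reachable by any chain of steps."""
    union = set().union(*rels.values())
    def reachable(w):
        seen, stack = {w}, [w]
        while stack:
            x = stack.pop()
            for (a, b) in union:
                if a == x and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen
    return {w for w in worlds if reachable(w) <= phi}

print(common_knowledge(W, R, {1, 2}))  # empty: world 3 is reachable from anywhere
print(common_knowledge(W, R, W))       # a universal truth is common knowledge
```

Even though both agents individually know the fact {1, 2} at world 1, it is not common knowledge there, because the chain 1 → 2 → 3 reaches a counterexample.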

Studying group knowledge is just a half-way station to a more general move in current logics of agency. Common knowledge is a notion of group information that is definable in terms of what the individuals know about each other. But taking collective agents—a committee, a scientific research community—seriously as logical actors in their own right brings us beyond this reductionist perspective.

Finally, what about dynamic logics for group modalities? Baltag, Moss and Solecki [_{EL}

But rational agents are not just devices that keep track of hard information, and produce indubitable knowledge all the time. What seems much more characteristic of intelligent behaviour, as has been pointed out by philosophers and psychologists alike, is our creative learning ability: forming beliefs that go beyond the hard evidence, and revising them when they turn out to be wrong.

While there is an extensive literature on the theory of belief revision, starting with [_{i}v_{≼i}_{i}w_{i}. The plausibility ordering ≼_{i}

(Epistemic-Doxastic Models). An epistemic-doxastic model is a tuple ℳ = 〈W, {∼_{i}}_{i∈N}, {≼_{i}}_{i∈N}, V〉 where each ≼_{i} is a well-founded pre-order satisfying two conditions:

plausibility implies possibility: if w ≼_{i} v then w ∼_{i} v

locally-connected: if w ∼_{i} v then either w ≼_{i} v or v ≼_{i} w

These richer models can define many basic soft informational attitudes:

ℳ, w ⊨ B_{i}φ iff ℳ, v ⊨ φ for all v ∈ Min_{≼i}([w]_{i}), the most plausible worlds in i's information cell at w

This is the usual notion of belief, which satisfies standard properties such as consistency and introspection,

_{i}φ_{i}

Thus,

As noted above, a crucial feature of soft informational attitudes is that they are ^{ψ}

Safe belief can be similarly characterized by restricting the admissible evidence:

ℳ, _{i}

Baltag and Smets [_{i}_{EL}

Let us now turn to the systematic logical issue of how beliefs change under new hard information, _{i}ψ

[Dynamic Belief Change _{1} in the following epistemic-doxastic model:

In this model, the solid lines represent agent 2′s hard and soft information (the box is 2′s hard information ∼_{2} and the arrow represent 2′s soft information ≼_{2}) while the dashed lines represent 1′s hard and soft information. Reflexive arrows are not drawn to keep down the clutter in the picture. Note that at state _{1}, agent 2 _{1} ⊨ _{2}(_{1} ⊨ _{1}_{1}_{1} ⊨ _{1}_{2}_{3} and so we have _{1} ⊨ [_{1}_{2}_{i}(_{i}

The example is also interesting as a case where the announcement of a truth makes an agent lose a true belief.

Public announcement assumes that agents treat the source of the incoming information as totally reliable.

How to incorporate less-than-conclusive evidence that

Suppose the agent considers all states in

Perhaps the most ubiquitous policy is conservative upgrade ↑φ: the most plausible φ-worlds become the agent's new most plausible worlds, while the rest of her plausibility order stays as it was.

In what follows, we will focus on a more radical policy for belief upgrade, between the soft conservative upgrade and hard public announcements. The idea behind such radical (or lexicographic) upgrade ⇑φ is that all φ-worlds the agent considers possible become strictly more plausible than all ¬φ-worlds, while within those two zones her old ordering remains.

The precise definition of radical upgrades goes as follow. Let
_{i} is the equivalence class of _{i}) denote this set of

(Radical Upgrade.) Given an epistemic-doxastic model ℳ = 〈W, {∼_{i}}_{i∈N}, {≼_{i}}_{i∈N}, V〉 and a formula φ, the upgraded model ℳ^{⇑φ} keeps W, the relations ∼_{i} and the valuation V unchanged, and replaces each plausibility order ≼_{i} by the order ≼_{i}^{⇑φ} determined by:

for all φ-worlds x and ¬φ-worlds y in the same information cell, x ≺_{i}^{⇑φ} y, and

within the φ-worlds and within the ¬φ-worlds, the old order ≼_{i} remains.
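The two clauses of radical upgrade translate directly into a reordering of a plausibility relation; in the sketch below the worlds and the initial (fully indifferent) order are invented, and pairs (x, y) read "x is at least as plausible as y".

```python
def radical_upgrade(order, worlds, phi):
    """⇑phi: phi-worlds become strictly better than ¬phi-worlds; inside each
    zone the old order is kept."""
    new = set()
    for x in worlds:
        for y in worlds:
            if x in phi and y not in phi:
                new.add((x, y))                         # phi-zone beats ¬phi-zone
            elif (x in phi) == (y in phi) and (x, y) in order:
                new.add((x, y))                         # old order within a zone
    return new

W = {"w", "v", "u"}
ORDER = {(x, y) for x in W for y in W}                  # initially all equi-plausible
UP = radical_upgrade(ORDER, W, {"v"})

best = {x for x in W if all((x, y) in UP for y in W)}   # most plausible = believed
print(best)
```

After the upgrade, the agent's plain beliefs are concentrated on the φ-world v, while the old indifference survives among the ¬φ-worlds.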

A logical analysis of this type of information change uses modalities [_{i}φ

Here is how belief revision under soft information can be treated:

The dynamic logic of radical upgrade is completely axiomatized by the complete static epistemic-doxastic base logic plus, essentially, the following recursion axiom for conditional beliefs (with Ê_{i} the dual "i considers possible" modality):

[⇑φ]B_{i}^{ψ}α ↔ ((Ê_{i}(φ ∧ [⇑φ]ψ) ∧ B_{i}^{φ∧[⇑φ]ψ}[⇑φ]α) ∨ (¬Ê_{i}(φ ∧ [⇑φ]ψ) ∧ B_{i}^{[⇑φ]ψ}[⇑φ]α))

This result is from [

Our logical treatment of update with hard and soft information reflects a general methodology, central to the Theory of Play that we advocate here. Information dynamics is about steps of model transformation, either in the universe of worlds, or in the relational structure, or both.

These methods work much more generally than we are able to show here, including model update with information that may be partly private, but also for various other relevant actions, such as

One further important issue is this. Most information flow only makes sense in a longer-term temporal setting, where agents can pursue goals and engage in strategic interaction. This is the realm of

We now discuss a first round of applications of the main components of the Theory of Play outlined in the previous sections. We leave aside games for the moment, and concentrate on the dynamics of information in interaction. These applications have in common that they use single update steps, but then iterate them, according to what might be called "protocols" for conversation, learning, or other relevant processes. It is the resulting limit behavior that will mainly occupy us in this section.

We first consider agreement theorems, well known to game theorists, showing how repeated conditioning and public announcements lead to consensus in the limit. This opens the door to a general analysis of fixed-points of repeated attitude changes, raising new questions for logic as well as for interactive epistemology. Next we discuss underlying logical issues, including extensions to scenarios of belief merge and formation of group preferences in the limit. Finally we return to a concrete illustration: viz. learning scenarios, a fairly recent chapter in logical dynamics, at the intersection of logic, epistemology, and game theory.

Agreement Theorems, introduced in [

The logical tools introduced above provide a unifying framework for these various generalizations, and allow us to extend them to other informational attitudes. For the sake of conciseness, we will not cover static agreement results in this paper. The interested reader can consult [

For a start, we will focus on a comparison between agreements reached via conditioning and via public announcements, reporting the work of [

The following example, inspired by a recent Hollywood production, illustrates how agreements are reached by repeated belief conditioning:

Cobb and Mal are standing on a window ledge, arguing whether they are dreaming or not. Cobb needs to convince Mal, otherwise dreadful consequences will ensue. For the sake of the example, let us assume that Cobb knows they are not dreaming, but Mal mistakenly believes that they are: state _{1} in

With some thinking, Mal can come to agree with Cobb. The general procedure for achieving this goes as follows: A _{1i}, in the sequence is _{iφ}_{iφ}_{iφ}

Following the zones marked with an arc in the figure, starting from w_{1}, Mal needs three rounds of conditioning to switch her belief about their waking, and thus reach an agreement with Cobb. Her belief stays the same upon learning that Cobb believes that they are not dreaming. Let us call this fact

Iterated conditioning thus leads to agreement, given common priors. Indeed, conditioning induces a decreasing map from subsets to subsets, which guarantees the existence of a fixed point, where all agents' conditional beliefs stabilize. Once the agents have reached this fixed-point, they have eliminated all higher-order uncertainties concerning the posterior beliefs about

At the fixed-point

The reader accustomed to static agreement theorems will see that we are now only a small step away from concluding that sequences of simultaneous conditionings lead to agreements, as is indeed the case in our example. Since common prior and common belief of posteriors suffice for agreement, we get:

Take any sequence of conditioning acts for a formula

This recasts, in our logical framework, the result of [

_{1}. They keep announcing the same thing, but each time, this induces important changes in both agents' higher-order information. Mal is led stepwise to realize that they are not dreaming, and crucially, Cobb also knows that Mal receives and processes this information. As the reader can check, at each step in the process, Mal's beliefs are common knowledge.

Once again,

epistemic-doxastic model._{i}φ_{i}φ

In dialogues, just like with belief conditioning, iterated public announcements induce decreasing maps between epistemic-doxastic models, and thus are bound to reach a fixed point, where no further discussion is needed. At this point, the protagonists are guaranteed to have reached consensus:

At the fixed-point ℳ_{n},

For any public dialogue about _{n},

As noted in the literature [

Here are a few points about the preceding scenarios that invite generalization. Classical agreement results require the agents to be “like-minded” [

A final point is this. While agreement scenarios seem special, to us, they demonstrate a general topic, viz. how different parties in a conversation, say a “Skeptic” and an ordinary person, can modify their positions interactively. In the epistemological literature, this dynamic conversational feature has been neglected—and the above, though solving things in a general way, at least suggests that there might be interesting structure here of epistemological interest.

One virtue of our logical perspective is that we can study the above limit phenomena in much greater generality.

For a start, for purely logical reasons, iterated public announcement of any formula must reach a fixed point in the limit: each announcement either shrinks the current model or leaves it unchanged.
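This limit behavior can be sketched directly: repeating an announcement whose truth may depend on the current model either stabilizes, or, for "self-refuting" assertions, empties the model. In the toy code below, formulas are modeled as Python predicates over (world, current model), an invented encoding for illustration.

```python
def iterate_announcement(worlds, phi):
    """Repeat !phi, where phi is evaluated on the current model, until stable."""
    while True:
        kept = {w for w in worlds if phi(w, worlds)}
        if kept == worlds:
            return worlds
        worlds = kept

# A model-independent formula stabilizes after one announcement:
stable = iterate_announcement({1, 2, 3, 4}, lambda w, ws: w >= 2)
# "I am not the current maximum" is self-refuting: each round removes the top
# world, and the limit model is empty.
empty = iterate_announcement({1, 2, 3, 4}, lambda w, ws: w != max(ws))
print(stable, empty)
```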

In addition to definability, there is complexity and proof. Van Benthem [

But these scenarios are still quite special, in that the same assertion gets repeated. There is a large variety of further long-term scenarios in the dynamic logic literature, starting from the "Tell All" protocols in [

In addition to the limit dynamics of knowledge under hard information, there is the limit behavior of belief, making for more realistic dialog scenarios. This allows for more interesting phenomena in the earlier update sequences. An example is iterated hard information dovetailing agents' opinions, flipping sides in the disagreement until the very last steps of the dialogue (cf. [

All these phenomena get even more interesting mathematically with dialogs involving soft announcements [⇑

Suppose that ^{r}^{r}_{3} is the same model as ℳ_{1}, we have a cycle:

In line with this, players' conditional beliefs may keep changing along the stages of an infinite dialog.

Every iterated sequence of truthful radical upgrades stabilizes all simple non-conditional beliefs in the limit.

Finally, we point at some further aspects of the topics raised here. Integrating agents' orderings through some prescribed process has many similarities with other areas of research. One is

We conclude this section with one concrete setting where many of the earlier themes come together, viz. formal learning theory.

The learning setting shows striking analogies with the dynamic-epistemic logics that we have presented in this paper. What follows is a brief summary of recent work in [

Now, it is not hard to recognize many features here of the logical dynamics that we have discussed. The learning function outputs beliefs, that get revised as new hard information comes in (we think of the observation of the evidence stream as a totally reliable process). Indeed, it is possible to make very precise connections here. We can take the possible hypotheses as our possible worlds, each of which allows those evidence streams (histories of investigation) that satisfy that hypothesis. Then observing successive pieces of evidence is a form of public announcement allowing us to prune the space of worlds. The beliefs involved can be modeled as we did before, by a plausibility ordering on the set of worlds for the agent, which may be modified by successive observations.

On the basis of this simple analogy, [

Public announcement-style eliminative update is a universal method: for any learning function, there exists a plausibility order that encodes the successive learning states as current beliefs. The same is true, taking observations as events of soft information, for radical upgrade of plausibility orders.

When evidence streams may contain a finite amount of errors, public announcement-style update is no longer a universal learning mechanism, but radical upgrade still is.

With these bridges in place, one can also introduce logical languages in the learning-theoretic universe. [

Such combinations of dynamic epistemic logic and learning theory also invite comparison with game theory. Learning, for instance to coordinate on a Nash equilibrium in repeated games, has been extensively studied, with many positive and negative results—see, for example, [

This concludes our exploration of long-term information dynamics in our logical setting. We have definitely not exhausted all possible connections, but we hope to have shown how a general Theory of Play fits in naturally with many different areas, providing a common language between them.

We now return to game theory proper, and bring our dynamic logic perspective to bear on an earlier benchmark example: Backwards Induction. This topic has been well-discussed already by eminent authors, but we hope to add a number of new twists suggesting broader ramifications in the study of agency.

In the light of logical dynamics, the main interest of a solution concept is not its “outcome”, its set of strategy profiles, but rather its “process”, the way in which these outcomes are reached. Rationality seems largely a feature of procedures we follow, and our dynamic logics are well-suited to focus on that.

Here is a procedural line on Backwards Induction as a rational process. We can take

As we saw in Section 3, public announcements saying that some proposition φ is true transform the current model ℳ into its submodel ℳ_{|φ}, whose domain consists of just those worlds in ℳ that satisfy φ.

"at the current node, no player ever chose a strictly dominated move coming here" (rat).

This makes an informative assertion about nodes in a game tree, that can be true or false. Thus, announcing this formula rat is a genuine informative action that can shrink the current model.

Stage 0 rules out

We see how the

In any game tree ℳ, the model (rat, ℳ)^{#} is the actual subtree computed by the BI procedure.
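A strategic-form analogue of this announcement scenario is the familiar iterated elimination of strictly dominated strategies: each round "announces" that no player uses a dominated choice, shrinking the matrix until a fixed point. The sketch below runs this on an invented 2-player payoff table.

```python
# Toy matrix: profile -> (payoff player 1, payoff player 2); payoffs invented.
PAYOFF = {("t", "l"): (3, 2), ("t", "r"): (1, 1),
          ("b", "l"): (2, 0), ("b", "r"): (0, 3)}

def undominated(rows, cols):
    """One round: drop rows (then columns) strictly dominated on the rest."""
    r2 = {r for r in rows
          if not any(all(PAYOFF[(q, c)][0] > PAYOFF[(r, c)][0] for c in cols)
                     for q in rows if q != r)}
    c2 = {c for c in cols
          if not any(all(PAYOFF[(r, d)][1] > PAYOFF[(r, c)][1] for r in r2)
                     for d in cols if d != c)}
    return r2, c2

rows, cols = {"t", "b"}, {"l", "r"}
while True:                      # iterate the 'rationality announcement'
    nr, nc = undominated(rows, cols)
    if (nr, nc) == (rows, cols):
        break
    rows, cols = nr, nc
print(sorted(rows), sorted(cols))
```

Here row b is eliminated first, after which column r becomes dominated: the announcement is informative precisely because earlier rounds change what later rounds can rule out.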

The logical background here is just as we have seen earlier in our epistemic announcement dynamics. The actual

Many foundational studies in game theory view Rationality as choosing a best action given what one believes about the other players.

[The

A move

Rationality* (

If game node

This changes the plausibility order, and hence the pattern of dominance-in-belief, so that iteration makes sense. Here are the stages in our earlier example, where letters

In the first game tree, going right is not yet dominated in beliefs for

On finite trees, the Backwards Induction strategy is encoded in the plausibility order for end nodes created by iterated radical upgrade with rationality-in-belief.

Again this is “self-fulfilling”: at the end of the procedure, the players have acquired common belief in rationality. An illuminating way of proving this uses an idea from [

Each sub-relation

More generally, relational strategies correspond one-to-one with “move-compatible” total orders of endpoints. In particular, conversely, each such order ≤ induces a strategy

For any game tree ℳ and any ^{k}^{k}

Thus, the algorithmic view of Backwards Induction and its procedural doxastic analysis in terms of forming beliefs amount to the same thing. Still, as with our iterated announcement scenario, the dynamic logical view has interesting features of its own. One is that it yields fine-structure to the plausibility relations among worlds that are usually taken as primitive in doxastic logic. Thus games provide an underpinning for the possible worlds semantics of belief that seems of interest per se.

We have seen how several dynamic approaches to Backwards Induction amount to the same thing. To us, this means that the notion is logically stable. Of course, extensionally equivalent definitions can still have interesting intensional differences. For instance, the above analysis of strategy creation and plausibility change seems the most realistic description of the “entanglement” of belief and rational action in the behaviour of agents. But as we will discuss soon, a technical view in terms of fixed-point logics may be the best mathematical approach linking up with other areas.

No matter how we construe them, one key feature of our dynamic announcement and upgrade scenarios is this. Unlike the usual epistemic foundation results, common knowledge or belief of rationality is not assumed, but produced by the procedure itself.

Our analysis does not just restate existing game-theoretic results, it also raises new issues in the logic of rational agency. Technically, all that has been said in Sections 2 and 3 can be formulated in terms of existing

Game solution procedures need not use the full power of fixed-point languages for recursive procedures. It makes sense to use small decidable fragments where appropriate. Still, it is not quite clear right now what the best fragments are. In particular, our earlier analysis intertwines two different relations on trees: the move relation and the plausibility relations created by successive upgrades.

In combined logics of action and knowledge, it is well-known that apparently harmless assumptions such as Perfect Recall for agents make the validities undecidable, or non-axiomatizable, sometimes even

These patterns serve as the basic grid cells in encodings of complex “tiling problems” in the logic.

So, what is the complexity of fixed-point logics for players with this kind of regular behaviour? Can it be that Rationality, a property meant to make behaviour simple and predictable, actually makes its theory complex?

The main trend in our analysis has been toward making dynamics explicit in richer logics than the usual epistemic-doxastic-preferential ones, in line with the program in [

In particular, in practical reasoning, we are often only interested in what are our

Can we axiomatize the modal logic of finite game trees with a

Further logical issues in our framework concern extensions to

We end by highlighting a perhaps debatable assumption of our analysis so far. It has been claimed that the very Backwards Induction reasoning that ran so smoothly in our presentation, is incoherent when we try to "replay" it in the opposite order, when a game is actually played.

Backwards Induction tells us that

Responses to this difficulty vary. Many game-theorists seem under-impressed. The characterization result of [

We are more inclined toward the line of [

This matching up of two directions of thought:

But all this is exactly what the logical dynamics of Section 2 is about. Our earlier discussion has shown how acts of information change and belief revision can enter logic in a systematic manner. Thus, once more, the richer setting that we need for a truly general theory of game solution is a perfect illustration for the general Theory of Play that we have advocated.

Logic and game theory form a natural match, since the structures of game theory are very close to being models of the sort that logicians typically study. Our first illustrations reviewed existing work on static logics of game structure, drawing attention to the fixed-point logic character of game solution methods. This suggests a broader potential for joining forces between game theory and computational logic, going beyond specific scenarios toward more general theory. To make this more concrete, we then presented the recent program of “logical dynamics” for information-driven agency, and showed how it throws new light on basic issues studied in game theory, such as agreement scenarios and game solution concepts.

What we expect from this contact is not the solution of problems afflicting game theory through logic, or vice versa, remedying the aches and pains of logic through game theory. Of course, game theorists may be led to new thoughts by seeing how a logician treats (or mistreats) their topics, and also, as we have shown, logicians may see interesting new open problems through the lens of game theory.

But fruitful human relations are usually not therapeutic: they lead to new facts, in the form of shared offspring. In particular, one broad trend behind much of what we have discussed here is this. Through the fine-structure offered by logic, we can see the dynamics of games as played in much more detail, making them part of a general analysis of agency that also occurs in many other areas, from “multi-agent systems” in computer science to social epistemology and the philosophy of action. It is our expectation that the offspring of this contact might be something new, neither fully logic nor game theory: a

Cobb and Mal on the window ledge.

Cobb and Mal's discussion on the window ledge.

We could also have more abstract worlds, carrying strategy profiles without being identical to them. This additional generality is common in epistemic game theory, see e.g. [

We have borrowed the appealing term “freedom” from [

We cannot go into details of the modern modal paradigm here, but refer to the textbooks [

See [

For example, a proposition

A formula is valid in a class of models whenever it is true at all states in all models of that class. Cf. [

Cf. [

Cf. [

Cf. [

We omit the simple modal “bisimulation”-based argument here.

Cf. [

For further illustrations of logics on strategic games, cf. [

In what follows, we shall mainly work with

“Game frames” here are extensive games extended with one more binary relation

One can use the standard defining sequence for a greatest fixed-point, starting from the total relation.
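To fix intuitions, the defining sequence for a greatest fixed-point can be sketched as iterated refinement downward from the top element. The following is a schematic illustration of ours, with a toy monotone operator chosen as an assumption; it is not the operator used in the game-theoretic construction.

```python
def gfp(f, top):
    """Greatest fixed-point of a monotone set operator f, computed by the
    standard defining sequence top, f(top), f(f(top)), ... (finite case)."""
    stage = frozenset(top)
    while True:
        nxt = frozenset(f(stage))
        if nxt == stage:          # sequence has stabilized: fixed point reached
            return stage
        stage = nxt

# Toy monotone operator on subsets of {0, ..., 6}: keep x iff 2*x survives.
f = lambda s: {x for x in s if 2 * x in s}
result = gfp(f, range(7))  # descends {0..6} -> {0,1,2,3} -> {0,1} -> {0}
```

The descending stages mirror the successive rounds of discarding moves in a game solution procedure.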

Note that the distinction “hard”

[

Cf. [

Cf. [

Well-foundedness is only needed to ensure that for any set _{i}

We can even prove the following equivalence: _{i} v_{i} _{i} w

Another notion is Strong Belief
_{i} v_{i}_{i}

We can define belief _{i}φ

The key point is stated in ([_{i}ψ

The most general dynamic point is this: “Information update is model transformation”.
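In the simplest case, public announcement, the slogan can be made concrete: announcing a proposition transforms the model by deleting the worlds that refute it. The following is a minimal sketch of ours; the three worlds and the announced proposition are hypothetical.

```python
# "Information update is model transformation": public announcement of P
# restricts an epistemic model to the worlds where P holds.

def announce(worlds, relation, holds):
    """Keep only the worlds satisfying the announcement, and restrict
    the accessibility relation to the surviving worlds."""
    surviving = {w for w in worlds if holds(w)}
    new_relation = {(w, v) for (w, v) in relation
                    if w in surviving and v in surviving}
    return surviving, new_relation

# Hypothetical model: total uncertainty over three worlds.
worlds = {"w1", "w2", "w3"}
relation = {(w, v) for w in worlds for v in worlds}
not_w3 = lambda w: w != "w3"  # the announced proposition

new_worlds, new_relation = announce(worlds, relation, not_w3)
```

After the update the agent's uncertainty is confined to w1 and w2; iterating such transformations is the long-term dynamics at issue in the text.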

Conservative upgrade is the special case of radical upgrade with the modal formula _{i}_{i}_{i}
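The contrast between the two upgrades can be made vivid on a toy plausibility order, represented here as a list from most to least plausible. This is a schematic illustration of ours; for simplicity it assumes a single most plausible world satisfying the formula, rather than a tie class.

```python
def radical_upgrade(order, holds):
    """Radical upgrade: all worlds satisfying the formula become more
    plausible than all others, keeping the old order within each zone."""
    return [w for w in order if holds(w)] + [w for w in order if not holds(w)]

def conservative_upgrade(order, holds):
    """Conservative upgrade: only the most plausible world satisfying
    the formula moves to the top; the rest of the order is unchanged."""
    best = next(w for w in order if holds(w))
    return [best] + [w for w in order if w != best]

# Hypothetical order over four worlds, most plausible first.
order = ["a", "b", "c", "d"]
is_p = lambda w: w in {"c", "d"}

radical = radical_upgrade(order, is_p)            # ["c", "d", "a", "b"]
conservative = conservative_upgrade(order, is_p)  # ["c", "a", "b", "d"]
```

On this representation the special-case claim is visible: conservative_upgrade(order, is_p) coincides with radical_upgrade(order, lambda w: w == "c"), i.e., with radical upgrade by “being the most plausible world satisfying the formula”.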

See [

In terms of the cited literature, we will now engage in concrete “logic of protocols”.

This definition is meant to fix intuition only. Full details on how to deal with

See the remarks on page 66 contrasting public announcement and belief conditioning.

Our analysis also applies to infinite models: see the cited papers.

Thanks to Alexandru Baltag for pointing out this feature to us.

We omit some details of pushing the process through infinite ordinals. The final stage is discussed further in terms of “redundant assertions” in [

Even in the single-step case, characterizing “self-fulfilling” public announcements has turned out quite involved [

Infinite iteration of plausibility reordering is in general a non-monotonic process, closer to theories of truth revision in the philosophical literature [

Van Benthem [

The logical perspective can actually define many further refinements of learning desiderata, such as reaching future stages when the agent's knowledge becomes introspective, or when her belief becomes correct, or known.

Many of these results live in a probabilistic setting, but dynamic logic and probability form another natural connection that we have to forgo in this paper.

Other versions of our scenario would rather make them equi-plausible.

We refer to [

See the dissertation [

Recall our earlier remarks in Section 1 on the complexity of strategic games.

There is a large literature focused on this “paradox” of backwards induction which we do not discuss here. See, for example, [

The drama is clearer in longer games, when

Samet [

One reaction to these surprise events might even be a switch to an entirely new style of reasoning about the game. That would require a more finely-grained