Logics for Strategic Reasoning of Socially Interacting Rational Agents: An Overview and Perspectives

Abstract: This paper is an overview of some recent and ongoing developments of formal logical systems designed for reasoning about systems of rational agents who act in pursuit of their individual and collective goals, explicitly specified in the language as arguments of the strategic operators, in a socially interactive context of collective objectives and attitudes which guide and constrain the agents' behavior.


Introduction
The concepts of agency and rationality are fundamental for describing and understanding modern society. Rational agents can be humans, intelligent robots, automated devices, or other autonomous entities that act and interact in pursuit of their individual and collective preferences and goals. In accordance with their various essential attributes, such as beliefs, knowledge, and abilities to observe, learn, remember, reason and communicate, rational agents plan and execute strategies (in AI, often also called "policies") to achieve their goals. The common environment in which agents act and interact is generally called a multiagent system. This generic concept covers a variety of entities, including teams of agents, social groups and networks, organizations, markets, and entire societies. Nowadays, the modeling and analysis of systems of rational agents is a broad interdisciplinary field of extensive research, bringing together ideas, approaches, and methods from the humanities, social sciences, artificial intelligence, decision and game theory, mathematics, and computer science.
Agents and groups (also often called "coalitions") of rational agents in multiagent systems typically have competing, sometimes adversarial to each other, interests and goals. When the goals are purely quantitative, such systems are modeled as multiplayer games, studied in game theory, where rational players act so as to optimize their payoffs. Agents may also have purely qualitative ("win-lose") or combined goals. A fundamental aspect of reasoning both within and about multiagent systems is the reasoning about abilities of agents and coalitions to design and execute strategies to achieve their goals, or to prevent others from achieving theirs, hereafter called (multiagent) strategic reasoning.
Importantly, the behavior of rational agents is usually guided not only by their individual and group goals, but also by those of the others, as well as by common (e.g., societal) goals. Furthermore, their behavior is constrained by various individual, collective, and societal norms (such as rights, obligations, and prohibitions). The totality of common goals, attitudes, norms, and other constraints under which agents act and interact I will call socially interactive context. That context leads to a complex interplay of cooperation and competition between the agents in the system.
Since the 1980s, formal logic has emerged as a useful and efficient methodological and technical tool for modeling, analysis, and reasoning about multiagent rational interaction. A rich variety of logical frameworks for strategic reasoning in, and about, multiagent systems has since been developed, including:
• The STIT-based approach, building on the theory of "Seeing To It That" (STIT), originating from the seminal work of Belnap and Perloff [3]. For exploring the closer links of the present paper with STIT, the reader is referred to [13] and the overview chapter [14] on using STIT for strategic reasoning, as well as to [15] for temporal extensions of STIT and applications to normative reasoning, to [16] for using STIT for reasoning about social influence, and to [17] for providing semantics of temporal STIT in the concurrent game models used in this paper.
• The dynamic logic based approach, originating from Parikh's Game Logic [1], and more recently explored further in [18,19] and other related works.
For further references and a broader view on logics for analyzing games, I also refer the reader to [20].
Now, a few words about the main precursors of the logics presented here. The logic CL was introduced in [4,5] with the explicit intention to formalize reasoning about one-step (local) strategic abilities of agents and coalitions, i.e., strategic abilities to guarantee the achievement of their explicitly specified goals in the immediate outcome of their (individual or collective) action, regardless of the respective actions of all remaining agents. The logic ATL was initially introduced in [6,7] as a logical formalism for formal specification and verification of open (i.e., interacting with an environment) computer systems, where the agents represent concurrently executed processes. Still, it was gradually adopted as one of the most popular and standard logical systems for reasoning about long-term strategic abilities of agents and coalitions in extended concurrent multiplayer games. Technically, ATL can be described as an extension of CL with the long-term temporal operators G, F, and U, which were previously adopted, inter alia, in the branching-time temporal logic CTL (which can be regarded as a single-agent fragment of ATL). Both CL and ATL feature a special type of modal operators, denoted by [C] in CL, respectively by ⟨⟨C⟩⟩ in ATL, parameterized with a coalition of agents C, such that, given a coalitional goal of C formalized as (ensuring the truth of) a formula φ, the formula [C]φ intuitively says that the coalition C has a collective strategy σ C that guarantees the satisfaction of φ in every outcome state (for CL), respectively, in every outcome play (for ATL) that can occur when the agents in C execute their strategies in σ C , regardless of the choices (strategic or not) of actions of the agents that are not in C.
Thus, both CL and ATL capture reasoning about unconditional strategic abilities of agents within coalitions, acting in full cooperation with each other in pursuit of their coalitional goals, aiming to succeed against any possible behavior of their opponents, which are thus regarded as their complete adversaries (in the context of CL) or as randomly behaving environment (in the original context of ATL). This, however, is a rather narrow and extreme perspective, as agents in the real world are seldom either complete allies or complete adversaries to each other. In reality, all agents acting in the system (except for the environment, or absolute adversaries) have their own goals and act in pursuit of their fulfillment, rather than just to prevent the proponents from achieving their goals. Thus, the strategic interactions of rational agents usually involve a complex interplay of cooperation and competition, both driven by the individual and collective goals of all agents, which may be partly or fully allied or adversarial to each other. To adequately model and capture the behavior and interactions of rational agents within a socially interactive context requires much richer and more versatile formal logical frameworks than those provided by the logics CL, ATL and their close variations. Several richer frameworks have been proposed more recently, including strategy logics [21][22][23][24]; the socially friendly coalitional logic SFCL and the group protecting coalitional logic GPCL [25], with the latter subsequently extended with temporal operators for long-term goals to the temporal logic of coalitional goal assignments TLCGA in [26]; the logic for conditional strategic reasoning ConStR (cf. [27,28]); and several recently proposed more special-purpose logics for strategic reasoning, focusing on knowhow strategies [29], ethical dilemmas [30], responsibility [31], blameworthiness [32], etc.
This paper aims to provide an accessible and not too technical, yet fairly detailed, overview of the recent developments in the area and to outline some perspectives and challenges for further research. To keep the paper within reasonable scope and size, I will not discuss here the aspects of socially interactive context related to agents' knowledge, nor to normative constraints, but will focus on the strategic reasoning guided only by various patterns of cooperation and competition between agents and coalitions. I will usually assume here complete knowledge of the agents about the system and their full observability (with or without memory) and ability to communicate and coordinate during the "play". These assumptions are sometimes not justified, and then substantial further complications arise, both conceptual and computational, with respect to analyzing the agents' strategic abilities and designing their strategies in the context of agents' incomplete or imperfect information. For more on these issues, the reader is referred to, e.g., [8,9,33], and the entire volume [34], where the latter is published.
Structure of the paper: Section 2 provides some technical preliminaries on concurrent game models, including plays and strategies in them, and provides an illustrative example. Then, Section 3 presents a brief informal preview of the main logical systems presented in the next five sections: the basic coalition logic CL in Section 4, its temporal extension ATL in Section 5, the logic of conditional strategic reasoning ConStR in Section 6, the socially friendly coalition logic SFCL in Section 7, and the logic of local coalitional goal assignments LCGA and its temporal extension TLCGA in Section 8. Eventually, Section 9 proposes a uniform logical framework, called basic strategy logic, which involves variables over strategies and quantification over them, and thus enables uniform embedding of all other strategic operators mentioned here. I end with brief concluding remarks in Section 10.

Technical Preliminaries
The reader is expected to have a basic background in propositional classical and modal logics, within the introductory chapters of, e.g., [35]. In addition, this section provides technical preliminaries needed for the definitions and understanding of the formal semantics of the various strategic operators and logics presented and discussed further. Most of it can be omitted by a reader who is only interested in the informal aspects of the topics, and can be consulted only if, and when, necessary for understanding the further content. For more technical details on these preliminaries, the reader is referred to [8,9], as well as [36] (Chapter 9) on concurrent game models and ATL.

Concurrent Game Models
Consider a fixed finite nonempty set of agents Agt = {a 1 , ..., a n }, also called here players, and a fixed countable set of atomic propositions Prop. Subsets of Agt will also be called coalitions. The Cartesian product of a family of sets {X a } a∈Agt is the set of tuples (x 1 , ..., x n ) where x i ∈ X a i for each i = 1, ..., n, denoted, as usual, by Π a∈Agt X a .

Definition 1.
Let O be any nonempty set. A (strategic) game form for the set of agents Agt over the set of outcomes O is a tuple (Act, act, O, out), where:
• Act is a nonempty set of actions;
• act : Agt → P + (Act) is a mapping assigning to each a ∈ Agt a nonempty set act(a) of actions available to the player a;
• out : Π a∈Agt act(a) → O is a map assigning to every action profile ζ ∈ Π a∈Agt act(a) a unique outcome in O.

Definition 2. A concurrent game model (CGM) is a tuple M = (Agt, S, Act, g, Prop, L), where:
• S is a nonempty set of (game) states;
• Act is a nonempty set of actions;
• g is a game map, assigning to each state w ∈ S a strategic game form g(w) = (Act, act w , S, out w ) over the set of outcomes S;
• Prop is a countable set of atomic propositions;
• L : S → P(Prop) is a labeling function, assigning to every state in S the set of atomic propositions true at that state.
Thus, for each a ∈ Agt and w ∈ S, the set act w (a) consists of the locally available actions for a in w. We can now define the global action function act : Agt × S → P + (Act) by setting act(a, w) ::= act w (a). We also define the set Act a ::= ⋃ w∈S act w (a) of possible actions for a.
Given a concurrent game model M = (Agt, S, Act, g, Prop, L), we define the following.
• An action profile ζ ∈ Π a∈Agt Act a is available at the state w if it consists of locally available actions, i.e., if ζ ∈ Π a∈Agt act w (a). The set of all action profiles that are available at w will be denoted by ActProf w .
• out M is the global outcome function assigning to every state w and action profile ζ ∈ ActProf w the unique outcome out M (w, ζ) ::= out w (ζ). When M is fixed by the context, it will be omitted from the subscript.
• More generally, given a coalition C ⊆ Agt, a joint action for C in M is a tuple of individual actions ζ C ∈ Π a∈C act(a). In particular, for any action profile ζ ∈ Π a∈Agt act(a), ζ| C is the joint action obtained by restricting ζ to C. The joint action ζ| C is locally available at the state w iff every individual action in it is locally available for the respective agent in w.

• For any w ∈ S, C ⊆ Agt, and joint action ζ C that is available at w, we define the set of possible outcomes from the application of the joint action ζ C at the state w:
Out[w, ζ C ] ::= { out(w, ζ) | ζ ∈ ActProf w and ζ| C = ζ C }.
In the special case when C = Agt, ζ Agt is a full available action profile, so Out[w, ζ Agt ] consists of the single outcome out(w, ζ Agt ).
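To make these definitions concrete, here is a minimal sketch of a (globally presented) CGM and of the outcome sets Out[w, ζ C ] in Python. The two agents, states, actions, and transition table below are hypothetical illustrations for this sketch only; they are not the model of Example 4.

```python
from itertools import product

# A toy deterministic CGM: two agents, two states, actions "s" and "m".
AGENTS = ("a1", "a2")

# act[(agent, state)] -> set of locally available actions (act_w(a))
ACT = {
    ("a1", "w0"): {"s", "m"}, ("a2", "w0"): {"s", "m"},
    ("a1", "w1"): {"s"},      ("a2", "w1"): {"s", "m"},
}

# out[(state, action_profile)] -> unique successor state (out_w)
OUT = {
    ("w0", ("s", "s")): "w0", ("w0", ("s", "m")): "w1",
    ("w0", ("m", "s")): "w1", ("w0", ("m", "m")): "w0",
    ("w1", ("s", "s")): "w1", ("w1", ("s", "m")): "w0",
}

def action_profiles(state):
    """All action profiles available at `state` (the set ActProf_w)."""
    return list(product(*(sorted(ACT[(a, state)]) for a in AGENTS)))

def outcomes(state, joint_action):
    """Out[w, zeta_C]: all successors of `state` under a partial joint action.

    `joint_action` maps the agents of a coalition C to actions; the remaining
    agents may act arbitrarily, so all resulting outcome states are collected.
    """
    result = set()
    for profile in action_profiles(state):
        if all(profile[AGENTS.index(a)] == act for a, act in joint_action.items()):
            result.add(OUT[(state, profile)])
    return result
```

For the full coalition, `outcomes` returns a singleton, matching the special case C = Agt noted above.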

Remark 3.
Note that we only consider deterministic models, where every available action profile produces a single outcome. This is not an essential restriction, because every nondeterministic model can be made deterministic by adding a fictitious additional agent (say, nature) whose actions settle the nondeterminism by selecting the outcome. However, it should be noted that whether or not nondeterministic models are considered makes a difference for the validities of the coalition logic CL, as shown in Section 4.4, as well as in the various other logics in which strategic abilities of coalitions can be expressed.
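The determinization trick of Remark 3 can be sketched as follows: a nondeterministic outcome map is resolved by extending every action profile with an action of a fictitious agent "nature" that selects one of the possible outcomes. All names and the clamping convention for nature's surplus actions are illustrative assumptions of this sketch.

```python
def determinize(ndet_out):
    """Turn a nondeterministic outcome map {(state, profile): set_of_states}
    into a deterministic one over extended profiles (profile + nature's choice).
    """
    det_out = {}
    # Nature needs as many actions as the maximal branching degree.
    max_branching = max(len(succs) for succs in ndet_out.values())
    for (state, profile), successors in ndet_out.items():
        succ = sorted(successors)
        for k in range(max_branching):
            # If nature's action index exceeds the local branching, clamp it,
            # so nature always has the same action repertoire available.
            det_out[(state, profile + (k,))] = succ[min(k, len(succ) - 1)]
    return det_out
```

Every extended profile now yields a unique outcome, as required by Definition 2.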
It is sometimes more convenient to eliminate the "local view" from the definition of concurrent game models, given by the game map assigning the local strategic game forms, and to redefine these models globally, as follows:
M = (Agt, S, Act, act, out, Prop, L),
where the components are defined as above. Clearly, the two definitions are equivalent and will be used interchangeably.
Example 4 (Concurrent game model: Adam and Eve in hotel Life). Figure 1 pictures a concurrent game model formalizing the following story, which the reader is advised to take at face value and with a grain of salt. While it has a clearly allegoric flavor, it is not meant to formalize some meaningful (real or fiction) life story, but just to give an intuitively appealing and sufficiently nontrivial, yet not too complicated, example of a concurrent game model on which to illustrate the components of that concept. Thus, questions about the meaning or idea behind this or that state or transition do not necessarily have good answers.
Two agents, Adam and Eve, live in hotel "Life". It has (at least) three rooms: R1, R2, and R3. Initially, both Adam and Eve are in room R1. Every day, each of them is able to choose either to stay in the same room (action stay, denoted by s), or to move to another room (action move, denoted by m), with some restrictions described further. Whichever room each of them chooses, they stay there for the night, and then each chooses again to either stay in the same room, or move to another room, as specified in the model. In some cases, they also have the option to "retreat" (action retreat, denoted by r).  Here are the formal components of the model, with some brief explanations: • The set of agents is Agt = {Adam, Eve}.

• The set of game states S is as displayed in Figure 1, and the names of the states represent the current locations of the agents; e.g., A 1 E 2 means that Adam is in R1 and Eve is in R2, etc. Thus, the names of the states can also be interpreted as pairs of atomic propositions saying "Adam is in room X" and "Eve is in room Y". These atomic propositions, however, will not feature in the example.
• The atomic propositions occurring in the example are:
- T, which is true in a state iff Adam and Eve are in the same room ("together"), i.e., in A 1 E 1 and A 2 E 2 , as indicated in their labels;
- H A , meaning "Adam is happy", true in the states where H A is listed in the label;
- H E , meaning "Eve is happy", true in the states where H E is listed in the label.

Plays and Strategies in Concurrent Game Models
Given a (globally defined) concurrent game model M = (Agt, S, Act, act, out, Prop, L), a partial play, or a history, in M is either an element of S or a finite word of the form h = w 0 ζ 0 w 1 ... w n−1 ζ n−1 w n , where w 0 , ..., w n ∈ S and, for each i < n, ζ i is a locally available action profile in ActProf w i . Some partial plays in Example 4 can be traced in Figure 1. The last state in a history h will be denoted by l(h). The set of histories in M is denoted by Hist(M). The sequence of states w 0 w 1 ... w n obtained by forgetting the action profiles in the history h is called the (initial) path induced by h and denoted path(h).
A (memory-based) strategy for player a is a map σ a assigning to each history h = w 0 ζ 0 ...ζ n−1 w n in Hist(M) an action σ a (h) from act(a, w n ). A strategy σ a is path-based if it assigns actions only based on the path generated by the history (not taking into account the intermediate action profiles), i.e., σ a (h) = σ a (h′) whenever path(h) = path(h′). A strategy σ a is memoryless, or positional, if it assigns actions only based on the current (last) state, i.e., σ a (h) = σ a (h′) whenever l(h) = l(h′).
Given a coalition C ⊆ Agt, a joint strategy for C in the model M is a tuple Σ C of individual strategies, one for each player in C. For every joint strategy Σ C and a history h, we denote by Σ C (h) the joint action prescribed by Σ C on h.
A (global) strategy profile Σ is a joint strategy for the grand coalition Agt, i.e., an assignment of a strategy to each player. We denote the set of all strategy profiles in the model M by StratProf M , and likewise the set of all joint strategies for a coalition C in M. Given a strategy profile Σ, the play induced by Σ at w ∈ S, denoted play(w, Σ), is the unique infinite word w 0 ζ 0 w 1 ζ 1 w 2 ... such that w 0 = w and, for each n < ω, ζ n = Σ(w 0 ζ 0 ... w n ) and w n+1 = out(w n , ζ n ). The infinite word w 0 w 1 w 2 ... obtained by forgetting the action profiles in this infinite play is called the path induced by Σ at w, denoted path(Σ, w).
Here are some simple, informally described, positional strategies and the plays induced by them in Example 4:
• Consider the strategy σ 1 which prescribes the action s if Adam and Eve are currently together, or else m, if possible, otherwise, again, s. If both Adam and Eve follow that strategy starting from A 1 E 1 , the induced play is play(A 1 E 1 , (σ 1 , σ 1 )) = A 1 E 1 (s, s) A 1 E 1 (s, s) ...
• Consider the strategy σ 2 which prescribes the action s if the player is currently happy, or else r if possible, otherwise m, if possible, otherwise s. The plays induced when both players follow σ 2 , or when Adam follows σ 1 and Eve follows σ 2 , starting from A 1 E 1 , can likewise be traced in Figure 1.
• Lastly, if Adam follows σ 1 and Eve follows the strategy σ 3 , which prescribes the action s only if both players are currently happy or if no other action is available, or else r if possible, otherwise m, then the induced play starting from A 1 E 1 is a happy ending, with both players eventually happy.
More generally, given a coalition C ⊆ Agt, a state w ∈ S, and a joint strategy Σ C for C, the set of outcome plays induced by the joint strategy Σ C at w is the set of plays
Plays(w, Σ C ) = { play(w, Σ) | Σ ∈ StratProf M such that Σ(a) = Σ C (a) for all a ∈ C }.
Given a strategy profile Σ, we also denote Plays(w, Σ, C) ::= Plays(w, Σ| C ). We will likewise use the notation paths(w, Σ, C) for the set of paths obtained from the plays in Plays(w, Σ, C). Since these only depend on the strategies assigned to players in C, I shall freely use the notation Plays(w, Σ, C) and paths(w, Σ, C) even when Σ is defined for all members of C, but not necessarily for all other players.
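A positional strategy and the play it induces are easy to compute mechanically. The sketch below runs a profile of positional strategies (maps from states to actions) over a hypothetical two-agent CGM; the model and the strategies are illustrative assumptions, not the model or the strategies σ 1 –σ 3 of Example 4.

```python
# A toy two-agent CGM given by its outcome table.
AGENTS = ("a1", "a2")
OUT = {
    ("w0", ("s", "s")): "w0", ("w0", ("s", "m")): "w1",
    ("w0", ("m", "s")): "w1", ("w0", ("m", "m")): "w0",
    ("w1", ("s", "s")): "w1", ("w1", ("s", "m")): "w0",
    ("w1", ("m", "s")): "w0", ("w1", ("m", "m")): "w1",
}

def induced_path(start, profile, steps):
    """Finite prefix of path(Sigma, w): the states visited when every agent
    follows its positional strategy profile[agent]: state -> action."""
    path, state = [start], start
    for _ in range(steps):
        joint = tuple(profile[a](state) for a in AGENTS)
        state = OUT[(state, joint)]
        path.append(state)
    return path

# Hypothetical positional strategies: a1 always stays,
# a2 moves in w0 and stays in w1.
sigma = {"a1": lambda w: "s", "a2": lambda w: "m" if w == "w0" else "s"}
```

Since positional strategies ignore the history, the induced play is eventually periodic, as the `induced_path` prefix shows.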

A Variety of Strategic Abilities
This short section is a brief informal preview of the types of strategic abilities and logical operators and systems formalizing them that will be presented further in the paper.
Most of the natural claims about strategic abilities of agents and coalitions (for convenience, hereafter I will often treat individual agents as singleton coalitions) can be stated in terms of existence of a strategy profile Σ (in particular, an action profile ζ) that satisfies certain requirements formalizing these abilities. Here is a natural hierarchy of types of strategic abilities and associated claims about them, which will be discussed further.

1.
Strictly competitive and unconditional, where all agents, respectively, coalitions, act only in pursuit of their own goals and can be assumed to regard all others either as adversaries or as behaving randomly. An alternative way of thinking here is that these are abilities of a given agent, respectively, coalition, to achieve goals independently of the actions of all other agents. Both interpretations make good sense in the context of this paper. The typical claim about this kind of local (immediate, one-step) abilities is: "The coalition A has a joint action to ensure satisfaction of the coalitional goal of A in every outcome state that may result from that joint action". This is the informal semantic reading of the strategic operator [A] in the coalition logic CL [4,5], which will be presented in Section 4.
Respectively, here is a typical long-term strategic ability claim: "The coalition A has a joint strategy to ensure satisfaction of the coalitional goal of A in every outcome play resulting from A following that strategy". This is the informal reading of the strategic operator ⟨⟨A⟩⟩ in the alternating-time temporal logic ATL and its family [7], which will be presented in Section 5.

2.
Competitive, but conditional on the other agents' expected actions, where coalitions (respectively, agents) still act only in pursuit of their own goals, but, when deciding on their course of action, they take into account the goals and the respective expected actions of the other players, so these are not treated as adversaries (or as behaving randomly), but as rational agents pursuing their own goals. Here is a typical such claim expressing conditional strategic ability: "For some (or, every) joint action of the coalition A that ensures satisfaction of its goal γ A , the coalition B has a joint action of its own to ensure satisfaction of its goal γ B ".
There are (at least) two naturally arising readings of such conditional claims, as "proactive ability" and as "reactive ability", and two respective versions of local strategic operators formalizing them. These were first introduced in [27] (respectively, as "de dicto" and "de re" abilities) and further studied in [28]. Conditional abilities will be discussed in more detail in Section 6.

3.
Socially cooperating abilities, where agents and coalitions still act in pursuit of their own goals, but when deciding on their course of action take into account the goals of other agents in the system and make allowance, if possible, for their satisfaction, too. Thus, agents and coalitions are assumed to be not only rational but also cooperative, reconciling their interests with those of the others whenever possible. Two natural examples of strategic operators formalizing such abilities are: (a) the "cooperative ability" operator O c , again introduced and studied in [27,28], which, when applied to (disjoint) coalitions A and B with respective goals φ and ψ, formalizes the statement saying that "A has a joint action σ A which guarantees the satisfaction of φ and enables B to also apply a joint action σ B that guarantees ψ".
This will be presented in more detail in Section 6.
The "socially friendly" coalitional operator SF, introduced and studied in [25], which is a somewhat more general version of O c , informally says that "A has a joint action σ A that guarantees satisfaction of its goal and also enables the complementary coalition A to realize any one of a list of secondary goals by applying a respectively suitable joint action".
The operator SF and the logic SFCL built on it will be presented in more detail in Section 7.

4.
Abilities for cooperation, protecting the interests of agents and coalitions. These capture the idea, complementary to that of SFCL, that, while socially responsible rational agents and coalitions contribute with their individual and collective actions to the society of all agents, they wish to do so in a way that protects their individual and collective interests and goals. Such abilities are expressed by means of the coalitional goal assignment operator [·], introduced in [25] as a local operator and extended with temporalized long-term goals in [26]. A coalitional goal assignment γ is a mapping which assigns a goal formula to each possible coalition, and the operator [γ] formalizes the claim that there is a strategy profile σ of Agt, the restriction of which to every coalition C is a joint action σ C that guarantees the satisfaction of its goal γ(C) regardless of any possible behavior of the remaining agents. The coalitional goal assignment operator and the logics LCGA and TLCGA built on it will be presented in more detail in Section 8.

Coalition Logic and Unconditional Local Strategic Reasoning
The case of strictly competitive (noncooperative) and unconditional abilities, presented in this section, is the extreme case of strategic abilities, where agents, respectively, coalitions, act only in pursuit of their own goals and regard all others either as adversaries or as behaving randomly. In this section, I will present briefly the basic multiagent logic CL formalizing these abilities and will illustrate its use with some examples.

The Basic Logic for Coalitional Strategic Reasoning CL
As noted earlier, a typical claim about this kind of local strategic abilities is of the following type: "The coalition C has a joint action to guarantee satisfaction of the coalitional goal of C in every outcome state that may occur as a result of that joint action". This is the informal semantic reading of the strategic operator [C] in the coalition logic CL introduced in [4]; cf. also [5,37]. CL extends classical propositional logic with coalitional strategic modal operators [C], for any coalition of agents C. The formulae of CL are defined by the following formal grammar, where p ∈ Prop and C ⊆ Agt:
φ ::= p | ⊤ | ¬φ | (φ ∨ φ) | [C]φ
All other standard propositional connectives and constants, including ⊥, ∧, →, and ↔, are defined as usual. I will write [i] instead of [{i}].

Expressing Claims about Strategic Abilities in CL
Here are some examples of formalizing statements about agents' strategic abilities in CL, based on the story in Example 4. The reader may wish to check whether they are all intuitively true in the model in Figure 1, say, at state A 1 E 1 .

1.
[Eve]H E → ¬[Adam]¬H E If Eve has an action ensuring that she becomes happy, then Adam cannot prevent Eve from reaching a state where she is happy.

2.
The statement above naturally generalizes (with Win being an atomic proposition) to

[i]Win → ¬[Agt\{i}]¬Win
If the agent i has an action to guarantee reaching a "winning" outcome, then the coalition of all other agents cannot prevent i from reaching a "winning" outcome. It should be intuitively clear that this statement expresses a valid principle of CL. Indeed, we will see shortly that this is the case.

3.

¬[Adam]T ∧ ¬[Eve]T ∧ [{Adam, Eve}]T

Neither Adam nor Eve has an action ensuring that they stay together, but they have a joint action ensuring that.

4.

¬[Adam]H A ∧ ¬[Eve]H E ∧ [{Adam, Eve}](H A ∧ H E )

Neither Adam nor Eve has an action ensuring happiness for himself/herself, but they have a joint action ensuring happiness for both.

5.

¬[Adam](H A ∧ ¬[Eve]H E )

Adam cannot act so as to ensure at the outcome state that both Adam is happy and Eve does not have an action to ensure reaching her happiness.

6.

[{Adam, Eve}]([Adam]H A → [Eve]H E )

Adam and Eve can act jointly so that at the outcome state Adam has an action to ensure reaching his happiness only if Eve has an action to ensure reaching her happiness.

7.

[Adam][Eve](H A ∧ H E ∧ [Adam]H E ∧ [Eve]H A )

Adam can act to ensure, at the outcome state, that Eve can then act to ensure that they are both happy and that each of them can act so as to keep the other happy.
Thus, we see that the language of CL allows expressing meaningful and nontrivial claims about agents' local (immediate) strategic abilities. However, CL does not allow expressing long-term (temporal) goals and claims, e.g., that Adam and Eve have a joint strategy to stay together forever, or to eventually reach happiness. For that, a more expressive logic is needed, also involving temporal operators. It is introduced in Section 5.

Formal Semantics of CL
The formulae of CL are interpreted in concurrent game models. Consider a CGM M = (Agt, S, Act, act, out, Prop, L). Truth of a CL-formula ψ at a state s in M, denoted M, s |= ψ, is defined, as in modal logic, by structural induction on formulae, via the clauses:
• M, s |= p iff p ∈ L(s), for p ∈ Prop;
• M, s |= ⊤ always;
• M, s |= ¬φ iff M, s |= φ does not hold;
• M, s |= φ 1 ∨ φ 2 iff M, s |= φ 1 or M, s |= φ 2 ;
• M, s |= [C]φ iff there is a joint action ζ C for C that is available at s, such that M, u |= φ for every outcome state u ∈ Out[s, ζ C ].
Note that this definition makes no implicit assumption that the agents in C know the goals of the remaining agents, nor the actions that they may possibly use to achieve these goals.
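The semantic clause for [C]φ amounts to a finite exists-forall check over action profiles, which can be sketched directly in Python. The two-agent, two-state model below is a hypothetical illustration (not the model of Figure 1), and the goal is passed as a predicate on states.

```python
from itertools import product

# A toy CGM: both agents can stay ("s") or move ("m") at w0.
AGENTS = ("a1", "a2")
ACT = {("a1", "w0"): {"s", "m"}, ("a2", "w0"): {"s", "m"}}
OUT = {
    ("w0", ("s", "s")): "w0", ("w0", ("s", "m")): "w1",
    ("w0", ("m", "s")): "w1", ("w0", ("m", "m")): "w0",
}
LABEL = {"w0": set(), "w1": {"p"}}

def holds_coalition(state, coalition, goal):
    """M, state |= [coalition] goal, where goal: state -> bool.

    True iff some joint action of the coalition forces `goal` in every
    outcome state, regardless of the remaining agents' actions.
    """
    others = [a for a in AGENTS if a not in coalition]
    for joint in product(*(sorted(ACT[(a, state)]) for a in coalition)):
        forced = True
        for rest in product(*(sorted(ACT[(a, state)]) for a in others)):
            choice = {**dict(zip(coalition, joint)), **dict(zip(others, rest))}
            profile = tuple(choice[a] for a in AGENTS)
            if not goal(OUT[(state, profile)]):
                forced = False
                break
        if forced:
            return True
    return False
```

In this toy model, the grand coalition can force the atom p in one step, while agent a1 alone cannot, mirroring formulae like [{Adam, Eve}]T versus ¬[Adam]T above.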
Verifying the truth of the CL formulae listed in Section 4.2 in the CGM for Example 4, given in Figure 1, is left for the reader.

Axiomatizing the Validities of CL
A CL formula φ is called valid if it is true at every state of every concurrent game model, and satisfiable if it is true at some state of some concurrent game model. Pauly [4,5] obtained a complete axiomatization of the valid formulae of CL by extending classical propositional logic PL with the following axiom schemes and rule:
• Agt-maximality: ¬[∅]¬φ → [Agt]φ
This axiom says that the grand coalition Agt can act collectively so as to guarantee any goal that is satisfied in at least one outcome state. Note that the validity of this axiom presupposes that the models are deterministic, implying that the coalition of all agents Agt can enforce any particular possible outcome.
• Safety: ¬[C]⊥
No coalition has the ability to ensure the falsum will become true.
• Liveness: [C]⊤
Every coalition has the ability to ensure the truth will become true.
• Superadditivity: for any C 1 , C 2 ⊆ Agt such that C 1 ∩ C 2 = ∅:
([C 1 ]φ ∧ [C 2 ]ψ) → [C 1 ∪ C 2 ](φ ∧ ψ)
This axiom scheme says that two disjoint coalitions, each of which has a joint action to guarantee satisfaction of their own goal, can join forces (simply by each of them following their respective joint action) to guarantee satisfaction of both goals.
• [C]-monotonicity Rule: from φ → ψ, infer [C]φ → [C]ψ.
Some important derivable validities include:
• [C](φ ∧ ψ) → [C]φ
This is derived directly by applying the [C]-monotonicity rule.
• [C 1 ]φ → [C 2 ]φ, whenever C 1 ⊆ C 2
This is derived by applying the superadditivity axiom scheme to the coalitions C 1 and C 2 \ C 1 with respective goals φ and ⊤. Note that this validity together with Agt-maximality also implies the validity of [Agt]φ ∨ [Agt]¬φ, for any formula φ.
• [C]φ → ¬[Agt \ C]¬φ
This is derived by applying the superadditivity axiom scheme to the coalitions C and Agt \ C with respective goals φ and ¬φ, and then the safety axiom scheme.
The problem of deciding whether a given formula in the logic CL is valid, respectively, satisfiable, is decidable [4,5]. This and other technical results about CL are subsumed by respective results for the more expressive logic ATL, presented in the next section.

Logics for Unconditional Long-Term Strategic Reasoning
The logic CL can only capture strategic reasoning about abilities to achieve local, i.e., one-step, goals. It can be naturally extended with temporal operators to reason about long-term unconditional strategic abilities, of the type: "The coalition C has a joint strategy to guarantee achievement of the coalitional goal of C in every outcome play resulting from C following that strategy". This is the informal reading of the strategic operator ⟨⟨C⟩⟩ in the alternating-time temporal logic ATL and its family, introduced (independently from the logic CL) in [7].
I will present and illustrate here the family of ATL-related logics. For more detailed and technically involved surveys on these logics, see [8,9]. The full alternating-time temporal logic ATL * , introduced in [7], involves long-term temporal operators, the standard repertoire being "next-time" X , "always" G , and "until" U .
The strategic operator in ATL* is ⟨⟨C⟩⟩φ, denoting the claim that "The coalition C has a joint strategy that guarantees the satisfaction of the goal φ on every outcome play induced by that joint strategy", where φ is any formula of ATL*. The language of ATL* is formally defined as follows.
It involves two sorts of formulae: state formulae, which are evaluated at game states, and path formulae, which are evaluated on game plays. These are, respectively, defined by the following grammars, where p ∈ Prop and C ⊆ Agt:

φ ::= p | ¬φ | φ ∧ φ | ⟨⟨C⟩⟩γ
γ ::= φ | ¬γ | γ ∧ γ | X γ | G γ | γ U γ

Thus, the path formulae are used to formalize goals. The language of ATL* is very expressive, which both creates problems with its intuitive interpretation (see discussions in [38-40]) and raises the complexity of computing the truth of an ATL* formula to 2EXPTIME (cf. [7]). To alleviate these, the fragment ATL was also introduced in [7]; it only contains state formulae, defined by the grammar:

φ ::= p | ¬φ | φ ∧ φ | ⟨⟨C⟩⟩X φ | ⟨⟨C⟩⟩G φ | ⟨⟨C⟩⟩(φ U φ)

Thus, the coalitional goals in ATL only involve simple patterns with a single temporal operator, which will be called simple temporal goals. We define ⟨⟨C⟩⟩F ϕ as ⟨⟨C⟩⟩(⊤ U ϕ).
The logic CL embeds into ATL as the fragment where the goals can only be of the type X ϕ, for a state formula ϕ (cf. [41]), i.e., [C]ϕ ::= ⟨⟨C⟩⟩X ϕ.
Hereafter, in this section, I will focus on the logic ATL.

Formal Semantics of ATL
Similarly to CL, the formulae of ATL* are interpreted in concurrent game models. To keep the presentation less technical, I will only give the formal semantics for the fragment ATL, which consists only of state formulae; for full details of the semantics of ATL*, see, e.g., [8].
Consider a CGM M = (Agt, S, Act, act, Out, Prop, L). Truth of an ATL-formula ψ at a state s in M, denoted M, s |= ψ, is defined by structural induction on formulae. The propositional cases are as in CL, and the only new clauses are those for the strategic operators. They all follow the same pattern, which is essentially the semantic clause for ⟨⟨·⟩⟩ in ATL*: M, s |= ⟨⟨C⟩⟩γ iff there exists a joint strategy ΣC for C such that γ is true on every play in Plays(s, ΣC), i.e., every play starting at s and induced by ΣC.
The specific clauses for the temporal operators defining the goal γ are as follows, where s0 = s:
• M, s |= ⟨⟨C⟩⟩X ϕ iff there exists a joint strategy ΣC such that M, s1 |= ϕ for every play s0, s1, ... ∈ Plays(s, ΣC);
• M, s |= ⟨⟨C⟩⟩G ϕ iff there exists a joint strategy ΣC such that M, si |= ϕ for every play s0, s1, ... ∈ Plays(s, ΣC) and every i ≥ 0;
• M, s |= ⟨⟨C⟩⟩(ϕ U ψ) iff there exists a joint strategy ΣC such that for every play s0, s1, ... ∈ Plays(s, ΣC) there is i ≥ 0 for which M, si |= ψ and M, sj |= ϕ for all j such that 0 ≤ j < i.
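For reachability goals, this semantics has a well-known algorithmic counterpart: the extension of ⟨⟨C⟩⟩F φ is the least fixpoint of the one-step "controllable predecessor" operator. Here is a minimal sketch on a toy three-state model; all names, actions, and transitions are my own illustrative assumptions, not from the paper.

```python
from itertools import product

# Toy CGM: p holds only at s2; from s0 the agents reach s1 only by "agreeing".
agents = ["a", "b"]
actions = {"a": ["x", "y"], "b": ["x", "y"]}
states = ["s0", "s1", "s2"]
out = {("s0", ("x", "x")): "s1", ("s0", ("x", "y")): "s0",
       ("s0", ("y", "x")): "s0", ("s0", ("y", "y")): "s1",
       ("s1", ("x", "x")): "s2", ("s1", ("x", "y")): "s2",
       ("s1", ("y", "x")): "s2", ("s1", ("y", "y")): "s2",
       ("s2", ("x", "x")): "s2", ("s2", ("x", "y")): "s2",
       ("s2", ("y", "x")): "s2", ("s2", ("y", "y")): "s2"}

def cpre(target, coalition):
    """States where C has a one-step joint action forcing the play into 'target'."""
    rest = [ag for ag in agents if ag not in coalition]
    good = set()
    for s in states:
        for joint in product(*(actions[ag] for ag in coalition)):
            fixed = dict(zip(coalition, joint))
            if all(out[(s, tuple({**fixed, **dict(zip(rest, r))}[ag]
                                 for ag in agents))] in target
                   for r in product(*(actions[ag] for ag in rest))):
                good.add(s)
                break
    return good

def eventually(goal, coalition):
    """<<C>> F goal, computed as the least fixpoint  mu Z. goal ∪ cpre(Z, C)."""
    win = set(goal)
    while True:
        new = win | cpre(win, coalition)
        if new == win:
            return win
        win = new

print(eventually({"s2"}, ["a"]))        # {s1, s2}: a alone cannot leave s0
print(eventually({"s2"}, ["a", "b"]))   # all three states: the pair can agree
```

The same iteration scheme, with a greatest instead of a least fixpoint, handles G goals; this is the standard model-checking view behind the PTIME results for ATL.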
Satisfiability and validity in ATL are defined similarly to those in CL. It can be shown (cf., e.g., [36]) that memoryless strategies suffice for the semantics of ATL formulae, but that is no longer the case as soon as conjunctions of simple temporal goals are allowed. Indeed, consider the ATL* formula φ = ⟨⟨a⟩⟩(F p ∧ F q) and the concurrent game model in Figure 2 for the single agent a. Clearly, no positional strategy satisfies φ at s0, as such a strategy must always prescribe the same action, L or R, at s0, so only one of s1 and s2 will ever be visited. On the other hand, there is a simple strategy for a, using a single bit of memory, that satisfies φ: e.g., the strategy prescribing the action L the first time s0 is visited and the action R the next time.
Figure 2. A model where ⟨⟨a⟩⟩(F p ∧ F q) is true at state s0 only if the agent a can use some memory.
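The one-bit-memory strategy just described can be simulated directly. The following sketch uses an ad hoc rendering of a Figure-2-like model; the state names, the return edges to s0, and the single "any" action at s1 and s2 are my assumptions, chosen only to make the point about memory.

```python
# Toy rendering of a Figure-2-like model (state names and return edges to s0
# are assumptions for illustration): p holds at s1, q holds at s2.
succ = {("s0", "L"): "s1", ("s0", "R"): "s2",
        ("s1", "any"): "s0", ("s2", "any"): "s0"}
label = {"s0": set(), "s1": {"p"}, "s2": {"q"}}

def play(strategy, start="s0", steps=6):
    """Run a strategy (a function of the visited history) from 'start' and
    collect all atomic propositions seen along the induced play."""
    state, history, seen = start, [], set()
    for _ in range(steps):
        history.append(state)
        action = strategy(history) if state == "s0" else "any"
        state = succ[(state, action)]
        seen |= label[state]
    return seen

positional = lambda h: "L"                                  # memoryless
one_bit = lambda h: "L" if h.count("s0") == 1 else "R"      # one bit of memory

print(play(positional))  # {'p'}: F p is satisfied but F q never is
print(play(one_bit))     # {'p', 'q'}: both eventualities are satisfied
```

The positional strategy commits to one successor forever, while the history-dependent one visits both s1 and s2, witnessing why memory is needed for the conjunctive goal.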

Expressing Claims about Strategic Abilities in ATL
Here are some examples of formalizing in ATL statements about agents' long-term strategic abilities.

1.
If Eve has a strategy ensuring that she eventually becomes happy, then Adam cannot forever prevent Eve from reaching a state where she is happy. It should be intuitively clear that this statement expresses a valid principle of ATL, and indeed it does.

2. If Yin has a strategy to keep the system in safe states forever and has a strategy to eventually achieve its goal, then Yin has a strategy to keep the system in safe states until it achieves its goal. This formula is not logically valid. Indeed, the strategy of Yin to keep the system in safe states forever and its strategy to eventually achieve its goal may be incompatible.

3.
If Yin has a strategy to keep the system in safe states forever and Yang has a strategy to eventually reach a goal state, then Yin and Yang together have a strategy to stay in safe states until a goal state is reached.
Assuming that Yin and Yang are distinct agents, the formula above is logically valid, unlike the previous one. Indeed, this is due to the independence of the actions of the two agents, and hence of their abilities to execute their respective strategies.

4.
⟨⟨A⟩⟩F ⟨⟨B⟩⟩G ¬ϕ: The coalition A has a joint strategy to eventually ensure that the coalition B has a strategy to prevent ϕ from ever happening. This example raises the question of how the semantics works in the case of nested strategic operators. Suppose the coalitions A and B intersect and a is an agent in both of them. Then, the claim of the external strategic operator ⟨⟨A⟩⟩ requires, inter alia, the existence of a strategy for the agent a within a joint strategy σA for A that guarantees the eventual satisfaction of the subformula ⟨⟨B⟩⟩G ¬ϕ. However, when evaluating that subformula, to justify its truth one has to identify a respective joint strategy σB for B. Now, the question arises whether the strategy of a within σB should be assumed to be already fixed by σA or, conversely, whether the strategy of a within σA should be assumed to be already fixed by σB. Note that the standard formal semantics for ATL* (in particular, of ATL) presented here does not impose any such constraints but, rather, treats these strategies independently. That is, the standard semantics of ATL* does not commit the agents in A to the strategies they adopt in order to bring about the truth of the formula ⟨⟨A⟩⟩γ. That creates some conceptual issues with the very concept of "strategy", independently addressed in different ways in [38,39,42,43], where several proposals were made to incorporate strategic commitment or uncommitment and persistent strategies in the syntax and semantics of ATL*. These raise a number of still open technical problems regarding constructing provably complete axiomatizations, proving decidability, and designing decision procedures for the variations of ATL and ATL* mentioned above.

The Logic of Conditional Strategic Reasoning ConStR
Let us now look at the reasoning of (and about) an agent, conditional on that agent's knowledge of the goals and choices of actions of the other agents. Note that, while knowledge is not explicitly included in the syntax of the logics considered here, it is implicit in the agents' reasoning, as well as in the reasoning of external observers about the agents' abilities, and it is also implicitly assumed in the formal semantics.

Conditional Strategic Reasoning: An Informal Discussion
I first focus on a simple case: two agents, Alice and Bob, acting independently, and possibly concurrently with other agents. Alice has a goal, γ A , to achieve. Suppose that Alice has several possible choices of action that would possibly, or certainly, ensure the achievement of her goal. Bob also has a goal, γ B , of his own and has several possible actions (there may be other agents, besides Alice and Bob, also acting in pursuit of their own goals, but we will ignore them here). Now, based on his knowledge about Alice's goal and possible choices of actions she may take towards that goal, Bob decides on his own choice of action in pursuit of his goal.
Here is a simple illustrating scenario, discussed in [27]. Alice and Bob are students at Downtown University. Alice is coming to campus today to meet with her supervisor. Bob wants to meet with Alice somewhere on campus today. Alice may not know that, and they may have no direct communication. In turn, Bob may or may not know what Alice is going to do on campus. Now, using his knowledge of what Alice intends to do on campus, and where and when, Bob wants to come up with a plan of where and how to meet her.
This calls for a conditional strategic reasoning with statements of the type: For some/every action of Alice that guarantees achievement of her goal γ A , Bob has/does not have an action of his own to guarantee achievement of his goal γ B .
I will focus here on local conditional strategic reasoning, which only refers to the immediate actions of the agents, not to their long-term global strategies.
As we will see further, the standard logics for multiagent strategic reasoning, such as CL and ATL, cannot capture this type of conditional reasoning. A more expressive logic for conditional strategic reasoning is needed here.
Depending on Bob's knowledge about Alice's goal and of her possible or expected choices of action, there can be several possible cases for Bob's reasoning.

Case 1: Bob Does Not Know Alice's Goal or Actions
The simplest case is when Bob does not know Alice's goal, or her available actions, and therefore has no a priori expectations about her choice of action. Then, Bob can only ensure that γ B will occur if he has an action to make γ B true, regardless of how Alice (and all others) act. For instance, in our scenario, suppose the campus has only one entrance. If Bob is standing by the only entrance of the campus, then he is sure to meet Alice when she comes, no matter what she will do there. This can be simply expressed in coalition logic as [B]γ B .

Case 2: Bob Knows Alice's Goal and Possible Actions
Suppose now, that Bob knows Alice's goal, as well as all possible actions of Alice that can ensure the satisfaction of her goal. Thus, Bob knows that Alice will perform one of these actions, but possibly does not know which one. For instance, in our scenario, Bob knows that Alice is coming to campus to meet with her supervisor and she can meet with him either in his office, or in the lecture room, or in the café.
We can express Bob's conditional ability to achieve his goal as follows: "Whichever way Alice acts towards achieving her goal γ A , Bob can act so as to ensure achievement of his goal γ B ".
This claim can no longer be expressed in coalition logic, except in the special case when the satisfaction of Alice's goal guarantees satisfaction of Bob's goal, too (thus, Bob need not do anything about that). That can again be simply expressed in CL as [∅](γ A → γ B ).

Reactive and Proactive Ability
The case above admits two essentially different readings, discussed further: as a reactive ability and as a proactive ability. (In [27], these were called ability de dicto and ability de re, respectively.)

Bob's reactive ability. Suppose first that Bob will know Alice's choice of action when he is to choose his own. Then, Bob's ability to achieve his goal is reactive, meaning that for every action of Alice that ensures her goal γA, Bob has an action of his, possibly dependent on Alice's action, that would also ensure his goal γB. For instance, in our scenario, suppose that Alice's supervisor tells Bob where and when he is going to meet with Alice. Then Bob can wait for Alice at the respective place. This claim cannot be expressed in CL, so a new operator is needed for it.
Bob's proactive ability. Suppose now that Bob will not know Alice's action when he is to choose his. For instance, in our scenario, suppose all that Bob knows is that Alice will meet with her supervisor either in his office, or in the lecture room, or in the café, but he does not know where. Now, for Bob to ensure that his goal will be achieved, he must have a uniform choice of action that makes γB true when combined with any action of Alice that ensures the truth of her goal γA. For instance, in our scenario, suppose that all possible meeting places of Alice and her supervisor are in the same building; then Bob can wait for her at the only entrance of that building. That is, Bob must have a proactive ability to achieve his goal.
This cannot be expressed in CL, either, so a new operator is needed again.
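The difference between the two notions is just a swap of quantifiers over one-step actions: reactive ability is a ∀∃ pattern, proactive ability an ∃∀ pattern. Here is a minimal sketch of the meeting scenario; the action names and the "meets" relation are my own toy encoding, not from the paper.

```python
# Reactive vs. proactive ability as quantifier alternation (toy encoding).
alice_acts = ["office", "lecture", "cafe"]   # Alice's goal-achieving actions
bob_acts = ["office", "lecture", "cafe"]     # Bob's possible actions
meets = lambda a, b: a == b                  # Bob meets Alice iff same place

# Reactive: for EVERY Alice-action there is SOME Bob-response that works.
reactive = all(any(meets(a, b) for b in bob_acts) for a in alice_acts)
# Proactive: SOME single (uniform) Bob-action works against EVERY Alice-action.
proactive = any(all(meets(a, b) for a in alice_acts) for b in bob_acts)

print(reactive, proactive)   # True False: Bob can react, but no uniform choice

# If all meeting places are in one building with a single entrance (as in the
# story), a uniform action appears and the proactive claim becomes true, too:
bob_acts2 = bob_acts + ["entrance"]
meets2 = lambda a, b: a == b or b == "entrance"
proactive2 = any(all(meets2(a, b) for a in alice_acts) for b in bob_acts2)
print(proactive2)            # True
```

The gap between the two booleans is exactly why CL, which only has the unconditional ∃∀ operator [B], cannot express the reactive reading.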

Remark 5.
The notions of proactive and reactive ability, respectively, generalize the notions of α-effectivity and β-effectivity in game theory (cf., e.g., [44]). Indeed, in the context of the Alice and Bob story above, proactive and reactive ability of Bob correspond precisely, respectively, to α-effectivity and β-effectivity when either γA ≡ ⊤ (so, any action of Alice is equally good for achieving γA) or when Bob does not know γA and hence can expect any action from Alice. Furthermore, one can still associate proactive and reactive ability with α-effectivity and β-effectivity if one explicitly assumes individual rationality of Alice, implying that she would only choose actions that would ensure γA. Thus, one can regard proactive and reactive ability, respectively, as conditional α-effectivity and conditional β-effectivity, terminology which will be used further.
Lastly, an important point: even though the knowledge of the agent (here, Bob) about the others' goals and possible actions is essential, it will feature neither in our formal logical language nor in the formal semantics, but only in the external reasoner's analysis of which case of conditional strategic ability applies.

Case 3: Assuming Alice's Cooperation
Suppose now that Alice also knows Bob's goal and can choose to cooperate with Bob by selecting a suitable action σA that would not only guarantee achievement of her goal γA but would also enable Bob to supplement σA with an action σB of his, which would then also guarantee achievement of his goal γB (we also assume that Alice knows enough about Bob's possible actions). This scenario cannot be formalized in CL, either.

Modal Operators for Conditional Strategic Reasoning
I will now present three new binary modal operators for conditional strategic reasoning, for any coalitions A and B, with intuitive semantics corresponding to the three reasoning cases in Section 6.1, i.e., formalizing, respectively, reactive abilities, proactive abilities, and abilities under cooperation.
Below, A and B are coalitions and γ A and γ B are their respective goals. In the special case when A and B are singletons, they are assumed to be different agents.
(Oα) [A]α(γA; B γB) means that the coalition B \ A of agents who are in B but not in A has a joint action σB\A such that, if A applies any joint action that guarantees the truth of γA, then B \ A ensures the truth of γB by applying σB\A.
This operator formalizes the notion of a coalition's proactive ability, discussed in the special case of single-agent coalitions in Section 6.1.3, and corresponds to the game-theoretic notion of conditional α-effectivity, hence the notation. Note that the agents in B ∩ A (if any) are assumed to act on behalf of A in its pursuit of γA.

(Oβ) The next operator formalizes a claim of the ability of the coalition B to choose a suitable joint action so as to achieve the goal γB, assuming that A acts so as to achieve the goal γA, if B is to choose its joint action after it learns the joint action of A. In this case, the actions of the agents in B ∩ A (if any) are assumed to be already fixed by the joint action of A.
This corresponds to the notion of agents' reactive ability discussed in Section 6.1.3, respectively, to the game-theoretic notion of conditional β-effectivity, hence the notation.
(Oc) The third operator means that A has a joint action σA which, when applied, guarantees the truth of γA and enables B \ A to apply a joint action σB\A that guarantees γB when additionally applied by B \ A.
This operator formalizes Case 3 discussed in Section 6.1, where A knows the goal of B and can choose to cooperate with B by selecting an action among those that ensure satisfaction of γ A which is also suitable for B.
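The three patterns differ only in how the quantifiers over A's goal-ensuring actions and B's responses are arranged. The following toy sketch (an ad hoc one-step encoding of mine, for single agents A and B, not from the paper) separates the cooperation pattern from the other two:

```python
# Toy one-step encoding for single agents A (Alice) and B (Bob): 'outcome'
# maps an action pair to the set of goals that become true in the result.
A_acts, B_acts = ["a1", "a2"], ["b1", "b2"]
outcome = {("a1", "b1"): {"gA"}, ("a1", "b2"): {"gA"},
           ("a2", "b1"): {"gA", "gB"}, ("a2", "b2"): {"gA"}}

# A-actions that guarantee Alice's goal gA whatever Bob does:
good_A = [a for a in A_acts if all("gA" in outcome[(a, b)] for b in B_acts)]

# O_beta (reactive): for EVERY goal-ensuring A-action, SOME B-response gives gB.
o_beta = all(any("gB" in outcome[(a, b)] for b in B_acts) for a in good_A)
# O_alpha (proactive): ONE uniform B-action gives gB against every good A-action.
o_alpha = any(all("gB" in outcome[(a, b)] for a in good_A) for b in B_acts)
# O_c (cooperation): SOME goal-ensuring A-action admits a B-response giving gB.
o_c = any(any("gB" in outcome[(a, b)] for b in B_acts) for a in good_A)

print(o_beta, o_alpha, o_c)  # False False True: cooperation succeeds even
                             # though a1, which also ensures gA, leaves Bob stuck
```

Here Alice has two ways to ensure gA, but only one of them (a2) can be supplemented by Bob, so the cooperative claim holds while both the reactive and the proactive claims fail.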

The Logic ConStR: Language and Some Definable Operators
We fix a finite nonempty set of agents Agt and a countable set of atomic propositions Prop. The formulae of ConStR, where p ∈ Prop and A, B ⊆ Agt, are defined as follows:

Here are some definable operators and expressions in ConStR, which can easily be seen from the informal semantics above, and can also be verified with the formal semantics introduced further.

•
The coalitional operator from CL is definable by means of each of O c , O α , O β as follows: This claims an unconditional ability of C to choose an action that guarantees φ.
The only strategy of the empty coalition guarantees the satisfaction of ⊤.
• [A]β(φ; ∅ ψ), also equivalent to [A]α(φ; ∅ ψ), says that any joint strategy of A that guarantees φ to be true also guarantees ψ to be true. That formalizes the claim of an observer, who knows both the goal φ and the possible joint actions of A, that the outcome of the joint action of A will also satisfy ψ.
• ⟨A⟩(φ|ψ) ::= ¬[A](φ|¬ψ) says that there is a joint strategy of A that ensures the truth of φ and also enables the satisfaction of ψ.
The coalitional operator [A] from CL is a special case of the above: [A]φ ::= ⟨A⟩(φ|⊤).

Formal Semantics of ConStR
Given coalitions A, B ⊆ Agt and joint actions σA for A and σB for B, we define the ordered join of σA and σB to be the joint action σA ⊔ σB for A ∪ B which equals σA when restricted to A and equals σB when restricted to B \ A. As usual, a formula φ of ConStR is called valid in ConStR, denoted |=ConStR φ, iff M, u |= φ for every concurrent game model M and state u ∈ M; φ is satisfiable in ConStR iff M, u |= φ for some concurrent game model M and state u ∈ M.
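Representing joint actions as maps from agents to actions, the ordered join is a one-line operation; the following sketch (my own toy encoding, with hypothetical agent and action names) makes the asymmetry on the overlap explicit:

```python
# Joint actions as maps from agents to their actions; the ordered join agrees
# with sigma_A on all of A and with sigma_B only on B \ A.
def ordered_join(sigma_A, sigma_B):
    joined = dict(sigma_B)   # start from sigma_B ...
    joined.update(sigma_A)   # ... and let sigma_A win on the overlap A ∩ B
    return joined

# Hypothetical agents and actions, purely for illustration:
sA = {"alice": "left", "carol": "wait"}
sB = {"alice": "right", "bob": "up"}
print(ordered_join(sA, sB))  # alice follows sigma_A; bob is taken from sigma_B
```

The asymmetry matters in the semantic clauses for the conditional operators: on a shared agent, the conditioning coalition's choice prevails.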
Here are some easy observations on the relationships between the operators in ConStR. The following validities and nonvalidities hold in ConStR:

3. A counter-model is shown in Example 8.

4. Follows from 1 and 3 above.

5. This holds for the trivial reason that if, in a given model, the agent a has no action that ensures the truth of p at the given state, then the antecedent is vacuously true, whereas the consequent is false there.

6. An easy semantic exercise.

Note that an agent may have only a conditional ability to achieve its goal. On the other hand, if the outcomes of (a2, b1) and (a2, b2) are swapped, then [a]α(p; b q) becomes true at s0 in the resulting model; similarly, [a]α(p; b q) becomes true at s0 if the model is modified by making p true also at s6.
For more examples and observations on the mutual nonexpressiveness of the operators in ConStR, see [28]. Moreover, using the notions of bisimulation defined there for each of these operators, one can show that none of the three binary operators in ConStR can be defined in terms of the other two.

On the Axiomatization and Decidability for ConStR
A system of axioms AxConStR for ConStR is presented in [28]. It involves lists of axioms for each of the operators Oc, Oα, and Oβ, as well as two interaction axiom schemes, generated by uniform substitutions in the respective validities 1 and 7 in Proposition 7. Notably, the only axiom scheme identified there which distinguishes the operators Oα and Oβ is an antimonotonicity (with respect to A) scheme. For a proof of this claim, and for the other axiom schemes and inference rules for each of the operators of ConStR, see [28].
The only additional (to modus ponens) inference rules are the O-monotonicity rules, for O being each of Oα, Oβ, and Oc. The completeness proof for AxConStR is currently still under development, so its completeness is currently a conjecture, as is even the completeness of each of the three main subsystems, one for each of the respective operators. For all I currently know, these are quite challenging problems.
Another conjecture is the decidability of satisfiability in ConStR, even in each of its three main subsystems. Here, I only claim that it can be proved by a model-theoretic argument, based on the bounded tree-model property, meaning that every satisfiable formula of ConStR is satisfiable in a finite tree-like model of height and branching factor effectively bounded from above in terms of the size of the formula. (A detailed proof of this claim is currently under construction, so the claim is, at present, still a working conjecture.)

The Socially Friendly Coalition Logic SFCL
The "socially friendly coalition logic" SFCL was introduced in [25] with the aim of capturing and formalizing the following idea: while socially responsible (or "friendly") rational agents and coalitions act in pursuit of their own goals, when deciding on their course of action they can take into account the goals of other agents in the system and, whenever possible, make allowance for cooperation enabling the satisfaction of those goals, too, thus reconciling their goals with those of the others. Formally, the idea is implemented by introducing into the formal language a "socially friendly coalitional operator", presented here.
I will call the formula φ above the primary goal of the formula (and of the coalition C), and the formulae ψ1, . . . , ψk its secondary goals.
The intuitive meaning of the formula [C](φ; ψ1, . . . , ψk) is that "C has a joint action σC that guarantees the satisfaction of φ and also enables the complementary coalition Agt \ C to satisfy any one of the goals ψ1, . . . , ψk by applying a respectively suitable joint action".
The operator SF is a multiagent extension of the modal operator □(ψ1, . . . , ψk; φ) (note the different order of the arguments) in the instantial neighborhood logic (INL), introduced and studied in [45].
The special case of the "socially friendly coalitional operator" with a single secondary goal, [A](φ; ψ), is equivalent to the operator ⟨A⟩(φ|ψ) defined in Section 6.
This definition presumes that if C intersects with Ci, then the agents in C ∩ Ci are already committed to σC. On the other hand, the collective actions claimed to exist for the respective coalitions Ci need not be compatible, similarly to the intuitive semantics of [C](φ; ψ1, . . . , ψk), which is a special case of SF1 where C1 = . . . = Cn = Agt \ C.
SF2: [C1 φ1; . . . ; Ck φk], meaning: "C1 has a collective action to guarantee φ1, and given that action ... Ck has a collective action to guarantee φk". This is a sequential version of SF1, where the coalitions C1, ..., Ck are arranged in decreasing priority order.
Hereafter, I will consider SF only.

Syntax and Semantics of the Logic SFCL
The formulae of SFCL are defined by the following grammar:

The standard definitions of the other propositional connectives apply. In addition, I define [C]φ := [C](φ; ⊤). Given a finite list of formulae Ψ = ψ1, . . . , ψk, I will write [C](φ; Ψ) for [C](φ; ψ1, . . . , ψk).
The fragment of SFCL containing only formulae [C](φ; ψ) where ψ is a single formula will be denoted by SFCL 1 .
The formal semantics of SFCL is given in terms of truth of an SFCL-formula at a state s of a concurrent game model M, defined inductively as in CL, with the following main clause: M, s |= [C](φ; ψ1, . . . , ψk) iff there exists a joint action σC of C available at s such that M, u |= φ for each state u in its outcome set O[s, σC] and, for each i = 1, . . . , k, M, ui |= ψi for some state ui ∈ O[s, σC].
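On a finite model, this clause reduces to a brute-force search over C's joint actions and the completions by the remaining agents. Here is a minimal sketch; the three-agent model and all atom names are my own toy assumptions, not from the paper.

```python
from itertools import product

# Toy three-agent model: 'outcome' maps a full action profile to the set of
# atoms true at the resulting state (all names are illustrative assumptions).
agents = ["a", "b", "c"]
actions = {ag: [0, 1] for ag in agents}
outcome = {prof: set() for prof in product([0, 1], repeat=3)}
outcome[(0, 0, 0)] = {"phi", "psi1"}
outcome[(0, 0, 1)] = {"phi"}
outcome[(0, 1, 0)] = {"phi", "psi2"}
outcome[(0, 1, 1)] = {"phi"}
outcome[(1, 0, 0)] = {"psi1"}

def sf(coalition, phi, psis):
    """[C](phi; psi_1,...,psi_k): some joint action of C guarantees phi in all
    of its outcomes, while each psi_i holds in at least one of them."""
    rest = [ag for ag in agents if ag not in coalition]
    for joint in product(*(actions[ag] for ag in coalition)):
        fixed = dict(zip(coalition, joint))
        outs = [outcome[tuple({**fixed, **dict(zip(rest, r))}[ag] for ag in agents)]
                for r in product(*(actions[ag] for ag in rest))]
        if all(phi in o for o in outs) and all(any(p in o for o in outs) for p in psis):
            return True
    return False

print(sf(["a"], "phi", ["psi1", "psi2"]))           # True: a plays 0
print(sf(["a"], "phi", ["psi1", "psi2", "psi3"]))   # False: psi3 never enabled
```

The "for each ψi, some outcome" part is what distinguishes the socially friendly operator from plain [C]φ, which only uses the universal conjunct.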

Example 1: Negotiating the Family Budget
Example 9. This example is an adapted and extended version of one from [25]. Consider the couple Ann and Bill and their son Charlie. Each of them has saved some money. Now, they are sitting in a family meeting, negotiating how to spend their savings. Ann wishes for a complete kitchen renovation, Bill wants a new car, and Charlie dreams of a holiday trip to Disneyland. Ann considers two options for a kitchen: a cheaper IKEA version or a designer's luxury version. Bill thinks of three options for a car: a cheap used black Ford, a more expensive new green Tesla, or an even more expensive vintage pink Cadillac. As for Charlie, he would prefer the whole family to go on an expensive week-long family excursion to Disneyland in Paris, but could also settle for a cheaper 2-day car trip to Disneyland Park in California.
The possible actions of every family member or group are to pay for any option of their wish that they can afford, and then leave the rest of their savings in the family money pool for the other(s) to use. Let us denote the respective goals by: Lastly, here are some coalitional powers: Now, one may ask, for instance, whether it follows from all of the above that the whole family can afford to buy the pink Cadillac and also drive it to Disneyland in California, i.e., whether [{Ann, Bill, Charlie}](EC ∧ CT; ⊤); or whether Ann can obtain her dream designer's kitchen and still enable Bill to buy a new Tesla or the family to go to Disneyland in Paris, i.e., [Ann](EK; AC, ET), etc.

Example 2: Job Applicants
Consider four job candidates, Alice, Bob, Carl, and Diana, who have applied to three companies: Banana, Megasoft, and Fakebook, each advertising one position.
Suppose no candidate has strong preferences between these jobs. All candidates have been interviewed by each of the companies. Each company has selected and ranked in a priority order some of them; in that order, the candidates are offered the job and must decide whether to accept it. If an offer is not accepted, it goes to the next-ranked candidate, until the position is filled (if at all).
The rankings have been communicated to the candidates, as follows: Each of the candidates must now make their strategic choice of which offer to accept, if and when given the option. Suppose the candidates can communicate with each other.
Here are some true statements formalized in SFCL, where Alice, Bob, Carl, and Diana are labeled as A, B, C, D and e X means "X obtains a job": • [A](e A ; e B ∧ e D , e C ∧ e D , e B ∧ e C ) Indeed, Alice can take the offer from Fakebook and leave it to the others to decide on the other two positions. Note that Bob can then choose either offer and enable either Carl or Diana to obtain a job, but can also decline both offers and thus enable both of them to obtain a job.
Alternatively, Alice can act selflessly by declining both offers, and thus enable all others to obtain a job (by Bob choosing Fakebook).
• ¬[C](eC; ⊤) ∧ [C](⊤; eC) Carl cannot be sure to obtain a job, but all others can be sure of this.
Diana cannot be sure to obtain a job and enable the others to see to it that both Bob and Carl obtain a job, but she can ensure that she obtains a job (by accepting the offer from Megasoft) and then the others ensure that either Bob or Carl obtains a job, too.
Alice and Diana together can ensure that they both obtain a job and then either of the other two can obtain a job, too, up to their choice.
Alice and Diana can also be mean and act together so that only they obtain jobs, by accepting the respective offers from Banana and Megasoft and leaving the Fakebook position unfilled.

Socially Friendly Coalition Logic SFCL: A System of Axioms
The system of axioms Ax SFCL for SFCL, proposed in [25], combines Ax CL with a multiagent extension of the axiomatization of the instantial neighborhood logic (INL) [45], plus some additional axiom schemes. Here is just one of these additional schemes from Ax SFCL , where Ψ is a finite list of SFCL-formulae and Ψ, θ is the list obtained by appending θ to Ψ: This is a valid scheme. Indeed, for any formula θ, the strategy for C that ensures φ and enables each formula in the list Ψ either also ensures ¬θ, in which case it can be added conjunctively to φ, or else enables θ, in which case it can be added to the list Ψ.
The only additional (to modus ponens) inference rule is [C]-monotonicity. Note that the rule RE in the axiomatic system for INL in [45] is admissible here, which can be proved by a routine induction on the structure of ϕ in that rule: the Boolean cases follow from the completeness of the purely propositional fragment (where MP suffices), and the case ϕ = [C](...) is handled by using the [C]-monotonicity rule. A completeness proof for AxSFCL was presented in [25], but an error was subsequently found in it (and an erratum has been posted). A corrected proof of completeness is currently still under development; therefore, the completeness claim is currently only a working conjecture.
Another conjecture is the decidability of satisfiability in SFCL. Here, I only claim that it can be proved by a model-theoretic argument, similar to the one for ConStR, based on the bounded tree-model property, meaning that every satisfiable formula of SFCL is satisfiable in a finite tree-like model of height and branching factor effectively bounded from above in terms of the size of the formula. A proof of this claim, which is, at present, formally still a working conjecture, will be presented in a subsequent work.
Further open problems and directions for further research are related to various natural generalizations of SF, including SF1 and SF2. To my knowledge, they have not been explored and no technical results are even conjectured for them yet, except for some special cases explored in the context of conditional strategic reasoning in Section 6.

The Logic of Local Coalitional Goal Assignments (LCGA)
The logic LCGA was introduced in [25] as the "group protecting coalition logic (GPCL)", with the aim of formalizing the idea, complementary to that of SFCL, that while socially responsible rational agents and coalitions contribute their individual and collective actions to the society of all agents, they wish to do so in a way that protects their individual and collective interests and goals. Formally, the idea is implemented by introducing into the formal language a "coalitional goal assignment operator" (called a "group protecting coalitional operator" in [25]), presented here.

The Coalitional Goal Assignments Operator
The coalitional goal assignments operator takes a list of coalitions C1, ..., Ck and a list of formulae φ1, ..., φk representing their goals, and produces a formula whose intuitive meaning is that there is an action profile σ of Agt such that, for each i, the restriction of σ to the coalition Ci is a joint action σi that guarantees the satisfaction of φi, even if some, or all, other agents deviate from the action profile σ.

Now, I will define a more general and succinct representation of that operator. First, a coalitional goal assignment is a mapping γ : P(Agt) → Γ, where Γ is a set of goal formulae (which may, but need not, be the full logical language under consideration). Thus, for every coalition C, γ(C) expresses the goal of C.
Usually, most of the possible coalitions do not have coalitional goals and hence do not form. That can be formalized by assigning the truth ⊤ as a goal to each of them. Now, the operator defined above naturally generalizes to one expressed simply as [γ], which is a concise representation of [C1 γ(C1), ..., C2^n γ(C2^n)], where C1, ..., C2^n is a (fixed) canonical enumeration of the set P(Agt) of all subsets (coalitions) of Agt.

Syntax and Semantics of the Logic LCGA
The formulae of LCGA are given by the following grammar:

φ ::= p | ¬φ | φ ∧ φ | [γ]

where p ∈ Prop and γ is a coalitional goal assignment. All other standard propositional connectives are defined as usual. I will use the notation [C1 φ1, ..., Ck φk] as an explicit record of [γ], where γ is the (unique) coalitional goal assignment defined by γ(C1) = φ1, ..., γ(Ck) = φk, and γ(C) = ⊤ for every other C ∈ P(Agt). Clearly, the two notations for the coalitional goal assignment operator are expressively equivalent. Furthermore, while the explicit-record notation seems generally more succinct, that gain vanishes if [γ] is defined as a partial goal assignment, assigning only the nontrivial goals.
The semantics of LCGA is given in terms of truth of an LCGA-formula at a state s of a concurrent game model M. The only new semantic clause is the one for [·] : M, s |= [γ] iff there exists an action profile σ ∈ Σ Agt , available at s and such that M, u |= γ(C) for every C ⊆ Agt and every u ∈ O[s, σ| C ].
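On a finite model, this clause again reduces to a brute-force search over full action profiles, checking each listed coalition's restriction against all deviations of the remaining agents. A minimal sketch follows; the two-agent model and the atom names are my own assumptions, not from the paper.

```python
from itertools import product

# Toy two-agent, one-step model (names and outcomes are illustrative):
# 'outcome' maps a full action profile to the set of atoms it makes true.
agents = ["a", "b"]
actions = {"a": [0, 1], "b": [0, 1]}
outcome = {(0, 0): {"gA", "gB", "both"}, (0, 1): {"gA"},
           (1, 0): {"gB"}, (1, 1): set()}

def guarantees(coalition, profile, atom):
    """Does the restriction of 'profile' to 'coalition' alone force 'atom',
    whatever the remaining agents do?"""
    rest = [ag for ag in agents if ag not in coalition]
    fixed = {ag: profile[agents.index(ag)] for ag in coalition}
    return all(atom in outcome[tuple({**fixed, **dict(zip(rest, r))}[ag]
                                     for ag in agents)]
               for r in product(*(actions[ag] for ag in rest)))

def lcga(goal_assignment):
    """[gamma]: some full action profile whose restriction to each listed
    coalition guarantees that coalition's goal against all deviations."""
    return any(all(guarantees(c, prof, g) for c, g in goal_assignment)
               for prof in product(*(actions[ag] for ag in agents)))

print(lcga([(["a"], "gA"), (["b"], "gB"), (["a", "b"], "both")]))  # True
print(lcga([(["a"], "gB")]))                                       # False
```

The key point visible in the code: one and the same profile must simultaneously serve every listed coalition, each of which is protected individually against deviations.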
The intuitive reading is spelled out in more detail as follows: "There is a strategy profile of the grand coalition in which all agents participate with their individual strategies in such a way that, each agent or coalition guarantees the satisfaction of its own goal against any possible deviations of all other agents, thus protecting its individual (respectively, coalitional) interests." The clause for [γ] can also be given in terms of extensions, where Σ Agt = ∏ a∈Agt Σ a : Conversely, the operator [C 1 φ 1 , ..., C n φ n ] cannot be expressed in terms of the SFCL operators [C](φ; ψ 1 , . . . , ψ k ), either. These nonexpressiveness claims can be proved by using the respective notions of bisimulations introduced for each of these operators in [25]. 3.
On the other hand, [C_1 φ_1, C_2 φ_2, ..., C_k φ_k] is essentially expressible, up to a natural transformation of the models and semantics, in the group strategic STIT (cf. [13,14]). This observation (suggested by a reviewer) leads to a new computationally well-behaved fragment of the group strategic STIT, a logic known to be not only undecidable, but even nonaxiomatizable; cf. [14].
The following example is from [25], where it was adapted from [1].

Example 10.
[password-protected data sharing] Consider a scenario involving two players, Alice (A) and Bob (B). Each of them owns a server storing some data, access to which is password-protected. The two players want to exchange passwords, but neither player is sure whether to trust the other. Thus, their common goal is to successfully cooperate and exchange passwords, but each player also has the private goal not to give away their password in case the other one turns out to be untrustworthy and refuses access to their data (a side remark: nowadays, such deals are arranged by smart contracts). Let us write H_A for "Alice has access to the data on Bob's server" and H_B for "Bob has access to the data on Alice's server". Thus, the best possible outcome for Alice is H_A ∧ H_B and the worst possible outcome is ¬H_A ∧ H_B; symmetrically for Bob. So, can the two players cooperate to exchange passwords in a way that satisfies the provisos above? Clearly, we cannot answer that question in any definitive way without having the precise details of the actual setup.
For example, suppose we define the game as follows: each player chooses a password, which may or may not be the correct one, and sends it to the other player; they do this simultaneously. Then, the game ends. In this game, the coalition {A, B} can certainly force an outcome where each player has the other's password, i.e., the LCGA-formula [{A, B} (H_A ∧ H_B)] holds true. However, the strategy profile satisfying this goal assignment does not satisfy the players' individual private goals, since each runs the risk of giving away their password without obtaining the other's in return, if the other deviates. Therefore, the problem is more correctly described by the following stronger formula of LCGA: [{A, B} (H_A ∧ H_B), {A} (H_B → H_A), {B} (H_A → H_B)]. Clearly, in the simple setup above, such a strategy profile does not exist. However, suppose we add a second round to the game, in which each player can first verify the other's password received in the first round (without being able to access the data yet) and can then either confirm the password they have sent, if the other's password is correct, or else change their password (thus making the shared one useless), if the other's is not correct. Then, there is a strategy profile that satisfies the specification above, and it thus enables the players to exchange passwords in a way that also protects their individual interests.
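The one-round version of this game is small enough to check exhaustively. The following sketch (with hypothetical names; 'real'/'fake' abstract the choice of sending the correct or an incorrect password) confirms that the grand coalition can force the exchange, but that no single action profile achieves it while also protecting both players' private goals.

```python
from itertools import product

# One-round game: each player simultaneously sends 'real' or 'fake'.
# h_a = Alice has access (Bob sent real); h_b = Bob has access (Alice sent real).
ACTIONS = ['real', 'fake']

def outcome(a_act, b_act):
    return {'h_a': b_act == 'real', 'h_b': a_act == 'real'}

def forces(coalition_fix, goal):
    """Does fixing the coalition's actions guarantee `goal` (a predicate on
    outcomes) no matter how the remaining player(s) deviate?"""
    for a_act, b_act in product(ACTIONS, ACTIONS):
        moves = {'A': a_act, 'B': b_act, **coalition_fix}
        if not goal(outcome(moves['A'], moves['B'])):
            return False
    return True

joint = lambda o: o['h_a'] and o['h_b']             # shared goal
safe_a = lambda o: not (o['h_b'] and not o['h_a'])  # Alice's private goal
safe_b = lambda o: not (o['h_a'] and not o['h_b'])  # Bob's private goal

# The grand coalition can force the joint goal...
assert any(forces({'A': a, 'B': b}, joint) for a, b in product(ACTIONS, ACTIONS))
# ...but no single profile also protects both private goals:
assert not any(forces({'A': a, 'B': b}, joint)
               and forces({'A': a}, safe_a)
               and forces({'B': b}, safe_b)
               for a, b in product(ACTIONS, ACTIONS))
```

The second assertion is precisely the failure, in the one-round setup, of the stronger goal-assignment specification discussed above.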

Axiomatic System for LCGA
The axiomatic system Ax LCGA for the logic LCGA presented here was first proposed in [25] (for the logic GPCL) and then refined in the present form in [26]. Note, however, that the notation [C φ] here corresponds to the notation used in [25], not the one in [26].
In addition, I will further use the following notation from [26]: γ[C ψ], to denote the coalitional goal assignment obtained from γ by reassigning the goal of the coalition C to be ψ; and γ|_C, to denote the coalitional goal assignment obtained from γ by "restricting it to C", meaning reassigning the trivial goal ⊤ to all coalitions not included in C.
Here are the axiom schemes of Ax_LCGA, with brief explanations: (Triv) [γ_⊤], where γ_⊤ is the trivial goal assignment, mapping each coalition to ⊤.
¬[Agt ⊥]. Even the grand coalition of all agents cannot ensure the falsum ⊥ to become true.
[C_1 φ_1] ∧ ... ∧ [C_n φ_n] → [C_1 φ_1, ..., C_n φ_n], where C_1, ..., C_n are pairwise disjoint. This axiom scheme generalizes the superadditivity axiom of coalition logic (cf. Section 4.4). It is valid because, if the coalitions C_1, ..., C_n are pairwise disjoint, then they can join together their collective strategies for their respective coalitional goals into one strategy profile that ensures the satisfaction of all these collective goals.
[γ] → [γ[Agt (γ(Agt) ∧ ψ)]] ∨ [γ[Agt (γ(Agt) ∧ ¬ψ)]]. This axiom scheme is valid because any strategy profile for Agt generates a unique successor state. If a state formula ψ is true in it, then ψ can be added to the coalitional goal of Agt; otherwise, its negation can be added to it.
[γ] → [γ[C (γ(C) ∧ ψ)]] ∨ [γ|_C[Agt ¬ψ]]. This scheme says that, for any coalition C, state formula ψ, and strategy profile Σ, either its projection Σ_C to C ensures the truth of ψ in all successor states enabled by Σ_C, in which case ψ can be added to the goal of C enforced by Σ, or else ¬ψ is true in some of these successor states, in which case it can be added to γ|_C as the goal of the grand coalition Agt enforced by Σ.
[γ] → [γ[C (γ(C) ∧ γ(C′))]], where C′ ⊆ C. Given any coalition C and subcoalition C′, this scheme says that the goal of C′ can be added for free to the goal of C. Indeed, if there is any strategy profile Σ ensuring that C and C′ can force their respective goals, then Σ also ensures that C can force the conjunction of these goals.
The rules of inference are Modus Ponens (MP) and Goal Monotonicity (G-Mon): from φ → ψ, infer [γ[C φ]] → [γ[C ψ]]. The axiomatic system Ax_LCGA was first proved sound and complete in [25] (for the logic GPCL) and then, in the present form, in [26]. Moreover, the satisfiability problem for LCGA is decidable, which follows from the completeness results together with the finite model property, as shown in [25,26].

The Temporal Logic of Coalitional Goal Assignments TLCGA
LCGA can only express local goals, referring to the immediate outcome states. Usually, however, agents and coalitions have long-term goals, which require explicit temporal operators to be formalized. This was the motivation for introducing and studying the temporal extension TLCGA of LCGA in [26]. It uses the standard temporal operators X, U, and G to express temporalized goals, similarly to ATL (Section 5).
It is convenient to classify the formulae of TLCGA into two sorts, state formulae and path formulae, and to define their sets by mutual induction, as follows:

StateFor: ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | [γ]
PathFor: θ ::= Xϕ | (ϕUϕ) | Gϕ

where p ∈ Prop and γ : P(Agt) → PathFor is a coalitional goal assignment. Thus, the path formulae are auxiliary, used to express temporalized goals. As before, I will use [C_1 φ_1, ..., C_k φ_k] as an explicit notation for [γ] with support {C_1, ..., C_k}.
Note the change in notation from LCGA: the goals φ_1, ..., φ_k from now on refer to the current state, not to a successor state. Thus, LCGA is embedded in TLCGA as its X-fragment XCGA, in which the path formulae are only of the type Xϕ.

An Aside: Equilibria and Co-Equilibria
The logic TLCGA can be used to express (see [26]) the fundamental game-theoretic concept of Nash equilibrium: a strategy profile ensuring that no player can deviate so as to improve the outcome with respect to his/her private goal. This concept works well for quantitative goals, but not for qualitative (win/lose) ones, because rational players whose goals are not satisfied (the "losers") can still deviate in weak equilibria. Thus, the concept of co-equilibrium, arguably more suitable for qualitative goals, was proposed in [26], meaning a strategy profile ensuring that every player (and coalition) achieves their private goal, even if all other agents deviate. The existence of a co-equilibrium is expressed precisely by the TLCGA-formula [a_1 φ_1, ..., a_k φ_k].

An Example
The following example, illustrating the use of TLCGA to express natural statements, is adapted from [26].
Example 11. Three sheep and three wolves are standing on a river bank, and all animals want to cross the river (say, because a pride of lions is approaching). There is just one boat, which takes up to two animals. There is no boatman, but the animals can sail the boat across the river. However, if, at any time, the wolves outnumber the sheep on either side of the river, then they eat them up there. Of course, the sheep do not want that to happen. So, do the animals have a "safe" strategy to cross the river without any sheep being eaten?
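As a side check of the purely cooperative reading of this question, the existence of a joint crossing plan can be confirmed by a plain breadth-first search over bank configurations. This is only a sketch of the simultaneous, fully cooperative variant (boat occupants are simply counted on their destination bank, and all names are illustrative); it says nothing about robustness against deviations by the wolves, which is the point of the discussion below.

```python
from collections import deque

def safe(s, w):
    # A bank is safe if there are no sheep there or the sheep are not outnumbered.
    return s == 0 or s >= w

def solve(n=3, boat_cap=2):
    """BFS for a cooperative crossing plan over states (sheep_left, wolves_left,
    boat_on_left); a move (ds, dw) ferries ds sheep and dw wolves across."""
    start, goal = (n, n, True), (0, 0, False)
    moves = [(ds, dw) for ds in range(boat_cap + 1) for dw in range(boat_cap + 1)
             if 1 <= ds + dw <= boat_cap]
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (s, w, left), path = frontier.popleft()
        if (s, w, left) == goal:
            return path
        sign = -1 if left else 1
        for ds, dw in moves:
            ns, nw = s + sign * ds, w + sign * dw
            if not (0 <= ns <= n and 0 <= nw <= n):
                continue
            if safe(ns, nw) and safe(n - ns, n - nw):
                state = (ns, nw, not left)
                if state not in seen:
                    seen.add(state)
                    frontier.append((state, path + [(ds, dw)]))
    return None
```

A call such as solve() returns a list of boat loads witnessing that the grand coalition of animals can cooperatively cross without losses; an odd number of crossings ends with the boat on the far bank.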
Let us first look at a simple specification in ATL of the claim that such a strategy exists: ⟨⟨Agt⟩⟩(¬e U c), where:
• c is the atomic proposition "all animals have crossed the river";
• e is the atomic proposition "some sheep are eaten".
Thus, the formula above says that all animals have a strategy to eventually all cross the river, with no sheep eaten in the meantime. Is this what we want? Not quite, as the strategy may be such that the wolves can deviate from it at an opportune moment and eat some of the sheep. Thus, here is a better specification, in TLCGA: [Agt (⊤ U c), Sheep G¬e], saying "all animals have the collective goal to cross the river, but the coalition of sheep has its coalitional goal that no sheep is ever eaten", and this is what we really want. So, is there a solution, i.e., a strategy profile satisfying this specification? Well, it depends on a subtle but crucial detail: if all animals act simultaneously, where each animal can either stay on the river bank or board the boat (but if more than two animals board the boat, the boat cannot sail), then it is not difficult to see that such a strategy does not exist. However, if at every round the wolves act first and then the sheep act, then such a strategy exists, and I leave the fun of designing it to the reader. Note that the strategy of each sheep will now depend also on how the wolves have just acted, so any deviation of the wolves from the agreed-upon "safe" strategy can be counteracted by the strategies of the sheep, and this is the key point.

In patterns such as Q_1 x_a ... Q_k x_a ... φ, where each Q_i is ∃ or ∀, all occurrences of Q_i x_a containing the innermost one in their scope are vacuous. Now, one can uniformly translate the semantic conditions of the various strategic operators introduced here into formulae in the language of BSL, by using a uniform compositional translation τ, as follows, where C̄ = Agt \ C; likewise for C̄ ϕ.
Note that the joint strategy for B \ A assigned to x_{B\A} is supposed to ensure, together with the joint strategy for A assigned to x_A, the truth of the (translation of the) goal formula ψ, whenever the joint strategy for A assigned to x_A ensures the truth of the (translation of the) goal formula φ against any joint strategy of the agents in Ā = Agt \ A, including those in B \ A.
Note the difference from the previous translation, reflecting the difference between the semantics of (O_α) and (O_β): the joint strategy for B now depends on the joint strategy for A, and the strategies of the agents in B ∩ A must not differ from those already fixed in the latter.
This formula captures the semantics of O c in a straightforward way.
Likewise, the formula above formalizes precisely the semantic definition of SF.

SFCL (SF1)
τ([C; C_1, ..., C_k](φ; φ_1, ..., φ_k)): This formula says that C has a collective strategy (in particular, a collective action) assigned to x_C that guarantees (the translation of) φ, and is such that, when it is fixed, each C_m has a collective action (already fixed for the agents in C_m ∩ C) that guarantees (the translation of) φ_m against any behavior of the noncommitted agents, i.e., those outside C ∪ C_m. This is precisely the semantics of SF1, defined in Section 7.1.
This formula likewise expresses the semantics of the sequential version SF2 of SF1, defined in Section 7.1. To achieve this, every time it claims the existence of a joint strategy of C_i, it assumes that the strategies of all agents in the previously mentioned coalitions are already fixed, and that joint strategy must succeed only against any behavior of the not-yet-committed agents.

Again, this formula captures the formal semantics of [γ] in a straightforward way.
Thus, the logic BSL is more expressive than any of the other logics introduced here. Still, it is not as expressive as the versions of SSL introduced in [23,24], which decouple the strategy variables, and the quantification over them, from the agents to which they are assigned, and thus enable assigning the same strategy to different agents, which (in my view) is unnatural, and which BSL cannot do. On the other hand, second-order quantification over strategies is fully enabled in BSL, so it is to be expected that BSL is not recursively axiomatizable in general. However, in the natural special case where only finite models, over a fixed finite set of agents and with positional strategies, are considered, there are clearly only finitely many such strategies in any model, so model checking of BSL in such models is decidable, whereas satisfiability is recursively enumerable. The complexity of model checking, as well as the questions of axiomatizability and decidability of the validities of BSL in the semantics restricted to such models, are left to future work.
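The decidability argument for this special case can be sketched as follows: with finitely many states and actions, positional strategies are just functions from states to actions and can be enumerated outright, so strategy quantifiers reduce to finite loops. The two-state model, the reachability objective, and all names here are hypothetical illustrations.

```python
from itertools import product

def positional_strategies(states, actions):
    """All positional strategies: one action chosen per state."""
    for choice in product(actions, repeat=len(states)):
        yield dict(zip(states, choice))

# Tiny two-state model, agents 'a' and 'b'; the transition depends on both actions.
states, actions = ['s0', 's1'], ['x', 'y']
def step(state, act_a, act_b):
    return 's1' if act_a == act_b else 's0'

def eventually_s1(strat_a, strat_b, start='s0', horizon=4):
    # With positional strategies, the play is periodic, so a short horizon suffices.
    s = start
    for _ in range(horizon):
        if s == 's1':
            return True
        s = step(s, strat_a[s], strat_b[s])
    return s == 's1'

# "exists x_a forall x_b: eventually s1" fails: b can always mismatch a ...
exists_forall = any(all(eventually_s1(sa, sb)
                        for sb in positional_strategies(states, actions))
                    for sa in positional_strategies(states, actions))
# ... but "forall x_a exists x_b" holds: b can copy a's positional choice.
forall_exists = all(any(eventually_s1(sa, sb)
                        for sb in positional_strategies(states, actions))
                    for sa in positional_strategies(states, actions))
assert (exists_forall, forall_exists) == (False, True)
```

Note how the order of the quantifiers over strategies matters, echoing the earlier comparison between the semantics of the different strategic operators.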

Concluding Remarks: Outlook and Perspectives
Logic-based strategic reasoning in a socially interactive context is still at a relatively early stage of development, and the logical systems presented in this paper are just a representative sample of such purpose-driven developments. While these systems were introduced with different motivations, they share the common purpose of formalizing natural and important patterns of strategic interaction between rational agents. Furthermore, as seen in the previous section, they can all be treated as fragments of the basic strategy logic BSL, so the natural question arises: why, instead of the many bespoke alternatives presented here, is BSL not adopted as a uniform logical language for formalizing strategic reasoning? Of course, that can be done, and the study of some fragments of strategy logic has shown it to be a viable approach. However, as I indicated earlier, there are computational reasons in favor of staying within the purely modal framework of the logics introduced here, where actions and strategies are not explicitly referred to and quantified over in the language, but are only present in the semantics. There are many aspects of strategic reasoning, and a logical language that could adequately capture them all would be too heavy to use; so, if one only wants to reason about some specific such aspects, then one can naturally look for a minimal language that suffices for that purpose. The standoff between the two approaches, propositional modal logics vs. quantified strategy logics, is quite analogous to the standoff between plain modal logic and first-order logic, and the pros and cons of one approach over the other are very similar in both cases.
Still, it would be interesting and useful to compare in deeper detail the expressiveness of these two approaches to logics for strategic reasoning, and the tradeoff between expressiveness and computational complexity, to a degree closer to the very well explored parallels between modal logic and first-order logic (cf. the relevant chapters in [46] for ample details and further references).
Some major open problems and directions for further exploration of logic-based strategic reasoning in a socially interactive context include:
• Adding agents' knowledge to the semantics, and explicitly to the language, by assuming that the agents reason and act under imperfect information.
• Taking into account the normative aspects and constraints of the socially interactive context, including obligations, permissions, and prohibitions, which socially responsible rational agents must respect in their strategic behavior.
• Analysis of the expressiveness and computational complexity of the basic strategy logic BSL, which should eventually determine whether and to what extent BSL may be regarded as a viable alternative to the propositional modal approach behind the logical systems for strategic reasoning presented here.
• In particular, an interesting currently open question is whether there is a finite set of modal strategic operators, with semantics translatable to BSL, that provides expressive completeness, if not for the full language of BSL, then at least for natural and reasonably expressive fragments of it.
• Completeness results for some of the axiomatic systems mentioned here, including the three main fragments of ConStR, the entire ConStR, and SFCL; ultimately, a complete axiomatic system for BSL, if possible, or for substantially rich fragments of it.
• Finite tree-model property and decidability results for the logical systems presented here, as well as for other fragments of BSL. In particular, it would be worthwhile to develop tableaux-based deductive systems and decision methods for the logics studied here, by adapting such systems developed in, e.g., [47,48].
From a more general perspective, this work can be regarded as a step towards developing a unifying and optimally rich, yet computationally feasible, technical framework for logic-based strategic reasoning of rational agents in a socially interactive context.