Reactive Strategies: An Inch of Memory, a Mile of Equilibria

We explore how an incremental change in the complexity of strategies ("an inch of memory") in repeated interactions influences the sets of Nash equilibrium (NE) strategy and payoff profiles. For this, we introduce the two most basic setups of repeated games, where players are allowed to use only reactive strategies (RSs), for which the probability of a player's action depends only on the opponent's preceding move. The first game is trivial and inherits the equilibria of the stage game, since players have only unconditional (memory-less) RSs; in the second one, players also have conditional stochastic RSs. This extension of the strategy sets can be understood as a result of evolution or learning that increases the complexity of strategies. For the game with conditional RSs, we characterize all possible NE profiles in stochastic RSs and find all possible symmetric games admitting these equilibria. By setting the unconditional benchmark as the lowest symmetric equilibrium payoff profile in memory-less RSs, we demonstrate that for most classes of symmetric stage games, infinitely many equilibria in conditional stochastic RSs ("a mile of equilibria") Pareto dominate the benchmark. Since there is no folk theorem for RSs, Pareto improvement over the benchmark is the best one can gain with an inch of memory.


Introduction
In the theory of repeated games, restrictions on players' strategies are usually not imposed (see, for example, [1]). However, interest in bounded rationality and strategic complexity [2] has led to the study of repeated games where players' strategies are assumed to be finite automata or to have finite memory (bounded recall). This paper falls into the strategy-restrictions framework: we make an extensive study of Nash equilibria in infinitely repeated 2 × 2 games where payoffs are determined by Reactive Strategies (RSs) and players' payoffs are evaluated according to the limit of means. A player's reactive strategy prescribes, for each move of the opponent in the previous round, a stationary probability of playing each of the player's own pure actions.
One may naturally think of two-by-two classification of RSs. The first aspect is related to information. Players may ignore the available information about the action of the opponent in the previous round; hence, we distinguish between unconditional and conditional RSs. The second aspect is related to predictability of actions. Deterministic and semi-deterministic RSs allow deterministic responses (probabilities 1 or 0) to actions in the previous rounds, while for stochastic RSs, all responses have non-degenerate probabilities.
The paper explores how an incremental change in the complexity of strategies in repeated interactions influences the sets of Nash equilibrium (NE) strategy and payoff profiles. For this, we introduce, perhaps, the two most basic setups of repeated games. In the first game, which is trivial and inherits the equilibria of the stage game, players have only memory-less RSs. In the second game, players in addition have conditional stochastic RSs. This extension of the strategy sets can be understood as a result of evolution or learning, resulting in increased complexity of strategies. For the game with conditional RSs, we answer the following questions: (Q1) What are all possible NE profiles in stochastic RSs? (Q2) What are all possible symmetric games admitting NE in stochastic RSs? (Q3) Do equilibrium profiles in conditional stochastic RSs Pareto improve over equilibrium profiles in unconditional RSs?
Surprisingly, the answers to (Q1)-(Q3), the most basic results for equilibria in reactive strategies, are still missing in the literature, while similar questions were studied for 1-memory (aka memory-one) strategies with discounted payoffs.
Before we proceed to more technical elements of the paper, let us give a simple example where reactive and 1-memory strategies meet. Example 1. A textbook example of repeated games is the iterated Prisoner's Dilemma, where the stage game has two strategies for both players, C (cooperate) and D (defect). Any round ends with one of four possible action profiles: CC, CD, DC, or DD, where the first letter denotes player 1's move. As profile i of the previous round is known, player 1 chooses C with probability p_i (hence, D with probability 1 − p_i). Then a 4-tuple, say (p_CC, p_CD, p_DC, p_DD), defines a 1-memory strategy of player 1. A reactive strategy of player 1 is a 1-memory strategy that depends only on the opponent's action, i.e., p_CC = p_DC and p_CD = p_DD; this reduces 4-tuples to ordered pairs. By omitting the issue of opening moves, one derives the 1-memory form of the strategies that were found to be the most popular in an experimental study [3]. These strategies are Tit-for-Tat, (1, 0, 1, 0), and Always Defect, (0, 0, 0, 0); both are reactive, while Grim, (1, 0, 0, 0), is not.
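As an illustration, the following Python sketch encodes 1-memory strategies as 4-tuples in the ordering (p_CC, p_CD, p_DC, p_DD) and checks the reactivity condition; the convention that the first letter of each profile is the player's own previous move is assumed here, as it matches the listed strategies.

```python
def is_reactive(p):
    """A 1-memory strategy (p_CC, p_CD, p_DC, p_DD), indexed by the previous
    profile (own move, opponent's move), is reactive iff the response depends
    only on the opponent's move."""
    p_cc, p_cd, p_dc, p_dd = p
    return p_cc == p_dc and p_cd == p_dd

def as_pair(p):
    """Collapse a reactive 1-memory strategy to the ordered pair
    (response to opponent's C, response to opponent's D)."""
    assert is_reactive(p)
    return (p[0], p[1])

tit_for_tat   = (1, 0, 1, 0)
always_defect = (0, 0, 0, 0)
grim          = (1, 0, 0, 0)

assert is_reactive(tit_for_tat) and is_reactive(always_defect)
assert not is_reactive(grim)      # Grim conditions on the player's own move
assert as_pair(tit_for_tat) == (1, 0)
```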

Related Literature
Reactive strategies were introduced by Karl Sigmund and Martin Nowak (see [4,5]) as an abstraction of bounded rationality. While their research was never focused directly on the Nash equilibrium, the authors' comprehensive study of evolutionarily stable strategies is indeed relevant to (Q1), as the definition of these strategies requires a symmetric Nash equilibrium to be formed. This is why, for the infinitely repeated Prisoner's Dilemma in [4], a full characterization of symmetric Nash equilibria in the class of reactive strategies was obtained for the limit-of-means payoff. Note that these symmetric equilibria are equilibrium points (see [6]) of adaptive dynamics on the set of all reactive strategies. The concept of adaptive dynamics was introduced in [7] to describe evolution in continuous sets of strategies. In this paper, we significantly extend the results from [4] related to the Nash equilibrium by considering arbitrary 2 × 2 stage games and by characterizing both symmetric and non-symmetric equilibria, with non-symmetric equilibria prevailing over symmetric ones.
For the discounted Prisoner's Dilemma, subgame perfect equilibria in the class of deterministic reactive strategies were studied in [8]. In [9], Appendix 5.5, a brief analysis of equilibrium refinements in the class of deterministic reactive strategies (referred to as "memory zero" strategies) can be found for a repeated Prisoner's Dilemma; there, R. Aumann considered eight pure strategies for each player (four stationary reactions multiplied by two actions for the initial round). Note that the concepts of reactive strategies and 1-memory strategies in repeated games with a continuum of pure strategies are related to stationary "reaction functions" [10,11], "immediately reactive strategies" [12], and stationary "single-period-recall strategies" [13]. In contrast to the studies mentioned in this paragraph, we consider limit-of-means payoffs and reactive strategies mixing over (two) pure actions, that is, reactive strategies mapping the previous opponent's moves into distributions over pure actions.
We also study the problem of the existence of a Nash equilibrium. Note that Kryazhimskiy [14] obtained conditions sufficient for the existence of a Nash equilibrium within subsets of the players' 1-memory strategies in repeated bimatrix games with the limit-of-means payoff. The conditions require the sets of players' strategies to have a convexity property and all strategies to be strictly randomized, so that the Kakutani fixed-point theorem becomes applicable.
There is evidently much more literature available on 1-memory strategies than on reactive strategies. The recent spike of interest in 1-memory strategies was stimulated by the discovery of Zero-Determinant (ZD) strategies, a special class of 1-memory strategies enabling linear relationships between players' payoffs irrespective of the opponent's strategies (see [15] for the case of the limit-of-means payoff). Remarkably, ZD strategies can fix the opponent's payoff or guarantee a surplus that outperforms the opponent's surplus by a chosen constant factor. Hilbe et al. [16] extended the theory of ZD strategies to the case of the discounted payoff; see [16] for the most recent review of the topic.
Within the literature dealing with 1-memory strategies, one of the most relevant studies for our research was conducted by Dutta and Siconolfi [17]. In that study, the ideas independently proposed in the seminal paper [18] to derive a robust folk theorem for the discounted Prisoner's Dilemma were extended to generic repeated 2 × 2 games. In other words, Dutta and Siconolfi explored a Strong Mixed Equilibrium (SME) in the class of 1-memory strategies. This equilibrium is formed by completely mixed 1-memory strategies and has the salient property that both players are indifferent between their actions after every history. Specifically, in [17], conditions were obtained that are necessary and sufficient for a game to have an SME, and the set of all equilibrium payoffs was described. The latter result allowed the authors to demonstrate that SMEs generally lead to a continuum of equilibrium payoffs but still do not generate a folk theorem. To summarize, Dutta and Siconolfi [17] studied subgame perfect equilibria formed by completely mixed (stochastic) strategies in repeated games with discounted payoffs and 1-memory strategies.
An analysis of subgame perfect equilibria (SPE) in 1-memory strategies focused on the Prisoner's Dilemma was presented in [19]. In particular, SPEs formed by zero-determinant strategies were characterized and the corresponding upper bound for all NE and SPE payoffs was obtained (it is equal to the mutual cooperation payoff).
Moving from 1-memory to finite memory strategies, we should mention finite automata as a common abstraction of bounded rationality; see [2,20,21].
Following [22], we observe different natural levels of complexity, motivating us to distinguish between deterministic and Stochastic Reactive Strategies (SRSs). In this paper, we focus on SRSs as the simplest ones. The higher complexity of some deterministic strategies is due to the fact that the probability of the first action should be a part of the strategy to correctly define expected payoffs, while the payoffs for stochastic strategies do not depend on the first action.
To summarize, the field of finite-memory strategies (and, in particular, reactive strategies) has been studied for the last 30 years. Especially comprehensive results were obtained for 1-memory strategies, but even the most basic facts for generic 2 × 2 stage games are still missing for equilibria in RSs. This research not only delivers the missing piece of theory but also demonstrates that even an inch of memory in repeated interactions translates into a mile of new equilibria.

Results and Structure of the Article
First, we provide an expository geometrical representation to draw contrasts between unconditional and conditional SRSs in the repeated games; see Section 2.1. An example of the Prisoner's Dilemma with equal gains from switching is considered in Section 2.2 to illustrate how the representation will be applied to study the Nash equilibrium.
Second, in Section 2.3, we derive a characterization of all Nash equilibria in the class of reactive strategies in terms of solutions to a system of equations and inequalities; see Theorem 1.
Third, in Section 2.4, Theorem 2, we present a characterization of all symmetric games admitting Nash equilibria in the class of reactive strategies; Appendix A contains all related proofs.
Fourth, we elaborate on equilibrium payoff profiles in conditional SRSs; see Section 3. Namely, we demonstrate the following:
• If there exists an NE formed by a profile of conditional SRSs, then there are infinitely many NE profiles generated by conditional SRSs that, in general, have distinct payoffs; however, we do not have a folk theorem.
• If there exists an NE formed by a profile of unconditional SRSs, then NE profiles in conditional SRSs either Pareto improve over it or provide the same payoff profile.
For symmetric games, the complete analysis of equilibrium payoff profiles becomes feasible in Section 3.2 and Appendix B. In our setting, an NE in unconditional RSs always exists. Among equilibrium payoff profiles in unconditional RSs, we select the symmetric one with the lowest payoffs as the benchmark. We then check whether there exists an equilibrium payoff profile formed by conditional stochastic RSs that Pareto dominates this benchmark. Only in the very special games where both players (by choosing the dominating pure strategy) get the highest possible individual payoff are payoff profiles in conditional SRSs dominated by the benchmark unconditional payoff profiles. This means that it is possible to obtain new equilibria with lower payoffs as the strategies become more advanced; see an example in Section 3.3. In the remaining cases, if equilibrium profiles in SRSs exist and result in distinct payoff profiles, then there exists an NE profile in SRSs dominating the benchmark. In some cases, all NE profiles in conditional SRSs dominate the benchmark.
Fifth, with the extensive use of examples, we highlight some interesting properties of NE profiles in stochastic RSs; see Appendix C.
In this paper, some reactive strategies are excluded from players' strategy sets. In Section 2.5, we discuss how this decision influences the results.

Definitions of Repeated Games
Consider an arbitrary one-shot game G given by bimatrix (1). The first player chooses rows (possible strategies are T and B); the second player chooses columns (possible strategies are L and R); T, B, L, R stand for top, bottom, left, and right, respectively. A mixed strategy of player 1 is a probability s1 of playing action T; similarly, a mixed strategy s2 of player 2 is a probability of playing action L. Mixed strategies defined by non-degenerate probabilities are called completely mixed strategies. Let us introduce payoff functions J1, J2 : [0, 1]^2 → R for the one-shot game G in mixed strategies by the rules: ∀s1, s2 ∈ [0, 1]

J1(s1, s2) = a1 s1 s2 + b1 s1 + c1 s2 + d1,   (2)
J2(s1, s2) = a2 s1 s2 + b2 s2 + c2 s1 + d2,   (3)

where the coefficients a_i, b_i, c_i, d_i, i = 1, 2, are determined by the four payoff entries of (1).

Strategies
In the following, we study a game G ∞ that is a repeated modification of G. To define G ∞ in normal form, we first introduce reactive strategies [4,5,23].

Definition 1.
We define reactive strategies of player 1 (player 2) as arbitrary maps of {L, R} (of {T, B}) into the set of all mixed strategies of player 1 (player 2); a player's mixed strategy in the current round is completely defined by the preceding opponent's action. To simplify the notation, we understand any reactive strategy of player 1 as an ordered pair u = (u1, u2) such that the conditional probabilities of actions T and B are u1 and (1 − u1), given that the second player's previous move was L, and the conditional probabilities of T and B are u2 and (1 − u2), given that the second player's previous move was R. In a similar way, a reactive strategy of player 2 is a pair v = (v1, v2) such that the conditional probabilities of actions L and R are v1 and (1 − v1), given that the first player's previous move was T, and the conditional probabilities of L and R are v2 and (1 − v2), given that the first player's previous move was B.
Reactive strategies form a subset of 1-memory strategies (see [15–18]). Figure 1 shows the classification of RSs that we follow in this paper. We start with the repeated setting where players use only unconditional strategies (both deterministic and stochastic), game G∞_un. Game G∞_un is basically a memory-less infinite repetition of the stage game. Then we slightly increase the complexity of players' behavior; i.e., we also allow players to use SRSs. This changes G∞_un into G∞. Thus, the set of all possible strategies of player 1 (of player 2) in game G∞ is defined as U = (0, 1)^2 ∪ {(0, 0), (1, 1)} (as V = (0, 1)^2 ∪ {(0, 0), (1, 1)}).

Payoffs
Given any initial action profile (i_0, j_0) corresponding to the first round and a profile of stochastic RSs, we obtain an infinite discrete-time Markov stochastic process (hence the name of the strategies) defined on four possible states: (T, L), (T, R), (B, L), and (B, R) (states 1, 2, 3, and 4, correspondingly). For every realized l-step trajectory of actions (i_1, j_1) → (i_2, j_2) → … → (i_l, j_l), the average payoffs are random variables. According to [14], for l → ∞, the limits of expected l-round average payoffs are well defined for all SRSs and initial profiles. As usual, we call these limits limit-of-means payoffs.
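The convergence of expected average payoffs, and their independence from the opening profile, can be checked numerically. In the sketch below, the transition matrix follows from the definition of RSs; the strategy pair and player 1's per-state stage payoffs are illustrative assumptions (Prisoner's-Dilemma-like values), not taken from the paper.

```python
import numpy as np

def transition_matrix(u, v):
    """Transition matrix over states (T,L), (T,R), (B,L), (B,R).
    Player 1 plays T with prob u[0]/u[1] after the opponent's L/R;
    player 2 plays L with prob v[0]/v[1] after the opponent's T/B."""
    u1, u2 = u
    v1, v2 = v
    return np.array([[pT * pL, pT * (1 - pL), (1 - pT) * pL, (1 - pT) * (1 - pL)]
                     for (pT, pL) in [(u1, v1), (u2, v1), (u1, v2), (u2, v2)]])

u, v = (0.9, 0.3), (0.8, 0.2)          # assumed stochastic reactive strategies
M = transition_matrix(u, v)
payoffs1 = np.array([3.0, 0.0, 5.0, 1.0])  # assumed stage payoffs of player 1

limits = []
l = 5000
for start in range(4):                  # one run per possible opening profile
    dist = np.zeros(4)
    dist[start] = 1.0
    total = np.zeros(4)
    for _ in range(l):
        dist = dist @ M                 # expected state distribution per round
        total += dist
    limits.append(total @ payoffs1 / l)  # expected l-round average payoff

# the limit is (numerically) the same for every initial action profile
assert max(limits) - min(limits) < 1e-2
```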
The paper studies equilibrium profiles formed only by SRSs for two reasons:
1. In contrast to semi-deterministic and deterministic RSs, the payoffs for profiles of SRSs do not depend on the opening move;
2. SRSs capture non-deterministic behavior, which is the most natural for the domain of evolutionary game theory, where RSs originated to model real-life processes.
Every profile (u, v) of SRSs ensures nice properties for the induced Markov process with the transition probability matrix

M = [ u1 v1   u1(1 − v1)   (1 − u1)v1   (1 − u1)(1 − v1)
      u2 v1   u2(1 − v1)   (1 − u2)v1   (1 − u2)(1 − v1)
      u1 v2   u1(1 − v2)   (1 − u1)v2   (1 − u1)(1 − v2)
      u2 v2   u2(1 − v2)   (1 − u2)v2   (1 − u2)(1 − v2) ].

That is, there exists a stationary distribution π (a row vector) such that π = πM; note that π depends on u and v but not on the initial profile. In [4,5,23], it was proven that the stationary distribution π : U × V → R^4 allows the following representation:

π(u, v) = ( s1 s2,  s1(1 − s2),  (1 − s1)s2,  (1 − s1)(1 − s2) ),

where s1, s2 : U × V → [0, 1] are defined by the rules

s1(u, v) = (u2 + (u1 − u2)v2) / (1 − (u1 − u2)(v1 − v2)),   s2(u, v) = v2 + (v1 − v2)s1(u, v).   (4)

Note that the trajectories of actions for profiles of deterministic RSs contain deterministic cycles (hence the name of the strategies) of action profiles that may depend on the opening moves.
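A minimal numerical check of the product representation, with an assumed strategy pair: the closed form for (s1, s2) below solves the fixed-point equations s1 = u2 + (u1 − u2)s2 and s2 = v2 + (v1 − v2)s1, and the resulting product-form vector is verified to be stationary for the transition matrix.

```python
import numpy as np

def stationary_pair(u, v):
    """Closed-form stationary probabilities s1 = P(T), s2 = P(L) for a
    profile of stochastic reactive strategies."""
    u1, u2 = u
    v1, v2 = v
    s1 = (u2 + (u1 - u2) * v2) / (1 - (u1 - u2) * (v1 - v2))
    return s1, v2 + (v1 - v2) * s1

def transition_matrix(u, v):
    """Transition matrix over states (T,L), (T,R), (B,L), (B,R)."""
    u1, u2 = u
    v1, v2 = v
    return np.array([[pT * pL, pT * (1 - pL), (1 - pT) * pL, (1 - pT) * (1 - pL)]
                     for (pT, pL) in [(u1, v1), (u2, v1), (u1, v2), (u2, v2)]])

u, v = (0.9, 0.3), (0.8, 0.2)          # assumed stochastic reactive strategies
s1, s2 = stationary_pair(u, v)
pi = np.array([s1 * s2, s1 * (1 - s2), (1 - s1) * s2, (1 - s1) * (1 - s2)])
assert np.allclose(pi @ transition_matrix(u, v), pi)   # pi = pi M
```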
In what follows, we omit arguments of s 1 , s 2 where it is possible. We also refer to (s 1 , s 2 ) as a stationary distribution, since the original stationary distribution π can be easily recovered.
For a profile of SRSs, the limit-of-means payoffs are determined by the stationary distribution, which is independent of the initial actions. Hence, for a profile (u, v) of SRSs, players' payoffs in G∞ are exactly the payoffs in G when player 1 uses mixed strategy s1(u, v) and player 2 uses mixed strategy s2(u, v). This allows us to specify the payoff functions J∞_1 and J∞_2 of G∞: ∀(u, v) ∈ U × V

J∞_i(u, v) = J_i(s1(u, v), s2(u, v)),   i = 1, 2.   (5)

See the summary of all introduced games in Table 1.

Table 1. Summary for two-player games considered in the paper.

Game | Setting | Strategies | Payoffs | Description
G∞_un | Repeated | Unconditional RSs | Limit of means | The memory-less play of G that is infinitely repeated; G∞_un is 'equivalent' to G but formalised as a repeated interaction.
G∞ | Repeated | Stochastic and unconditional RSs (U, V) | Limit of means | The repeated modification of G where probabilities of actions can be conditioned on the preceding opponent's action. In addition to the memory-less strategies from G∞_un, players also get conditional ones.

Equilibria
To summarize, we study a game G ∞ with two players; the sets of strategies are U and V; the payoff functions are J ∞ 1 and J ∞ 2 .
In what follows, a pure (completely mixed) Nash equilibrium stands for a Nash equilibrium in a one-shot game G formed by pure (completely mixed) strategies. In G ∞ , the stationary distribution generated by an NE is called an Equilibrium Stationary Distribution (ESD).

Geometric Intuition and Attainable Sets
Our immediate goal is to illustrate for G ∞ the difference between conditional and unconditional SRSs. First, let us introduce the concept of attainable sets.

Definition 3.
The set of all stationary distributions feasible for a fixed opponent's reactive strategy is called an attainable set for a player. To be precise, ∀u ∈ U, v ∈ V,

S_I(v) = {(s1(u, v), s2(u, v)) : u ∈ U},   S_II(u) = {(s1(u, v), s2(u, v)) : v ∈ V}.

Here, S_I(v) is an attainable set for player 1 if player 2 chooses strategy v; see Figure 2b. Similarly, S_II(u) is an attainable set for player 2 if player 1 chooses u; see Figure 2a,c. Using (4), we obtain

S_I(v) = {(s1, v2 + (v1 − v2)s1) : s1 ∈ (0, 1)},   S_II(u) = {(u2 + (u1 − u2)s2, s2) : s2 ∈ (0, 1)}.   (7)

Proof. We check only the first representation, as similar arguments demonstrate the second one. Fix an arbitrary v ∈ V and let Ω = {(ś1, v2 + (v1 − v2)ś1) : ś1 ∈ (0, 1)}. Take any (ś1, ś2) ∈ Ω, and let ú be the reactive strategy of player 1 such that ú1 = ú2 = ś1. It follows from (4) that s1(ú, v) = ś1 and s2(ú, v) = v2 + (v1 − v2)ś1 = ś2; hence, Ω ⊆ S_I(v). Conversely, by (4), every stationary distribution (s1(u, v), s2(u, v)) satisfies s2 = v2 + (v1 − v2)s1, so S_I(v) ⊆ Ω. Combining the two inclusions, we obtain Ω = S_I(v).

Representation (7) implies that, for player 1 to achieve any element of S_I(v), ∀v ∈ V, say (s*_1, s*_2) ∈ S_I(v), she may use the corresponding unconditional RS (s*_1, s*_1); for player 2, the similar statement holds true. Figure 3a shows the corresponding example.
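The linear structure of attainable sets can also be verified without the closed form: below, the stationary distribution is computed numerically from the transition matrix for many random strategies of player 1 against a fixed (assumed) strategy v of player 2, and every resulting point lies on the line s2 = v2 + (v1 − v2)s1.

```python
import numpy as np

def stationary_marginals(u, v):
    """(s1, s2) from the stationary distribution of the induced Markov chain,
    computed numerically (no closed form used)."""
    u1, u2 = u
    v1, v2 = v
    M = np.array([[pT * pL, pT * (1 - pL), (1 - pT) * pL, (1 - pT) * (1 - pL)]
                  for (pT, pL) in [(u1, v1), (u2, v1), (u1, v2), (u2, v2)]])
    # solve pi (M - I) = 0 together with the normalization sum(pi) = 1
    A = np.vstack([(M - np.eye(4)).T, np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi[0] + pi[1], pi[0] + pi[2]   # P(T), P(L)

v = (0.7, 0.2)                            # assumed fixed strategy of player 2
rng = np.random.default_rng(0)
for u in rng.random((100, 2)):            # random stochastic RSs of player 1
    s1, s2 = stationary_marginals(tuple(u), v)
    # every attainable point lies on the line s2 = v2 + (v1 - v2) * s1
    assert abs(s2 - (v[1] + (v[0] - v[1]) * s1)) < 1e-8
```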

Remark 1.
We now see the key difference between unconditional and conditional SRSs. For a memory-less strategy of the first player (i.e., u1 = u2), the attainable set of the second player is a horizontal line. Similarly, for a memory-less strategy of the second player (i.e., v1 = v2), the attainable set of player 1 is a vertical line; the same holds for mixed strategies in one-shot games. Hence, memory-less strategies only allow intercepts to be chosen. By contrast, conditional SRSs (i.e., u1 ≠ u2 and v1 ≠ v2) also allow the slopes of attainable sets to be set. Thus, conditional SRSs have more flexibility in adjusting the slopes of attainable sets to the gradients of payoff functions at the stationary distributions (the points where the players' attainable sets intersect). This flexibility provides significantly more opportunities for equilibria to emerge (compared to memory-less mixed strategies).

Proposition 1.
For a one-shot game G, any NE profile (u, v) in mixed strategies becomes the corresponding NE profile ((u, u), (v, v)) in both G ∞ and G ∞ un .
Proof. Fix the memory-less RS (u, u) of player 1 corresponding to an equilibrium (u, v) of the one-shot game G. We will show that (v, v) is a best response to (u, u) in G∞. By Remark 1, for player 2, deviations in unconditional strategies are "operationally equivalent" to deviations in conditional strategies; that is, if player 2 can profitably deviate, then he can profitably deviate with a strategy of the form (ṽ, ṽ). If player 2 picks an unconditional RS (ṽ, ṽ), ṽ ∈ [0, 1], we arrive at the stationary distribution (u, ṽ). Combining the last fact with (5), we obtain that

J∞_2((u, u), (ṽ, ṽ)) = J2(u, ṽ).   (10)

From the definition of an NE in mixed strategies in G, it follows that J2(u, ·), as a function defined on [0, 1], attains its maximum at v. By (10) and the above argument on operational equivalence, the unconditional RS (v, v), which emerges from the stage-game NE, "remains" the best reply to (u, u) in G∞ (and hence in G∞_un). Symmetrically, (u, u) is a best response to (v, v); hence, the profile is an NE in G∞ (and, hence, in G∞_un).
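A numerical sanity check of Proposition 1, using matching pennies as an assumed stage game (its unique NE is the completely mixed profile (1/2, 1/2)): no reactive deviation of player 1 against the lifted unconditional strategy (1/2, 1/2) is profitable.

```python
import numpy as np

def stationary_pair(u, v):
    """Closed-form stationary probabilities (s1, s2) for reactive strategies."""
    u1, u2 = u
    v1, v2 = v
    s1 = (u2 + (u1 - u2) * v2) / (1 - (u1 - u2) * (v1 - v2))
    return s1, v2 + (v1 - v2) * s1

def J1(s1, s2):
    """Player 1's mixed-strategy payoff in matching pennies (assumed game):
    +1 when the actions match, -1 otherwise."""
    return s1 * s2 - s1 * (1 - s2) - (1 - s1) * s2 + (1 - s1) * (1 - s2)

# lift the stage-game NE (1/2, 1/2) to unconditional reactive strategies
eq_u, eq_v = (0.5, 0.5), (0.5, 0.5)
base = J1(*stationary_pair(eq_u, eq_v))

# no reactive deviation of player 1 (conditional or not) is profitable
rng = np.random.default_rng(1)
for u in rng.random((200, 2)):
    assert J1(*stationary_pair(tuple(u), eq_v)) <= base + 1e-9
```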

Remark 2.
Note that players are free to move within their attainable sets from one point (stationary distribution) to another in a variety of ways. Moreover, for a stationary distribution within an attainable set of one player, there are infinitely many strategies leading to this stationary distribution for the given opponent's strategy. In turn, the choice among these strategies leads to distinct attainable sets for the opponent. For example, Figure 3 shows the set of all possible strategy profiles leading to the stationary distribution (s1, s2).

The Prisoner's Dilemma with Equal Gains from Switching

Equal gains from switching implies that a1 = a2 = 0; clearly, we have

J1(s1, s2) = b1 s1 + c1 s2 + d1,   J2(s1, s2) = b2 s2 + c2 s1 + d2.

For the repeated version of G_eg, we have (see (1) and (5))

J∞_1(u, v) = b1 s1(u, v) + c1 s2(u, v) + d1,   J∞_2(u, v) = b2 s2(u, v) + c2 s1(u, v) + d2.

Lemma A2 tells us that every pair of well-defined SRSs (u*, v*) such that u*_1 − u*_2 = −b2/c2 and v*_1 − v*_2 = −b1/c1 forms a Nash equilibrium. Figure 4 illustrates that, for player 1, curves-of-equal-payoff coincide with attainable sets if player 2 uses the above equilibrium strategies, meaning that equilibrium strategies fix the opponent's payoff:

J∞_1(u, v*) = c1 v*_2 + d1   ∀u ∈ U.

The reactive strategy (1, 1 + b1/c1) is known as generous tit-for-tat [24]; it leads to the most cooperative symmetric equilibrium.
Note that an equilibrium can only be sustained if both players choose strategies u and v fixing the opponent's payoffs. If v1 − v2 > −b1/c1, then player 1 has an incentive to climb the attainable set by choosing more cooperative strategies, increasing s1 and s2. If v1 − v2 < −b1/c1, then player 1 has an incentive to step down the attainable set; this decreases s1 and s2. Clearly, memory-less SRSs cannot support Nash equilibria. For example, the attainable sets of player 1 become vertical lines, giving her clear incentives to defect, since her payoff increases when moving from the top to the bottom of the attainable sets. Note that, generally, curves-of-equal-payoff are non-linear; see the example in Appendix C.1.
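The payoff-fixing property can be illustrated with a donation-game parameterization of the equal-gains Prisoner's Dilemma (an assumption for this sketch: cooperation costs c = 1 and delivers b = 3 to the opponent, so player 1's stage payoff is −s1 + 3·s2): against generous tit-for-tat, player 1's limit-of-means payoff is the same for every stochastic reactive strategy she picks.

```python
import numpy as np

def stationary_pair(u, v):
    """Closed-form stationary probabilities (s1, s2) for reactive strategies."""
    u1, u2 = u
    v1, v2 = v
    s1 = (u2 + (u1 - u2) * v2) / (1 - (u1 - u2) * (v1 - v2))
    return s1, v2 + (v1 - v2) * s1

# assumed donation-game coefficients: J1(s1, s2) = b1*s1 + c1*s2 (a1 = 0)
b1, c1 = -1.0, 3.0

def J1(s1, s2):
    return b1 * s1 + c1 * s2

gtft = (1.0, 1.0 + b1 / c1)   # generous tit-for-tat: slope v1 - v2 = -b1/c1
rng = np.random.default_rng(2)
payoffs = [J1(*stationary_pair(tuple(u), gtft)) for u in rng.random((200, 2))]

# against GTFT, player 1's limit-of-means payoff is fixed: she is indifferent
assert max(payoffs) - min(payoffs) < 1e-9
```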

Characterization of Nash Equilibria in G ∞
Since every NE is formed by strategies that are best responses to each other, we proceed further with a standard analysis using derivatives. Let f_x stand for the derivative of a function f(·) w.r.t. x.
Lemma 2. If (u, v) is a Nash equilibrium in G∞ and (s1, s2) is the corresponding ESD, then conditions (11)–(17) hold true. Using (11)–(17) and the second derivative test, we complete the proof.
Theorem 1 (Characterization of all Nash equilibria). The set of all Nash equilibria coincides with the set of all (û, v̂) ∈ R^2 × R^2 solving system (18).

Proof. Let N denote the set of all Nash equilibria in G∞; S stands for the set of all (û, v̂) solving (18). Let us prove that S = N. Take (û, v̂) ∈ S. As (v) in (18) holds, (û, v̂) is a pair of reactive strategies; S ⊂ U × V. Obviously, (iv) ensures that (ŝ1, ŝ2) is the corresponding stationary distribution; (ŝ1, ŝ2) = (s1(û, v̂), s2(û, v̂)). As (iii) in (18) holds true, we have three cases. All cases are composed of similar elements: equalities and strict inequalities; we consider only two of these elements, since the analysis for the others is similar. Now take (u, v) ∈ N. Trivially, (v) holds true. Taking into account (4) and Lemma 2, we see that (i)–(iv) are fulfilled. Thus, N ⊂ S; together with (19), we have S = N.

Existence of Nash Equilibria in Symmetric Games
In this section, we obtain conditions for the existence of an NE when players are identical (the one-shot game is symmetric). We fix a symmetric game G. Recall that for G we have the following versions of (2) and (3):

J1(s1, s2) = a s1 s2 + b s1 + c s2 + d,   J2(s1, s2) = a s1 s2 + b s2 + c s1 + d.

To reach the goal of the section, we must derive conditions on a, b, and c that ensure the existence of a solution of (18). Taking into account inequality (iii) in (18), in Appendix A we study three possible cases: a = 0, a < 0, and a > 0. The next theorem combines Lemmas A1, A7 and A10 (see Appendices A.1, A.2 and A.3, respectively).

Theorem 2.
For repeated symmetric games, there exists a reactive Nash equilibrium iff one of conditions 1–7 on a, b, and c holds true.

In this paper, we focus on symmetric stage games due to the tractability of the corresponding results. Appendix C.2 shows an example of a non-symmetric stage game.

If All RSs Are Available
In this paper, players' strategy sets do not include conditional deterministic and semideterministic RSs, and we focus only on equilibria generated by SRSs. Thus, in G ∞ , the set of all possible profiles was U × V instead of [0, 1] 2 × [0, 1] 2 , the set of all profiles in RSs. Let us discuss how this decision influences the NE profiles in SRSs.
Recall that for G∞, the premises of the Kakutani fixed-point theorem do not hold, as the strategy set is not compact, although the payoff functions are continuous. Thus, the existence of an NE in an arbitrary game G∞ is not guaranteed. By simply allowing all RSs, the strategy set becomes compact, but the payoff functions become dependent on initial action profiles, making them no longer continuous or even properly defined. The last fact is due to the payoff-relevance of the initial round for some strategy profiles. Thus, one may decide to complement strategies with one extra variable, an initial action, leading to a more complicated analysis. There are two approaches that can be used to avoid this complication. The first approach, the one used for discounted payoffs, is to assume that nature draws initial actions. In this case, one may consider only equilibria that hold independently of initial moves (see [17,18]); actual equilibrium payoffs may depend on the initial actions. The second approach, which is possible in the limit-of-means setting and is used in this article, is to restrict the strategy set so that players' payoffs are independent of initial actions. Both approaches ensure that any equilibrium holds for all opening moves.
Starting with the restricted strategy sets, one may ask, what if we extend the set of strategies to include all RSs? How would it influence the set of all equilibria? One straightforward aspect is that new equilibria may form. The second (less obvious) aspect is that we increase players' abilities to deviate; this may erase some existing equilibria.
Regarding the first aspect, the precise answer requires a thorough analysis that can not be included in this paper since, as we mentioned above, the results will depend on initial actions. More importantly, the value of such results is not obvious.
Regarding the second aspect, the characterization of all Nash equilibria generated by SRSs remains valid independently of initial moves, even if one considers the setting where all RSs are available to players. Let us show this. Consider player 1. If player 2 picks an SRS v, then the corresponding stationary distribution is well defined by (4) for all RSs of player 1 and does not depend on initial actions. Thus, for player 1, in terms of payoff-relevant deviations, the set of all RSs is equivalent to U.

Equilibrium Payoffs in Conditional SRSs
In this section, we compare equilibrium payoffs in conditional SRSs versus unconditional RSs in the following simple way. First, we fix a stage game G such that Theorem 2 gives us an NE in SRSs. The repeated version G∞_un inherits all equilibrium payoff profiles from G that are generated by the corresponding unconditional RSs. Among these payoff profiles, we select the symmetric one with the lowest payoffs as the benchmark. We then compare the best symmetric NE payoff profile in G∞ formed by SRSs against this benchmark. Since we have existence results only for symmetric games, the complete analysis is only feasible for this special case. Nevertheless, the existence of an NE in memory-less SRSs implies the existence of an NE in conditional SRSs, which allows us to compare the corresponding payoffs even for non-symmetric games. The next subsection presents this partial result; afterwards, we complete the analysis for symmetric games.

Payoffs for NE Profiles of Unconditional and Conditional SRSs
From (4) and Theorem 1, it follows that any (u, v) ∈ R^2 × R^2 admitting solutions of system (22) forms an NE. Note that (22) is a system of six equations with eight variables. We consider (x, y) or (s1, s2) as free parameters to express the remaining variables; x and y are introduced to simplify the notation. We omit the rather simple case a1 a2 = 0 and solve the system under the assumption that a1 a2 ≠ 0. Supposing −1 < x, y < 1, we express u, v, s1, and s2 from (22) in terms of x and y in (23). Assume now that a completely mixed NE exists in the one-shot game G.
The next proposition shows that for both players, equilibrium payoffs in conditional SRSs are at least equilibrium payoffs in memory-less SRSs.

Proposition 2.
Suppose that a1 a2 ≠ 0 and there exists an NE profile (û, v̂) of memory-less SRSs in G∞_un (hence, in G∞); then, for any NE profile (u, v) of conditional SRSs in G∞, the following holds true:

J∞_i(u, v) ≥ J∞_i(û, v̂),   i = 1, 2.

Let us outline the proof. From (1) and (5), we have J∞_i as a function of stationary distributions. Since we deal with ESDs, we use the formulas for s1 and s2 from (23). Hence, J∞_i, i = 1, 2, become functions of only x and y that are defined by the NE profile (u, v); these equilibrium profiles exist by our assumption. Using (24) and (25), it remains only to perform some algebra and to recall that a2 x ≤ 0 and a1 y ≤ 0 (see the last line in (23)).

Symmetric Games
For symmetric games, the results in Section 2.4 allow us to gain more understanding of how equilibrium payoffs in conditional SRSs compare to the benchmark payoffs in memory-less RSs. Note that a generic one-shot symmetric game with distinct payoffs on the leading diagonal is strategically equivalent to a game G_10 with diagonal payoffs 1 and 0 and off-diagonal payoffs l and 1 + g. Generic one-shot symmetric games with identical payoffs on the leading diagonal are considered in Appendix B. Using these generic games, we study equilibrium payoffs for games satisfying the conditions of Theorem 2. For G_10, Figure 5 shows all pairs (l, g) such that there exists an equilibrium according to Theorem 2. For example, the first condition results in trivial stage games that G_10 cannot be. The second condition (region 2), restricted to Quadrant I, includes prisoner's dilemmas with equal gains from switching. Quadrant I contains all prisoner's dilemmas, including the entire region 5 that embodies the ones where cycles (C, D) → (D, C) → (C, D) → … are more profitable than cycles (C, C) → (D, D) → (C, C) → … . Region 3 completely covers Quadrants II and IV, where all stage games with two pure equilibria are located; here, we have a completely mixed NE for the stage games and, hence, an equilibrium profile in memory-less SRSs for the repeated versions. If −l = 1 + g, then, by Proposition 2, there exists an NE profile of conditional SRSs that Pareto dominates the benchmark of NE payoffs for memory-less SRSs; otherwise, all ESDs (and, hence, NE payoff profiles) for conditional and unconditional SRSs coincide. Quadrant III contains all deadlock games, which have one dominant strategy with the equilibrium payoff being Pareto optimal. This property makes deadlock games of the least interest for the theory, since self-interest and mutual benefit are fully compatible. The repeated versions of games from region 4 have NE payoff profiles Pareto dominating this unconditional benchmark.
Thus, in all cases considered up to this point, an equilibrium profile in conditional SRSs provides payoffs greater than or equal to those of the benchmark equilibrium profile in memory-less RSs. The exceptions to this rule are region 2, restricted to Quadrant III, and region 6, which contain very special deadlock games: both players (by choosing the dominant pure strategy) get the highest possible individual payoff. Because of this unique property, the repeated versions of these one-shot games are the only cases when all NE profiles in conditional SRSs are Pareto dominated by every equilibrium profile in memory-less RSs. We provide an example of such a game in Section 3.3. Finally, all one-shot games G 10 in region 6, after we swap strategies for both players, comply with condition 7 in Theorem 2; i.e., conditions 6 and 7 cover strategically equivalent deadlock games.
Appendix B provides a similar analysis for generic symmetric games with identical payoffs on the leading diagonal. There, we conclude that when the payoffs of the stage game are not diverse enough, the payoffs of NE profiles in SRSs coincide with the memory-less benchmark; otherwise, there exists an NE profile in SRSs that Pareto dominates the benchmark.
Let us summarize both cases of the generic games. Only in the very special deadlock games where both players (by choosing the dominant pure strategy) get the highest possible individual payoff are the equilibrium payoff profiles in conditional SRSs dominated by the benchmark unconditional payoff profiles; see the next subsection for an example. In the remaining cases, if distinct ESDs do exist, then there exists an NE profile in SRSs dominating the benchmark.
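As a rough aid for reading Figure 5, a point (l, g) can be classified by the ordering of the stage-game payoffs. The snippet below is a sketch under the assumption, not shown in this excerpt, that G 10 carries the common normalization R = 1 for (C, C), P = 0 for (D, D), T = 1 + g, and S = −l; the function name and the exact region labels are ours.

```python
def classify(l: float, g: float) -> str:
    """Classify the stage game at the point (l, g) of Figure 5, assuming
    the normalization R = 1 (CC), P = 0 (DD), T = 1 + g, S = -l."""
    R, S, T, P = 1.0, -l, 1.0 + g, 0.0
    if T > R > P > S:                       # Quadrant I: l > 0, g > 0
        kind = "prisoner's dilemma"
        if (T + S) / 2 > R:                 # (C,D)/(D,C) cycles beat (C,C)
            kind += " with profitable cycles"
    elif T < R and S > P:                   # Quadrant III: l < 0, g < 0
        kind = "deadlock-type game (dominant strategy, Pareto-optimal equilibrium)"
    elif (T > R and S > P) or (T < R and S < P):  # Quadrants II and IV
        kind = "game with two pure equilibria"
    else:
        kind = "boundary case"
    return kind


print(classify(0.5, 0.7))    # a plain prisoner's dilemma
```

Under this assumed normalization, the cycle condition (T + S)/2 > R reduces to g − l > 1, which matches the description of region 5 lying inside Quadrant I.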

A Game with Pareto-Efficient Equilibrium and Dominant Strategies
The following example highlights a counterintuitive property of NE in SRSs: introducing a more complex class of strategies can establish equilibria with lower payoffs. Consider the following symmetric game G NB with a = 10, b = −0.2, and c = −0.7. Once we scale all payoffs into the range [0, 1], the stage game falls into region 6 of Figure 5. According to Theorem 2, an NE in SRSs exists; we depict the set of all ESDs in Figure 6. A mixed NE in the stage game does not exist since b < 0. The profile of memory-less RSs ((1, 1), (1, 1)) forms an NE and corresponds to the Pareto-efficient pure NE in the stage game. This equilibrium is excluded from our characterization since we focus on equilibria in SRSs. Remarkably, there is a noticeable gap between this equilibrium payoff, which serves as the memory-less benchmark, and the payoffs of the equilibria in SRSs. Thus, the example shows that it is possible to establish equilibria with lower payoffs by introducing a more complex class of strategies.

Conclusions
We show analytically that even an incremental change in the complexity of strategies (an inch of memory) in repeated interactions has a huge influence on the sets of equilibrium payoff profiles for some classes of stage games, which are circumscribed by Theorem 2. The first thing to note is that, except for some trivial cases, payoff-relevant indeterminacy holds true [17]; i.e., there now exists a continuum of equilibria in "new" conditional strategies with distinct payoff profiles. Clearly, there is still no folk theorem (see the example in Appendix C.1); this also holds [17] for the next incremental improvement, 1-memory strategies. It is also now evident that one should not expect any consistency while incrementally increasing the memory of strategies. For example, in RSs, there exist equilibria with higher payoffs than the upper bound obtained independently of the discount factor in [17] (see the example in Appendix C.1), and the reverse dominance condition [17], which is important for the 1-memory case, does not influence the existence of NE in RSs.
In this paper, we focused on symmetric stage games due to the tractability of the corresponding results. Appendix C.2 presents an example of a non-symmetric stage game where the set of all ESDs looks like a special combination of the corresponding sets for two symmetric stage games derived from the payoff matrix. This may be a way to study a broad class of non-symmetric stage games. An alternative approach may be to study specific classes of non-symmetric stage games, e.g., strategically zero-sum games [25].
In all examples considered so far, the region of all ESDs was a connected set. Appendix C.3 demonstrates an example with disconnected regions of equilibria. A branch of literature assumes that we may have some local dynamics on the set of all Nash equilibria, e.g., in the spirit of adaptive dynamics [4] or local cooperation patterns [26]. This example calls for new models to tackle the issue of disconnectedness, which may be an obstacle on the way to higher equilibrium payoffs.
As for future work, we note that the idea of a reactive strategy was recently generalized to a "reactive learning strategy" [27], allowing a player to gradually change the probability of actions depending on the past actions of the opponent. It is not known how the change from reactive strategies to reactive learning strategies would influence the equilibrium outcomes.

Conflicts of Interest:
The author declares no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

ESD	Equilibrium stationary distribution
NE	Nash equilibrium
RS (SRS)	Reactive strategy (stochastic reactive strategy)
SME	Strong mixed equilibrium
SPE	Subgame perfect equilibrium
ZD	Zero-determinant (strategies)

Appendix A.1. Case a = 0
If a = 0 and b + c = 0, then we have the following game, where A 1 = A 4 . Trivially, in this case, any stationary distribution (s 1 , s 2 ) ∈ ]0, 1[² is an ESD; the Nash equilibrium is formed by the strategies u = (s 1 , s 1 ) and v = (s 2 , s 2 ), which are repeated mixed strategies. These equilibria are not strict.
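The claim above, that any stationary distribution is generated by the repeated mixed strategies u = (s 1 , s 1 ) and v = (s 2 , s 2 ), is easy to confirm by simulation. The sketch below (names and setup ours) runs the induced Markov chain directly.

```python
import random

def simulate(u, v, rounds=200_000, seed=1):
    """Monte Carlo cooperation frequencies under reactive strategies u, v.
    u = (u1, u2): probability that player 1 cooperates after the opponent's
    C or D; an unconditional (memory-less) strategy has u1 == u2."""
    rng = random.Random(seed)
    a_prev = b_prev = 1          # start from mutual cooperation
    coop1 = coop2 = 0
    for _ in range(rounds):
        a = rng.random() < (u[0] if b_prev else u[1])
        b = rng.random() < (v[0] if a_prev else v[1])
        coop1 += a
        coop2 += b
        a_prev, b_prev = a, b
    return coop1 / rounds, coop2 / rounds

# Unconditional strategies realize any target distribution (s1, s2):
f1, f2 = simulate((0.3, 0.3), (0.7, 0.7))
print(round(f1, 2), round(f2, 2))   # close to 0.3 and 0.7
```

Since each player's cooperation probability is constant, the observed frequencies converge to the chosen (s 1 , s 2 ) regardless of the opponent's play.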
The set of all Nash equilibria from Lemma A2 generates the parallelogram of ESDs; see Figure A1. The area of the shaded regions tends to 1 as b + c → 0; this is the idea behind the following propositions.
Proposition A1. Every pair of reactive strategies (u, v) such that u 1 − u 2 = v 1 − v 2 forms a Nash equilibrium in symmetric games with a = 0 and b + c = u 2 − u 1 .

Proposition A2.
For every stationary distribution (s 1 , s 2 ) ∈ ]0, 1[², there exists γ ∈ ]0, 1[ such that (s 1 , s 2 ) is an ESD in all symmetric games with a = 0 and |b + c| < γ.

Though the case a = 0 is degenerate, it is of interest in connection with ZD strategies [15]. ZD strategies are 1-memory strategies. Since reactive strategies can also be treated as 1-memory strategies, we investigate the relationship between equalizer ZD strategies and reactive strategies.
Lemma A3. For symmetric games with c ≠ 0, the first player's reactive strategy u is an equalizer ZD strategy if a = 0 and the following condition holds:

The proof is based on (8) in [15]. There, for the 1-memory strategy (p 1 , p 2 , p 3 , p 4 ), one has to substitute the reactive strategy (u 1 , u 2 , u 1 , u 2 ), and for the payoff profile (R, S, T, P), one has to substitute (A 1 , A 2 , A 3 , A 4 ). This gives the following system:

Subtracting the first equation from the second one, we obtain (u 2 − u 1 )a = −a. The final part of the proof is trivial.
Proposition A3. In symmetric games with a = 0, all reactive Nash equilibria are generated by equalizer ZD strategies.
Appendix A.2. Case a < 0
We now proceed with the case a < 0. As before, we derive conditions (on a, b, c) that ensure the existence of solutions of (18). Before we proceed, let us express (22) for symmetric games in terms of s 1 , s 2 , where 0 < s 1 , s 2 < 1. If c + as 2 ≠ 0, c + as 1 ≠ 0, and (s 1 , s 2 ) is an ESD, then the corresponding NE profile (u, v) is defined by the following system:

From Theorem 1, we derive that for any equilibrium pair (u, v), the following holds true:

First, we pay attention to a degenerate case. Suppose that there exists a solution to (A3) such that as 1 + c = 0. This is only possible if as 2 + b = 0. Assuming this, we get (A4). Since u 1 − u 2 ≠ 1 for all u ∈ S I , a solution of (A4) may exist only if c − b = 0. Assuming this, we get (A5). Then, there exists a solution iff 0 < −b/a < 1; this corresponds to the existence of the mixed Nash equilibrium. Clearly, the strategies u = (−b/a, −b/a) and v = (−b/a, −b/a) form a Nash equilibrium. We stress that this is not a unique solution to (A5) under the assumptions; however, all Nash equilibria lead to the same ESD and payoffs. Let us now combine our findings in the following:

Lemma A4. If in a symmetric game a ≠ 0 and there is an ESD (ŝ 1 , ŝ 2 ) such that aŝ 1 + c = 0, then b = c and all Nash equilibria lead to the same ESD and payoffs as the repeated mixed Nash equilibrium u = v = (−b/a, −b/a).

Lemma A5. In a symmetric game with a < 0, b ≠ c, for s ∈ ]0, 1[, there exists a Nash equilibrium with the corresponding ESD (s, s) iff 0 ≤ x(s, s) < 1.
Suppose that 0 ≤ x(s, s) < 1 for a stationary distribution (s, s); recall that x(s, s) = y(s, s). To show that (s, s) is an ESD, one has to ensure the validity of (i)-(v) in (A3) by demonstrating well-defined strategies u * and v * generating (s, s). Let us recall the geometric representation of attainable sets: the difference between the first and second components of a reactive strategy completely determines the slope of the opponent's attainable set (see Figures 2 and 3). We see that it is always possible to draw an attainable set of player 1 (a straight line) through the point (s, s) with the slope defined by y(s, s) if 0 ≤ y(s, s) < 1. As a result, we obtain the well-defined reactive strategy (v * 1 , v * 2 ). In the same way, we derive a well-defined (u * 1 , u * 2 ) such that u * 1 − u * 2 = x(s, s). By construction, u * and v * generate (s, s).
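The geometric fact used above can be illustrated numerically: whatever player 1 plays, the induced stationary points lie on a straight line whose slope is determined by v 1 − v 2 . A small sketch, in our own notation and assuming the standard reactive-strategy fixed point:

```python
import random

def stationary(u, v):
    """Stationary cooperation frequencies for reactive strategies u and v."""
    du, dv = u[0] - u[1], v[0] - v[1]
    s2 = (v[1] + dv * u[1]) / (1.0 - du * dv)
    s1 = u[1] + du * s2
    return s1, s2

# Fix player 2's strategy v and sweep player 1's strategies: every stationary
# point (s1, s2) lies on the line s2 = v2 + (v1 - v2) * s1, i.e., player 1's
# attainable set is a straight line with slope v1 - v2.
rng = random.Random(0)
v = (0.9, 0.4)
for _ in range(1000):
    u = (rng.random(), rng.random())
    s1, s2 = stationary(u, v)
    assert abs(s2 - (v[1] + (v[0] - v[1]) * s1)) < 1e-12
print("all stationary points lie on one line")
```

The check succeeds by construction of the fixed point: s 2 = v 2 + (v 1 − v 2 )s 1 is exactly player 2's reaction equation, so varying u moves the point only along that line.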
Lemma A6. In a symmetric game with a < 0, b ≠ c, if (s 1 , s 2 ) is an ESD, then ((s 1 + s 2 )/2, (s 1 + s 2 )/2) is an ESD as well.

Proof. There are two cases: s 1 = s 2 and s 1 ≠ s 2 . The proof is trivial in the first case. In the second case, by using the symmetry of the game, it is easy to prove the existence of the ESD (s 2 , s 1 ). As (s 1 , s 2 ) corresponds to some Nash equilibrium and a < 0, for x = x(s 1 , s 2 ) and y = y(s 1 , s 2 ), we have 0 ≤ x < 1 and 0 ≤ y < 1 (see (A2)). Let us now consider x * = x((s 1 + s 2 )/2, (s 1 + s 2 )/2). It is easy to show that in this case, x * = (−x − y + 2xy)/(−2 + x + y). Straightforwardly, x * > 0. Moreover, x * − 1 = 2(1 − x)(1 − y)/(−2 + x + y) < 0; hence, 0 < x * < 1. Combining this with Lemma A5, we obtain that ((s 1 + s 2 )/2, (s 1 + s 2 )/2) is an ESD.
Corollary A1. In symmetric games with a < 0, b ≠ c, there exists a Nash equilibrium iff there exists a pair of identical reactive strategies forming a Nash equilibrium. The latter is equivalent to the existence of an ESD (s, s) for some 0 < s < 1.
The next theorem provides a necessary and sufficient condition for the existence of a Nash equilibrium.
Theorem A1. In symmetric games with a < 0, b ≠ c, there exists a Nash equilibrium iff there exists a stationary distribution (s, s) such that (A8) holds.

The proof is a combination of Lemma A5, Lemma A6, and the last corollary. Further, we derive conditions such that there exists a solution of (A8). By definition, put b̄ = −b/a and c̄ = −c/a. We stress that if a ≠ 0, then b̄, c̄, and the sign of a completely characterize Nash equilibria in symmetric games. The solution of (A8) is the union of two regions, (A9) and (A10). To have some s ∈ ]0, 1[ satisfying (A9) or (A10), we must require, respectively, (A11) or (A12). Elaborating (A11) and (A12) and adding Lemma A4, we arrive at the next lemma.

Lemma A7. For symmetric games with a < 0, there exists a Nash equilibrium iff the following condition holds:

Remarkably, some symmetric games allow (s, s) to be an ESD for every s ∈ ]0, 1[.

Proposition A4.
In a symmetric game, (s, s) is an ESD for all s ∈ ]0, 1[ if a < 0 and the following condition holds:

To prove this, we find all b, c such that (A9) or (A10) holds for all s ∈ ]0, 1[.
Appendix A.3. Case a > 0
The analysis of this case is more involved but quite similar. It follows from Theorem 1 that, for every equilibrium (u, v), we have u 1 − u 2 ≤ 0 and v 1 − v 2 ≤ 0. For the remainder of this section, we suppose that b ≠ c, as Lemma A4 was obtained independently of the sign of a. We start with an analog of Lemma A5; see also (A7).

Lemma A8.
In symmetric games with a > 0, b ≠ c, for s ∈ ]0, 1/2], there exists an ESD (s, s) iff (A14) holds; for s ∈ ]1/2, 1[, there exists an ESD (s, s) iff (A15) holds.

We omit the proof, which is based on the inference of analytic formulas for the tangents (slopes) of the border lines, similar to the ones in Figure 3a. These formulas provide the lower bounds for x(s, s) in (A14) and (A15). The next step is to derive an analog of Lemma A6.
Lemma A9. In symmetric games with a > 0, b ≠ c, if (s 1 , s 2 ) is an ESD, then ((s 1 + s 2 )/2, (s 1 + s 2 )/2) is an ESD as well.

Proof. There are two cases: s 1 = s 2 and s 1 ≠ s 2 . The proof is trivial in the first case. In the second case, by using symmetry, it is easy to prove the existence of a Nash equilibrium with the ESD (s 2 , s 1 ). We consider three cases: s 1 + s 2 > 1, s 1 + s 2 = 1, and s 1 + s 2 < 1. The analyses of the cases are similar; therefore, we proceed only with the case s 1 + s 2 > 1. Without loss of generality, we assume that s 1 < s 2 . Since (s 1 , s 2 ) corresponds to a Nash equilibrium, there exists the pair (x, y) such that x = x(s 1 , s 2 ) and y = y(s 1 , s 2 ); x ≤ 0, y ≤ 0. We infer analytic formulas for the tangents of the border lines, similar to those in Figure 3a, to derive the following restriction:

We stress that any pair of reactive strategies (u, v) leading to (s 1 , s 2 ) complies with the following:

Let us now consider the value x * = x((s 1 + s 2 )/2, (s 1 + s 2 )/2); we have x * = (−x − y + 2xy)/(−2 + x + y). Straightforwardly, x * < 0. Moreover, x * + 1 = 2(xy − 1)/(−2 + x + y) > 0. Thus, −1 < x * < 0. In contrast to Lemma A6, the last statement is insufficient to apply Lemma A8; we also need to prove a stronger result (see (A15)):

First, we obtain the derivatives of x * w.r.t. x and y:

For all (x, y) ∈ ]−1, 0]², the functions x * (·, y) and x * (x, ·) are monotonically nondecreasing. Recall that we are proving (A16). Since s 1 + s 2 > 1 and (s 1 − s 2 )² < 1, we obtain (A16). It follows from Lemma A8 that ((s 1 + s 2 )/2, (s 1 + s 2 )/2) is an ESD.
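The two identities for x * used in the proof can be checked numerically. The sketch below (our own code) samples x, y ∈ ]−1, 0] and verifies both the algebraic identity and the resulting bounds.

```python
import random

def x_star(x, y):
    """x evaluated at the symmetrized distribution ((s1+s2)/2, (s1+s2)/2),
    expressed through x = x(s1, s2) and y = y(s1, s2) as in the proof."""
    return (-x - y + 2 * x * y) / (-2 + x + y)

rng = random.Random(42)
for _ in range(10_000):
    x, y = -rng.random(), -rng.random()          # x, y in ]-1, 0]
    xs = x_star(x, y)
    # identity from the proof: x* + 1 = 2(xy - 1)/(x + y - 2) > 0
    assert abs((xs + 1) - 2 * (x * y - 1) / (-2 + x + y)) < 1e-12
    assert -1 < xs <= 0                          # hence -1 < x* < 0 (or x* = 0)
print("identities verified")
```

Both the numerator xy − 1 and the denominator x + y − 2 are negative on this domain, which is exactly why x * + 1 stays positive.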
Corollary A2. In symmetric games with a > 0, b ≠ c, there exists a Nash equilibrium iff there exists a pair of identical reactive strategies forming a Nash equilibrium. The latter is equivalent to the existence of an ESD (s, s) for some 0 < s < 1.
The next theorem provides a necessary and sufficient condition for the existence of a Nash equilibrium.
Theorem A2. In symmetric games with a > 0, b ≠ c, there exists a Nash equilibrium iff there exists an ESD (s, s), 0 < s < 1, such that (A17) holds.

The proof is a combination of Lemma A8, Lemma A9, and the last corollary. Let us derive conditions ensuring the existence of a solution of (A17). As before, we use the notation b̄ = −b/a and c̄ = −c/a. We decompose (A17) into the two systems of inequalities (A18) and (A19). We begin with (A18). Arguing as in the case a < 0, we obtain conditions (A20) that admit solutions of (A18). We proceed to (A19). First, (A19) can be transformed into an instance of (A18) by applying the following rules:

Substituting ŝ, b̂, and ĉ in (A21), we get a chain of inequalities identical to (A19). Thus, the bijective rules (A22) provide an easy way for a one-to-one transformation of problem (A19) into problem (A18), which was solved in (A20). Using (A20), we solve the problem for (A21).
Lemma A10. If a > 0, then there exists a reactive Nash equilibrium iff

Appendix B. Theorem 2 for Symmetric Games with Equal Payoffs on the Leading Diagonal
A generic one-shot symmetric game with identical payoffs on the leading diagonal is strategically equivalent to the game G 00 . Figure A2 shows all pairs (l, g) such that there exists an equilibrium in SRSs in the repeated version of G 00 by Theorem 2. The regions specify all l and g such that there exists an NE profile in SRSs for the repeated modification of G 00 . Table A1 contains the corresponding case analysis. In general, there exists an NE profile in conditional SRSs that Pareto dominates a benchmark NE profile in memory-less reactive strategies. Thus, an inch of memory for players (upgrading memory-less RSs to conditional SRSs) brings new equilibria where both players are better off. Only a very special type of one-shot game, namely, games with −l = g, is an exception to this rule. Table A1. Summary of Theorem 2 for symmetric games with equal payoffs on the leading diagonal.

Table A1 column headers: One-Shot Game Description | The Benchmark in Memory-Less RSs

Appendix C. Additional Examples
In this section, we provide examples illustrating properties of reactive Nash equilibria. In what follows, we use the definitions given in (2), (3), and (20). Appendix C.1 presents two examples of the Prisoner's Dilemma to demonstrate non-symmetric equilibria. In the first example, we observe equilibrium strategies leading to payoffs that are higher than the mutual cooperation payoff. In the second example, one can see that there exist equilibria admitting any level of mutual cooperation. We then consider an example of a non-symmetric game admitting equilibria that may be incorrectly recognized as symmetric ones; see Appendix C.2. Finally, in Appendix C.3, we highlight an example that makes it possible to observe disconnected regions of equilibria. If we assume some dynamics on the set of all Nash equilibria in the spirit of adaptive dynamics [4] or local cooperation patterns [26], this disconnectedness becomes an obstacle on the way to higher equilibrium payoffs.
Appendix C.1. Non-Symmetric Equilibria in Prisoner's Dilemma and Folk Theorem
Reactive strategies were introduced to study the evolution of cooperation in the repeated Prisoner's Dilemma; recall [4,5]. We consider two versions of the game below; a mixed Nash equilibrium exists in neither of them.

Game G P1 . Here, a = −9, b = −1/9, c = 5/3. This Prisoner's Dilemma differs from the standard versions since (A 2 + A 3 )/2 > A 1 . Hence, contrary to the standard setting, infinite cycles like CD, DC, CD, ... are more profitable for both players than infinite mutual cooperation. As a result, in some equilibria, both players are strictly better off than under mutual cooperation; see Figure A3a. For example, the equilibrium strategies u = (13/15, 1/5) and v = (13/15, 1/5) lead to the ESD (3/5, 3/5) with payoffs 129/25, which are greater than the mutual cooperation payoff 5.
Game G P2 . Here, a = −1, b = −2, c = 8. We depict the set of all ESDs in Figure A3b. It follows from Proposition A4 that for any s ∈ ]0, 1[, there exists a Nash equilibrium such that (s, s) is the ESD. Therefore, there are equilibria admitting any level of mutual cooperation. For example, the equilibrium strategies u = v = (3498499/3500500, 1998999/3500500) lead to the ESD (999/1000, 999/1000).

Figure A3. The blue sets in (a,b) depict the sets of all ESDs for G P1 and G P2 , respectively. The red region corresponds to all stationary distributions such that both players get more than the mutual cooperation payoff (this is possible only in G P1 ).

Figure A4 shows that, to ensure an equilibrium, player 2 must choose v 1 > v 2 and not the other way round. For an equilibrium with a higher rate of cooperation, player 2 must increase v 1 even further. Generally, the greater the difference v 1 − v 2 , the less attractive defection is for player 1. In (a), the strategy v = (13/15, 3/5) sets the attainable set of player 1 (the straight line) to be tangent to the curve of payoffs equal to 129/25 at the ESD (3/5, 3/5). In (b), the strategy v = (7/15, 4/5) incentivizes player 1 to descend in the attainable set (the straight line) to unconditional defection.

Figure A4. For G P1 , plots (a,b) depict the first player's attainable sets (the straight lines) for v = (13/15, 3/5) and v = (7/15, 4/5), respectively. The thin lines correspond to the payoff levels equal to 0, 2, ..., 14.
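Both equilibrium ESDs quoted for G P1 and G P2 can be verified exactly with rational arithmetic. The helper below (our own code) solves the standard reactive-strategy fixed point.

```python
from fractions import Fraction as F

def stationary(u, v):
    """Exact stationary cooperation frequencies for reactive strategies."""
    du, dv = u[0] - u[1], v[0] - v[1]
    s1 = (u[1] + du * v[1]) / (1 - du * dv)
    return s1, v[1] + dv * s1

# G_P1: u = v = (13/15, 1/5) generate the ESD (3/5, 3/5).
u = (F(13, 15), F(1, 5))
assert stationary(u, u) == (F(3, 5), F(3, 5))

# G_P2: the strategies below generate the ESD (999/1000, 999/1000),
# i.e., an equilibrium sustaining a 99.9% level of cooperation.
w = (F(3498499, 3500500), F(1998999, 3500500))
assert stationary(w, w) == (F(999, 1000), F(999, 1000))
print("both ESDs confirmed")
```

For a symmetric profile u = v, the fixed point collapses to s = u 2 /(1 − (u 1 − u 2 )), which is how the unwieldy fractions of G P2 reduce to 999/1000.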
The range of all possible payoffs for SMEs was obtained in [17], proving that there is no folk theorem (see Corollary 2 and Theorem 4 in [17]). For example, for the game G P1 , all SME payoffs are in ]0, 5] for any discount factor. In Section 4 of [28], the repeated Prisoner's Dilemma was considered as an example of a game with non-rich action spaces. It was rigorously shown that there are no efficient payoff vectors corresponding to subgame perfect equilibria in deterministic 1-memory strategies, even if the discount factor is close to one. However, by introducing an additional pure action, it becomes possible to obtain an efficient outcome, which is higher than the upper bound for SME payoffs. Figure A5 shows that there are Nash equilibria in G P1 with payoffs higher than would be ensured by mutual cooperation, although not all individually rational payoffs are supported by reactive equilibria. First, this means that there are ESDs with payoffs for both players that are higher than the upper bounds shown in [28] (for equilibria in deterministic 1-memory strategies) and in [17] (for SMEs). Second, similarly to [17], in our case there is no folk theorem.

Appendix C.2. A Non-Symmetric Game with Symmetric ESDs
We now present a non-symmetric game G ns admitting a continuum of ESDs of the form (s, s). We say that ESDs of this form are symmetric ESDs (though this may not be the best choice of words). Symmetric ESDs have an intriguing property. Imagine that, in a repeated game, players are in an equilibrium, and we observe only action frequencies (but not conditional frequencies). If the frequencies are symmetric, then it would be natural to conclude that we are observing a symmetric game. The following example, however, shows that this conclusion may not be true. It is easy to see that a 1 = −1, b 1 /a 1 = 2, c 1 /a 1 = −6, a 2 = 1, b 2 /a 2 = −3, c 2 /a 2 = −4. Note that a 1 a 2 < 0; this is impossible for symmetric games. Figure A6c depicts the set of all ESDs; we see an abundance of symmetric ESDs.
One can also see a clear connection between the non-symmetric game G ns and the two symmetric games (see Figure A6a,b) constructed directly from the players' payoff matrices.

Figure A6. Plots (a-c) depict the sets of all ESDs for the symmetric game with a < 0, b = −2, c = 6; for the symmetric game with a > 0, b = 3, c = 4; and for the non-symmetric game G ns , respectively.