# Solution Concepts of Principal-Agent Models with Unawareness of Actions

## Abstract


## 1. Introduction

## 2. The Model

**Strategies**. Let ${S}_{P}$ and ${S}_{A}$ denote the sets of strategies of the principal (P) and the agent (A), respectively. To incorporate the possibility that each party may determine decisions in many dimensions, we let ${S}_{P}\equiv {A}_{P}^{1}\times \dots \times {A}_{P}^{M}$ and ${S}_{A}\equiv {A}_{A}^{1}\times \dots \times {A}_{A}^{N}$ with $M,N<\infty $. In the canonical employee compensation example, the employer (the principal) may determine the compensation scheme, which comprises the fixed payment and the commission rate for the employee (the agent). The employer may further determine other actions, such as the employee’s housing option or retirement benefit. These decisions directly affect the utilities of the employer and the employee and are included in ${S}_{P}$. On the employee’s side, she may have discretion in determining how much effort to exert in completing the project or whether to receive external training that improves her productivity. The set ${S}_{A}$ includes these decisions of the employee.
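
To make the product structure of the strategy sets concrete, here is a minimal sketch; the dimension names and action labels (effort, training) are illustrative stand-ins taken from the employee example, not notation from the model:

```python
from itertools import product

# Hypothetical action sets for the employee (agent):
# one dimension for effort, one for external training.
A_A_1 = ("low_effort", "high_effort")   # A_A^1
A_A_2 = ("no_training", "training")     # A_A^2

# S_A is the Cartesian product of the agent's action sets.
S_A = list(product(A_A_1, A_A_2))

print(len(S_A))  # 4 strategy profiles
print(S_A[0])    # ('low_effort', 'no_training')
```

With finitely many dimensions and finite action sets, the strategy space is itself finite, which is what makes the existence arguments cited later straightforward.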

**Unawareness**. In contrast with standard principal-agent models, we assume that the agent may not be aware of all the relevant aspects that she or the principal is entitled to choose. Along the line of the modeling technique initiated by [3], let ${D}_{i}\equiv \{{A}_{i}^{1},{A}_{i}^{2},\dots \}$ denote the collection of all action sets of party i, and let $D\equiv {D}_{P}\cup {D}_{A}$ denote the collection of all action sets of both the principal and the agent. Let ${W}_{i}\subseteq {D}_{i}$ denote the set of action sets of party i of which the agent is aware before contracting, where $i\in \{P,A\}$. Thus, $W\equiv {W}_{P}\cup {W}_{A}$ represents the collection of action sets of which the agent is aware. We assume that the principal is fully aware of the entire set of strategy profiles, S, and knows the agent’s awareness (i.e., W). In this sense, the principal is omniscient: he has a clear picture of the economic environment, and he knows precisely the agent’s preferences and what the agent is aware of.^{6}

**Contract.** We can now formally define a contract offered by the principal. In the following, we use the notation $\times X$ to denote the Cartesian product of all action sets in $X\subseteq D$, i.e., $\times X\equiv {\Pi}_{Y\in X}Y$.

A **contract** is a vector, $\psi (V)\in \times (W\cup V)$, where $V\subseteq D\backslash W$.^{7} Note that $\psi (V)$ specifies all actions that the agent is aware of after observing the contract. Let $\psi (V)\equiv ({\psi}_{P}(V),{\psi}_{A}(V))$, where ${\psi}_{i}(V)$ is composed only of party i’s actions. Following the literature that incorporates unawareness into the contracting framework, we assume that whenever the principal announces some actions of which the agent is unaware, the agent is able to understand the contract immediately and adjust her awareness to account for the additional aspects specified in the contract; see, e.g., [4,5,11].
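
As a sketch, the ordered product $\times (W\cup V)$ described in Note 7 (the principal's action sets precede the agent's, and each party's sets are ordered by dimension index) can be built as follows; the set encoding as `(party, index)` pairs and the concrete action values are our own illustration:

```python
from itertools import product

def ordered_product(action_sets):
    """Order the action sets in W ∪ V -- the principal's sets before
    the agent's, and within each party by dimension index -- then take
    their Cartesian product (the contract space x(W ∪ V)).

    action_sets -- dict mapping (party, index) -> list of actions
    """
    # False sorts before True, so party "P" comes first; ties break by index.
    key = lambda item: (item[0][0] != "P", item[0])
    ordered = sorted(action_sets.items(), key=key)
    names = [name for name, _ in ordered]
    return names, list(product(*(vals for _, vals in ordered)))

# Example from Note 7: W ∪ V = {A_A^2, A_P^3, A_P^1}
# yields the ordering A_P^1 x A_P^3 x A_A^2.
sets = {("P", 1): [0, 1], ("P", 3): ["x", "y"], ("A", 2): [0, 1]}
names, space = ordered_product(sets)
print(names)  # [('P', 1), ('P', 3), ('A', 2)]
```

A contract $\psi (V)$ is then simply one element of `space`.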

**Definition 1.** A contract, $\psi (V)$, is **incomplete** in party i’s strategy if ${W}_{i}\cup {V}_{i}\ne {D}_{i}$, where $i\in \{P,A\}$.

**Rule-guided behavior.** Since the contract is allowed to be incomplete, if $\psi (V)$ is incomplete in the agent’s strategy, the agent consciously determines the actions specified in the contract, and she must “choose,” subconsciously, the actions of which she is unaware. In this paper, we assume that if the agent is unaware of some aspect, ${A}_{A}^{k}\notin {W}_{A}\cup {V}_{A}$, after observing the contract, she subconsciously chooses her default action, ${\overline{a}}_{A}^{k}$, in this aspect. Likewise, for ${A}_{P}^{k}\notin {W}_{P}\cup {V}_{P}$, the agent subconsciously assumes that the principal will choose the default action, ${\overline{a}}_{P}^{k}$. Since this is a subconscious choice, it is natural to assume that the default action is unique. For example, if the agent is unaware of the option of playing “Second Life” in her office, the default action in this dimension is simply not playing it. If the agent is unaware that the principal can delay the salary payments, the default action of the principal is not to delay them.
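
This rule can be sketched as a small function; the function and dictionary names are our own, not the paper's notation. Any dimension outside the agent's awareness $W\cup V$ is filled in with its default action:

```python
def perceived_strategy(chosen, defaults, aware):
    """Return the full strategy profile as the agent perceives it.

    chosen   -- dict: dimension -> action, for the aware dimensions
    defaults -- dict: dimension -> default action, for every dimension
    aware    -- set of dimensions in W ∪ V
    """
    return {dim: (chosen[dim] if dim in aware else defaults[dim])
            for dim in defaults}

# Example: the agent is unaware of dimension "A_A^2" ("Second Life"),
# so its default ("not_playing") is chosen subconsciously.
defaults = {"A_A^1": "usual_action", "A_A^2": "not_playing"}
full = perceived_strategy({"A_A^1": "high_effort"}, defaults, {"A_A^1"})
print(full)  # {'A_A^1': 'high_effort', 'A_A^2': 'not_playing'}
```

The uniqueness of the default action is what makes this completion well defined: the unaware dimensions contribute exactly one profile.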

**Perceived utilities.** Given the agent’s unawareness and rule-guided behavior (and perception) described above, we can articulate how the agent evaluates a contract, $\psi (V)$. Let ${u}_{i}^{V}:\times (W\cup V)\mapsto \mathbb{R}$, $i\in \{P,A\}$, denote the perceived utility function of party i from the agent’s viewpoint. From this representation, the function ${u}_{i}^{V}$ clearly depends on the strategy space, V, specified in the contract (and the corresponding actions, $s(V)$). In the presence of the agent’s unawareness, we assume that:

## 3. Solution Concepts

#### 3.1. Preliminaries

**Definition 2.** A contract, $\psi (V)$, is **feasible** if it satisfies IC and IR. A bundle, $(\psi (V),s)$, is **coherent** if $\psi (V)=s(V)$ and ${s}_{A}^{C}(V)={\overline{s}}_{A}(V)$.

#### 3.2. Subgame-Perfect Solution

**Definition 3.** A bundle, $({\psi}^{*}({V}^{*}),{s}^{*})$, is a **subgame-perfect solution** if the principal chooses ${V}^{*}$, ${\psi}^{*}({V}^{*})$ and ${s}^{*}$ that maximize ${u}_{P}$ s.t. $\psi (V)$ is feasible and $(\psi (V),s)$ is coherent.

**Theorem 1.**

#### 3.3. Justifiable Solution

**Definition 4.** A contract, $\psi (V)$, is **justifiable** if:

- it is feasible;
- $\forall \tilde{V}\subseteq V$, $\forall \psi (\tilde{V})\in \times (W\cup \tilde{V})$, $\forall \tilde{s}(V)\in \times (W\cup V)$, such that $\psi (\tilde{V})$ is feasible and $(\psi (\tilde{V}),\tilde{s})$ is coherent, we have ${u}_{P}^{V}(\psi (V))\ge {u}_{P}^{V}(\tilde{s}(V))$.
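
On finite data, this definition can be checked by brute force. The sketch below is schematic and makes heavy simplifying assumptions: the predicates `feasible` and `coherent` and the valuation `u_P` are user-supplied stand-ins for the IC/IR constraints, the coherence conditions, and the perceived utility ${u}_{P}^{V}$; all names are ours:

```python
from itertools import combinations

def subsets(V):
    """All subsets of V, as frozensets."""
    V = sorted(V)
    return (frozenset(c) for r in range(len(V) + 1)
            for c in combinations(V, r))

def justifiable(psi, V, contracts, strategies, feasible, coherent, u_P):
    """Schematic brute-force check of Definition 4 on finite data.

    contracts[V'] -- candidate contracts psi(V') for each V' ⊆ V
    strategies[V] -- candidate profiles s(V) in x(W ∪ V)
    """
    if not feasible(psi, V):
        return False
    for V_t in subsets(V):
        for psi_t in contracts[V_t]:
            for s_t in strategies[V]:
                # A feasible alternative contract that coherently leads
                # to a profile giving the principal strictly more makes
                # the offered contract "too good to be true".
                if (feasible(psi_t, V_t) and coherent(psi_t, s_t)
                        and u_P(psi, V) < u_P(s_t, V)):
                    return False
    return True
```

With the numerical example of Section 4 (where the perceived utility under $V=\{{A}_{A}^{2}\}$ is $1-{a}_{A}^{2}$), the novel contract with ${a}_{A}^{2}=1$ fails this check, while the status quo contract passes.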

**Definition 5.** A bundle, $({\psi}^{*}({V}^{*}),{s}^{*})$, is a **justifiable solution** if the principal chooses ${V}^{*}$, ${\psi}^{*}({V}^{*})$ and ${s}^{*}$ that maximize ${u}_{P}$ s.t. $\psi (V)$ is justifiable and $(\psi (V),s)$ is coherent.

**Theorem 2.**

#### 3.4. Trap-Filtered Solution

**Definition 6.** A contract, $\psi (V)$, is **trap-filtered** if (1) it is justifiable or (2) it is feasible and IR-T is satisfied. A bundle, $({\psi}^{*}({V}^{*}),{s}^{*})$, is a **trap-filtered solution** if the principal chooses ${V}^{*}$, ${\psi}^{*}({V}^{*})$ and ${s}^{*}$ that maximize ${u}_{P}$ s.t. $\psi (V)$ is trap-filtered and $(\psi (V),s)$ is coherent.

**Theorem 3.**

#### 3.4.1. Interpretation of the Trap-Filtered Solution

#### 3.5. Trap-Filtered Solution with Cognition

**Theorem 4.**

**Definition 7.** A contract, $\psi (V)$, is **trap-filtered with cognition** if it is justifiable, or it is feasible and IR-C holds. A bundle, $({\psi}^{*}({V}^{*}),{s}^{*},{c}^{*})$, is a **trap-filtered solution with cognition** if the principal chooses ${V}^{*}$, ${\psi}^{*}({V}^{*})$ and ${s}^{*}$ that maximize $\left\{{c}^{*}{\overline{u}}_{P}+(1-{c}^{*}){u}_{P}(s)\right\}$ s.t. $\psi (V)$ is trap-filtered with cognition, $(\psi (V),s)$ is coherent, ${c}^{*}=0$ if $\psi (V)$ is justifiable, and ${c}^{*}\in \arg {max}_{{c}^{\prime}\in [0,1]}{U}_{A}^{C}({c}^{\prime},\psi (V))$ otherwise.

**Theorem 5.**

#### 3.6. Payoff Ordering for the Principal

**Theorem 6.**

## 4. Contractual Traps: A Numerical Example

**Problem descriptions**. In this example, both parties have two dimensions of strategies. The first-dimension action set, ${A}_{i}^{1}$, is a singleton, $\left\{{\overline{a}}_{i}^{1}\right\}$, that consists of a usual action of party i. The agent is, however, unaware of the second dimension of actions. For simplicity, let us assume that ${A}_{A}^{2}=\left\{0,1\right\}$ and ${A}_{P}^{2}=\left\{0,2\right\}$, and the default actions are ${\overline{a}}_{P}^{2}={\overline{a}}_{A}^{2}=0$. The alternative actions, ${a}_{P}^{2}=2$ and ${a}_{A}^{2}=1$, are the unforeseen actions for the agent. In our notation, $W=\left\{{A}_{P}^{1},{A}_{A}^{1}\right\}$, since the agent is only aware of the usual actions of both parties in the first dimension.

**Subgame-perfect solution.** Let us first consider the scenario in which the principal does not announce any new actions (the option of signing a novel contract) to the agent, i.e., $V=\varnothing $. In such a scenario, the agent can only decide between signing the status quo contract (${a}_{A}^{2}=0$) and simply walking away. Given this, since the principal cannot remove the faculty housing option, the principal’s action affects neither the agent nor the principal himself. Therefore, choosing ${a}_{P}^{2}=0$ is the principal’s best response, and as a result, both the principal and the agent obtain a utility of one.

**Justifiable solution.** The discussion above demonstrates how a contractual trap can be implemented, even if the agent is fully naive (and suffers from her lack of awareness). We next apply the idea of justifiability to this example. When the agent is sophisticated, she may feel that the novel contract is “too good to be true”. This is because, in the agent’s mind, if ${A}_{A}^{2}$ were not specified in the contract, the principal would receive utility ${u}_{P}^{{A}_{A}^{2}}=1$. However, the contract with $V=\left\{{A}_{A}^{2}\right\}$ offers the agent an opportunity to choose an action, ${a}_{A}^{2}$, which benefits the agent herself but might hurt the principal, as the principal receives utility ${u}_{P}^{{A}_{A}^{2}}=0$. Thus, the contract in the subgame-perfect solution is not justifiable. Note that from the agent’s perspective, the principal’s utility crucially depends on the offered contract. When $V=\varnothing $, the agent believes that ${u}_{P}=1$; when $V=\left\{{A}_{A}^{2}\right\}$, it becomes ${u}_{P}=1-{a}_{A}^{2}$; when $V=\left\{{A}_{P}^{2},{A}_{A}^{2}\right\}$, the perceived utility becomes ${u}_{P}=1-{a}_{A}^{2}+{a}_{A}^{2}{a}_{P}^{2}$. Note that ${inf}_{s\in S}{u}_{A}(s)=0<1={\overline{u}}_{A}$. Thus, the agent will indeed reject this non-justifiable contract.
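
The perceived utilities quoted above can be tabulated directly; only the function name and the string encoding of the awareness sets are our own:

```python
def perceived_u_P(V, a_A2=0, a_P2=0):
    """Principal's utility as perceived by the agent, given awareness V,
    using the formulas of the numerical example."""
    if V == set():
        return 1
    if V == {"A_A^2"}:
        return 1 - a_A2
    if V == {"A_P^2", "A_A^2"}:
        return 1 - a_A2 + a_A2 * a_P2
    raise ValueError("unknown awareness set")

print(perceived_u_P(set()))                               # 1
print(perceived_u_P({"A_A^2"}, a_A2=1))                   # 0: "too good to be true"
print(perceived_u_P({"A_P^2", "A_A^2"}, a_A2=1, a_P2=2))  # 2: the trap sprung
```

The middle line is exactly why the sophisticated agent balks: under $V=\left\{{A}_{A}^{2}\right\}$, her favorite action appears to cost the principal his entire utility.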

**Trap-filtered solution**. Next, we assume that the agent believes that the non-justifiable contract (regarding the novel contract) may result from the principal’s mistake (with probability $1-\rho $). It follows from straightforward algebra that the contract with $V=\left\{{A}_{A}^{2}\right\}$ is a trap-filtered solution when $\rho \le \frac{1}{2}$. When the probability of the principal’s mistake is high, upon receiving a non-justifiable contract, the agent is more inclined to interpret it as a mistake and, consequently, accepts the contract with $V=\left\{{A}_{A}^{2}\right\}$, although it is too good to be true. In other words, the principal sets a trap only when the agent believes that the non-justifiability of the contract is more likely due to the principal’s mistake than to a trap. This coincides with our intuition: in a society where contractual traps are not common, the agent is more inclined to accept non-justifiable contracts. On the principal’s side, the normal principal is (weakly) better off for a smaller ρ, as the trap is easier to implement: the principal may receive a utility of two rather than one (the utility given a larger ρ).
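
The cutoff can be checked directly. Taking the payoffs implicit in this example (the acceptance condition of Note 24 with zero cognition: 2 if the non-justifiable contract is a genuine mistake, the worst case 0 if it is a trap, and an outside option of 1), naive acceptance requires $2(1-\rho )\ge 1$, i.e., $\rho \le 1/2$:

```python
def accepts(rho, u_mistake=2.0, u_trap=0.0, u_outside=1.0):
    """Agent accepts a non-justifiable contract iff her expected payoff
    (trap with prob. rho, principal's mistake with prob. 1 - rho)
    weakly exceeds the outside option."""
    return rho * u_trap + (1 - rho) * u_mistake >= u_outside

print(accepts(0.3))  # True: mistakes are common, the contract is accepted
print(accepts(0.5))  # True: exactly at the cutoff rho = 1/2
print(accepts(0.7))  # False: the trap is too likely
```

The specific payoff values here are assumptions tied to this numerical example; the cutoff logic itself is the general condition in the text.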

**Trap-filtered solution with cognition**. Finally, we introduce cognitive effort. When confronted with two options to choose from (the existing and the new contract), if the agent goes through the cognition stage, she may be able to come up with effective ways to evaluate whether a trap is hidden in the contract. The outcome of the agent’s cognitive effort in this case would eliminate the possibility of a trap, because it forces the principal to fully specify the new contract.

## 5. Concluding Remarks

## Acknowledgments

## References

1. Galanis, S. Unawareness of theorems. *Econ. Theory* **2013**, 52, 41–73.
2. Heifetz, A.; Meier, M.; Schipper, B. Interactive unawareness. *J. Econ. Theory* **2006**, 130, 78–94.
3. Li, J. Information structures with unawareness. *J. Econ. Theory* **2009**, 144, 977–993.
4. Filiz-Ozbay, E. Incorporating unawareness into contract theory. *Games Econ. Behav.* **2012**, 76, 181–194.
5. Ozbay, E. Unawareness and Strategic Announcements in Games with Uncertainty. Working paper, University of Maryland, 2008.
6. Heifetz, A.; Meier, M.; Schipper, B. Dynamic unawareness and rationalizable behavior. Working paper, The Open University of Israel, 2009.
7. Battigalli, P.; Siniscalchi, M. Strong belief and forward induction reasoning. *J. Econ. Theory* **2002**, 106, 356–391.
8. Kohlberg, E.; Mertens, J.F. On the strategic stability of equilibria. *Econometrica* **1986**, 54, 1003–1037.
9. McKelvey, R.; Palfrey, T. Quantal response equilibria for normal form games. *Games Econ. Behav.* **1995**, 10, 6–38.
10. Tirole, J. Cognition and incomplete contracts. *Am. Econ. Rev.* **2009**, 99, 265–294.
11. Von Thadden, E.; Zhao, X. Incentives for unaware agents. *Rev. Econ. Stud.* **2012**, 79, 1151–1174.
12. Zhao, X. Moral hazard with unawareness. *Ration. Soc.* **2008**, 20, 471–496.
13. Feinberg, Y. Games with Unawareness. Working paper, Stanford University, 2010.
14. Fagin, R.; Halpern, J. Belief, awareness, and limited reasoning. *Artif. Intell.* **1988**, 34, 39–76.
15. Modica, S.; Rustichini, A. Awareness and partitional information structures. *Theory Decis.* **1994**, 37, 107–124.
16. Modica, S.; Rustichini, A. Unawareness and partitional information structures. *Games Econ. Behav.* **1999**, 27, 265–298.
17. Dekel, E.; Lipman, B.; Rustichini, A. Standard state-space models preclude unawareness. *Econometrica* **1998**, 66, 159–174.
18. Gabaix, X.; Laibson, D. Shrouded attributes, consumer myopia, and information suppression in competitive markets. *Q. J. Econ.* **2006**, 121, 505–540.
19. Grossman, S.; Hart, O. An analysis of the principal-agent problem. *Econometrica* **1983**, 51, 7–45.
20. Holmstrom, B. Moral hazard and observability. *Bell J. Econ.* **1979**, 10, 74–91.
21. Holmstrom, B.; Milgrom, P. Multitask principal-agent analyses: Incentive contracts, asset ownership, and job design. *J. Law Econ. Organ.* **1991**, 7, 24–52.
22. Mirrlees, J. The theory of moral hazard and unobservable behaviour: Part I. *Rev. Econ. Stud.* **1999**, 66, 3–21.
23. Grossman, S.; Hart, O. The costs and benefits of ownership: A theory of vertical and lateral integration. *J. Polit. Econ.* **1986**, 94, 691–719.
24. Hart, O.; Moore, J. Property rights and the nature of the firm. *J. Polit. Econ.* **1990**, 98, 1119–1158.
25. Aghion, P.; Bolton, P. Contracts as a barrier to entry. *Am. Econ. Rev.* **1987**, 77, 388–401.
26. Chung, K.; Fortnow, L. Loopholes. Working paper, University of Minnesota and Northwestern University, 2007.
27. Spier, K. Incomplete contracts and signalling. *RAND J. Econ.* **1992**, 23, 432–443.
28. Anderlini, L.; Felli, L. Incomplete contracts and complexity costs. *Theory Decis.* **1999**, 46, 23–50.
29. Dye, R. Costly contract contingencies. *Int. Econ. Rev.* **1985**, 26, 233–250.
30. Zhao, X. Strategic Mis-selling and Pre-Contractual Cognition. Mimeo, Hong Kong University of Science and Technology, 2012.
31. Bolton, P.; Faure-Grimaud, A. Satisficing contracts. *Rev. Econ. Stud.* **2010**, 77, 937–971.
32. von Thadden, E.; Zhao, X. Multi-task Agency with Unawareness. Mimeo, Hong Kong University of Science and Technology, 2013.
33. Schwartz, A.; Scott, R. Contract theory and the limits of contract law. *Yale Law J.* **2003**, 113, 541–619.
34. Hayek, F. Rules, Perception and Intelligibility. In *Studies in Philosophy, Politics and Economics*; The University of Chicago Press: Chicago, IL, USA, 1967; pp. 43–65.
35. Vanberg, V. Rational choice vs. program-based behavior: Alternative theoretical approaches and their relevance for the study of institutions. *Ration. Soc.* **2002**, 14, 7–53.
36. Modica, S.; Tallon, J.; Rustichini, A. Unawareness and bankruptcy: A general equilibrium model. *Econ. Theory* **1998**, 12, 259–292.
37. Laffont, J.; Martimort, D. *The Theory of Incentives: The Principal-Agent Model*; Princeton University Press: Princeton, NJ, USA, 2002.
38. Halpern, J.Y.; Rêgo, L.C. Extensive games with possibly unaware players. *Math. Soc. Sci.*, forthcoming, 2012.
39. Rêgo, L.C.; Halpern, J.Y. Generalized solution concepts in games with possibly unaware players. *Int. J. Game Theory* **2012**, 41, 131–155.
40. Camerer, C.; Ho, T.; Chong, J. A cognitive hierarchy theory of one-shot games. *Q. J. Econ.* **2004**, 119, 861–898.
41. Blume, L.; Brandenburger, A.; Dekel, E. Lexicographic probabilities and equilibrium refinements. *Econometrica* **1991**, 59, 81–98.
42. Radner, R. Bounded rationality, indeterminacy, and the theory of the firm. *Econ. J.* **1996**, 106, 1360–1373.
43. Debreu, G. A social equilibrium existence theorem. *Proc. Natl. Acad. Sci. USA* **1952**, 38, 886–893.
44. Glicksberg, I. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. *Proc. Am. Math. Soc.* **1952**, 3, 170–174.
45. Fan, K. Fixed-point and minimax theorems in locally convex topological linear spaces. *Proc. Natl. Acad. Sci. USA* **1952**, 38, 121–126.

^{1.} Researchers have documented experimental evidence that human beings inevitably make mistakes while choosing among multiple options, even if they are fully aware that some options are better than others; see, e.g., [9].

^{2.} It is worth mentioning that, based on our definition of the trap-filtered solution with cognition, the agent does not exert cognitive effort only if she sees a justifiable contract. In contrast, in [10], the agent does not exert cognitive effort only if the principal opens the agent’s eyes. Note also that in his framework, there is common knowledge of the game and rationality. This implies that the equilibrium contract is always justifiable. Nevertheless, cognitive effort still occurs even though justifiability is guaranteed. Please see Section 3 for details.

^{3.} A unified methodology in games is examined in [13].

^{4.} In this way, the games will be finite as well. The existence of a solution (under the appropriate solution concepts) can be easily established following the classical game theory literature.

^{5.} Notably, the general formulation here allows for uncertainty of outcomes. Whatever uncertainty there is regarding the outcome of a contract is assumed to be incorporated into the utility function: the utility should be viewed as the expected utility with respect to a probability (assumed to be commonly known) on the contingencies. Of course, the agent may in general be unaware of some relevant contingencies, but here, we assume that the agent’s unawareness is solely on actions.

^{6.} It is possible to extend our analysis to the case in which the principal is only partially aware, following the approach developed in [12]. Since our focus is on the impact of the agent’s sophistication on the optimal contract design, we exclude the possibility of the principal’s unawareness. Moreover, the situation where the principal is uncertain about the agent’s awareness and, therefore, screens the agent’s awareness is studied by [11] in the single-task optimal contract setting and by [32] in the multi-task linear contract setting.

^{7.} The order of the elements is based on the following rule: the action sets of the principal precede the action sets of the agent, and $\forall i$, ${A}_{i}^{k}$ precedes ${A}_{i}^{l}$ if and only if $k<l$. For example, if $W\cup V=\{{A}_{A}^{2},{A}_{P}^{3},{A}_{P}^{1}\}$, then $\times \left(W\cup V\right)\equiv {A}_{P}^{1}\times {A}_{P}^{3}\times {A}_{A}^{2}$.

^{8.} It is worth mentioning that [10] interprets contract completeness differently. Namely, he argues that a contract is more complete if the agent exerts more cognitive effort before contracting. In contrast, in our paper, a contract is incomplete if it does not specify all the utility-relevant actions.

^{9.} This coincides with the Willistonian or “textualist” approach, which argues that the contract is the only document that the court can use to determine the plain meaning of the contracting parties. In a nutshell, the court enforces only the letter, but not the spirit, of the contract. See [33].

^{10.} An alternative way to model the set of strategy profiles is to define a correspondence, $M:{2}^{S}\mapsto {2}^{S}$, from an announced subset of $S$, denoted by $Y$, to the updated action sets, $M\left(Y\right)$, in the agent’s mind after the principal’s announcement. Note that $M\left(Y\right)={M}_{P}\left(Y\right)\times {M}_{A}\left(Y\right)$ specifies both the principal’s and the agent’s strategy sets. By this formulation, a contract, $\phi =\left({\phi}_{P},{\phi}_{A}\right)$, is an element in $M\left(Y\right)$. This alternative model sounds more general and flexible. However, it is not convenient for modeling, in a natural way, how the principal deviates from his specified actions, ${\phi}_{P}$, in the contract. In fact, this deviation plays an important role in the problem of contractual traps. On the contrary, our modeling framework avoids this difficulty, since the principal can freely choose any actions in the dimensions of which the agent is unaware, whereas he has to fulfill the actions specified in the contract in the dimensions of which the agent is aware.

^{11.} This may not be appropriate in certain scenarios, but modifications are straightforward. For example, if all actions are verifiable, the strategy of the agent can be directly written into the contract and enforced. Since the principal can directly control the agent’s actions, whether the agent is aware or not does not matter.

^{12.} As a convention in principal-agent models, if the IR constraint is satisfied, the agent is assumed to accept the contract [37].

^{14.} For example, for an investor, the worst outcome is zero return, which is usually known by the investor.

^{15.} Note that this alternative scenario has its own issue. Facing a non-justifiable contract, the agent is aware that something may go wrong and, therefore, she knows that her awareness is limited. Thus, it is no longer plausible that the agent still employs the derived worst outcome and uses it to compare with her outside option.

^{16.} Note that in establishing Theorem 2, we adopt a proof technique totally different from that of [5].

^{17.} If this assumption is violated, i.e., ${inf}_{s\in S}{u}_{A}(s)>{\overline{u}}_{A}$, the subgame-perfect solution suffices as the appropriate solution concept, even if the agent is more sophisticated. In this sense, the forward induction step becomes unnecessary. The assumption that the agent knows the worst case is reasonable in several situations, such as the zero-return case in the aforementioned investment example.

^{18.} The interpretation automatically applies to the trap-filtered solution with cognition described below, as it can be seen as a special case in which cognitive effort is infinitely costly.

^{19.} An alternative interpretation is that the “crazy” principal exists with a positive probability, but the agent is initially unaware of this event, together with the event that she is unaware of actions. Upon observing a non-justifiable contract, however, the agent becomes aware of both events. While this interpretation is natural, it entails the agent’s extra unawareness of contingencies. Since we focus on the agent’s unawareness of actions in this paper, we tend to minimize the agent’s bounded rationality in other aspects. Thus, we prefer to adopt the interpretation with the lexicographic probabilistic system.

^{20.} Note that it is innocuous to claim to incorporate cognition here. We abstract away from the detailed process of cognition. Yet, the economically relevant effect of cognition is modeled by the reduced form of the cognition cost function.

^{21.} Note that in the cognition stage, we have implicitly assumed that it is costless to determine whether or not a contract is justifiable. In this sense, the cost of information processing in this regard is omitted. In principle, it is possible that evaluating the details of the contract to verify its justifiability also requires the agent to spend cognitive effort, as demonstrated in [42]. However, determining whether to evaluate each step may then itself require the agent to spend cognitive effort, and determining whether to determine to evaluate each step may again require cognitive effort. This leads to endless iterations and distracts attention from the main issues of this paper.

^{22.} This is different from [10] in that cognitive effort occurs when the contract is not justifiable, whereas in [10], the agent exerts cognitive effort only when the agent’s eyes are not opened. In essence, a non-justifiable contract and non-eye-opening information play the same role in situations where something may go wrong for the agent. However, our formulation allows the possibility of seeing a “too-good-to-be-true” contract that would never occur in [10].

^{23.} Housing options are rather subtle in various schools in Hong Kong and Singapore, where the housing market is extremely expensive. Our assumption ensures that if the principal intends to set up a trap, he can only do so upon introducing the novel contract. If the principal were also allowed to remove the faculty housing secretly from a status quo contract, the trap could appear in all scenarios.

^{24.} Note that this result is quite general, in the sense that it does not depend on the particular cost function, $T(c)$, we employed. Let $T(0)=0$. The agent accepts the contract if ${max}_{c\ge 0}\left\{\rho c+2(1-\rho )-T(c)\right\}\ge 1$, i.e., $\rho {c}^{*}-T({c}^{*})\ge 2\rho -1$, where ${c}^{*}$ is the optimal level of cognition and $\rho {c}^{*}-T({c}^{*})\ge 0$ (otherwise, the agent chooses the zero cognition level); hence, the cutoff value of ρ is greater than 0.5.

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Chen, Y.-J.; Zhao, X. Solution Concepts of Principal-Agent Models with Unawareness of Actions. *Games* **2013**, *4*, 508-531.
https://doi.org/10.3390/g4030508
