The Optimality of Team Contracts
Abstract
1. Introduction
2. The Model
3. Collusion
3.1. Example 1
3.2. Collusion Constraints
3.2.1. Example 1 under Strong Collusion
4. Teamwork
4.1. Example 1 with Teamwork
5. The Optimality of Team Contracts
6. Concluding Remarks
Acknowledgments
Conflicts of Interest
References
- Holmstrom, B.; Milgrom, P. Regulating the trade among agents. J. Inst. Theor. Econ. 1990, 146, 85–105.
- Itoh, H. Coalitions, incentives and risk sharing. J. Econ. Theory 1993, 60, 410–427.
- Demski, J.; Sappington, D. Optimal incentive contracts with multiple agents. J. Econ. Theory 1984, 33, 152–171.
- Demski, J.; Sappington, D.; Spiller, P. Incentive schemes with multiple agents and bankruptcy constraints. J. Econ. Theory 1988, 44, 156–167.
- Mookherjee, D. Optimal incentive schemes with many agents. Rev. Econ. Stud. 1984, 51, 433–446.
- Ma, C.; Moore, J.; Turnbull, S. Stopping agents from “cheating”. J. Econ. Theory 1988, 46, 355–372.
- Tirole, J. Hierarchies and bureaucracies: On the role of collusion in organizations. J. Law Econ. Organ. 1986, 2, 181–214.
- Laffont, J.-J.; Rochet, J.-C. Collusion in organizations. Scand. J. Econ. 1997, 99, 485–495.
- Brusco, S. Implementing action profiles when agents collude. J. Econ. Theory 1997, 73, 395–424.
- Laffont, J.-J.; Martimort, D. Mechanism design with collusion and correlation. Econometrica 2000, 68, 309–342.
- Barlo, M. Essays in Game Theory. Ph.D. Thesis, University of Minnesota, Minneapolis, MN, USA, 2003.
- Felli, L.; Hortala-Vallve, R. Preventing Collusion through Discretion; CEPR Discussion Paper No. DP8302; 2011; Available online: http://ssrn.com/abstract=1794892 (accessed on 14 November 2013).
- Holmstrom, B. Moral hazard in teams. Bell J. Econ. 1982, 13, 324–340.
- Itoh, H. Incentives to help in multi-agent situations. Econometrica 1991, 59, 611–636.
- Ramakrishnan, R.T.S.; Thakor, A.V. Cooperation versus competition in agency. J. Law Econ. Organ. 1991, 7, 248–283.
- Varian, H.R. Monitoring agents with other agents. J. Inst. Theor. Econ. 1990, 146, 153–174.
- Itoh, H. Job design, delegation and cooperation: A principal-agent analysis. Eur. Econ. Rev. 1994, 38, 691–700.
- Baliga, S.; Sjostrom, T. Decentralization and collusion. J. Econ. Theory 1998, 83, 196–232.
- Macho-Stadler, I.; Perez-Castrillo, J.D. Centralized and decentralized contracts in a moral hazard environment. J. Ind. Econ. 1998, 46, 1–20.
- Jelovac, I.; Macho-Stadler, I. Comparing organizational structures in health services. J. Econ. Behav. Organ. 2002, 49, 501–522.
- Hortala-Vallve, R.; Sanchez Villalba, M. Internalizing team production externalities through delegation: The British passenger rail sector as an example. Economica 2010, 77, 785–792.
- Holmstrom, B.; Milgrom, P. Aggregation and linearity in the provision of intertemporal incentives. Econometrica 1987, 55, 303–328.
- Schättler, H.; Sung, J. The first-order approach to the continuous-time principal-agent problem with exponential utility. J. Econ. Theory 1993, 61, 331–371.
- Hellwig, M.; Schmidt, K.M. Discrete-time approximations of the Holmstrom-Milgrom Brownian-motion model of intertemporal incentive provision. Econometrica 2002, 70, 2225–2264.
- Sung, J. Linearity with project selection and controllable diffusion rate in continuous-time principal-agent problems. Rand J. Econ. 1995, 26, 720–743.
- Barlo, M.; Ozdogan, A. Optimality of Linearity with Collusion and Renegotiation; Sabancı University Working Paper ID SU-FASS-2011/0008; Sabancı University: Istanbul, Turkey, 2011.
- Grossman, S.; Hart, O. An analysis of the principal-agent problem. Econometrica 1983, 51, 7–45.
- Fudenberg, D.; Levine, D.K.; Maskin, E. The folk theorem with imperfect public information. Econometrica 1994, 62, 997–1039.
Appendix
A. Example 1
B. Example 2
C. Example 3
D. Example 4
E. Example 5
- 1. Agents jointly control the two moments in unrestricted ways; yet, an interesting case arises when the effort choices of one agent, the sales manager, only increase the mean, while those of the other, the finance manager, only decrease the variance (see the sketches following these notes).
- 3. Some of the significant studies on teamwork include [1,2,13,14,15,16]. These studies indicate that agents must share information unobservable to the principal in order to benefit from teamwork: when only returns, and not chosen effort levels, are contractible, the principal can offer contracts with efficient risk sharing herself.
- 4. In [1], agents are engaged in two tasks by providing inputs to both. A performance measure, which depends on the input profile chosen by the agents combined with activity-specific and possibly correlated error terms, is observed for each activity. The principal pays each agent as a function of both performance measures. That study shows that team contracts are beneficial to the principal, when compared with incentive contracts, under technologically independent production (meaning that the production function of each task depends only on the input of one of the agents) and a sufficiently low correlation coefficient of the error terms. Hence, this result suggests that cooperation may be potentially harmful because of interactions in the production function and/or correlated error terms. In a similar model, [2] shows that when each of the two agents governs only the mean of his own process and his effort choice affects the mean of the other's returns only through the noise term (both of which are stochastically independent), the principal benefits from the use of team contracts. In an extension, the principal can observe an aggregate output level, the distribution of which depends on the efforts of the two agents; the same result is found to hold with identical agents. Other relevant papers include [17,18,19,20,21].
- 5.The finite effort set is assumed to abstract from non-fruitful technicalities and to keep numerical programming simple.
- 6. The pioneering work establishing theoretical justifications for the use of linear contracts obtained from normally distributed returns and exponential utility functions is [22], which was generalized by [23,24]. These studies involve repeated settings with a single agent, and the lack of income effects due to exponential utility functions is employed to obtain the optimality of linearity: among optimal contracts, there is one that is linear in aggregate output. Thus, the situation, given by a complicated repeated agency setting, is as if the agent chooses the mean of a normal distribution only once and the principal is restricted to employ linear sharing rules (see the sketches following these notes). Sung [25] generalizes this result by allowing the single agent to control the variance as well. The authors of [26] consider the multi-agent version of this generalization with instantaneous efficient risk sharing and/or collusion possibilities and prove that the optimality of linearity continues to hold, in turn justifying the analysis of the current study.
- 7. Before and during the accept/reject decisions, we assume that agents cannot communicate with one another. Hence, an agent observes only the contract he is offered and the proposed effort vector. Communication and coordination among agents emerge after the acceptance decisions.
- 8. Not allowing agents the ability to insure each other is rather restrictive, because it amounts to denying rational decision makers the use of the legal system. Indeed, even under such a restriction, an alternative way of obtaining such insurance is as follows: suppose that markets are complete and agents have access to them. Then, there exist portfolios with returns equal to the returns that agents wish to obtain via the insurance contracts. Thus, agents may obtain the insurance they desire by trading these portfolios.
- 9. The interiority of the solution to the labor union's problem is guaranteed, as the current setting satisfies Assumptions 1–3 of [27]. Moreover, we thank an anonymous referee for pointing out an alternative interpretation in which risk sharing can be seen as the equilibrium of an economy where each agent is endowed with a pair determined by the principal's original contract. Agents trade these, resulting in the marginal rates of substitution being equalized across the agents (see the sketches following these notes).
- 10. When the principal is risk averse with a CARA given by , the optimal contract involves the same compensation scheme and the same effort profile as the one with a risk-neutral principal, but it delivers a CE of to the risk-averse principal.
- 11.To be precise, the implies that case calls for and for . Finally, due to , , resulting in the constraint set being empty.
- 12. The side-contract, , is not incentive-compatible for the second agent, because is strictly lower than . Moreover, in this example, does not hurt the principal. However, a risk-neutral principal (or a risk-averse principal with a sufficiently low CARA) may become strictly worse off by side-contracting when it involves an effort profile that results in a lower mean. For agents to benefit from such an arrangement, they should have sufficiently high CARA coefficients, and the effort profile agreed upon must result in a variance low enough to compensate for the decrease in the total surplus due to the lower mean (see the sketches following these notes). To see this, reconsider Example 1 by changing only the mean associated with from 10 to : the optimal incentive contract remains the same and delivers a return of to the risk-neutral principal, and the same side-contract, , is still strictly beneficial to both of the agents and brings about .
- 13. Our formulation is consistent with a model in which the principal hires agents at the beginning of a project with an infinite time horizon, using stationary contracts. Agents would interact in an infinitely repeated manner. The stage game would involve agents individually choosing efforts at the beginning of the period and getting paid at the end of the period according to the contract, which would be based on the project's daily and normally distributed returns. Agents would have discounted CARA utilities, and one would not need the assumption that agents observe others' effort choices; the requirement that these choices become observable to others in the next period would be enough. (Due to the Folk Theorem of [28], even more elaborate stage games in which there is imperfect monitoring of others' previous choices can be incorporated.) There may be inefficient subgame perfect equilibrium payoffs, yet the efficient frontier can be obtained with subgame perfection; this frontier is the one that we are interested in, and it can be sustained in our one-shot model with the use of binding side-contracts among agents. In such a setting, it is not clear whether or not an agent, a member of the labor union aiming to obtain an efficient payoff, would ever blow the whistle, even if the stage game were to be one with contracts and structures as in [6]. This is because, now, he could be credibly punished for being a “snitch”.
- 14. The current analysis is appropriate when agents are engaged in a repeated interaction facing a sequence of different short-run principals who cannot observe the history of agents' actions. As an example, consider a house-owner who wants a renovation job and hires two contractors. The value of the renovated house depends on the effort levels of the contractors, who are compensated based on the sale value of the house. The principal, while not observing the effort choices of the contractors, knows that they can perfectly observe each other's effort choices and have means, not available to the short-run principal, to verify these choices. If there are strictly positive chances that the two contractors will work together on similar renovation jobs in the future, it is innocuous to assume that they can write binding side-contracts in the corresponding one-shot settings. Moreover, in such dynamic situations, an agent would require a considerable reward to “blow the whistle”.
- 15.When and are to be implemented, the set of constraints is empty, i.e., they cannot be sustained with strong collusion: for , , due to , considering deviations from to , from , considering deviations from to ; for , , due to , considering a deviation from to , and from , considering a deviation from to .
- 16. When the principal is risk-averse with a CARA of , the optimal contract involves the same compensation scheme and the same effort profile as the one given for the risk-neutral principal, but it delivers a return of to the risk-averse principal. Recall that the optimal contract with , , , and constraints is given by and with a return of to the risk-averse principal with a CARA of .
- 17. If the principal were risk averse with a CARA of , the optimal team contract, , would involve and deliver the principal a CE of , which is strictly higher than that attained by the optimal strong collusion-proof contract, , and higher than that provided by , the optimal incentive contract. Hence, the conclusions of Example 1 can be sustained with a risk-averse principal.
- 18.We thank an anonymous referee for suggesting the use of a modified version of our Example 1 towards this regard: consider Example 1 with the sole modification of increasing the second agent’s cost of high effort from to one. Then, the contract, , with is a team contract (so a collusion-proof contract), yet does not hold. In essence, agent 1 would obtain strictly more returns from agent 2 choosing , yet these do not suffice to cover the loss of agent 2.
- 19.Then, there exists for whom does not hold, i.e., i contemplates a joint deviation. Yet, this joint deviation does not necessarily lead to the dismissal of the optimal team contract: the situation may be as in the example of footnote 18, where implementing the joint deviation that agent 1 contemplates is too costly. Thus, even though the optimal incentive contract may not be immune against strong collusion, it may still be the optimal (team) collusion-proof contract.
- 20.When is to be implemented with incentive contracts, the ensuring that agent 1 chooses instead of is and holds only if . However, due to , ; so, , an impossibility. For, , the ensuring agent 1 chooses instead of is and cannot be satisfied for any . Finally, for , the making sure that agent 2 chooses instead of is , which is satisfied for every . However, due to , , and thus, the guaranteeing that agent 1 does not choose and over (given by and , respectively) cannot be obtained.
- 21.When the principal desires under strong collusion, the guaranteeing agent 2 to prefer this to is ; so, . However, due to , , and is not compatible with , ensuring agent 1 will prefer to , which is given by .
- 22. , and cannot be obtained with incentive contracts. First, note that due to , . For , the (IC) constraints can only be satisfied for , which is not feasible, because cannot exceed one. When is to be sustained, the that guarantees that agent 1 chooses over is , and it cannot be satisfied for any . Finally, for , the implying that agent 2 prefers to is , which requires to be greater than or equal to . However, then, all the constraints (which are given by and ) cannot be satisfied for .
- 23. The reason is the very same as the one given in footnote 21.
- 24.In this case, and cannot be implemented. For , implies , and cannot be satisfied for any . Considering , is , and holds only if . Additionally, is and holds whenever . Thus, due to by , they cannot be satisfied simultaneously.
- 25. , and cannot be obtained with incentive contracts. First, note that due to , . For , the incentive constraint of the first player that prevents deviations from to cannot be satisfied, because it takes the form . For , the incentive constraint of the second agent preventing his deviation from to cannot be satisfied, as it takes the following form: . Furthermore, cannot be obtained with incentive contracts: the incentive constraint of the second agent preventing him from choosing instead of () and the incentive constraint of the first player ensuring that he chooses instead of () are not compatible.
- 26. , and cannot be obtained with team contracts. Due to , . Then, considering , we observe that the team constraint preventing deviations to takes the following form and cannot be satisfied: . Next, for , the team constraint preventing deviations to cannot hold, as it is given by . Finally, cannot be obtained with team contracts either: the team constraint preventing deviations to cannot be satisfied and is given as follows: .
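The following sketches illustrate some of the notes above. All symbols in them are ours, introduced only for illustration, and are not taken from the paper. First, for note 1, a minimal way to write the case in which the sales manager's effort moves only the mean and the finance manager's effort moves only the variance of the normally distributed return:

```latex
% Hypothetical notation: e_1 is the sales manager's effort, e_2 the finance
% manager's effort, and x the project's normally distributed return.
x \sim \mathcal{N}\bigl(\mu(e_1),\,\sigma^{2}(e_2)\bigr),
\qquad \mu \text{ increasing in } e_1,
\qquad \sigma^{2} \text{ decreasing in } e_2 .
```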
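For note 6, a standard back-of-the-envelope computation, assuming CARA preferences with coefficient r, a linear contract w = α + βx and normally distributed returns (our notation, not the paper's): the agent's certainty equivalent is linear in the pay parameters net of a quadratic risk premium, which is the absence of income effects that the linearity results exploit.

```latex
% CARA agent with coefficient r, pay w = \alpha + \beta x, effort cost c(e),
% and x ~ N(\mu(e), \sigma^2(e)); the moment generating function of the
% normal distribution gives
\mathbb{E}\!\left[-e^{-r(w - c(e))}\right]
  = -\exp\!\left\{-r\Bigl(\alpha + \beta\mu(e)
      - \tfrac{r}{2}\beta^{2}\sigma^{2}(e) - c(e)\Bigr)\right\},
\qquad
\mathrm{CE} = \alpha + \beta\mu(e) - \tfrac{r}{2}\beta^{2}\sigma^{2}(e) - c(e).
```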
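For note 9, the equalization of marginal rates of substitution across agents is the classical efficiency (Borch) condition; under CARA utilities it implies that any aggregate payment is shared linearly, with slopes proportional to risk tolerances. This is offered only as an illustration of the interpretation in the note, with our own symbols:

```latex
% Efficient risk sharing of an aggregate payment W among CARA agents with
% coefficients r_i: ratios of marginal utilities are constant across states,
% which forces linear sharing rules.
\frac{u_i'\bigl(s_i(W)\bigr)}{u_j'\bigl(s_j(W)\bigr)} = \lambda_{ij}
\quad \text{for all } W
\;\Longrightarrow\;
s_i(W) = \frac{1/r_i}{\sum_{j} 1/r_j}\,W + k_i,
\qquad \sum_i k_i = 0 .
```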
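For note 12, a rough accounting of when a side-contracted switch of effort profiles benefits the agents, assuming linear pay w_i = α_i + β_i x, CARA coefficients r_i and deterministic side transfers (again, our notation): the drop in risk premia must outweigh the lost mean pay and the change in effort costs, since deterministic transfers only redistribute certainty equivalents.

```latex
% Change in agent i's certainty equivalent when the implemented effort
% profile moves from e to \tilde{e}, with w_i = \alpha_i + \beta_i x and
% x ~ N(\mu(e), \sigma^2(e)):
\Delta \mathrm{CE}_i
  = \beta_i\bigl[\mu(\tilde{e}) - \mu(e)\bigr]
  - \tfrac{r_i}{2}\,\beta_i^{2}\bigl[\sigma^{2}(\tilde{e}) - \sigma^{2}(e)\bigr]
  - \bigl[c_i(\tilde{e}_i) - c_i(e_i)\bigr].
% With zero-sum deterministic side transfers, all agents can be made strictly
% better off if and only if \sum_i \Delta \mathrm{CE}_i > 0.
```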
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).