The Optimality of Team Contracts

This paper analyzes optimal contracts in a linear hidden-action model in which returns are normally distributed with a mean and variance governed jointly by two agents who have negative exponential utilities. The agents can observe and verify each other's effort levels and can draft enforceable side-contracts on effort levels and realized returns. Standard constraints, which result in incentive contracts, fail to ensure implementability, so we also examine centralized collusion-proof contracts and decentralized team contracts. We prove that the principal may restrict attention to team contracts whenever returns from the project satisfy a mild monotonicity condition.


Introduction
The design of contracts in situations involving strategic interaction among employees has taught us that teams (decentralization) outperform standard incentive contracts when employees' performances do not affect the riskiness and are separable with either no or weak interactions, or when employees are restricted to being identical [1,2]. In an agency environment where these restrictions are weakened and the riskiness is determined endogenously, we show that team arrangements may be surpassed by incentive contracts. However, adopting particular formulations of team and collusion contracts, we prove that team contracts beat all other implementable methods of contracting whenever returns from the project satisfy a monotonicity requirement. Moreover, we show that whenever teams are outperformed by incentive contracts, optimal incentive contracts are prone to a strong form of collusion.
A novel feature of our setting is the endogenous determination of the riskiness of the endeavor. In general, the principal's (i.e., shareholders' or owners') desired attitude towards risk needs to be reflected in the executive contracts. Then, the following aspects arise: (1) whether or not agents (executives) can employ state-contingent binding side-contracts among themselves in order to alleviate the riskiness of their contracts; and (2) whether or not they are better informed than the principal about the managerial details. The first of these holds because, in any free society, agents cannot be prevented from access to the judicial system. The second holds in general, as executives are often provided with the capacity and responsibility to exploit information technology to collect information about other employees' actions. Then, they may employ enforceable side-contracts based on their joint choices. This enables them to coordinate their actions and to insure one another. Therefore, this paper analyzes the construction of optimal incentives when the agents can write enforceable side-contracts based on effort levels and realized outcomes, with the additional feature that the riskiness of returns is endogenously determined.
In our linear hidden-action framework, the returns of the organization are governed by a normal distribution with a mean and variance that depend on the effort profile the two agents choose. 1 All parties have negative exponential utility functions with given coefficients of absolute risk aversion (CARA, henceforth), and the principal may be risk neutral. Agents can observe and verify each other's effort choices and exploit all feasible collusion opportunities via enforceable side-contracts contingent on effort levels and realized outcomes.
First, we establish that incentive contracts, i.e., contracts that are individually rational, incentive compatible and involve efficient risk sharing, are not necessarily implementable: agents may jointly deviate to another effort profile and a feasible redistribution of their compensations, obtaining strictly higher payoffs while making the principal strictly worse off. Therefore, we concentrate on optimal contracts free from collusion. 2 A contract is collusion-proof if it is in the core of the strategic interaction it induces, i.e., the principal's proposed effort profile and state-contingent compensation scheme must be such that there is no non-empty set of agents and another feasible and participatory side-contract making each of these agents strictly better off. After dealing with these two types of centralized contracts, we examine decentralized team contracts, which can be interpreted as outsourcing or subcontracting, as well. Indeed, agents' side-contracting abilities may be beneficial for the principal. This is because, with teamwork, agents allocate the total share and coordinate their choices in order to maximize the sum of their expected utilities, so their voluntary coordination in effort choices and efficient risk allocation are ensured without the need for incentive compatibility constraints.
We prove that team contracts provide the principal with the highest returns among implementable contracts whenever returns are monotone and implementing the best effort profile is optimal with incentive contracts. Returns are said to be monotone if the mean is increasing and the variance is decreasing, separately, in the effort levels of both of the agents. The best effort profile is the one that induces the highest mean and the lowest variance; it is defined independently of costs and exists whenever returns are monotone. Under these conditions, the principal may restrict attention to decentralization, even when riskiness is determined endogenously by employees who may exploit all collusion opportunities. Moreover, whenever team contracts are strictly outperformed by incentive contracts (e.g., our example in Appendix E), we show that the associated optimal incentive contracts are not immune to strong collusion, a stronger notion of collusion under which strongly collusion-proof contracts are contained in the set of incentive contracts, and every strongly collusion-proof contract is both a team contract and a collusion-proof contract. In order to reach these conclusions, we employ our finding that fully characterizes when incentive contracts are strongly collusion-proof: incentive contracts are strongly collusion-proof provided that returns are monotone and implementing the best effort profile is optimal with incentive contracts. These conditions are minimal, as examples in which the optimal incentive contract is not immune to strong collusion can be identified whenever any one of them fails.
Earlier literature shows that the sufficient conditions under which the principal prefers team contracts to incentive contracts involve separable production functions with either no or weak interactions, or the limitation to identical agents when a richer set of interactions is allowed. 4 The current study, using a monotonicity condition, extends the optimality of team contracts, as this result holds in environments where agents have different abilities and attitudes towards risk and jointly determine both the mean and the riskiness in non-restricted ways.
Section 2 presents the model. Section 3 shows that standard constraints fail to eliminate collusion and presents two formulations of collusion constraints. In Section 4, we formulate the principal's problem with teamwork, and in the subsequent section, we present our results. Section 6 concludes.

4
In [1], agents are engaged in two tasks by providing inputs to both. A performance measure is observed for each activity that depends on the input profile chosen by agents combined with activity-specific and possibly correlated error terms. The principal pays each agent as a function of both performance measures. That study shows that team contracts are beneficial to the principal, when compared with incentive contracts, under technologically independent production (meaning that the production function of each task depends only on the input of one of the agents) and a sufficiently low correlation coefficient of the error terms. Hence, this result suggests that cooperation may be potentially harmful because of interactions in the production function and/or correlated error terms. In a similar model, [2] shows that when each of the two agents governs only the mean of his own process and his effort choice affects the mean of the other's returns only through the noise term (both of which are stochastically independent), the principal benefits from the use of team contracts. In an extension, the principal can observe an aggregate output level, the distribution of which depends on the efforts of the two agents. It is found that the same result holds with identical agents. Other relevant papers include [17-21].

The Model
Ours is a linear two-agent single-task hidden-action model. The principal possesses an asset that delivers observable and verifiable returns drawn from a normal distribution, whose mean and variance are determined by employees' effort choices that the principal cannot observe. Hence, contracts cannot depend on agents' efforts. The return-contingent contracts that the principal offers are all observable and verifiable by the agents employed. We assume that each agent can observe and verify the other's efforts. Finally, everyone is assumed to have access to capital markets and not to be wealth-constrained.
We assume that E_i, the set of effort levels of agent i = 1, 2, is a finite and ordered set. 5 The asset's effort-contingent returns x ∈ ℝ are distributed with F(x | e) such that, for all e = (e_1, e_2) ∈ E ≡ E_1 × E_2, F(· | e) is the normal cumulative distribution with mean μ(e) and variance σ²(e). All parties have exponential utilities with the following CARA coefficients: R for the principal and r_i for agent i = 1, 2, where R ≥ 0 and r_1, r_2 > 0. Indeed, our formulation includes the case of a risk-neutral principal. Agent i's cost of effort is in terms of returns and is denoted by c_i(e_i), e_i ∈ E_i. Each agent has an outside employment opportunity resulting in a reserve certainty equivalent, W_i, i = 1, 2. Given a contract (S_i(x))_{i=1,2; x∈ℝ} (where S_i(x) is the amount of returns given to agent i in state x), which makes both agents exert the effort profile e ∈ E, the expected utilities of the principal and agent i = 1, 2 are

Eu_P(S | e) = E[−exp(−R(x − S(x)))] and Eu_i(S_i | e) = E[−exp(−r_i(S_i(x) − c_i(e_i)))],

where S(x) = S_1(x) + S_2(x). Attention is restricted to feasible linear contracts of the form S_i(x) = λ_i x + ρ_i, where λ_i ∈ ℝ_+ are such that λ_1 + λ_2 ∈ [0, 1], ρ_i ∈ ℝ and e_i ∈ E_i, i = 1, 2. 6 These restrictions entail feasibility by making sure that the principal's asset cannot be inflated and that the agents' efforts are feasible. The certainty equivalent (CE, henceforth) of agent i = 1, 2 is

CE_i(S_i | e) = λ_i μ(e) + ρ_i − c_i(e_i) − (r_i/2) λ_i² σ²(e).

Similarly, the CE of the principal is

CE_P(S | e) = (1 − λ_1 − λ_2) μ(e) − (ρ_1 + ρ_2) − (R/2) (1 − λ_1 − λ_2)² σ²(e).

5 The finite effort set is assumed to abstract from non-fruitful technicalities and to keep numerical programming simple.
6 The pioneering work establishing theoretical justifications for the use of linear contracts obtained from normally distributed returns and exponential utility functions is [22], which was generalized by [23,24]. These involve repeated settings with a single agent, and the lack of income effects due to exponential utility functions is employed to obtain the optimality of linearity: among optimal contracts, there is one that is linear in aggregate output. Thus, the situation, given by a complicated repeated agency setting, is as if the agent chooses the mean of a normal distribution only once, and the principal is restricted to employ linear sharing rules. Sung [25] generalizes this result by allowing the single agent to control the variance, as well. The authors of [26] consider the multi-agent version of this generalization with instantaneous efficient risk sharing and/or collusion possibilities and prove that the optimality of linearity continues to hold, in turn justifying the analysis of the current study.
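Under CARA utilities, normally distributed returns and linear shares, the CE expressions above are direct to evaluate. The following minimal sketch computes them; all parameter values in the usage example are illustrative assumptions, not figures from the paper.

```python
def ce_agent(lam, rho, mu, var, r, cost):
    # CE_i = lam*mu + rho - c_i(e_i) - (r_i/2)*lam^2*var: the agent's share of
    # the mean, plus the fixed transfer, minus effort cost and the CARA risk
    # premium on the normally distributed return.
    return lam * mu + rho - cost - 0.5 * r * lam**2 * var

def ce_principal(lam1, lam2, rho1, rho2, mu, var, R):
    # The principal keeps the residual share (1 - lam1 - lam2) and pays the
    # fixed transfers; R = 0 recovers the risk-neutral case.
    keep = 1.0 - lam1 - lam2
    return keep * mu - rho1 - rho2 - 0.5 * R * keep**2 * var
```

For instance, with λ = 0.5, ρ = 1, μ = 10, σ² = 4, r = 2 and an effort cost of 0.5 (all hypothetical), the agent's CE is 4.5.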
In this setting, the individual rationality constraint of agent i = 1, 2, who has a reserve CE of W_i, is

CE_i(S_i | e) ≥ W_i. (IR_i)

Similarly, the incentive compatibility constraint of agent i = 1, 2 is

λ_i (μ(e) − μ(e′_i, e_−i)) − (r_i/2) λ_i² (σ²(e) − σ²(e′_i, e_−i)) ≥ c_i(e_i) − c_i(e′_i), for all e′_i ∈ E_i. (IC_i)

The principal's use of centralized contracts involves dealing separately with each agent, and this brings about an interesting strategic interaction among the agents. We refer to this as the labor union's problem, and the particular timing employed in its formulation is as follows: first, the principal offers a contract to each agent separately, while letting them know of the effort profile she proposes; then, agents accept or reject separately; and, finally, the labor union is formed by the accepting agents. 7 Inside the labor union, agents' contracts are revealed to one another, and agents are involved in a utilitarian bargaining with some predetermined bargaining powers that are known by the agents, but not necessarily by the principal. The consideration of voluntary participation in the labor union entails the natural requirement that arrangements emerging in the labor union cannot hurt any agent when compared with the principal's offer (the participation constraint). This is different from individual rationality, because it makes sure that an agent who has already accepted the principal's offer voluntarily participates in the arrangements in the labor union.
Since returns from the asset and the contracts the principal offers are all observable and verifiable, it is natural to consider efficient risk sharing in the labor union, resulting in contracts with better consumption smoothing. 8 Formulating efficient risk sharing among the agents via a utilitarian bargaining game delivers the following: for any proposed effort profile e ∈ E and compensations S_i : ℝ → ℝ, given bargaining weights θ_i ∈ (0, 1) with θ_1 + θ_2 = 1, the labor union solves

max_{y_1, y_2} θ_1 Eu_1(y_1 | e) + θ_2 Eu_2(y_2 | e)

subject to: (1) the feasibility of the side-contracting scheme y_i : ℝ → ℝ, i.e., y_1(x) + y_2(x) = S(x) for all x ∈ ℝ (where y_i(x) denotes the amount of returns given to agent i in state x under side-contract y_i); and (2) the participation constraint, i.e., Eu_i(y_i | e) ≥ Eu_i(S_i | e) for i = 1, 2. Letting the objective function of the labor union's problem be denoted by Λ_{S,e,θ}, one can show that, in an interior solution, the marginal rates of substitution of the agents between any two states must be equal to each other. That is, the side-contracts y_i : ℝ → ℝ, i = 1, 2, have to be such that (∂Λ_{S,e,θ}/∂y_1(x)) / (∂Λ_{S,e,θ}/∂y_1(x′)) = (∂Λ_{S,e,θ}/∂y_2(x)) / (∂Λ_{S,e,θ}/∂y_2(x′)) for any two states x, x′ ∈ ℝ. 9 Consequently, agents' efficient risk sharing is captured by the substitution compatibility constraint, which takes the following simple form, due to the use of CARA utilities, the normal distribution and linear contracts:

r_1 λ_1 = r_2 λ_2. (SC)

Definition 1. A profile of centralized feasible contracts (S_i, e_i)_{i=1,2} constitutes a vector of incentive contracts, if and only if (IR_i), (IC_i) and (SC), for all i = 1, 2, are satisfied.

7 Before and during the accept/reject decisions, we assume that agents cannot communicate with one another. Hence, an agent observes only the contract he is offered and the proposed effort vector. Communication and coordination among agents emerge after the acceptance decisions.
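Because the fixed transfers ρ_i make certainty equivalents transferable between the agents, the efficient split of a total linear share simply minimizes the sum of the two risk premia, which is what (SC) expresses. The numerical check below verifies this; the CARA coefficients, total share and variance are illustrative assumptions, not the paper's values.

```python
# Check that the risk-premium-minimizing split of a total share lam_T
# satisfies (SC): r1*lam1 = r2*lam2, i.e., shares proportional to the
# agents' risk tolerances 1/r_i. All parameters are hypothetical.
r1, r2 = 2.0, 3.0      # agents' CARA coefficients (assumed)
lam_T, var = 0.6, 4.0  # total share and return variance (assumed)

def total_risk_premium(lam1):
    lam2 = lam_T - lam1
    return 0.5 * r1 * lam1**2 * var + 0.5 * r2 * lam2**2 * var

# fine grid search over the split of lam_T between the two agents
best = min((k / 10000 * lam_T for k in range(10001)), key=total_risk_premium)

# closed form implied by (SC): lam1 = lam_T * (1/r1) / (1/r1 + 1/r2)
closed = lam_T * (1.0 / r1) / (1.0 / r1 + 1.0 / r2)
```

With the parameters above, both the grid search and the closed form give λ_1 = 0.36, so r_1 λ_1 = r_2 λ_2 = 0.72.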
The principal's problem in finding optimal incentive contracts is to maximize Equation (1) by choosing (S_i, e_i)_{i=1,2} subject to (IR_i), (IC_i) and (SC), for all i = 1, 2.
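With a finite effort set, this problem can be solved by direct enumeration. The sketch below searches a grid of shares, imposes (SC) and (IC), and sets the ρ_i so that each (IR_i) binds; every primitive (means, variances, costs, CARA coefficients, reserve CEs) is a hypothetical stand-in, not a value from the paper's Table 1.

```python
from itertools import product

# Hypothetical primitives (NOT the paper's Table 1 values)
E = ("L", "H")
mu   = {("L","L"): 8.0, ("L","H"): 8.5, ("H","L"): 9.5, ("H","H"): 10.0}
var  = {("L","L"): 5.0, ("L","H"): 2.0, ("H","L"): 4.0, ("H","H"): 1.5}
cost = {1: {"L": 0.0, "H": 1.0}, 2: {"L": 0.0, "H": 0.35}}
r = {1: 2.0, 2: 2.0}   # agents' CARA coefficients
R = 0.0                # risk-neutral principal
W = {1: 1.0, 2: 0.5}   # reserve certainty equivalents

def ce(i, lam, e):
    # agent i's CE gross of the fixed transfer rho_i (rho cancels in (IC))
    return lam * mu[e] - cost[i][e[i - 1]] - 0.5 * r[i] * lam**2 * var[e]

def optimal_incentive_contract(grid=21):
    best = None
    shares = [k / (grid - 1) for k in range(grid)]
    for e in product(E, E):
        for l1 in shares:
            l2 = r[1] * l1 / r[2]          # (SC): r1*lam1 = r2*lam2
            if l1 + l2 > 1.0:
                continue
            ic1 = all(ce(1, l1, e) >= ce(1, l1, (d, e[1])) for d in E)  # (IC_1)
            ic2 = all(ce(2, l2, e) >= ce(2, l2, (e[0], d)) for d in E)  # (IC_2)
            if not (ic1 and ic2):
                continue
            rho1 = W[1] - ce(1, l1, e)     # (IR_1) binds at the optimum
            rho2 = W[2] - ce(2, l2, e)     # (IR_2) binds at the optimum
            keep = 1.0 - l1 - l2
            ce_p = keep * mu[e] - rho1 - rho2 - 0.5 * R * keep**2 * var[e]
            if best is None or ce_p > best[0]:
                best = (ce_p, e, (l1, l2), (rho1, rho2))
    return best
```

The grid over shares is coarse by design; it illustrates the structure of the program rather than the paper's exact solutions.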
It should be emphasized that a relevant concern, namely efficient risk sharing being contingent on the effort profiles, does not arise in our setting. With more general utility functions and distributions of returns, efficient risk sharing arrangements may have to be contingent on effort profiles. Each possible deviation from the principal's proposed effort profile would then induce a different labor union's problem and, hence, different efficient risk sharing conditions, and the principal's problem of finding the optimal contracts would have to take these intriguing aspects into account. However, we do not encounter such problems: with CARA utilities (with separable effort costs), linear contracts and normally distributed returns, the efficient risk sharing condition simplifies to a ratio of CARA coefficients. The other point is that agents engage in side trading after effort decisions have been made. Thus, even if they are fully insured, they cannot shirk from the previously made effort choices.

Collusion
Before going into the formal analysis, we present a numerical example displaying that an optimal incentive contract may be susceptible to collusion.

Example 1
Let E_i = {e_L, e_H}, and let returns x ∈ ℝ be normally distributed with mean μ(e) and variance σ²(e) as given in Table 1. The private costs of the agents' effort choices are denoted c_i(e_i), and the principal is risk-neutral, i.e., R = 0. The reserve CE figures of the agents are W_1 = 1 and W_2 = 0.5. In this example, the effort choices of the first agent affect only the mean, and those of the second agent only the variance. Another interesting feature is that the principal desires high effort from the second agent only in order to decrease the risk faced by the first agent. When an optimal incentive contract is considered, the (IC_i) and (SC) constraints take the forms stated in Section 2. Consequently, Table 2 presents the optimal incentive contracts and the corresponding CE figures of the principal when a given effort profile, e ∈ E, is to be implemented subject to (IR_i), (IC_i) and (SC), i = 1, 2. Thus, the optimal incentive contract, (S*_i)_{i=1,2}, involving the implementation of the effort profile e = (e_H, e_L), is given by (λ*_1, λ*_2; ρ*_1, ρ*_2) = (0.20, 0.20; 0.40, −1.10) and delivers the principal a return of 6.70. 10 It should be mentioned that when the effort profile (e_L, e_H) is to be implemented, the incentive compatibility constraints of the first and second agents render the set of constraints empty. 11

9 The interiority of the solution to the labor union's problem is guaranteed, as the current setting satisfies Assumptions 1-3 of [27]. Moreover, we thank an anonymous referee for pointing out an alternative interpretation where risk sharing can be seen as the equilibrium of an economy in which each agent is endowed with a (ρ_i, λ_i) pair determined by the principal's original contract; the agents' trading results in the marginal rates of substitution being equalized across the agents.
However, S* is not immune to collusion; hence, it is not implementable. There is a feasible side-contract contingent on the agents' effort choices making both strictly better off. Consider S′, involving (e_H, e_H) and (λ′_1, λ′_2) = (0.20, 0.20), with CE_P(S′) = 6.70 = CE_P(S*). Agent 1 identifies a collusion opportunity (a coordinated deviation) by agreeing to make a side transfer to agent 2 in return for her high effort choice, resulting in a lower variance of output; this, in turn, mitigates the amount of risk to which the principal desires the agents to be exposed. Therefore, side-contracting agents are able to sustain (e_H, e_H), even though the risk-neutral principal finds it costly to make the second agent exert high effort. Such side-contracting on effort levels enables non-incentive-compatible (yet participatory) arrangements that are beneficial for the agents. 12

10 When the principal is risk-averse with a CARA of 1/2, the optimal contract involves the same compensation scheme and the same effort profile as the one with a risk-neutral principal, (λ*_1, λ*_2; ρ*_1, ρ*_2) = (0.20, 0.20; 0.40, −1.10); but it delivers a CE of 6.52 to the risk-averse principal.

11 To be precise, in that case (IC_1) calls for λ_1 ≤ 0.20 and (IC_2) for λ_2 ≥ √0.07 ≈ 0.26458. Finally, due to (SC), λ_1 = λ_2, resulting in the constraint set being empty.

Collusion Constraints
The strategic interaction emerging among the agents is nurtured by their ability to share risk efficiently and to observe and verify each other's effort choices. Indeed, [3,5] point out that there can be many Bayesian-Nash equilibria when this interaction is modeled with a Bayesian non-cooperative game and the question of implementation is analyzed by the construction of a non-cooperative message game à la Maskin. It turns out that these equilibria are not payoff-equivalent for the principal. The authors of [6] attack this problem by constructing a specific non-cooperative game-theoretic structure featuring the use of "whistle blowers", agents who are to be rewarded if they provide verification of other agents' deviations. Hence, it proposes "a solution" in which the principal does not incur any additional costs.
Our formulation, resulting in "another solution", involves the use of cooperative game-theoretic tools in the analysis of the interaction among agents. Indeed, our solution method is based on the core: with two agents, it simplifies to requiring that the principal offer individually rational contracts from which the agents cannot both strictly benefit by jointly deviating to a feasible side-contract and effort profile (via the use of binding side-contracts utilizing their ability to observe and verify each other's effort choices). Because the agents can write binding contracts based on effort levels, as well as side trades (for efficient risk sharing), the solution of the labor union's problem must be a strong Nash equilibrium of the corresponding two-player strategic interaction. Due to this formulation, we do not need to consider enriched classes of principal's offers or particular (static or dynamic) non-cooperative interaction structures to eliminate collusion or unreasonable (Bayesian-Nash) equilibria. This, in turn, strengthens our method, since it allows us to avoid specific structural and informational assumptions.
12 The side-contract, S′, is not incentive-compatible for the second agent, because λ′_2 = 0.20 is strictly lower than √0.07 ≈ 0.264575. Moreover, in this example, S′ does not hurt the principal. However, a risk-neutral principal (or a risk-averse principal with a sufficiently low CARA) may become strictly worse off under side-contracting when it involves an effort profile that results in a lower mean. In order for agents to benefit from such an arrangement, they should have sufficiently high CARA coefficients, and the effort profile agreed upon must result in a variance low enough to compensate for the decrease in the total surplus due to the lower mean. To see this, reconsider Example 1 by changing only the mean associated with (e_H, e_H) from 10 to 9.98: the optimal incentive contract remains the same and delivers a return of 6.70 to the risk-neutral principal, and the same side-contract, S′, is still strictly beneficial to both of the agents, while it brings about CE_P(S′) = 6.688 < 6.70 = CE_P(S*).
Collusion-proofness with two agents restricts the principal as follows: her offer (S_i, e_i)_{i=1,2} is collusion-proof if there is no feasible side-contracting arrangement (S′_i, e′_i)_{i=1,2} (i.e., S′_1(x) + S′_2(x) ≤ S(x) for all x ∈ ℝ and e′ ∈ E) such that the expected utility of every agent under (S′_i, e′_i)_{i=1,2} strictly exceeds that under (S_i, e_i)_{i=1,2} (i.e., Eu_i(S′_i | e′) > Eu_i(S_i | e) for all i = 1, 2). The critical point is that the individually rational contract the principal offers must be one that cannot be improved upon in the labor union by joint effort choices and more efficient risk sharing. Therefore, it must be on the efficiency frontier of the agents' utilitarian bargaining game. 14

Because u_i, i = 1, 2, is twice differentiable and satisfies Assumptions 1-3 of [27] (resulting in the interiority of the solution), the use of linear contracts, normally distributed returns and CARA utilities implies that (S_i, e_i)_{i=1,2}, a centralized feasible and linear contract, satisfies the collusion-proofness constraint if and only if: (1) no joint deviation to some ẽ ∈ E together with a feasible redistribution makes both agents, i = 1, 2, strictly better off; and (2) there exists a bargaining weight vector θ ∈ {θ′ : θ′_i ∈ (0, 1) for all i, and θ′_1 + θ′_2 = 1} such that θ_1 u′_1(S_1(x) − c_1(e_1)) = θ_2 u′_2(S_2(x) − c_2(e_2)) for all x ∈ ℝ. The latter requirement is nothing but the general form of substitution compatibility evaluated at the effort profile of the principal's offer.

Definition 2. A profile of centralized feasible contracts (S_i, e_i)_{i=1,2} = (λ_i, ρ_i, e_i)_{i=1,2} is said to be collusion-proof, if and only if the following conditions hold: (1) (IR_i) for all i = 1, 2; (2) (SC); and (3) no feasible side-contracting arrangement (S′_i, e′_i)_{i=1,2} makes both agents strictly better off.

Instead of directly dealing with the complex comparison of incentive contracts with collusion-proof contracts and team contracts, clear-cut answers emerge with a stronger notion of collusion: the principal's offer must be such that no agent should even start "contemplating" joint effort deviations.

Definition 3. A profile of centralized feasible contracts (S_i, e_i)_{i=1,2} = (λ_i, ρ_i, e_i)_{i=1,2} is strongly collusion-proof, if and only if the following conditions hold: (1) (IR_i) for all i = 1, 2; (2) (SC); and (3) (CC_i) for all i = 1, 2, where (CC_i) is defined by:

λ_i (μ(e) − μ(e′)) − (r_i/2) λ_i² (σ²(e) − σ²(e′)) ≥ c_i(e_i) − c_i(e′_i), for all e′ ∈ E. (CC_i)

13 Our formulation is consistent with a model in which the principal hires agents at the beginning of a project with an infinite time horizon, using stationary contracts. Agents would interact in an infinitely repeated manner. The stage game would involve agents individually choosing efforts at the beginning of the period and getting paid at the end of the period according to the contract, which would be based on the project's daily, normally distributed returns. Agents would have discounted CARA utilities, and one would not need the assumption that agents observe others' effort choices; the requirement that these choices become observable to others in the next period would be enough. (Due to the Folk Theorem of [28], even more elaborate stage games in which there is imperfect monitoring of others' previous choices can be incorporated.) There may be inefficient subgame perfect equilibrium payoffs, yet the efficient frontier can be obtained with subgame perfection; it is the one that we are interested in, and it can be sustained in our one-shot model with the use of binding side-contracts among agents. In such a setting, it is not clear whether or not an agent, a member of the labor union aiming to obtain an efficient payoff, would ever blow the whistle, even if the stage game were to be one with contracts and structures as in [6]. This is because, now, he could be credibly punished for being a "snitch".

14 The current analysis is appropriate when agents are engaged in a repeated interaction facing a sequence of different short-run principals who cannot observe the history of agents' actions. As an example, consider a house-owner who wants a renovation job and hires two contractors. The value of the renovated house depends on the effort levels of the contractors, who are compensated based on the sale value of the house. The principal, while not observing the effort choices of the contractors, knows that they can perfectly observe each other's effort choices and have means, not available to the short-run principal, to verify the effort choices. If there are strictly positive chances that the two contractors will work together in similar renovation jobs in the future, it is innocuous to assume that they can write binding side-contracts in the corresponding one-shot settings. Moreover, in such dynamic situations, an agent would require a considerable reward to "blow the whistle".
We refer to the set of constraints involved in the definition of strong collusion-proofness as (CC). The principal's problem under strong collusion is to maximize Equation (1) subject to (CC).
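With a finite effort set, checking whether a given contract satisfies (CC_i) amounts to comparing agent i's CE at the proposed profile with his CE at every joint deviation, the fixed transfer ρ_i cancelling on both sides. A minimal sketch, using illustrative primitives rather than the paper's values:

```python
from itertools import product

# Illustrative technology (NOT the paper's Table 1 values)
mu   = {("L","L"): 8.0, ("L","H"): 8.5, ("H","L"): 9.5, ("H","H"): 10.0}
var  = {("L","L"): 5.0, ("L","H"): 2.0, ("H","L"): 4.0, ("H","H"): 1.5}
cost = {1: {"L": 0.0, "H": 1.0}, 2: {"L": 0.0, "H": 0.35}}
r = {1: 2.0, 2: 2.0}

def satisfies_cc(i, lam, e):
    """(CC_i): for every joint deviation e', agent i weakly prefers the
    proposed profile e; the fixed transfer rho_i cancels on both sides."""
    def ce(ep):
        return lam * mu[ep] - cost[i][ep[i - 1]] - 0.5 * r[i] * lam**2 * var[ep]
    return all(ce(e) >= ce(ep) for ep in product("LH", "LH"))
```

With these numbers, for instance, agent 1 with share 0.2 satisfies (CC_1) at ("L","H") but not at ("H","H"), since the joint move to ("L","H") saves him the effort cost while keeping the variance low.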

Teamwork
In Example 1, the CE of a risk-averse principal with R = 1/2 under the optimal incentive contract, S*, is 6.52, strictly lower than 6.61, the CE associated with the side-contract, S′, employed in the agents' joint deviation from S* (see footnote 10 for the details). Thus, in general, the principal may benefit from side-contracting. This leads to the analysis of situations in which the principal uses decentralized contracts by hiring the agents as a team, offering a total share of the returns and letting the agents coordinate effort choices and allocations themselves. The timing is as follows: first, the principal makes a proposal to the agents as a team; then, before the accept/reject decisions, the team identifies its ideal within-team arrangement given the principal's offer; and, finally, the team accepts or rejects, where acceptance emerges only if both of the agents agree to accept. Thus, the principal does not need to deal with incentive constraints, because within-team arrangements, while not necessarily incentive-compatible, are binding. A decentralized feasible contract, (T, e^T), consists of T : ℝ → ℝ, where T(x) = λ_T x + ρ_T, λ_T ∈ [0, 1] and ρ_T ∈ ℝ, and e^T ∈ E. Given a decentralized feasible contract, (T, e^T), a feasible within-team allocation (T_i, e_i)_{i=1,2} consists of T_i(x) = λ_i x + ρ_i with λ_1 + λ_2 = λ_T and ρ_1 + ρ_2 = ρ_T, and (e_1, e_2) ∈ E. We say that, given a decentralized feasible contract, (T, e^T), a feasible within-team allocation (T_i, e_i)_{i=1,2} solves the team's problem if: (1) it satisfies (IR_i) for all i = 1, 2; and (2) there is no other feasible within-team allocation (T̂_i, ê_i)_{i=1,2} satisfying (IR_i) for all i = 1, 2 and providing both of the agents strictly higher CE figures. That is, given a decentralized feasible contract, (T, e^T), a feasible within-team allocation solves the team's problem, if and only if: (1) (IR_i) holds for all i = 1, 2; (2) (SC) holds; and (3) the team constraint defined below (denoted by (TC)) holds:

Σ_{i=1,2} [λ_i (μ(e) − μ(e′)) − (r_i/2) λ_i² (σ²(e) − σ²(e′))] ≥ Σ_{i=1,2} [c_i(e_i) − c_i(e′_i)], for all e′ = (e′_1, e′_2) ∈ E. (TC)

Definition 4. A decentralized feasible contract, (T, e^T), is said to be a team contract if there exists a feasible within-team allocation (λ^T_i, ρ^T_i, ẽ^T_i)_{i=1,2} that solves the team's problem with ẽ^T = e^T.

15 When (e_H, e_L) and (e_L, e_H) are to be implemented, the set of constraints is empty, i.e., these profiles cannot be sustained under strong collusion: for (e_H, e_L), 5λ₁² ≤ 0, due to (CC₁), considering deviations from (e_H, e_L) to (e_H, e_H), and λ₁ ≥ 0.20 from (CC₁), considering deviations from (e_H, e_L) to (e_L, e_L); for (e_L, e_H), 5λ₂² ≥ 0.35, due to (CC₂), considering a deviation from (e_L, e_H) to (e_L, e_L), and λ₂ ≤ 0 from (CC₂), considering a deviation from (e_L, e_H) to (e_H, e_H).

16 When the principal is risk-averse with a CARA of 1/2, the optimal contract involves the same compensation scheme and the same effort profile as the one given for the risk-neutral principal, (λ**₁, λ**₂; ρ**₁, ρ**₂) = (0.264575, 0.264575; −0.295751, −1.44575); but it delivers a return of 6.39458 to the risk-averse principal. Recall that the optimal contract with the (IR_i), (IC_i), i = 1, 2, and (SC) constraints, (S*, e*), is given by e* = (e_H, e_L) and (λ*₁, λ*₂; ρ*₁, ρ*₂) = (0.20, 0.20; 0.40, −1.10), with a return of 6.52 to the risk-averse principal with a CARA of 1/2.
The principal's problem with team contracts is to maximize

(1 − λ_T) μ(e^T) − ρ_T − (R/2) (1 − λ_T)² σ²(e^T)

subject to (T, e^T) being a team contract.
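Since the team shares risk efficiently, under CARA utilities it behaves like a single agent whose risk tolerance is the sum of the members' tolerances (a standard syndicate-theory observation). Using that, the principal's problem with team contracts can be sketched as follows; all primitives are hypothetical stand-ins, not the paper's Table 1 values.

```python
# Hypothetical primitives (NOT the paper's Table 1 values)
mu    = {("L","L"): 8.0, ("L","H"): 8.5, ("H","L"): 9.5, ("H","H"): 10.0}
var   = {("L","L"): 5.0, ("L","H"): 2.0, ("H","L"): 4.0, ("H","H"): 1.5}
costs = {("L","L"): 0.0, ("L","H"): 0.35, ("H","L"): 1.0, ("H","H"): 1.35}  # c1+c2
r1, r2 = 2.0, 2.0
r_T = 1.0 / (1.0 / r1 + 1.0 / r2)  # team CARA under efficient risk sharing
W_total = 1.5                       # sum of the reserve CEs
R = 0.0                             # risk-neutral principal

def team_value(lam_T, e):
    # sum of the members' CEs under the efficient split, gross of rho_T
    return lam_T * mu[e] - costs[e] - 0.5 * r_T * lam_T**2 * var[e]

def optimal_team_contract(grid=101):
    best = None
    for k in range(grid):
        lam = k / (grid - 1)
        # (TC): the team itself picks the effort profile it prefers
        # (ties broken by dictionary order in this sketch)
        e = max(mu, key=lambda ep: team_value(lam, ep))
        rho = W_total - team_value(lam, e)  # aggregate IR binds via transfers
        keep = 1.0 - lam
        ce_p = keep * mu[e] - rho - 0.5 * R * keep**2 * var[e]
        if best is None or ce_p > best[0]:
            best = (ce_p, lam, e, rho)
    return best
```

Note that the loop never imposes incentive compatibility: the team constraint (TC) is enforced simply by letting the team choose its preferred effort profile at each candidate share.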

Example 1 with Teamwork
Revisiting Example 1 of Section 3.1, Table 4 presents the optimal team contracts and the associated CE figures of the principal when a given effort profile, e ∈ E, is to be implemented. The optimal team contract involves the same effort profile as the optimal incentive contract, e = (e_H, e_L), but a compensation scheme different from (λ*₁, λ*₂; ρ*₁, ρ*₂) = (0.20, 0.20; 0.40, −1.10). Thus, in this particular example, the principal strictly prefers team contracts to incentive contracts. 17

The Optimality of Team Contracts
This section presents our findings concerning the comparison of the principal's welfare under team contracts and centralized contracts.
The key difference between (centralized) collusion-proof contracts and (decentralized) team contracts lies in the way individual rationality is treated. To see this, consider an optimal collusion-proof contract in which individual rationality does not bind, implying that agents are strictly better off in comparison to their reserve levels. Now, in the labor union, agents would veto any arrangement that provides strictly less than the principal's offer, even if that arrangement were individually rational. (We then say that such an arrangement is not "participatory".) That is, the disagreement point of the labor union's bargaining is given by the utilities from the principal's offer. Because team arrangements are decentralized, the associated endeavor only considers individual rationality, as the principal's offer is not a "sunk" offer. That is why every collusion-proof contract is a team contract, as well. On the other hand, the use of CARA utilities eliminates income effects in agents' decisions. Hence, it should not be surprising that, in the current setting, individual rationality has to bind (by adjusting the constant terms, ρ_i, for i = 1, 2). Thus, the participation constraint coincides with individual rationality; therefore, we obtain the following result:

Lemma 1. The set of team contracts is equal to the set of collusion-proof contracts.
The comparison of team contracts with strongly collusion-proof contracts reveals the following: (TC) is obtained from the summation of (CC₁) and (CC₂). Hence, every strongly collusion-proof contract must be a team contract. Yet, there are team contracts that are not strongly collusion-proof. 18 This delivers the following: Lemma 2. Strongly collusion-proof contracts constitute a strict subset of the set of team contracts.
Therefore, optimal team contracts provide the principal with higher CE levels than optimal strongly collusion-proof contracts do. However, such a conclusion cannot be drawn in the comparison between incentive contracts and team contracts: while Example 1 portrays a setting in which team contracts deliver the principal strictly higher returns than incentive contracts, Example 5 provides a situation in which team contracts are outperformed by incentive contracts. Indeed, in the latter example, each agent's effort choice prescribed by the optimal team contract differs from the one associated with the optimal incentive contract, and the principal obtains a strictly lower CE with the optimal team contract. However, we show that when this happens, the optimal incentive contracts are not necessarily immune to strong collusion, i.e., they may not be implementable.
17 If the principal were risk averse with a CARA coefficient of 1/2, the optimal team contract, (S_i***)_{i=1,2}, would involve e*** = (e_H, e_L) and (α₁***, α₂***; ρ₁***, ρ₂***) = (0.10, 0.10; 1.10, 0.40), delivering the principal a CE of 6.98, which is strictly higher than the 6.39458 attained by the optimal strongly collusion-proof contract S** and higher than the 6.52 provided by S*, the optimal incentive contract. Hence, the conclusions of Example 1 are sustained with a risk averse principal.
18 We thank an anonymous referee for suggesting the use of a modified version of our Example 1 in this regard: consider Example 1 with the sole modification of increasing the second agent's cost of high effort from 0.35 to one. Then, the contract S* with (e_H, e_L) is a team contract (and hence a collusion-proof contract), yet (CC₁) does not hold. In essence, agent 1 would obtain strictly more returns from agent 2 choosing e_H, yet these do not suffice to cover the loss of agent 2.
In that regard, we provide a full characterization of situations in which the principal can ignore strong collusion. We need the following in the statement of our results. Definition 5. The asset of the principal is said to have monotone returns if μ(e₁, e₂) is weakly increasing and σ²(e₁, e₂) is weakly decreasing separately in both e₁ and e₂. Moreover, define the best effort profile ē ∈ E by μ(ē) ≥ μ(e′) and σ²(ē) ≤ σ²(e′) for all e′ ≠ ē.
The monotonicity of returns covers the interesting case in which the first agent governs only the mean, i.e., μ(e₁, e₂) = μ(e₁), which is weakly increasing in e₁, and the second only the variance, i.e., σ²(e₁, e₂) = σ²(e₂), which is weakly decreasing in e₂. On the other hand, in general, the best effort profile may not exist. However, because the set of effort levels is finite, a best effort profile exists whenever returns are monotone. Furthermore, with monotone returns, agents' effort levels can be ordered with respect to their effects on the mean and variance; hence, the best effort profile (not necessarily the most costly one) corresponds to (max_{e_i ∈ E_i} e_i)_{i=1,2}, where the maximization is taken with respect to this order on E_i, i = 1, 2.
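The existence argument above can be sketched directly: under monotone returns, the componentwise maximum is a best effort profile. The effort sets and the mean and variance figures below are hypothetical; the only assumption carried over from the text is that μ is weakly increasing and σ² weakly decreasing in each agent's effort.

```python
# A minimal sketch of finding the best effort profile under monotone returns.
from itertools import product

E1 = [0, 1]  # 0 = low effort, 1 = high effort for agent 1 (hypothetical)
E2 = [0, 1]  # same for agent 2

def mu(e1, e2):
    return 10 + 2 * e1 + e2          # weakly increasing in both arguments

def sigma2(e1, e2):
    return 6 - e1 - 2 * e2           # weakly decreasing in both arguments

# With monotone returns, the componentwise maximum is a best effort profile.
best = (max(E1), max(E2))

# Check the defining property: best weakly dominates every other profile,
# with a weakly higher mean and a weakly lower variance.
dominates_all = all(
    mu(*best) >= mu(e1, e2) and sigma2(*best) <= sigma2(e1, e2)
    for e1, e2 in product(E1, E2)
)
print(best, dominates_all)  # (1, 1) True
```

Note that (1, 1) need not be the most costly profile; the order used in the maximization ranks effort levels by their effects on the mean and variance, not by their costs.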
Proposition 1. Suppose that returns are monotone and that the optimal incentive contract involves the best effort profile. Then, any incentive contract is strongly collusion-proof. Furthermore, when any of these conditions is violated, optimal incentive contracts are not necessarily immune to strong collusion, which may make the principal strictly worse off.
Proof. By hypothesis, there exists a best effort profile in E, which we denote by ē. The principal finding ē optimal with incentive contracts implies that (CC_i), i = 1, 2, hold: for i ≠ j, monotonicity of returns gives μ(ē_i, ē_j) ≥ μ(e′_i, ē_j) and σ²(ē_i, ē_j) ≤ σ²(e′_i, ē_j) for all e′_i ∈ E_i, so the left-hand side of (IC_i) is less than the left-hand side of (CC_i), while both constraints share the right-hand side c_i(ē_i) − c_i(e′_i). Hence, any solution satisfying (IC_i) satisfies (CC_i), i = 1, 2.
The four examples presented in the Appendix consider cases in which the hypothesis of Proposition 1 is not satisfied, and they conclude the proof. Their features are as follows: (1) returns are monotone, but implementing the optimal incentive contract does not involve the best effort profile (Appendix A); (2) returns are not monotone, but there exists a best effort profile that the principal finds optimal with incentive contracts (Appendix B); (3) returns are not monotone, but there is a best effort profile, yet implementing that effort profile with incentive contracts is not optimal (Appendix C); (4) returns are not monotone, and there is no best effort profile (Appendix D). Proposition 1 identifies conditions under which incentive contracts are guaranteed to be implementable, as the set of incentive contracts then equals the set of strongly collusion-proof contracts: the set of incentive contracts always contains the set of strongly collusion-proof contracts, and under these conditions it is also a subset of that set. Since strongly collusion-proof contracts are included in the set of team contracts (Lemma 2), incentive contracts are then contained in the set of team contracts. Thus, under the conditions identified in Proposition 1, the optimal team contracts provide the principal with a higher CE than the optimal incentive contracts. Additionally, these conditions are minimal: any violation enables us to construct an example in which optimal incentive contracts are not strongly collusion-proof. 19 Therefore, we conclude that team contracts provide the principal with the maximum implementable certainty equivalent whenever returns are monotone and the best effort profile is chosen in the optimal incentive contract. Moreover, any violation of these conditions may trigger joint deviations from the optimal team contract and prevent its implementability.

Concluding Remarks
Our first concluding remark concerns the relation of the current model to the standard framework of Holmstrom and Milgrom [1] (HM, henceforth), the details of which can be found in footnote 4. The two models are essentially different. The general version of ours is a single-task agency model in which agents control both the mean and the variance of returns. Moreover, obtaining a single-task version from the model of HM with correlated error terms and interactions in the production functions, by aggregating the two returns (e.g., by addition), corresponds to a setting that cannot be captured by our model, because the sum of two correlated normal random variables is not necessarily normal (unless they are jointly normal). With stochastically independent error terms (possibly with interactions in the production functions), the single-task version of HM can be associated with the special case of our model in which agents control only the mean. While HM does not provide an optimality-of-teamwork result in that setting, the monotonicity condition of Proposition 1 suffices in this regard for our model. Additionally, the single-task version of HM delivers a similar result with the additional feature of technologically independent production (Proposition 4).
The second concluding remark concerns the role of substitution compatibility.It is clearly not needed in Lemma 2 and does not play a major part in Proposition 1.On the other hand, our examples establishing the minimality of the hypothesis of Proposition 1 utilize the substitution compatibility constraint.
The final concluding remark is about the efficiency properties of optimal team contracts. Because team contracts, making use of binding side-contracts within the team, can circumvent the individual incentive compatibility constraints to some extent, whether or not the first-best outcome can be achieved is a relevant concern. In our model, the timing of team contracts implies the restriction of (TC), which is nothing but a "team" incentive compatibility constraint. That is precisely why optimal team contracts are not necessarily first-best. To see this, consider the case in which agents are identical in every regard. Then, the team would act as a single person, but the "team" incentive compatibility constraint would still matter. In fact, Example 5 of Appendix E depicts a situation in which team contracts are outperformed by incentive contracts; hence, in general, they are not even second-best. On the other hand, using a particular protocol and structure in the formation of the team with "snitches", incentivizing agents to report the other's deviations, can deliver the first best, as in [6].
19 Then, there exists i = 1, 2 for whom (CC_i) does not hold, i.e., agent i contemplates a joint deviation. Yet, this joint deviation does not necessarily lead to the dismissal of the optimal team contract: the situation may be as in the example of footnote 18, where implementing the joint deviation that agent 1 contemplates is too costly. Thus, even though the optimal incentive contract may not be immune to strong collusion, it may still be the optimal (team) collusion-proof contract.
higher shares allocated to the agents, 0.312377. This decreases the principal's optimal CE from 26.7315 to 26.6817.

Table 1 .
The mean and variance figures of Example 1, for the effort profiles (e_H, e_H), (e_H, e_L), (e_L, e_H) and (e_L, e_L).

Table 2 .
Optimal incentive contracts of Example 1.

Table 3 .
Optimal strongly collusion-proof contracts of Example 1.