Delegation with Endogenous States
Dino Gerardi (Collegio Carlo Alberto), Lucas Maestri (FGV EPGE), Ignacio Monzón (Collegio Carlo Alberto)
University of Bonn - October 23rd, 2019
Introduction
Delegation
Delegation problems are widespread:
A party with authority to make a decision (Principal) must rely on a better informed party (Agent)
Should the principal give flexibility to the agent, or instead restrict what the agent can choose?
Some examples:
CEO selects feasible projects
Manager (better informed about their profitability) chooses one
Regulator restricts the prices that a monopolist (better informed about costs) can charge
Introduction
Moral hazard
Before choosing an action, the agent can exert effort and affect outcomes
Effort is typically unobservable
The agent cannot fully control outcomes
Examples:
Manager's effort affects potential profits of various projects
Monopolist can adopt practices that reduce production costs
Introduction
Goal of the paper
How can a principal incentivize the agent to both exert effort and choose appropriate actions?
Principal chooses a delegation set
Cares about effort and actions
We characterize the optimal delegation set
With aligned and misaligned preferences
The optimal delegation set has a simple form: actions below a threshold are excluded
Introduction
Closely related literature
Delegation with misaligned preferences, no moral hazard:
Holmström (1977, 1984) Alonso and Matouschek (2008) Amador and Bagwell (2013)
Delegation with Information Acquisition:
Szalay (2005) Deimen and Szalay (2018)
The model with no bias
The model with no bias. Timing
Principal selects a delegation set A ⊆ ℝ (A closed)
Agent exerts effort e ∈ [0, ē] at cost c(e)
Given effort e, the state γ is realized according to c.d.f. F(γ, e)
Agent observes the state γ and chooses an action a ∈ A
The model with no bias
Distribution of the state
The support of the state distribution is Γ = [γ̲, γ̄]
For every e ∈ [0, ē] and every γ ∈ Γ, f(γ, e) > 0
F(·, ·) is smooth
F satisfies the (strict) monotone likelihood ratio property (MLRP):
f(γ′, e′) / f(γ, e′) > f(γ′, e) / f(γ, e) for all e′ > e and γ′ > γ
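The MLRP inequality can be verified on a grid for a concrete density. The linearly tilted family below is an assumption chosen purely for illustration (it is not from the paper); effort tilts probability mass toward high states.

```python
# Grid check of strict MLRP for an illustrative density family
# (an assumption for this check, not from the paper):
# f(gamma, e) = 1 + e*(2*gamma - 1) on state space [0, 1], effort e in [0, 1).

def f(gamma, e):
    return 1.0 + e * (2.0 * gamma - 1.0)

def check_mlrp(n=20):
    grid = [i / n for i in range(n + 1)]
    efforts = [0.0, 0.3, 0.6, 0.9]
    for e_lo in efforts:
        for e_hi in (e for e in efforts if e > e_lo):
            for j, g_lo in enumerate(grid):
                for g_hi in grid[j + 1:]:
                    # cross-multiplied form of
                    # f(g_hi,e_hi)/f(g_lo,e_hi) > f(g_hi,e_lo)/f(g_lo,e_lo)
                    if f(g_hi, e_hi) * f(g_lo, e_lo) <= f(g_hi, e_lo) * f(g_lo, e_hi):
                        return False
    return True

print(check_mlrp())  # True: higher effort tilts the density toward high states
```

For this family the cross-product difference equals (e_hi − e_lo)·2·(g_hi − g_lo) > 0, so the strict inequality holds everywhere on the grid.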
The model with no bias
Payo¤s
The parties' payoffs are:
U_P(a, γ, e) = u(a, γ) + v(e)
U_A(a, γ, e) = u(a, γ) − c(e)
Assumptions
v(·) : [0, ē] → ℝ is strictly increasing and strictly concave
c(·) : [0, ē] → ℝ is strictly increasing and strictly convex
The model with no bias
The common payoff component u(·, ·) is C² and satisfies:
For every γ ∈ [γ̲, γ̄], u(·, γ) is strictly quasiconcave in a and
max_a u(a, γ) = u(a*(γ), γ) = 0
Limit condition: for every γ ∈ [γ̲, γ̄],
lim_{a→−∞} u(a, γ) = lim_{a→+∞} u(a, γ) = −∞
Single crossing condition: for all (a, γ) ∈ ℝ × Γ,
∂²u(a, γ) / ∂γ∂a > 0
The model with no bias
Expected payo¤s
Given a delegation set A and an effort level e, the parties' expected payoffs are:
V_P(A, e) = E[max_{a∈A} u(a, γ) | e] + v(e)
V_A(A, e) = E[max_{a∈A} u(a, γ) | e] − c(e)
Notice that v(e) can be thought of as E[r(γ) | e] where r(·) is an increasing function
Results with no bias
Floor Delegation
Definition. A delegation set A is a floor if A = [a̲, +∞) for some a̲ ∈ ℝ.
The agent's optimal action when the delegation set is a floor is
â(γ, a̲) = argmax_{a ∈ [a̲, +∞)} u(a, γ) = max{a̲, a*(γ)}
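Under a floor the best reply is just "ideal action, clipped from below". A minimal sketch, assuming a quadratic common payoff u(a, γ) = −(γ − a)² so that a*(γ) = γ (the functional form is an illustrative assumption):

```python
# Agent's best response under a floor delegation set [a_floor, +inf).
# Assumes the quadratic payoff u(a, gamma) = -(gamma - a)**2, so the
# unconstrained ideal action is a_star(gamma) = gamma.

def a_star(gamma):
    return gamma  # ideal action under the quadratic assumption

def a_hat(gamma, a_floor):
    # The floor binds exactly when the ideal action lies below it
    return max(a_floor, a_star(gamma))

print(a_hat(0.2, 0.5))  # floor binds -> 0.5
print(a_hat(0.8, 0.5))  # interior   -> 0.8
```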
Results with no bias
Interval and ‡oor delegation sets
Proposition 1
i) Let Ã be an optimal delegation set and let ẽ > 0 be the optimal level of effort. For every γ ∈ Γ let ã(γ) = argmax_{a∈Ã} u(a, γ) denote the action chosen by the agent when the state is γ. Then the set {a : a = ã(γ) for some γ ∈ [γ̲, γ̄]} is convex.
ii) If there is an optimal delegation set, then there is also an optimal floor delegation set.
Results with no bias
Sketch of the proof of Proposition 1
The proof of part i) is by contradiction
Assume that
∫_γ̲^γ̄ u(ã(γ), γ) f(γ, ẽ) dγ > ∫_γ̲^γ̄ u(a*(γ̄), γ) f(γ, ẽ) dγ
(the other case is similar)
By continuity, there exists a unique a̲ ∈ (a*(γ̲), a*(γ̄)] such that
∫_γ̲^γ̄ u(ã(γ), γ) f(γ, ẽ) dγ = ∫_γ̲^γ̄ u(â(γ, a̲), γ) f(γ, ẽ) dγ
Results with no bias
Furthermore, quasiconcavity and single crossing of u(·, ·) guarantee that there exists a unique γ̂ < (a*)⁻¹(a̲) such that
u(ã(γ), γ) > u(â(γ, a̲), γ) if and only if γ < γ̂
If the principal adopts the floor delegation set [a̲, +∞), the agent prefers ẽ to lower levels of effort:
The difference u(â(γ, a̲), γ) − u(ã(γ), γ) is negative (positive) below (above) γ̂
Thus, it follows from MLRP that
∫_γ̲^γ̄ [u(â(γ, a̲), γ) − u(ã(γ), γ)] f(γ, ẽ) dγ > ∫_γ̲^γ̄ [u(â(γ, a̲), γ) − u(ã(γ), γ)] f(γ, e) dγ
for every e < ẽ
Results with no bias
From the optimality of ẽ given Ã we have:
∫_γ̲^γ̄ u(ã(γ), γ) f(γ, ẽ) dγ − c(ẽ) > ∫_γ̲^γ̄ u(ã(γ), γ) f(γ, e) dγ − c(e)
Combining the two inequalities we obtain:
∫_γ̲^γ̄ u(â(γ, a̲), γ) f(γ, ẽ) dγ − c(ẽ) > ∫_γ̲^γ̄ u(â(γ, a̲), γ) f(γ, e) dγ − c(e)
for every e < ẽ
Results with no bias
If ẽ < ē and the principal adopts the floor delegation set [a̲, +∞), then ẽ is not optimal for the agent (this, again, follows from MLRP)
Thus, the optimal effort level e′ must be larger than ẽ. We have
V_A(a̲, e′) > V_A(a̲, ẽ) = V_A(Ã, ẽ)
V_P(a̲, e′) > V_P(Ã, ẽ)
If ẽ = ē, then the agent will continue to choose ē even if the principal adopts the floor delegation set [a̲ − ε, +∞) for some small ε > 0. Again, the original delegation set Ã is not optimal
Results with no bias
Existence
Proposition 2. There exists an optimal delegation set.
We restrict attention to floor delegation sets and show that the principal's optimization problem admits a solution
Results with no bias
Comparative Statics
Given the floor delegation set [a̲, +∞), let BR(a̲) denote the set of optimal effort levels.
Proposition 3
i) If a̲ > a̲′, then e ≥ e′ for every (e, e′) ∈ BR(a̲) × BR(a̲′).
ii) Consider two benefit functions v₁(·) and v₂(·) with v₁′(e) > v₂′(e) for every e. Let e_i be an optimal level of effort for the model in which v = v_i, for i = 1, 2. Then e₁ > e₂.
iii) Consider two cost functions c₁(·) and c₂(·) with c₁(0) = c₂(0) = 0 and c₁′(e) ≤ c₂′(e) for every e. Let V_P^i, i = 1, 2, denote the principal's payoff from the optimal delegation set when the cost is c_i(·). Then V_P¹ ≥ V_P².
Model with bias
The model with bias
Quadratic payoff functions and uniform distributions with shifting support
The agent is biased towards some action:
u_P(a, γ) = −(γ + β − a)²
u_A(a, γ) = −(γ − a)²
β > 0 (β < 0): the principal prefers higher (lower) actions than the agent
Model with bias
Consider a simple family of probability distributions
When the effort is γ > 0, the state is uniformly distributed on the interval [γ, γ + 1]
Cost function is quadratic: c(γ) = γ²/2
The benefit function v(γ) is concave
Model with bias
The delegation set A and the effort level γ induce expected payoffs:
V_P(A, γ) = −∫_γ^{γ+1} (γ̃ + β − â(γ̃, A))² dγ̃ + v(γ)
V_A(A, γ) = −∫_γ^{γ+1} (γ̃ − â(γ̃, A))² dγ̃ − γ²/2
where â(γ̃, A) = argmax_{a∈A} −(γ̃ − a)²
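These expected payoffs are easy to approximate numerically for an interval delegation set, where â is just the projection of the state onto the interval. A sketch (the benefit function and all numbers are illustrative assumptions):

```python
# Expected payoffs in the quadratic-uniform model: effort gamma shifts the
# state uniformly over [gamma, gamma + 1]; the agent picks the point of
# [a_lo, a_hi] closest to the realized state (projection).

def a_hat(state, a_lo, a_hi):
    # argmax over [a_lo, a_hi] of -(state - a)**2 is the projection
    return min(max(state, a_lo), a_hi)

def payoffs(a_lo, a_hi, gamma, beta, v, n=100_000):
    # Midpoint-rule approximation of the integrals over [gamma, gamma + 1]
    vp = va = 0.0
    for i in range(n):
        s = gamma + (i + 0.5) / n
        a = a_hat(s, a_lo, a_hi)
        vp += -(s + beta - a) ** 2
        va += -(s - a) ** 2
    return vp / n + v(gamma), va / n - gamma ** 2 / 2

# Full discretion over the support: the agent always matches the state, so
# the principal loses exactly beta**2 and the agent only pays the effort cost
vp, va = payoffs(0.5, 1.5, 0.5, 0.2, lambda g: 0.0)
print(vp, va)  # approximately -0.04 and -0.125
```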
Results with bias
Necessary conditions for optimal e¤ort
Given a delegation set A, the agent solves the following problem:
max_{γ≥0} ∫_γ^{γ+1} [max_{a∈A} −(γ̃ − a)²] dγ̃ − γ²/2 = max_{γ≥0} −∫_γ^{γ+1} (γ̃ − â(γ̃, A))² dγ̃ − γ²/2
First-order condition for interior γ:
(γ − â(γ, A))² − (γ + 1 − â(γ + 1, A))² = γ
In general, the first-order conditions are not sufficient (the problem is not necessarily concave)
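The first-order condition can be checked numerically for an interval delegation set (where the agent's problem is concave, by Lemma 1 below). The interval endpoints and grid sizes here are illustrative assumptions:

```python
# Numerical check of the agent's first-order condition for an interval
# delegation set A = [a_lo, a_hi]: at an interior optimum,
# (g - a_hat(g))**2 - (g + 1 - a_hat(g + 1))**2 = g.

def a_hat(state, a_lo, a_hi):
    # Projection of the realized state onto the interval
    return min(max(state, a_lo), a_hi)

def agent_payoff(g, a_lo, a_hi, n=2000):
    # Midpoint-rule approximation of -int_g^{g+1} (s - a_hat(s))**2 ds - g**2/2
    loss = sum((g + (i + 0.5) / n - a_hat(g + (i + 0.5) / n, a_lo, a_hi)) ** 2
               for i in range(n)) / n
    return -loss - g ** 2 / 2

def best_effort(a_lo, a_hi):
    grid = [i / 400 for i in range(401)]  # effort grid on [0, 1]
    return max(grid, key=lambda g: agent_payoff(g, a_lo, a_hi))

a_lo, a_hi = 0.9, 1.4
g = best_effort(a_lo, a_hi)
foc_gap = (g - a_hat(g, a_lo, a_hi)) ** 2 \
    - (g + 1 - a_hat(g + 1, a_lo, a_hi)) ** 2 - g
print(g, foc_gap)  # foc_gap should be close to zero at an interior optimum
```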
Results with bias
Concavity under interval delegation
Lemma 1. Suppose that the delegation set is an interval [a̲, ā] for some a̲ ≤ ā. For every γ, let z(γ) denote the agent's expected payoff if the effort is γ:
z(γ) = ∫_γ^{γ+1} [max_{a∈[a̲,ā]} u_A(a, γ̃)] dγ̃ − γ²/2
The function z(·) is concave.
Results with bias
Optimal interval delegation
Proposition 4. Let γ > 0 be an optimal level of effort and let Ã denote the smallest optimal delegation set. Then Ã is convex. Moreover, either Ã ⊆ [γ, γ + 1] or Ã = {ā} with ā > γ + 1.
To incentivize the agent to exert high effort levels, the principal may allow only one action: Ã = {ā} with ā > γ + 1.
Results with bias
Optimal interval delegation: sketch of the proof
Step 1: If Ã ∩ (γ, γ + 1) = ∅, then Ã is a singleton.
The delegation set A′ yields the principal a larger payoff than A
Results with bias
We work with a relaxed problem: the agent's level of effort only has to satisfy the first-order conditions
Step 2: Let Ã denote the smallest optimal delegation set and let a̲ denote the smallest element of Ã. Then either Ã is a singleton or a̲ ≥ γ.
The delegation set A′ yields the principal a larger payoff than A
Results with bias
Step 3: Let Ã denote the smallest optimal delegation set and let ā denote the largest element of Ã. Then either Ã is a singleton or ā ≤ γ + 1.
Step 4: Suppose that the optimal delegation set Ã is not a singleton
Ã solves the relaxed problem. Therefore, Ã ⊆ [γ, γ + 1]
Suppose that Ã has a gap. The principal's payoff increases if the gap is filled
The interval delegation set induces the same effort level as Ã (it satisfies the same first-order conditions and the problem is concave)
Results with bias
Floor Delegation
Proposition 5. Let γ > 0 be the optimal level of effort and Ã the smallest optimal delegation set. If Ã ⊆ [γ, γ + 1], then Ã = [a̲, γ + 1] for some a̲ ≥ γ.
Notice that in this case the floor delegation set [a̲, +∞) is also optimal
Results with bias
The optimal delegation set is Ã = [â(γ, Ã), â(γ + 1, Ã)] ⊆ [γ, γ + 1]
The first-order condition
(γ − â(γ, Ã))² − (γ + 1 − â(γ + 1, Ã))² = γ
implies
â(γ, Ã) − γ > (γ + 1) − â(γ + 1, Ã)
Results with bias
If â(γ + 1, Ã) < γ + 1, then it is possible to increase â(γ + 1, Ã) and decrease â(γ, Ã) simultaneously while preserving
(γ − â(γ, Ã))² − (γ + 1 − â(γ + 1, Ã))² = γ
This change restricts the set of states in which the agent takes a suboptimal action, increasing payoffs
Results with bias
Discretion and level of e¤ort
Lemma 2. Suppose that the optimal effort level γ is interior. Let Ã denote the smallest optimal delegation set.
If γ < 1, then Ã = [γ + √γ, γ + 1]
If γ > 1, then Ã = {(3γ + 1)/2}
Results with bias
Suppose that the optimal delegation set is Ã = [a̲, γ + 1] for some γ < a̲ < γ + 1
The effort level γ satisfies the first-order condition (γ − a̲)² = γ, which implies a̲ = γ + √γ < γ + 1 and, thus, γ < 1
Results with bias
On the other hand, if Ã = {a} for some a > γ + 1, then (a − γ)² − (a − γ − 1)² = γ, which yields a = (3γ + 1)/2 > γ + 1 and, thus, γ > 1
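Both closed forms of Lemma 2 can be plugged back into the first-order conditions; the effort values below are illustrative:

```python
# Check of Lemma 2's closed forms:
#   g < 1: the floor a = g + sqrt(g) solves (g - a)**2 = g
#   g > 1: the singleton a = (3g + 1)/2 solves (a - g)**2 - (a - g - 1)**2 = g
import math

def floor_action(g):       # case g < 1
    return g + math.sqrt(g)

def singleton_action(g):   # case g > 1
    return (3 * g + 1) / 2

g = 0.64
a = floor_action(g)
print((g - a) ** 2 - g, a < g + 1)   # FOC residual ~0; floor inside [g, g+1]

g = 2.0
a = singleton_action(g)
print((a - g) ** 2 - (a - g - 1) ** 2 - g, a > g + 1)  # residual ~0; a above g+1
```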
Results with bias
The optimal level of e¤ort
For every γ > 0, let V_P(γ) denote the principal's payoff if he offers the optimal delegation set that induces the effort level γ. We have
V_P(γ) = −∫_γ^{γ+√γ} (γ + √γ − (s + β))² ds − ∫_{γ+√γ}^{γ+1} β² ds + v(γ)
for γ < 1, and
V_P(γ) = −∫_γ^{γ+1} ((3γ + 1)/2 − (s + β))² ds + v(γ)
for γ > 1
Results with bias
We compute the derivative of V_P:
V_P′(γ) = β − √γ/2 + v′(γ) for γ < 1, and
V_P′(γ) = β − γ/2 + v′(γ) for γ > 1
V_P is concave (recall v is concave) and V_P′ is continuous everywhere
We set V_P′(γ) = 0 and obtain a unique solution
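The two derivative formulas can be confirmed by finite differences on a numerical version of V_P. The concave benefit v(γ) = log(1 + γ) and the bias value are assumptions made only for this check:

```python
# Finite-difference check of
#   V_P'(g) = beta - sqrt(g)/2 + v'(g)  for g < 1
#   V_P'(g) = beta - g/2 + v'(g)        for g > 1
# under the assumed concave benefit v(g) = log(1 + g).
import math

beta = 0.3
v = lambda g: math.log(1 + g)
dv = lambda g: 1 / (1 + g)

def vp(g, n=100_000):
    # Principal's payoff from the optimal delegation set inducing effort g
    a_floor = g + math.sqrt(g)        # floor of [g + sqrt(g), g + 1], g < 1
    a_single = (3 * g + 1) / 2        # singleton action, g > 1
    total = 0.0
    for i in range(n):
        s = g + (i + 0.5) / n         # midpoint rule over [g, g + 1]
        a = max(a_floor, s) if g < 1 else a_single
        total += -(s + beta - a) ** 2
    return total / n + v(g)

h = 1e-4
for g, formula in [(0.49, beta - math.sqrt(0.49) / 2 + dv(0.49)),
                   (1.5, beta - 1.5 / 2 + dv(1.5))]:
    numeric = (vp(g + h) - vp(g - h)) / (2 * h)
    print(round(numeric, 4), round(formula, 4))  # the two should agree
```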
Results with bias
Proposition 6. Assume that the optimal level of effort is strictly positive.
If β − 1/2 + v′(1) < 0, then the optimal delegation set is [γ + √γ, γ + 1], where the optimal level of effort γ < 1 satisfies
β − √γ/2 + v′(γ) = 0
If β − 1/2 + v′(1) > 0, then the optimal delegation set is {(3γ + 1)/2}, where γ > 1 satisfies
β − γ/2 + v′(γ) = 0
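Since V_P′ is continuous and decreasing, the first-order condition pins down a unique effort level that can be found by bisection. The concave benefit v(γ) = log(1 + γ) (so v′(1) = 1/2, and the threshold β − 1/2 + v′(1) reduces to β) is an assumption for this sketch:

```python
# Bisection sketch for Proposition 6's first-order condition, under the
# assumed concave benefit v(g) = log(1 + g), so v'(g) = 1/(1 + g).
import math

def optimal_effort(beta, lo=1e-9, hi=50.0, tol=1e-10):
    # V_P'(g) = beta - sqrt(g)/2 + v'(g) for g < 1 and beta - g/2 + v'(g)
    # for g > 1; it is continuous and strictly decreasing, so bisection works
    def dvp(g):
        slope = math.sqrt(g) / 2 if g < 1 else g / 2
        return beta - slope + 1 / (1 + g)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dvp(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(optimal_effort(-0.2))  # below 1: floor delegation [g + sqrt(g), g + 1]
print(optimal_effort(0.4))   # above 1: singleton {(3g + 1)/2}
```

With this v, a negative bias lands in the floor-delegation regime (γ < 1) and a sufficiently positive bias in the singleton regime (γ > 1), matching the threshold condition of the proposition.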
Results with bias
Corner solution
If β > 0, it is not optimal for the principal to induce an effort level equal to zero
The delegation set [η, 1 + η], with η > 0 small, yields a strictly larger payoff than the delegation set [0, 1]
If β < 0, the optimal delegation set that induces zero effort coincides with the optimal delegation set (−∞, ā], ā < 1, when the state is uniformly distributed over the unit interval (no moral hazard)
Results with bias
Comparative Statics
Proposition 7 (for β < 0, assume γ > 0)
i) The optimal level of effort γ and the principal's payoff are increasing in β
ii) Suppose that c(γ) = kγ²/2 for k > 0. Both γ and the principal's payoff are decreasing in k
iii) Suppose that v(γ) = αh(γ), with α > 0 and h(·) increasing and concave. Then ∂γ/∂α > 0
Conclusions