
Delegation with Endogenous States, by Dino Gerardi, Lucas Maestri, and Ignacio Monzón (PowerPoint presentation transcript)

  1. Delegation with Endogenous States
  Dino Gerardi (Collegio Carlo Alberto), Lucas Maestri (FGV EPGE), Ignacio Monzón (Collegio Carlo Alberto)
  University of Bonn - October 23rd, 2019

  2. Introduction: Delegation
  • Delegation problems are widespread:
  • a party with authority to make a decision (Principal)
  • must rely on a better informed party (Agent)
  • Should the principal give flexibility to the agent, or instead restrict what the agent can choose?
  • Some examples:
  • a CEO selects feasible projects; a manager (better informed about their profitability) chooses one
  • a regulator restricts the prices that a monopolist (better informed about costs) can charge

  3. Introduction: Moral hazard
  • Before choosing an action, the agent can exert effort and affect outcomes
  • Effort is typically unobservable
  • The agent cannot fully control outcomes
  • Examples:
  • the manager's effort affects the potential profits of various projects
  • the monopolist can adopt practices that reduce production costs

  4. Introduction: Goal of the paper
  • How can a principal incentivize the agent to both exert effort and choose appropriate actions?
  • The principal chooses a delegation set
  • She cares about both effort and actions
  • We characterize the optimal delegation set
  • with aligned and misaligned preferences
  • The optimal delegation set has a simple form: actions below a threshold are excluded

  5. Introduction: Closely related literature
  • Delegation with misaligned preferences, no moral hazard:
  • Holmström (1977, 1984)
  • Alonso and Matouschek (2008)
  • Amador and Bagwell (2013)
  • Delegation with information acquisition:
  • Szalay (2005)
  • Deimen and Szalay (2018)

  6. The model with no bias: Timing
  • The principal selects a delegation set $A \subseteq \mathbb{R}$ ($A$ closed)
  • The agent exerts effort $e \in [0, \bar{e}]$ at cost $c(e)$
  • Given effort $e$, the state $\gamma$ is realized according to the c.d.f. $F(\gamma, e)$
  • The agent observes the state $\gamma$ and chooses an action $a \in A$

  7. The model with no bias: Distribution of the state
  • The support of the state distribution is $\Gamma = [\underline{\gamma}, \bar{\gamma}]$
  • For every $e \in [0, \bar{e}]$ and every $\gamma \in \Gamma$, $f(\gamma, e) > 0$
  • $F(\cdot, \cdot)$ is smooth
  • $F$ satisfies the (strict) monotone likelihood ratio property (MLRP):
    $\frac{f(\gamma', e')}{f(\gamma, e')} > \frac{f(\gamma', e)}{f(\gamma, e)}$ for all $e' > e$ and $\gamma' > \gamma$
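The MLRP inequality can be spot-checked numerically. The sketch below uses an exponentially tilted uniform density on [0, 1] as an illustrative family (an assumption for illustration, not the paper's specification); higher effort tilts probability mass toward higher states, which is exactly what strict MLRP formalizes.

```python
import math

# Illustrative density family on [0, 1] (an assumption, not from the
# paper): an exponentially tilted uniform, f(g, e) = e*exp(e*g)/(exp(e)-1).
# Higher effort e shifts probability mass toward higher states g.
def f(g, e):
    return e * math.exp(e * g) / (math.exp(e) - 1.0)

def mlrp_holds(g_lo, g_hi, e_lo, e_hi):
    # Strict MLRP: f(g', e') / f(g, e') > f(g', e) / f(g, e)
    # whenever g' > g and e' > e.
    lhs = f(g_hi, e_hi) / f(g_lo, e_hi)
    rhs = f(g_hi, e_lo) / f(g_lo, e_lo)
    return lhs > rhs

# Spot-check the inequality on a grid of state/effort pairs.
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
efforts = [0.5, 1.0, 2.0]
ok = all(mlrp_holds(g, gp, e, ep)
         for g in grid for gp in grid if gp > g
         for e in efforts for ep in efforts if ep > e)
print(ok)  # True: the tilted family satisfies strict MLRP
```

For this family the likelihood ratio works out to $\exp(e'(γ'-γ)) / \exp(e(γ'-γ))$ up to constants, so the check passes at every grid point.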

  8. The model with no bias: Payoffs
  The parties' payoffs are:
    $U_P(a, \gamma, e) = u(a, \gamma) + v(e)$
    $U_A(a, \gamma, e) = u(a, \gamma) - c(e)$
  Assumptions:
  • $v(\cdot) : [0, \bar{e}] \to \mathbb{R}$ is strictly increasing and strictly concave
  • $c(\cdot) : [0, \bar{e}] \to \mathbb{R}$ is strictly increasing and strictly convex

  9. The model with no bias
  • The common payoff component $u(\cdot, \cdot)$ is $C^2$ and satisfies:
  • for every $\gamma \in [\underline{\gamma}, \bar{\gamma}]$, $u(\cdot, \gamma)$ is strictly quasiconcave in $a$ and $\max_a u(a, \gamma) = u(a^*(\gamma), \gamma) = 0$
  • Limit condition: for every $\gamma \in [\underline{\gamma}, \bar{\gamma}]$, $\lim_{a \to -\infty} u(a, \gamma) = \lim_{a \to +\infty} u(a, \gamma) = -\infty$
  • Single crossing condition: for all $(a, \gamma) \in \mathbb{R} \times \Gamma$, $\frac{\partial^2 u(a, \gamma)}{\partial a \, \partial \gamma} > 0$

  10. The model with no bias: Expected payoffs
  Given a delegation set $A$ and an effort level $e$, the parties' expected payoffs are:
    $V_P(A, e) = E[\max_{a \in A} u(a, \gamma) \mid e] + v(e)$
    $V_A(A, e) = E[\max_{a \in A} u(a, \gamma) \mid e] - c(e)$
  Notice that $v(e)$ can be thought of as $E[r(\gamma) \mid e]$, where $r(\cdot)$ is an increasing function

  11. Results with no bias: Floor delegation
  Definition: A delegation set $A$ is a floor if $A = [a, +\infty)$ for some $a \in \mathbb{R}$.
  The agent's optimal action when the delegation set is a floor is
    $\hat{a}(\gamma, a) = \arg\max_{a' \in [a, +\infty)} u(a', \gamma) = \max\{a, a^*(\gamma)\}$
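The floor rule above is simple enough to mirror directly in code: under strict quasiconcavity, the agent takes the bliss point $a^*(\gamma)$ whenever it is allowed and otherwise the floor itself. A minimal sketch, where the bliss-point function `a_star` is an illustrative assumption:

```python
def chosen_action(gamma, floor, a_star):
    # Under a floor delegation set [floor, +inf), strict quasiconcavity
    # of u(., gamma) makes the agent's optimum max{floor, a*(gamma)}:
    # the unconstrained bliss point if allowed, otherwise the floor.
    return max(floor, a_star(gamma))

# Illustrative bliss-point rule (an assumption, not from the paper):
a_star = lambda g: 2.0 * g
print(chosen_action(0.3, 1.0, a_star))  # 1.0 (the floor binds)
print(chosen_action(0.8, 1.0, a_star))  # 1.6 (interior bliss point)
```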

  12. Results with no bias: Interval and floor delegation sets
  Proposition 1
  i) Let $\tilde{A}$ be an optimal delegation set and let $\tilde{e} > 0$ be the optimal level of effort. For every $\gamma \in \Gamma$, let $\tilde{a}(\gamma) = \arg\max_{a \in \tilde{A}} u(a, \gamma)$ denote the action chosen by the agent when the state is $\gamma$. Then the set $\{a : a = \tilde{a}(\gamma)$ for some $\gamma \in [\underline{\gamma}, \bar{\gamma}]\}$ is convex.
  ii) If there is an optimal delegation set, then there is also an optimal floor delegation set.

  13. Results with no bias: Sketch of the proof of Proposition 1
  The proof of part i) is by contradiction. Assume that
    $\int_{\underline{\gamma}}^{\bar{\gamma}} u(a^*(\gamma), \gamma) f(\gamma, \tilde{e}) \, d\gamma > \int_{\underline{\gamma}}^{\bar{\gamma}} u(\tilde{a}(\gamma), \gamma) f(\gamma, \tilde{e}) \, d\gamma$
  (the other case is similar). By continuity, there exists a unique $a \in (a^*(\underline{\gamma}), a^*(\bar{\gamma})]$ such that
    $\int_{\underline{\gamma}}^{\bar{\gamma}} u(\tilde{a}(\gamma), \gamma) f(\gamma, \tilde{e}) \, d\gamma = \int_{\underline{\gamma}}^{\bar{\gamma}} u(\hat{a}(\gamma, a), \gamma) f(\gamma, \tilde{e}) \, d\gamma$

  14. Results with no bias
  Furthermore, quasiconcavity and single crossing of $u(\cdot, \cdot)$ guarantee that there exists a unique $\hat{\gamma} < (a^*)^{-1}(a)$ such that
    $u(\tilde{a}(\gamma), \gamma) > u(\hat{a}(\gamma, a), \gamma)$ if and only if $\gamma < \hat{\gamma}$
  If the principal adopts the floor delegation set $[a, +\infty)$, the agent prefers $\tilde{e}$ to lower levels of effort. The difference $u(\hat{a}(\gamma, a), \gamma) - u(\tilde{a}(\gamma), \gamma)$ is negative (positive) below (above) $\hat{\gamma}$. Thus, it follows from MLRP that
    $\int_{\underline{\gamma}}^{\bar{\gamma}} [u(\hat{a}(\gamma, a), \gamma) - u(\tilde{a}(\gamma), \gamma)] f(\gamma, \tilde{e}) \, d\gamma > \int_{\underline{\gamma}}^{\bar{\gamma}} [u(\hat{a}(\gamma, a), \gamma) - u(\tilde{a}(\gamma), \gamma)] f(\gamma, e) \, d\gamma$ for every $e < \tilde{e}$

  15. Results with no bias
  From the optimality of $\tilde{A}$ given $\tilde{e}$, we have:
    $\int_{\underline{\gamma}}^{\bar{\gamma}} u(\tilde{a}(\gamma), \gamma) f(\gamma, \tilde{e}) \, d\gamma - c(\tilde{e}) > \int_{\underline{\gamma}}^{\bar{\gamma}} u(\tilde{a}(\gamma), \gamma) f(\gamma, e) \, d\gamma - c(e)$
  Combining the two inequalities, we obtain:
    $\int_{\underline{\gamma}}^{\bar{\gamma}} u(\hat{a}(\gamma, a), \gamma) f(\gamma, \tilde{e}) \, d\gamma - c(\tilde{e}) > \int_{\underline{\gamma}}^{\bar{\gamma}} u(\hat{a}(\gamma, a), \gamma) f(\gamma, e) \, d\gamma - c(e)$ for every $e < \tilde{e}$

  16. Results with no bias
  If $\tilde{e} < \bar{e}$ and the principal adopts the floor delegation set $[a, +\infty)$, then $\tilde{e}$ is not optimal for the agent (this, again, follows from MLRP). Thus, the agent's optimal effort level $e'$ must be larger than $\tilde{e}$. We have
    $V_A([a, +\infty), e') > V_A([a, +\infty), \tilde{e}) = V_A(\tilde{A}, \tilde{e})$
    $V_P([a, +\infty), e') > V_P(\tilde{A}, \tilde{e})$
  If $\tilde{e} = \bar{e}$, then the agent will continue to choose $\bar{e}$ even if the principal adopts the floor delegation set $[a - \varepsilon, +\infty)$ for some small $\varepsilon > 0$. Again, the original delegation set $\tilde{A}$ is not optimal.

  17. Results with no bias: Existence
  Proposition 2: There exists an optimal delegation set.
  We restrict attention to floor delegation sets and show that the principal's optimization problem admits a solution.

  18. Results with no bias: Comparative statics
  Given the floor delegation set $[a, +\infty)$, let $BR(a)$ denote the set of optimal effort levels.
  Proposition 3
  i) If $a > a'$ then $e > e'$ for every $(e, e') \in BR(a) \times BR(a')$.
  ii) Consider two benefit functions $v_1(\cdot)$ and $v_2(\cdot)$ with $v_1'(e) > v_2'(e)$ for every $e$. Let $e_i$ be an optimal level of effort for the model in which $v = v_i$, for $i = 1, 2$. Then $e_1 > e_2$.
  iii) Consider two cost functions $c_1(\cdot)$ and $c_2(\cdot)$ with $c_1(0) = c_2(0) = 0$ and $c_1'(e) \leq c_2'(e)$ for every $e$. Let $V_P^i$, $i = 1, 2$, denote the principal's payoff under the optimal delegation set when the cost is $c_i(\cdot)$. Then $V_P^1 > V_P^2$.

  19. The model with bias
  Quadratic payoff functions and uniform distributions with shifting support. The agent is biased towards some action:
    $u_P(a, \gamma) = -(\gamma + \beta - a)^2$
    $u_A(a, \gamma) = -(\gamma - a)^2$
  $\beta > 0$ ($\beta < 0$): the principal prefers higher (lower) actions than the agent.

  20. The model with bias
  Consider a simple family of probability distributions: when the effort is $\gamma > 0$, the state is uniformly distributed on the unit interval $[\gamma, \gamma + 1]$.
  The cost function is quadratic: $c(\gamma) = \gamma^2 / 2$.
  The benefit function $v(\gamma)$ is concave.

  21. The model with bias
  The delegation set $A$ and the effort level $\gamma$ induce expected payoffs:
    $V_P(A, \gamma) = -\int_{\gamma}^{\gamma+1} (\tilde{\gamma} + \beta - \hat{a}(\tilde{\gamma}, A))^2 \, d\tilde{\gamma} + v(\gamma)$
    $V_A(A, \gamma) = -\int_{\gamma}^{\gamma+1} (\tilde{\gamma} - \hat{a}(\tilde{\gamma}, A))^2 \, d\tilde{\gamma} - \frac{\gamma^2}{2}$
  where $\hat{a}(\tilde{\gamma}, A) = \arg\max_{a \in A} -(\tilde{\gamma} - a)^2$.

  22. Results with bias: Necessary conditions for optimal effort
  Given a delegation set $A$, the agent solves the following problem:
    $\max_{\gamma \geq 0} \int_{\gamma}^{\gamma+1} \left[ \max_{a \in A} -(\tilde{\gamma} - a)^2 \right] d\tilde{\gamma} - \frac{\gamma^2}{2} = \max_{\gamma \geq 0} -\int_{\gamma}^{\gamma+1} (\tilde{\gamma} - \hat{a}(\tilde{\gamma}, A))^2 \, d\tilde{\gamma} - \frac{\gamma^2}{2}$
  First-order condition for an interior $\gamma$:
    $(\gamma - \hat{a}(\gamma, A))^2 - (\gamma + 1 - \hat{a}(\gamma + 1, A))^2 = \gamma$
  In general, the first-order condition is not sufficient (the problem is not necessarily concave).
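The agent's effort choice can also be solved numerically for intuition. The sketch below (all parameter values are illustrative assumptions, not taken from the paper) grid-searches the agent's effort in the quadratic-uniform model under a floor delegation set, and shows that a higher floor induces higher effort, in line with the comparative statics of Proposition 3(i).

```python
# Agent's best-response effort under a floor set [floor, +inf) in the
# quadratic-uniform model: state ~ U[g, g+1], u_A = -(state - a)^2,
# cost g^2/2. Solved by grid search; parameters are illustrative.
def a_hat(state, floor):
    return max(floor, state)          # agent's bliss point is the state

def payoff(g, floor, n=800):
    # Expected payoff -int_g^{g+1} (s - a_hat(s))^2 ds - g^2/2,
    # approximated by a midpoint Riemann sum.
    total = 0.0
    for i in range(n):
        s = g + (i + 0.5) / n
        total += -(s - a_hat(s, floor)) ** 2
    return total / n - g ** 2 / 2

def best_effort(floor, grid=300, g_max=3.0):
    return max((g_max * i / grid for i in range(grid + 1)),
               key=lambda g: payoff(g, floor))

# A higher floor pushes the agent to exert more effort.
e_low, e_high = best_effort(0.5), best_effort(1.5)
print(e_low < e_high)  # True
```

For the floor 1.5, the interior first-order condition $(1.5 - \gamma)^2 = \gamma$ gives $\gamma \approx 0.677$, which the grid search reproduces.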

  23. Results with bias: Concavity under interval delegation
  Lemma 1: Suppose that the delegation set is an interval $[a, \bar{a}]$ for some $a \leq \bar{a}$. For every $\gamma$, let $z(\gamma)$ denote the agent's expected payoff if the effort is $\gamma$:
    $z(\gamma) = \int_{\gamma}^{\gamma+1} \max_{a' \in [a, \bar{a}]} u_A(a', \tilde{\gamma}) \, d\tilde{\gamma} - \frac{\gamma^2}{2}$
  The function $z(\cdot)$ is concave.

  24. Results with bias: Optimal interval delegation
  Proposition 4: Let $\gamma > 0$ be an optimal level of effort and let $\tilde{A}$ denote the smallest optimal delegation set. Then $\tilde{A}$ is convex. Moreover, either $\tilde{A} \subseteq [\gamma, \gamma + 1]$ or $\tilde{A} = \{\bar{a}\}$ with $\bar{a} > \gamma + 1$.
  To incentivize the agent to exert high effort levels, the principal may allow only one action: $\tilde{A} = \{\bar{a}\}$ with $\bar{a} > \gamma + 1$.
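The singleton case can be illustrated with a short calculation. With $A = \{\bar{a}\}$, the first-order condition $(\gamma - \bar{a})^2 - (\gamma + 1 - \bar{a})^2 = \gamma$ simplifies to $2\bar{a} - 2\gamma - 1 = \gamma$, i.e. $\gamma^* = (2\bar{a} - 1)/3$. The sketch below checks this closed form against a grid search for an illustrative $\bar{a}$ (a chosen example value, not from the paper).

```python
# Singleton delegation A = {abar} in the quadratic-uniform model:
# agent payoff is -int_g^{g+1} (s - abar)^2 ds - g^2/2. The FOC
# (g - abar)^2 - (g+1 - abar)^2 = g reduces to g* = (2*abar - 1)/3.
def payoff_singleton(g, abar, n=400):
    # Midpoint Riemann approximation of the expected payoff.
    total = sum(-(g + (i + 0.5) / n - abar) ** 2 for i in range(n))
    return total / n - g ** 2 / 2

abar = 2.5                            # illustrative single action
g_star = (2 * abar - 1) / 3           # closed-form FOC solution
g_grid = max((3.0 * i / 3000 for i in range(3001)),
             key=lambda g: payoff_singleton(g, abar))
print(abs(g_grid - g_star) < 0.01)    # True: grid search matches the FOC
print(abar > g_star + 1)              # True: abar lies above the support
```

Here the agent exerts strictly positive effort only because the single allowed action sits above the whole support $[\gamma^*, \gamma^* + 1]$, which is exactly the force behind the singleton case of Proposition 4.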
