Dynamic Delegation of Experimentation
Yingni Guo, Northwestern University

Yingni Guo (NU) Delegation of Experimentation 1 / 55

Introduction: Delegating Experimentation
I study how to manage innovation in a hierarchical organization. A principal


1. I. Model: Delegation
Principal has full commitment and cannot use transfers. She determines a delegation contract at time 0. By the Revelation Principle, Principal offers a direct mechanism π : Θ → Π:

    sup ∫_Θ U_ρ(π(θ), θ) dF(θ)

    subject to U_α(π(θ), θ) ≥ U_α(π(θ′), θ) for all θ, θ′ ∈ Θ.

2. Outline
1 Model
2 Single-player benchmark
3 Characterizing the policy space
4 Main results
5 More general results

3. II. Single-Player Benchmark: Posterior Beliefs
Given prior p_0 and the history of events up to time t, p_t = P_t[ω = 1]. Before the first success, p_t satisfies the differential equation

    ṗ_t = −λ π_t p_t (1 − p_t).

At the first success, p_t jumps to one.
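The drift equation can be sanity-checked numerically. The following sketch is not from the slides: λ and the prior p_0 are illustrative values, and the closed form comes from integrating the log-odds of p_t.

```python
import math

# Sanity check of the drift equation (illustrative parameters).
# Under full experimentation (pi_t = 1) and no success,
# p_dot = -lam * p * (1 - p), whose log-odds form integrates to
# logit(p_t) = logit(p0) - lam * t.

def simulate_belief(p0, lam, T, dt=1e-3, pi=1.0):
    """Euler-integrate p_dot = -lam * pi * p * (1 - p), no success."""
    p = p0
    for _ in range(int(T / dt)):
        p += -lam * pi * p * (1 - p) * dt
    return p

def belief_closed_form(p0, lam, t):
    num = p0 * math.exp(-lam * t)
    return num / (num + 1 - p0)

p_num = simulate_belief(p0=0.6, lam=1.0, T=2.0)
p_exact = belief_closed_form(0.6, 1.0, 2.0)
assert abs(p_num - p_exact) < 1e-2
```

The belief drifts down only while resource flows to R (π_t scales the learning rate), which is why the experimentation policy and the belief path are intertwined.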

4. II. Single-Player Benchmark: Single Player's Preferred Policy
Player i's preferred policy is Markov wrt p_t, characterized by a cutoff p*_i s.t.

    π_t = 1 if p_t > p*_i,    π_t = 0 if p_t ≤ p*_i.

The cutoff belief is

    p*_i = r / (r + (λ + r) η_i).

Agent's cutoff is lower than Principal's: p*_α < p*_ρ.
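A quick numeric illustration of the cutoff formula (parameter values assumed, not from the slides): since p*_i is decreasing in the benefit-cost ratio η_i, the player with the higher ratio has the lower cutoff.

```python
# Numeric illustration (assumed parameter values): the cutoff
# p*_i = r / (r + (lam + r) * eta_i) is decreasing in eta_i, so
# Agent (eta_alpha > eta_rho) keeps experimenting at beliefs where
# Principal would already stop.

def cutoff(r, lam, eta):
    return r / (r + (lam + r) * eta)

r, lam = 0.1, 1.0
eta_alpha, eta_rho = 2.0, 1.0          # assumed: eta_alpha > eta_rho
p_star_alpha = cutoff(r, lam, eta_alpha)
p_star_rho = cutoff(r, lam, eta_rho)
assert p_star_alpha < p_star_rho       # p*_alpha < p*_rho, as on the slide
```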

5. II. Single-Player Benchmark: Agency Problem Revisited
τ_i(θ): Player i's preferred stopping time given θ.
[Figure: the belief starts at the prior θ and drifts down to Principal's cutoff p*_ρ, reached at time τ_ρ(θ).]

6. II. Single-Player Benchmark: Agency Problem Revisited
For a given prior, Agent prefers to experiment longer than Principal.
[Figure: from prior θ, the belief reaches Principal's cutoff p*_ρ at τ_ρ(θ) and Agent's lower cutoff p*_α at the later time τ_α(θ).]

7. II. Single-Player Benchmark: Agency Problem Revisited
Higher priors warrant longer experimentation.
[Figure: a higher prior θ′ > θ reaches the cutoff p*_ρ later: τ_ρ(θ′) > τ_ρ(θ).]

8. II. Single-Player Benchmark: Agency Problem Revisited
Lower types (those with lower θ) have incentives to mimic higher types.
[Figure: belief paths from priors θ and θ′ > θ, with stopping times ordered τ_ρ(θ) < τ_α(θ) and τ_ρ(θ′) < τ_α(θ′).]

10. III. Characterizing the Policy Space: A Policy as a Pair of Numbers
(Total Expected Discounted) Resource Pair. For a fixed policy π, define w1(π) and w0(π) as follows:

    w1(π) ≡ E[ ∫_0^∞ r e^{−rt} π_t dt | π, ω = 1 ] ∈ [0, 1],
    w0(π) ≡ E[ ∫_0^∞ r e^{−rt} π_t dt | π, ω = 0 ] ∈ [0, 1].

w1(π): (total expected discounted) resource allocated to R under π in state 1.
w0(π): (total expected discounted) resource allocated to R under π in state 0.
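These two numbers can be estimated directly from the definition. The sketch below is not from the slides: it Monte-Carlos the resource pair of one illustrative stopping-time policy (run R until a deadline T, keep R forever after a success) and checks the estimate against the closed form it implies.

```python
import math
import random

# Sketch (illustrative policy and parameters): estimate the resource
# pair (w1(pi), w0(pi)) by Monte Carlo for a stopping-time policy:
# run R until deadline T; if a success arrives by T, run R forever;
# otherwise switch to S. In state 0 no success ever arrives, so
# w0 = 1 - e^{-rT} exactly; in state 1 the first success is Exp(lam).

def resource_pair_stopping(T, lam, r, n=100_000, seed=0):
    rng = random.Random(seed)
    w0 = 1 - math.exp(-r * T)               # deterministic in state 0
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(lam)            # first success time, state 1
        if s < T:
            total += 1.0                    # R forever: integral of r e^{-rt} is 1
        else:
            total += 1 - math.exp(-r * T)   # no success by T: stop at T
    return total / n, w0

T, lam, r = 2.0, 1.0, 0.1
w1, w0 = resource_pair_stopping(T, lam, r)
w1_exact = 1 - math.exp(-(lam + r) * T)     # closed form for this policy
assert abs(w1 - w1_exact) < 0.01
```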

12. III. Characterizing the Policy Space: A Policy as a Pair of Numbers
Summary Statistic for the Payoffs.
Lemma 1 (A Policy as a Pair of Numbers). For a given policy π ∈ Π and prior p_0 ∈ [0, 1], player i's payoff can be written as

    U_i(π, p_0) − s_i = p_0 (λh_i − s_i) w1(π) − (1 − p_0) s_i w0(π).

(w1(π), w0(π)) is a summary statistic of π for the payoffs.

13. III. Characterizing the Policy Space: Feasible Set
Feasible set Γ: the set of feasible resource pairs (w1, w0),

    Γ = { (w1, w0) | (w1, w0) = (w1(π), w0(π)), π ∈ Π }.

19. III. Characterizing the Policy Space: Feasible Set
Characterizing the Feasible Set.

    ŵ ∈ bd(Γ) ⟺ ∃ p ∈ R², ‖p‖ = 1, s.t. ŵ ∈ argmax_{w ∈ Γ} p · w.

Lemma 2 (Feasible Set). Γ = co{ (w1(π), w0(π)), π ∈ Π^M }, where Π^M are the Markov policies (wrt p).

20. III. Characterizing the Policy Space: Feasible Set
Canonical Markov Policies: Poisson Conclusive News
Stopping-time policies (lower-cutoff Markov policies): allocate all resource to R until a fixed time; if at least one success occurs by then, allocate all resource to R forever; otherwise, switch to S.
Slack-after-success policies (upper-cutoff Markov policies): allocate all resource to R until the first success; then allocate a fixed fraction to R.
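Closed forms for the two families follow from the definitions of w1 and w0 (they are not stated on the slides; λ and r below are illustrative). These are the two curves the following figures trace.

```python
import math

# Closed forms implied by the w1/w0 definitions (illustrative lam, r).
# A stopping-time policy with deadline T gives
#   w0 = 1 - e^{-rT},  w1 = 1 - e^{-(lam+r)T};
# a slack-after-success policy with post-success fraction k gives
#   w0 = 1,  w1 = r/(lam+r) + k * lam/(lam+r).

lam, r = 1.0, 0.1

def stopping_time_pair(T):
    return (1 - math.exp(-(lam + r) * T), 1 - math.exp(-r * T))

def slack_after_success_pair(k):
    return (r / (lam + r) + k * lam / (lam + r), 1.0)

# Endpoints match the labeled points in the figure:
# A = (0, 0) allocates everything to S; C = (1, 1) runs R forever;
# E = (r/(lam+r), 1) switches to S at the first success.
assert stopping_time_pair(0.0) == (0.0, 0.0)                 # point A
w1_C, w0_C = slack_after_success_pair(1.0)
assert abs(w1_C - 1.0) < 1e-9 and w0_C == 1.0                # point C
w1_E, w0_E = slack_after_success_pair(0.0)
assert abs(w1_E - r / (lam + r)) < 1e-12 and w0_E == 1.0     # point E
```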

21–26. III. Characterizing the Policy Space: Feasible Set
Canonical Markov Policies: Poisson Conclusive News
[Figure: the (w1(π), w0(π)) plane, with slack-after-success policies along the top and stopping-time policies along the lower curve. Labeled points:
A: allocate all resource to S.
B: switch to S at some fixed time if no success occurs.
C: allocate all resource to R.
D: allocate all resource to R until 1st success; then allocate some fixed fraction to R.
E: allocate all resource to R until 1st success; then switch to S.]

27. III. Characterizing the Policy Space: Feasible Set
Feasible Set: Poisson Conclusive News
[Figure: the feasible set Γ, the region between the slack-after-success curve above and the stopping-time curve below in the (w1(π), w0(π)) plane.]

28. III. Characterizing the Policy Space: Feasible Set
Lemma 3 (Feasible Set: Poisson Conclusive News). The feasible set is the convex hull of the image of stopping-time and slack-after-success policies.

31. III. Characterizing the Policy Space: Preferences over Feasible Pairs
Player i's payoff given π and θ is

    U_i(π, θ) − s_i = s_i · [ θ η_i w1(π) − (1 − θ) w0(π) ].

Player i's preferences over (w1, w0) ∈ Γ are determined by:
θ: the prior belief that the state is 1;
η_i: the benefit-cost ratio from the experimentation.
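To see how the two players rank points of Γ, one can maximize each player's objective over the stopping-time (lower) boundary. A sketch with illustrative parameters and closed forms implied by the Poisson conclusive-news case, none of which are stated on the slides:

```python
import math

# Sketch (illustrative values): each player's preferred pair
# maximizes theta * eta_i * w1 - (1 - theta) * w0 over Gamma.
# Here we scan the stopping-time boundary.

lam, r, theta = 1.0, 0.1, 0.5
eta_alpha, eta_rho = 2.0, 1.0      # Agent's ratio exceeds Principal's

def stopping_time_pair(T):
    return (1 - math.exp(-(lam + r) * T), 1 - math.exp(-r * T))

def preferred_pair(eta, grid=20_000, T_max=20.0):
    cands = (stopping_time_pair(T_max * i / grid) for i in range(grid + 1))
    return max(cands, key=lambda w: theta * eta * w[0] - (1 - theta) * w[1])

w_agent = preferred_pair(eta_alpha)
w_principal = preferred_pair(eta_rho)
# Agent's steeper indifference curve picks a point further along the
# boundary: weakly more resource to R in both states.
assert w_agent[0] >= w_principal[0] and w_agent[1] >= w_principal[1]
```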

35. III. Characterizing the Policy Space: Preferences over Feasible Pairs
Preferences over Feasible Pairs: Indifference Curves
[Figure: in the (w1(π), w0(π)) plane, Principal's indifference curve given θ is a line of slope θη_ρ/(1 − θ) through her preferred pair P = (w1_ρ(θ), w0_ρ(θ)); Agent's indifference curve given θ has the steeper slope θη_α/(1 − θ) through his preferred pair A = (w1_α(θ), w0_α(θ)).]

36–39. III. Characterizing the Policy Space: Delegation Problem Reformulated
Replace policy space Π with feasible set Γ: a pair (w1, w0) ∈ Γ yields payoff θη_i w1 − (1 − θ) w0.
Principal offers a direct mechanism (w1, w0) : Θ → Γ:

    max ∫_Θ [ θη_ρ w1(θ) − (1 − θ) w0(θ) ] dF(θ)

    subject to θη_α w1(θ) − (1 − θ) w0(θ) ≥ θη_α w1(θ′) − (1 − θ) w0(θ′) for all θ, θ′ ∈ Θ.

Primitives: payoff parameters η_α > η_ρ; feasible set Γ; type distribution F.

41. IV. Main Results: The Cutoff Rule
Definition 1. The cutoff rule is the contract (w1, w0) s.t.

    (w1(θ), w0(θ)) = (w1_α(θ), w0_α(θ)) if θ ≤ θ*,
    (w1(θ), w0(θ)) = (w1_α(θ*), w0_α(θ*)) if θ > θ*.
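Definition 1 is easy to state in code. In the sketch below nothing is from the slides: θ*, λ, r, and η_α are illustrative, and Agent's preferred pair is computed by scanning the stopping-time boundary of the Poisson conclusive-news case.

```python
import math

# Sketch of the cutoff rule (all numbers assumed): types up to an
# assumed theta* get Agent's preferred pair for their own type;
# every higher type is pooled at Agent's preferred pair for theta*.

lam, r, eta_alpha = 1.0, 0.1, 2.0
theta_star = 0.6                   # assumed cutoff type

def stopping_time_pair(T):
    return (1 - math.exp(-(lam + r) * T), 1 - math.exp(-r * T))

def agent_preferred(theta, grid=20_000, T_max=30.0):
    """(w1_alpha(theta), w0_alpha(theta)) on the lower boundary."""
    cands = (stopping_time_pair(T_max * i / grid) for i in range(grid + 1))
    return max(cands, key=lambda w: theta * eta_alpha * w[0] - (1 - theta) * w[1])

def cutoff_rule(theta):
    """Types up to theta* get their preferred pair; higher types pool."""
    return agent_preferred(min(theta, theta_star))

# All types above theta* receive the same (pooled) pair.
assert cutoff_rule(0.7) == cutoff_rule(0.9) == agent_preferred(theta_star)
```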

42–45. IV. Main Results: The Cutoff Rule
Delegation Set under Cutoff Rule
[Figure: the feasible set Γ in the (w1(π), w0(π)) plane. Agent's preferred policies and Principal's preferred policies each trace a segment of the lower boundary, indexed by the prior (labels θη_α and θη_ρ at the extreme types); the point θ*η_α marks where the delegation set stops tracking Agent's preferred policies.]

46. IV. Main Results: The Cutoff Rule: Optimality
Main assumption. For all θ ≤ θ*, the following condition is satisfied:

    η_α / (η_α − η_ρ) ≥ (3θ − 1) − (f′(θ)/f(θ)) θ(1 − θ).

Proposition 1. The cutoff rule is optimal if the main assumption holds.
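The condition is straightforward to check on a grid of types. The sketch below assumes a uniform type distribution (so f′ = 0), an assumed cutoff θ*, and illustrative payoff parameters; none of these numbers come from the slides.

```python
# Grid check of the main assumption (distribution and parameters
# assumed). With uniform F on [0, 1], f' = 0 and the condition
# reduces to eta_alpha / (eta_alpha - eta_rho) >= 3*theta - 1.

eta_alpha, eta_rho = 2.0, 1.0
theta_star = 0.6                  # assumed cutoff type

def assumption_holds(theta, f_prime_over_f=0.0):
    lhs = eta_alpha / (eta_alpha - eta_rho)
    rhs = (3 * theta - 1) - f_prime_over_f * theta * (1 - theta)
    return lhs >= rhs

# Here lhs = 2 while rhs <= 3*theta_star - 1 = 0.8, so the
# assumption holds for every type up to the cutoff.
assert all(assumption_holds(i / 100) for i in range(int(theta_star * 100) + 1))
```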

47. IV. Main Results: Implementation
Implementing the Cutoff Rule
Calibrated belief (p_t): the prior belief is p_0 = θ*; without any success, it drifts down according to ṗ_t = −λπ_t p_t(1 − p_t); upon the first success, it jumps to one.
Behavior: a cutoff is imposed at p*_α; Agent has full flexibility as long as the belief stays above the cutoff; Agent is required to stop once the cutoff is reached.
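Absent any success, the implementation imposes an effective deadline: the time the calibrated belief takes to drift from θ* down to p*_α. A sketch with illustrative parameters; the log-odds closed form is implied by the drift equation, not stated on the slides.

```python
import math

# Sketch (illustrative parameters): the implied deadline is the
# hitting time of p*_alpha for the calibrated belief started at
# theta*. With pi_t = 1 the drift equation integrates in log-odds:
# logit(p_t) = logit(p0) - lam * t.

lam, r, eta_alpha = 1.0, 0.1, 2.0
theta_star = 0.6                               # assumed cutoff type
p_star_alpha = r / (r + (lam + r) * eta_alpha) # Agent's cutoff belief

def logit(p):
    return math.log(p / (1 - p))

deadline = (logit(theta_star) - logit(p_star_alpha)) / lam

# Cross-check the hitting time with an Euler integration.
p, t, dt = theta_star, 0.0, 1e-4
while p > p_star_alpha:
    p += -lam * p * (1 - p) * dt
    t += dt
assert abs(t - deadline) < 1e-2
```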

48–53. IV. Main Results: Implementation
Implementing the Cutoff Rule (cont.)
[Figure, shown in steps: the calibrated belief p_t starts at p_0 = θ* and, absent success, drifts down toward the cutoff p*_α; a 1st success sends it to one. Successive frames mark where type θ stops and where all types with θ ≥ θ* stop.]

54–55. IV. Main Results: Time Consistency
Definition 2. Fix a (direct or indirect) mechanism. It is time-consistent if Principal finds it optimal to fulfill the mechanism after any history on path.
Proposition 2. The cutoff rule is time-consistent if the main assumption holds.

56–61. IV. Main Results: Time Consistency
Time Consistency: Principal's Posterior Belief
[Figure, shown in steps: the calibrated belief starting at p_0(θ*), with Principal's cutoff p*_ρ drawn above Agent's cutoff p*_α; as before, type θ stops and types with θ ≥ θ* stop.]

62. IV. Main Results: Cutoff Type
The cutoff type θ* is the lowest value in Θ s.t. Agent's preferred policy given θ* equals Principal's preferred policy if she believes that θ ≥ θ*. For any θ̂ > θ*, Agent's preferred policy given θ̂ is above Principal's preferred policy if she believes that θ ≥ θ̂.
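The defining fixed point can be found numerically under extra assumptions not in the slides: a uniform type distribution on [0, 1], illustrative payoff parameters, and the observation that, since payoffs are linear in θ, Principal's preferred policy under the belief θ ≥ t is her preferred policy at the truncated mean (1 + t)/2.

```python
import math

# Sketch (all assumptions mine, not from the slides): bisect for the
# cutoff type under uniform F on [0, 1]. A player with ratio eta and
# prior q prefers the stopping time
#   T(q, eta) = max(0, ln(q * eta * (lam + r) / ((1 - q) * r)) / lam),
# i.e. the time for the belief to drift from q to that player's
# cutoff; theta* equates Agent's preferred T at theta* with
# Principal's preferred T at the truncated mean.

lam, r = 1.0, 0.1
eta_alpha, eta_rho = 3.0, 1.0

def preferred_T(q, eta):
    x = q * eta * (lam + r) / ((1 - q) * r)
    return max(0.0, math.log(x) / lam) if x > 0 else 0.0

def gap(t):
    """Agent's preferred T at t minus Principal's under theta >= t."""
    return preferred_T(t, eta_alpha) - preferred_T((1 + t) / 2, eta_rho)

lo, hi = 1e-6, 1 - 1e-6
for _ in range(100):               # bisection; gap is increasing in t
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
theta_star = (lo + hi) / 2

# Under these assumptions the gap reduces to ln(3t / (1 + t)),
# whose root is eta_rho / (eta_alpha - eta_rho) = 0.5.
assert abs(theta_star - 0.5) < 1e-6
```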

63–65. IV. Main Results: Cutoff Type
The Cutoff Type (cont.)
[Figure: the delegation-set picture again, now adding the points θ*η_α and θ*η_ρ where the cutoff type's preferred policies sit on the boundary of Γ.]

66–71. IV. Main Results: Cutoff Type
Over- and Under-Experimentation
[Figure, shown in steps: stopping time τ plotted against type θ ∈ [θ̲, θ̄]. Agent's preferred stopping time lies above Principal's. The delegation rule follows Agent's preferred stopping time up to θ* and is capped at τ_α(θ*) thereafter, so types below θ* over-experiment and types above θ* under-experiment relative to Principal's preferred stopping time.]

73. V. More General Results: Poisson Inconclusive News
Feasible Set: Poisson Inconclusive News
[Figure: the feasible set Γ in the (w1(π), w0(π)) plane, bounded by upper-cutoff Markov policies on one side and lower-cutoff Markov policies on the other.]
