Organizational Equilibrium with Capital
Marco Bassetto, Zhen Huo, and José-Víctor Ríos-Rull
FRB of Chicago (Mpls), Yale University, University of Pennsylvania, UCL, CAERP
Economic Growth and Fluctuations, Barcelona GSE, June 14, 2018


Benchmark I: Markov Perfect Equilibrium

• Take future g(k) as given:
      max_{k′} u[f(k) − k′] + δβ Ω(k′; g)
  with continuation value Ω(k; g) = u[f(k) − g(k)] + β Ω[g(k); g]
• The Generalized Euler Equation (GEE):
      u_c = β u′_c [δ f′_k + (1 − δ) g′_k]
• The equilibrium features a constant saving rate:
      k′ = [δαβ / (1 − αβ + δαβ)] k^α = s^M k^α
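With log utility, f(k) = k^α, and full depreciation, the constant-saving-rate policy g(k) = s^M k^α can be checked against the GEE numerically. A minimal sketch (the parameter values α = 0.3, β = 0.95, δ = 0.7 are illustrative, not from the paper):

```python
# Check that s_M = δαβ/(1 - αβ + δαβ) satisfies the GEE
# u_c = β u'_c [δ f'_k + (1 - δ) g'_k], with u = log, f(k) = k^α,
# g(k) = s_M k^α, so c = (1 - s_M) k^α and g'(k) = s_M α k^(α-1).
alpha, beta, delta = 0.3, 0.95, 0.7   # illustrative parameters

s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)

k = 0.5                               # any capital level works
k_next = s_M * k ** alpha
c, c_next = (1 - s_M) * k ** alpha, (1 - s_M) * k_next ** alpha

lhs = 1 / c                           # u_c
f_prime = alpha * k_next ** (alpha - 1)
g_prime = s_M * alpha * k_next ** (alpha - 1)
rhs = beta * (1 / c_next) * (delta * f_prime + (1 - delta) * g_prime)

assert abs(lhs - rhs) < 1e-12
print(f"s_M = {s_M:.4f}, GEE residual = {lhs - rhs:.2e}")
```

The residual vanishes at any k, confirming that the GEE holds state by state under the constant-saving-rate policy.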

Benchmark II: Ramsey Allocation with Commitment

• Choose all future allocations at period 0:
      max_{k_1} u[f(k_0) − k_1] + δβ Ω(k_1)
  with continuation value Ω(k) = max_{k′} u[f(k) − k′] + β Ω(k′)
• The sequence of saving rates is
      s_t = s^M = αδβ / (1 − αβ + δαβ)  for t = 0
      s_t = s^R = αβ                    for t > 0
• Steady-state capital in the Markov equilibrium is lower than Ramsey's: s^M < s^R

Elements of Org Equil: Proposal and Value Function

• A proposal is a sequence of saving rates {s_0, s_1, s_2, …}
• Everyone in the future can implement the same proposal
• Given an initial capital k_0, the proposal induces a sequence of capitals:
      k_1 = s_0 k_0^α
      k_2 = s_1 k_1^α = s_1 s_0^α k_0^{α²}
      …
      k_t = k_0^{α^t} Π_{j=0}^{t−1} s_j^{α^{t−j−1}}
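The closed form for k_t can be verified against direct iteration of k_{t+1} = s_t k_t^α; a small sketch with an arbitrary (illustrative) proposal:

```python
# The induced sequence k_t = k0^(α^t) · Π_{j=0}^{t-1} s_j^(α^(t-j-1))
# should match direct iteration of k_{t+1} = s_t k_t^α.
alpha = 0.3
k0 = 0.5
s = [0.15, 0.18, 0.20, 0.21, 0.22]    # an arbitrary proposal (illustrative)

# direct iteration
k_iter = [k0]
for s_t in s:
    k_iter.append(s_t * k_iter[-1] ** alpha)

# closed form
def k_closed(t):
    val = k0 ** (alpha ** t)
    for j in range(t):
        val *= s[j] ** (alpha ** (t - j - 1))
    return val

for t in range(len(s) + 1):
    assert abs(k_iter[t] - k_closed(t)) < 1e-12
print("closed form matches iteration")
```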

Proposal and Value Function

• Given a proposal (sequence of saving rates) {s_0, s_1, s_2, …} and an initial capital k_0, we get the sequence of capitals
      k_t = k_0^{α^t} Π_{j=0}^{t−1} s_j^{α^{t−j−1}}
• The lifetime utility of the agent who makes the proposal is
      U(k_0, s_0, s_1, …) = log[(1 − s_0) k_0^α] + δ ∑_{j=1}^{∞} β^j log[(1 − s_j) k_j^α]
                          = [α(1 − αβ + δαβ)/(1 − αβ)] log k_0 + log(1 − s_0) + δ ∑_{j=1}^{∞} β^j log[(1 − s_j) Π_{τ=0}^{j−1} s_τ^{α^{j−τ}}]
                          ≡ φ log k_0 + V(s_0, s_1, …)
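The separability claim — that U(k_0, s_0, s_1, …) − φ log k_0 does not depend on k_0 — can be checked numerically by truncating the infinite sum. A sketch with a constant proposal and illustrative parameters:

```python
# Separability: U(k0, s0, s1, ...) - φ log k0 should not depend on k0,
# with φ = α(1 - αβ + δαβ)/(1 - αβ). The infinite sum is truncated at T
# terms (β^T is negligible). Parameter values are illustrative.
import math

alpha, beta, delta = 0.3, 0.95, 0.7
phi = alpha * (1 - alpha * beta + delta * alpha * beta) / (1 - alpha * beta)

def U(k0, s, T=2000):
    """Lifetime utility of the proposer under a constant saving rate s."""
    k, total = k0, math.log((1 - s) * k0 ** alpha)
    for j in range(1, T):
        k = s * k ** alpha                       # k_{j} = s k_{j-1}^α
        total += delta * beta ** j * math.log((1 - s) * k ** alpha)
    return total

s = 0.2
V1 = U(0.5, s) - phi * math.log(0.5)   # action payoff for k0 = 0.5
V2 = U(2.0, s) - phi * math.log(2.0)   # action payoff for k0 = 2.0
assert abs(V1 - V2) < 1e-9
print(f"V(s, s, ...) = {V1:.6f} (independent of k0)")
```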

Proposal and Value Function

• The lifetime utility of the agent at period t is
      U(k_t, s_t, s_{t+1}, …) = φ log k_t + V(s_t, s_{t+1}, …)
  the total payoff, of which V(s_t, s_{t+1}, …) is the action payoff
• There is a separability property between capital and saving rates
• True for the initial proposer and all subsequent followers
• This property is crucial to our equilibrium concept
• What type of proposals can be implemented?

Can the Ramsey Outcome be Implemented?

• If the initial agent with k_0 proposes {s^M, s^R, s^R, …}, this implies k_1 = s^M k_0^α
• By following the proposal, the next agent's payoff is
      U(k_1, s^R, s^R, s^R, …) = φ log k_1 + V(s^R, s^R, s^R, …)
• By copying the proposal, the next agent's payoff is
      U(k_1, s^M, s^R, s^R, …) = φ log k_1 + V(s^M, s^R, s^R, …) > φ log k_1 + V(s^R, s^R, s^R, …)
• Copying is better than following: the Ramsey outcome cannot be implemented

Can a Constant Saving Rate be Implemented?

• Suppose the initial agent proposes {s, s, s, …}
• By following the proposal, the payoff of the agent in period t is
      U(k_t, s, s, …) = φ log k_t + V(s, s, …)
  where
      V(s, s, …) ≡ H(s) = [1 + βδ/(1 − β)] log(1 − s) + δαβ/[(1 − αβ)(1 − β)] log s
• To be followed, the constant saving rate has to be s* = argmax H(s)
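To locate s* = argmax H(s), a crude grid search suffices; the sketch below (illustrative parameters) checks it against the closed form s* = δαβ / [(1 − β + δβ)(1 − αβ) + δαβ] given later in the deck.

```python
# Maximize H(s) = [1 + βδ/(1-β)] log(1-s) + δαβ/[(1-αβ)(1-β)] log s
# over (0, 1) and compare with the closed-form s*.
import math

alpha, beta, delta = 0.3, 0.95, 0.7   # illustrative parameters

def H(s):
    a = (1 + beta * delta / (1 - beta)) * math.log(1 - s)
    b = delta * alpha * beta / ((1 - alpha * beta) * (1 - beta)) * math.log(s)
    return a + b

s_star = delta * alpha * beta / (
    (1 - beta + delta * beta) * (1 - alpha * beta) + delta * alpha * beta)

# crude grid search over (0, 1)
grid = [i / 100000 for i in range(1, 100000)]
s_hat = max(grid, key=H)
assert abs(s_hat - s_star) < 1e-4
print(f"grid argmax = {s_hat:.5f}, closed form s* = {s_star:.5f}")
```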

Optimal Constant Saving Rate

      s^M < s* < s^R
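The ordering can be confirmed directly from the three closed forms; the inequalities are strict for δ ∈ (0, 1). A sketch with illustrative parameters:

```python
# Verify s_M < s* < s_R for the closed-form saving rates
# (α capital share, β discount factor, δ present-bias parameter).
alpha, beta, delta = 0.3, 0.95, 0.7   # illustrative parameters

s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)
s_R = alpha * beta
s_star = delta * alpha * beta / (
    (1 - beta + delta * beta) * (1 - alpha * beta) + delta * alpha * beta)

assert s_M < s_star < s_R
print(f"s_M = {s_M:.4f} < s* = {s_star:.4f} < s_R = {s_R:.4f}")
```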

Can {s*, s*, …} be Implemented?

• If the initial agent proposes {s*, s*, …}, no one has an incentive to copy
• But she prefers to choose s^M herself and wait for the next agent to propose {s*, s*, …}:
      U(k_0, s^M, s*, s*, …) = φ log k_0 + V(s^M, s*, s*, …) > φ log k_0 + V(s*, s*, s*, …)
• The constant-s* proposal cannot be implemented: no one wants to propose it
• But something else can be implemented, which converges to s*
• For this, we proceed to define the organizational equilibrium

Organizational Equilibrium

Definition. A sequence of saving rates {s_τ}_{τ=0}^{∞} is organizationally admissible if
1. V(s_t, s_{t+1}, s_{t+2}, …) is (weakly) increasing in t
2. The first agent has no incentive to delay the proposal:
      V(s_0, s_1, s_2, …) ≥ max_s V(s, s_0, s_1, s_2, …)
Within organizationally admissible sequences, the sequence that attains the maximum of V(s_0, s_1, s_2, …) is an organizational equilibrium.

Remarks on Organizational Equilibrium

• Organizational Equilibrium is the outcome of some Subgame Perfect Equilibrium (SPEq)
• SPEq example: if someone deviates, the next agent restarts from s_0
• SPEq refinement criterion:
  • Same continuation value on and off the equilibrium path
  • No one can be better off by deviating and counting on others to restart the game
• In equilibrium:
      U(k_t, s_t, s_{t+1}, …) = φ log k_t + V(s_t, s_{t+1}, …) = φ log k_t + V*
• The total payoff depends only on capital: not a trigger with self-punishment
• Agents' actions depend on past actions: not a Markov equilibrium

Construct the Organizational Equilibrium

• Look for a sequence of saving rates {s_0, s_1, …}
• Every generation obtains the same V:
      V(s_t, s_{t+1}, …) = V(s_{t+1}, s_{t+2}, …) = V
  which induces the difference equation
      β(1 − δ) log(1 − s_{t+1}) = [δαβ/(1 − αβ)] log s_t + log(1 − s_t) − (1 − β)V
• We call this difference equation "the proposal function": s_{t+1} = q(s_t; V)
• The maximal V and an initial s_0 are needed to determine {s_τ}_{τ=0}^{∞}

Determine V*

• As V increases, the proposal function q(s; V) moves upwards
• The highest V = V* is achieved when q(s; V) is tangent to the 45-degree line (at s*)

Determine the Initial Saving Rate s_0

• The first agent should have no incentive to delay the proposal:
      max_s V(s, s_0, s_1, s_2, …) = V(s^M, s_0, s_1, s_2, …)
• s_0 has to be such that
      V* = V(s_0, s_1, s_2, …) ≥ V(s^M, s_0, s_1, s_2, …)  →  s_0 ≤ q*(s^M)
• We select s_0 = q*(s^M), which yields the highest welfare during the transition
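The binding no-delay condition at s_0 = q*(s^M) can be verified numerically: prepending s^M to the equilibrium sequence leaves the action payoff unchanged at V*. A sketch with truncated series and illustrative parameters:

```python
# At s_0 = q*(s_M) the no-delay condition binds:
# V(s_0, s_1, ...) = V(s_M, s_0, s_1, ...) = V*.
import math

alpha, beta, delta = 0.3, 0.95, 0.7   # illustrative parameters
s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)
s_star = delta * alpha * beta / (
    (1 - beta + delta * beta) * (1 - alpha * beta) + delta * alpha * beta)
V_star = ((1 - beta + delta * beta) / (1 - beta) * math.log(1 - s_star)
          + alpha * delta * beta / ((1 - beta) * (1 - alpha * beta))
          * math.log(s_star))

def q_star(s):
    """Proposal function evaluated at V = V*."""
    num = (-(1 - beta) * V_star
           + delta * alpha * beta / (1 - alpha * beta) * math.log(s)
           + math.log(1 - s))
    return 1 - math.exp(num / (beta * (1 - delta)))

def V(s, T=500):
    """Action payoff of a saving-rate sequence s (list), truncated at T."""
    total = math.log(1 - s[0])
    for j in range(1, T):
        inner = math.log(1 - s[j])
        for tau in range(j):
            inner += alpha ** (j - tau) * math.log(s[tau])
        total += delta * beta ** j * inner
    return total

seq = [q_star(s_M)]                    # s_0 = q*(s_M)
for _ in range(600):
    seq.append(q_star(seq[-1]))

assert abs(V(seq) - V_star) < 1e-6
assert abs(V([s_M] + seq) - V_star) < 1e-6   # delaying gives the same V*
```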

Org Equil in the Quasi-Geometric Discounting Growth Model

The organizational equilibrium {s_τ}_{τ=0}^{∞} is given recursively by the proposal function q*:
      s_t = q*(s_{t−1}) = 1 − exp{ [−(1 − β)V* + [δαβ/(1 − αβ)] log s_{t−1} + log(1 − s_{t−1})] / [β(1 − δ)] }
where the initial saving rate s_0, the steady state s*, and V* are given by
      s_0 = q*(s^M)
      s* = δαβ / [(1 − β + δβ)(1 − αβ) + δαβ]
      V* = [(1 − β + δβ)/(1 − β)] log(1 − s*) + [αδβ/((1 − β)(1 − αβ))] log s*
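The recursion can be iterated directly; because q* is tangent to the 45-degree line at s*, convergence is monotone but slow. A sketch (illustrative parameters):

```python
# Iterate the proposal function s_t = q*(s_{t-1}) from s_0 = q*(s_M):
# the sequence should rise monotonically toward s*.
import math

alpha, beta, delta = 0.3, 0.95, 0.7   # illustrative parameters
s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)
s_star = delta * alpha * beta / (
    (1 - beta + delta * beta) * (1 - alpha * beta) + delta * alpha * beta)
V_star = ((1 - beta + delta * beta) / (1 - beta) * math.log(1 - s_star)
          + alpha * delta * beta / ((1 - beta) * (1 - alpha * beta))
          * math.log(s_star))

def q_star(s):
    num = (-(1 - beta) * V_star
           + delta * alpha * beta / (1 - alpha * beta) * math.log(s)
           + math.log(1 - s))
    return 1 - math.exp(num / (beta * (1 - delta)))

seq = [q_star(s_M)]                    # s_0 = q*(s_M)
for _ in range(500):
    seq.append(q_star(seq[-1]))

assert s_M < seq[0] < s_star           # starts low, above s_M
assert all(seq[t] <= seq[t + 1] + 1e-12 for t in range(len(seq) - 1))
# convergence is only algebraic (tangency), hence the loose tolerance
assert abs(seq[-1] - s_star) < 1e-2
print(f"s_0 = {seq[0]:.4f} -> s_500 = {seq[-1]:.4f} (s* = {s_star:.4f})")
```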

Transition Dynamics

• The equilibrium starts from s_0 and monotonically converges to s*

Remarks

1. To solve for the proposal function, no agent can treat herself specially: V_t = V_{t+1}
   ("Thank you for the idea, I will do it myself")
2. To determine the initial saving rate, the agent starts from a low saving rate
   (Goodwill has to be built gradually)
3. We will show how the outcome compares with Markov and Ramsey
   (We do much better than the Markov equilibrium)

Comparison: Steady State

[Figure: steady-state comparison across equilibria]

• The organizational equilibrium is much better than the Markov equilibrium

Comparison: Allocation in Transition

• The organizational equilibrium starts low and converges to being close to Ramsey

Comparison: Payoff in Transition

      U(k_t, s_t, s_{t+1}, …) = φ log k_t + V(s_t, s_{t+1}, …)

  the total payoff, of which V(s_t, s_{t+1}, …) is the action payoff

Organizational Equilibrium for Weakly Separable Economies

General Definition

• An infinite sequence of decision makers is called to act
• State k ∈ K
• Action a ∈ A
• The state evolves as k_{t+1} = F(k_t, a_t)
• Preferences: U(k_t, a_t, a_{t+1}, a_{t+2}, …)

1. At any point in time t, the set A is independent of the state k_t
2. U is weakly separable in k and {a_s}_{s=0}^{∞}:
      U(k, a_0, a_1, a_2, …) ≡ v(k, V(a_0, a_1, a_2, …))
   with v strictly increasing in its second argument
3. V is weakly separable in a_0 and {a_s}_{s=1}^{∞}:
      V(a_0, a_1, a_2, …) ≡ Ṽ(a_0, V(a_1, a_2, …))
   with Ṽ strictly increasing in its second argument

On the Choice of Actions

• Weak separability and state independence of A depend on the specification of the action set
• Example: hyperbolic discounting. If the choice is c, feasible actions depend on k
• So a problem may look nonseparable, yet become separable by rescaling actions appropriately

Organizational Equilibrium

Definition. A sequence of actions {a_t}_{t=0}^{∞} is organizationally admissible if
1. V(a_t, a_{t+1}, a_{t+2}, …) is (weakly) increasing in t
2. The first agent has no incentive to delay the proposal:
      V(a_0, a_1, a_2, …) ≥ max_{a ∈ A} V(a, a_0, a_1, a_2, …)
Within organizationally admissible sequences, the sequence that attains the maximum of V(a_0, a_1, a_2, …) is an organizational equilibrium.

Organizational Equilibrium vs. Subgame-Perfect Equilibrium

1. Org Equil is the equilibrium path of a subgame-perfect equilibrium
2. It can be implemented through various strategies. Examples:
   • Restart from the beginning when someone deviates
   • Use the difference equation to make each player indifferent between deviating and following the equilibrium strategy (over a range)

Organizational Equil vs. Reconsideration-Proof Equil

• Reconsideration-proof equilibria ⇒ the value for all current and future players is independent of past history
• Org Equil has the same property only for the action payoff:
      U(k, a_0, a_1, a_2, …) ≡ v(k, V(a_0, a_1, a_2, …))
  Future players are affected through a different state
• Without state variables, Org Equil is the outcome of a Reconsideration-Proof Equil
• Rationale for renegotiation/reconsideration-proofness: reject threats that are Pareto-dominated ex post
• Similar spirit for the no-delay condition:
  • If agents coordinate on the Pareto-dominant equilibrium (s*) right away...
  • ...then they should do the same next period (independent of past history)...
