
SLIDE 1

Organizational Equilibrium with Capital

Marco Bassetto, Zhen Huo, and José-Víctor Ríos-Rull
Economic Growth and Fluctuations, Barcelona GSE, June 14, 2018

FRB of Chicago (Mpls), Yale University, University of Pennsylvania, UCL, CAERP

SLIDE 2

Introduction

SLIDE 3

Question

  • Time inconsistency is a pervasive issue
  • Examples: taxation, government debt, consumption-saving problems, . . .
  • Benchmark: Markov equilibrium
  • Can agents do better than the Markov equilibrium?
  • Yes, with trigger strategies that use self-punishment as a threat
  • We show agents can do better than Markov without trigger strategies
  • Continuation value is independent of history

SLIDE 11

This paper: Propose an Equilibrium Concept

  • Agents not only decide their action: they can make proposals for the future
  • Future agents can
  • 1. Follow the proposal
  • 2. Make their own proposal
  • 3. Wait for the next agent to make a proposal
  • Organizational equilibrium
  • 1. The initial agent is willing to make a proposal, no delaying
  • 2. Future agents are willing to follow

SLIDE 12

Equilibrium Properties I

  • No one can treat herself specially
  • If a proposal favors the initial proposer, the next agent wants to copy the idea and restart
  • Cooperation across current and future decision makers has to be built gradually
  • The proposal starts with a small deviation from Markov; otherwise there is a temptation to let future agents start the process
  • Over time, greater departures from the short-term best response, in anticipation of more beneficial future outcomes

SLIDE 23

Equilibrium Properties II

  • Compare with the Markov equilibrium
  • Payoff depends only on (payoff-relevant) state variables, like a Markov equilibrium
  • Actions can depend on history, unlike a Markov equilibrium
  • Compare with trigger strategies
  • History can be abolished: no self-punishment
  • In terms of subgame perfect refinements:
  • 1. Same continuation value on or off the equilibrium path
  • 2. No one wants to deviate and wait for a restart of the game
  • Directly related to reconsideration-proof equilibrium
  • Extends to a class of models with capital
  • Imposes an additional no-delaying condition (relevant for environments with state variables)

SLIDE 24

Quantitative Findings

  • Apply the equilibrium concept in
  • A quasi-geometric discounting growth model
  • A government taxation model
  • Steady state
  • The allocation is close to the Ramsey outcome, much better than the Markov equilibrium
  • Transition
  • The allocation starts close to Markov and converges to close to Ramsey

SLIDE 25

Related Literature

  • Markov equilibrium and GEE
  • Currie and Levine (1993), Bassetto and Sargent (2005), Klein and Ríos-Rull (2003), Klein, Quadrini and Ríos-Rull (2005), Krusell and Ríos-Rull (2008), Krusell, Kuruscu, and Smith (2010), Song, Storesletten and Zilibotti (2012)
  • Sustainable plans
  • Stokey (1988), Chari and Kehoe (1990), Abreu, Pearce and Stacchetti (1990), Phelan and Stacchetti (2001)
  • Quasi-geometric discounting growth model
  • Strotz (1956), Phelps and Pollak (1968), Laibson (1997), Krusell and Smith (2003), Chatterjee and Eyigungor (2015), Bernheim, Ray, and Yeltekin (2017), Cao and Werning (2017)
  • Refinement of subgame perfect equilibrium
  • Farrell and Maskin (1989), Kocherlakota (2008), Nozawa (2014), Ales and Sleet (2015)

SLIDE 26

Plan

  • 1. An example: a growth model with quasi-geometric discounting
  • 2. General definition and properties
  • 3. Application to a government taxation problem

SLIDE 27

Growth model

SLIDE 31

The Environment

  • Preferences: quasi-geometric discounting

Ψ_t = u(c_t) + δ Σ_{τ=1}^∞ β^τ u(c_{t+τ})

  • Period utility function u(c) = log c
  • δ = 1 is the time-consistent case
  • Technology

f(k_t) = k_t^α,    k_{t+1} = f(k_t) − c_t
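The environment above can be sanity-checked numerically. Below is a minimal sketch; the parameter values (α, β, δ), the initial capital, and the truncation horizon are illustrative assumptions, not taken from the paper.

```python
import math

# Illustrative parameters (assumed, not from the paper)
alpha, beta, delta = 0.33, 0.96, 0.7

def lifetime_utility(c, u=math.log):
    """Quasi-geometric lifetime utility, truncated at len(c):
    Psi_0 = u(c_0) + delta * sum_{t>=1} beta**t * u(c_t)."""
    return u(c[0]) + delta * sum(beta**t * u(c[t]) for t in range(1, len(c)))

def simulate(s, k0=0.2, T=500):
    """Consumption path under a constant saving rate s:
    c_t = (1 - s) * k_t**alpha, k_{t+1} = s * k_t**alpha."""
    k, c = k0, []
    for _ in range(T):
        y = k**alpha
        c.append((1 - s) * y)
        k = s * y
    return c

print(lifetime_utility(simulate(0.30)))
```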

SLIDE 34

Benchmark I: Markov Perfect Equilibrium

  • Take the future policy g(k) as given

max_{k′} u[f(k) − k′] + δβ Ω(k′; g)

  • Continuation value:

Ω(k; g) = u[f(k) − g(k)] + β Ω[g(k); g]

  • The Generalized Euler Equation (GEE)

u_c = β u′_c [δ f′_k + (1 − δ) g′_k]

  • The equilibrium features a constant saving rate

k′ = [δαβ / (1 − αβ + δαβ)] k^α = s^M k^α
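The constant Markov saving rate can be verified against the GEE: with u = log and a conjectured policy g(k) = s k^α, the GEE collapses to the fixed point s = αβ(δ + (1 − δ)s), whose solution is s^M. A quick check, with illustrative (assumed) parameter values:

```python
# Illustrative parameters (assumed, not from the paper)
alpha, beta, delta = 0.33, 0.96, 0.7

# Constant Markov saving rate from the slide
s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)

# With u = log and g(k) = s * k**alpha, the GEE
#   u_c = beta * u_c' * (delta * f_k' + (1 - delta) * g_k')
# reduces to the fixed point s = alpha*beta*(delta + (1 - delta)*s);
# s_M should satisfy it exactly.
residual = s_M - alpha * beta * (delta + (1 - delta) * s_M)
print(s_M, residual)
```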

SLIDE 37

Benchmark II: Ramsey Allocation with Commitment

  • Choose all future allocations at period 0

max_{k_1} u[f(k_0) − k_1] + δβ Ω(k_1)

  • Continuation value:

Ω(k) = max_{k′} u[f(k) − k′] + β Ω(k′)

  • The sequence of saving rates is given by

s_t = s^M = δαβ / (1 − αβ + δαβ)   for t = 0
s_t = s^R = αβ                     for t > 0

  • Steady-state capital in the Markov equilibrium is lower than Ramsey's, since s^M < s^R
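The comparison s^M < s^R is easy to confirm numerically: s^M/s^R = δ/(1 − αβ + δαβ), which is below 1 whenever δ < 1. A sketch with assumed parameter values:

```python
# Illustrative parameters (assumed); the comparison holds for any
# 0 < alpha*beta < 1 and 0 < delta < 1.
alpha, beta = 0.33, 0.96
s_R = alpha * beta

for delta in (0.1, 0.5, 0.7, 0.99):
    s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)
    # s_M / s_R = delta / (1 - alpha*beta + delta*alpha*beta) < 1 when delta < 1
    print(delta, s_M, s_R)
```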

SLIDE 40

Elements of Org Equil: Proposal and Value Function

  • A proposal is a sequence of saving rates {s_0, s_1, s_2, . . .}
  • Everyone in the future can implement the same proposal
  • Given an initial capital k_0, the proposal induces a sequence of capitals

k_1 = s_0 k_0^α
k_2 = s_1 k_1^α = s_1 s_0^α k_0^{α²}
. . .
k_t = k_0^{α^t} Π_{j=0}^{t−1} s_j^{α^{t−j−1}}

SLIDE 42

Proposal and Value Function

  • Given a proposal (sequence of saving rates) {s_0, s_1, s_2, . . .} and an initial capital k_0, we get the sequence of capitals k_t = k_0^{α^t} Π_{j=0}^{t−1} s_j^{α^{t−j−1}}
  • The lifetime utility for the agent who makes the proposal is

U(k_0, s_0, s_1, . . .) = log[(1 − s_0) k_0^α] + δ Σ_{j=1}^∞ β^j log[(1 − s_j) k_j^α]
  = [α(1 − αβ + δαβ)/(1 − αβ)] log k_0 + log(1 − s_0) + δ Σ_{j=1}^∞ β^j log[(1 − s_j) Π_{τ=0}^{j−1} s_τ^{α^{j−τ}}]
  ≡ φ log k_0 + V(s_0, s_1, . . .)
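The separability claim U = φ log k_0 + V can be checked by brute force: for a fixed proposal, the gap U(k_0, ·) − U(k_0′, ·) should equal φ(log k_0 − log k_0′), independently of the saving rates. A sketch with illustrative (assumed) parameters and a truncated horizon:

```python
import math

# Illustrative parameters (assumed, not from the paper)
alpha, beta, delta = 0.33, 0.96, 0.7
phi = alpha * (1 - alpha * beta + delta * alpha * beta) / (1 - alpha * beta)

def U(k0, s_seq, T=400):
    """Truncated lifetime utility of the proposer when period-t saving
    is s_seq[t] (the last entry is repeated forever)."""
    k, total = k0, 0.0
    for t in range(T):
        s = s_seq[min(t, len(s_seq) - 1)]
        c = (1 - s) * k**alpha
        total += math.log(c) if t == 0 else delta * beta**t * math.log(c)
        k = s * k**alpha
    return total

s_seq = [0.20, 0.25, 0.30]  # an arbitrary proposal
gap = U(2.0, s_seq) - U(0.5, s_seq)
print(gap, phi * math.log(2.0 / 0.5))  # the two should coincide
```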

SLIDE 47

Proposal and Value Function

  • The lifetime utility for the agent at period t is

U(k_t, s_t, s_{t+1}, . . .) = φ log k_t + V(s_t, s_{t+1}, . . .)
  [total payoff]                          [action payoff]

  • There is a separability property between capital and saving rates
  • True for the initial proposer and all subsequent followers
  • This property is crucial to our equilibrium concept
  • What type of proposals can be implemented?

SLIDE 51

Can the Ramsey Outcome be Implemented?

  • If the initial agent with k_0 proposes {s^M, s^R, s^R, . . .}, which implies k_1 = s^M k_0^α
  • By following the proposal, the next agent's payoff is

U(k_1, s^R, s^R, s^R, . . .) = φ log k_1 + V(s^R, s^R, s^R, . . .)

  • By copying the proposal, the next agent's payoff is

U(k_1, s^M, s^R, s^R, . . .) = φ log k_1 + V(s^M, s^R, s^R, . . .) > φ log k_1 + V(s^R, s^R, s^R, . . .)

  • Copying is better than following: the Ramsey outcome cannot be implemented

SLIDE 54

Can a Constant Saving Rate be Implemented?

  • Suppose the initial agent proposes {s, s, s, . . .}
  • By following the proposal, the payoff for the agent in period t is

U(k_t, s, s, . . .) = φ log k_t + V(s, s, . . .)

where

V(s, s, . . .) ≡ H(s) = [1 + βδ/(1 − β)] log(1 − s) + [δαβ/((1 − αβ)(1 − β))] log s

  • To be followed, the constant saving rate has to be

s* = argmax H(s)
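The maximization of H(s) can be done numerically and compared with the closed form for s* given later in the deck. Parameter values below are illustrative assumptions:

```python
import math

# Illustrative parameters (assumed, not from the paper)
alpha, beta, delta = 0.33, 0.96, 0.7

def H(s):
    """V(s, s, ...) from the slide."""
    return (1 + beta * delta / (1 - beta)) * math.log(1 - s) \
         + delta * alpha * beta / ((1 - alpha * beta) * (1 - beta)) * math.log(s)

# Closed form for the maximizer (stated later in the deck)
s_star = delta * alpha * beta / (
    (1 - beta + delta * beta) * (1 - alpha * beta) + delta * alpha * beta)

# A grid search should agree with the closed form
s_grid = max((i / 10000 for i in range(1, 10000)), key=H)

s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)
s_R = alpha * beta
print(s_M, s_star, s_R)  # s_M < s_star < s_R
```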

SLIDE 55

Optimal Constant Saving Rate

s^M < s* < s^R

SLIDE 60

Can {s*, s*, . . .} be Implemented?

  • If the initial agent proposes {s*, s*, . . .}, no one has an incentive to copy
  • But she prefers to choose s^M and wait for the next agent to propose {s*, s*, . . .}:

U(k_0, s^M, s*, s*, . . .) = φ log k_0 + V(s^M, s*, s*, . . .) > φ log k_0 + V(s*, s*, s*, . . .)

  • The constant s* proposal cannot be implemented: no one wants to propose it
  • But something else can be implemented, which converges to s*
  • For this, we proceed to define the organizational equilibrium

SLIDE 61

Organizational Equilibrium

Definition. A sequence of saving rates {s_τ}_{τ=0}^∞ is organizationally admissible if

  • 1. V(s_t, s_{t+1}, s_{t+2}, . . .) is (weakly) increasing in t
  • 2. The first agent has no incentive to delay the proposal:

V(s_0, s_1, s_2, . . .) ≥ max_s V(s, s_0, s_1, s_2, . . .)

Within organizationally admissible sequences, the sequence that attains the maximum of V(s_0, s_1, s_2, . . .) is an organizational equilibrium.

SLIDE 69

Remarks on Organizational Equilibrium

  • An organizational equilibrium is the outcome of some subgame perfect equilibrium (SPEq)
  • SPEq example: if someone deviates, the next agent restarts from s_0
  • SPEq refinement criterion
  • Same continuation value on and off the equilibrium path
  • No one can be better off by deviating and counting on others to restart the game
  • In equilibrium:

U(k_t, s_t, s_{t+1}, . . .) = φ log k_t + V(s_t, s_{t+1}, . . .) = φ log k_t + V*

  • The total payoff depends only on capital: not a trigger with self-punishment
  • Agents' actions depend on past actions: not a Markov equilibrium

SLIDE 73

Construct the Organizational Equilibrium

  • Look for a sequence of saving rates {s_0, s_1, . . .}
  • Every generation obtains the same V:

V(s_t, s_{t+1}, . . .) = V(s_{t+1}, s_{t+2}, . . .) = V

which induces the following difference equation:

β(1 − δ) log(1 − s_{t+1}) = [δαβ/(1 − αβ)] log s_t + log(1 − s_t) − (1 − β)V

  • We call this difference equation "the proposal function"

s_{t+1} = q(s_t; V)

  • The maximal V and an initial s_0 are needed to determine {s_τ}_{τ=0}^∞

SLIDE 74

Determine V*

  • As V increases, the proposal function q(s; V) moves upwards
  • The highest V = V* is achieved when q(s; V) is tangent to the 45-degree line (at s*)

SLIDE 75

Determine the Initial Saving Rate s_0

  • The first agent should have no incentive to delay the proposal

max_s V(s, s_0, s_1, s_2, . . .) = V(s^M, s_0, s_1, s_2, . . .)

  • s_0 has to be such that

V* = V(s_0, s_1, s_2, . . .) ≥ V(s^M, s_0, s_1, s_2, . . .)  →  s_0 ≤ q*(s^M)

  • We select s_0 = q*(s^M), which yields the highest welfare during the transition

SLIDE 76

Org Equil in the Quasi-Geometric Discounting Growth Model

The organizational equilibrium {s_τ}_{τ=0}^∞ is given recursively by the proposal function q*:

s_t = q*(s_{t−1}) = 1 − exp{ [−(1 − β)V* + (δαβ/(1 − αβ)) log s_{t−1} + log(1 − s_{t−1})] / [β(1 − δ)] }

where the initial saving rate s_0, the steady state s*, and V* are given by

s_0 = q*(s^M)
s* = δαβ / [(1 − β + δβ)(1 − αβ) + δαβ]
V* = [(1 − β + δβ)/(1 − β)] log(1 − s*) + [αδβ/((1 − β)(1 − αβ))] log s*
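These formulas can be combined in a short script that iterates the proposal function from s_0 = q*(s^M); under δ < 1 the saving rate should rise monotonically toward s* (slowly, since q* is tangent to the 45-degree line there). Parameter values are assumed for illustration:

```python
import math

# Illustrative parameters (assumed, not from the paper); requires delta < 1
alpha, beta, delta = 0.33, 0.96, 0.7

s_M = delta * alpha * beta / (1 - alpha * beta + delta * alpha * beta)
s_star = delta * alpha * beta / (
    (1 - beta + delta * beta) * (1 - alpha * beta) + delta * alpha * beta)
V_star = ((1 - beta + delta * beta) / (1 - beta)) * math.log(1 - s_star) \
       + (alpha * delta * beta / ((1 - beta) * (1 - alpha * beta))) * math.log(s_star)

def q_star(s):
    """Proposal function from the slide."""
    num = -(1 - beta) * V_star \
          + (delta * alpha * beta / (1 - alpha * beta)) * math.log(s) \
          + math.log(1 - s)
    return 1 - math.exp(num / (beta * (1 - delta)))

# Start from s0 = q*(s_M) and iterate the proposal function
s = q_star(s_M)
path = [s]
for _ in range(200):
    s = q_star(s)
    path.append(s)
print(path[0], path[-1])  # path rises from s0 toward s_star
```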

SLIDE 77

Transition Dynamics

  • The equilibrium starts from s_0 and monotonically converges to s*

SLIDE 78

Remarks

  • 1. To solve the proposal function, no agent can treat herself specially: V_t = V_{t+1} ("Thank you for the idea, I will do it myself")
  • 2. To determine the initial saving rate, the agent starts from a low saving rate (goodwill has to be built gradually)
  • 3. We will show how the outcome compares with the Markov and Ramsey benchmarks (we do much better than the Markov equilibrium)

SLIDE 79

Comparison: Steady State

[Figure: steady-state comparison]

  • The organizational equilibrium is much better than the Markov equilibrium

SLIDE 80

Comparison: Allocation in Transition

  • Organizational equilibrium: starts low, converges to being close to Ramsey

SLIDE 81

Comparison: Payoff in Transition

U(k_t, s_t, s_{t+1}, . . .) = φ log k_t + V(s_t, s_{t+1}, . . .)
  [total payoff]                          [action payoff]

SLIDE 82

Organizational Equilibrium for Weakly Separable Economies

slide-83
SLIDE 83

General Definition

  • An infinite sequence of decision makers is called to act

30

slide-84
SLIDE 84

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K

30

slide-85
SLIDE 85

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K
  • Action a ∈ A

30

slide-86
SLIDE 86

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K
  • Action a ∈ A
  • State evolves kt+1 = F(kt, at)

30

slide-87
SLIDE 87

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K
  • Action a ∈ A
  • State evolves kt+1 = F(kt, at)
  • Preferences: U(kt, at, at+1, at+2, . . .)

30

slide-88
SLIDE 88

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K
  • Action a ∈ A
  • State evolves kt+1 = F(kt, at)
  • Preferences: U(kt, at, at+1, at+2, . . .)
  • 1. At any point in time t, the set A is independent of the state kt

30

slide-89
SLIDE 89

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K
  • Action a ∈ A
  • State evolves kt+1 = F(kt, at)
  • Preferences: U(kt, at, at+1, at+2, . . .)
  • 1. At any point in time t, the set A is independent of the state kt
  • 2. U is weakly separable in k and in {as}∞

s=0

U(k, a0, a1, a2, . . .) ≡ v(k, V (a0, a1, a2, . . .)). and such that v is strictly increasing in its second argument.

30

SLIDE 90

General Definition

  • An infinite sequence of decision makers is called to act
  • State k ∈ K
  • Action a ∈ A
  • The state evolves as k_{t+1} = F(k_t, a_t)
  • Preferences: U(k_t, a_t, a_{t+1}, a_{t+2}, . . .)
  • 1. At any point in time t, the set A is independent of the state k_t
  • 2. U is weakly separable in k and in {a_s}_{s=0}^∞:

U(k, a_0, a_1, a_2, . . .) ≡ v(k, V(a_0, a_1, a_2, . . .)), with v strictly increasing in its second argument

  • 3. V is weakly separable in a_0 and {a_s}_{s=1}^∞:

V(a_0, a_1, a_2, . . .) ≡ V(a_0, V(a_1, a_2, . . .)), with V strictly increasing in its second argument

slide-91
SLIDE 91

On the Choice of Actions

  • Weak separability and state independence of A depend on the

specification of the action set

  • Example: hyperbolic discounting. If the choice is c, feasible actions

depend on k

  • So, sometimes a problem may look nonseparable, but may become

separable by rescaling actions appropriately

31

SLIDE 92

Organizational Equilibrium

Definition. A sequence of actions {a_t}_{t=0}^∞ is organizationally admissible if

  • 1. V(a_t, a_{t+1}, a_{t+2}, . . .) is (weakly) increasing in t
  • 2. The first agent has no incentive to delay the proposal:

V(a_0, a_1, a_2, . . .) ≥ max_{a∈A} V(a, a_0, a_1, a_2, . . .)

Within organizationally admissible sequences, the sequence that attains the maximum of V(a_0, a_1, a_2, . . .) is an organizational equilibrium.

SLIDE 93

Organizational Equilibrium vs. Subgame-Perfect Equilibrium

  • 1. An organizational equilibrium is the equilibrium path of a subgame perfect equilibrium
  • 2. It can be implemented through various strategies. Examples:
  • Restart from the beginning when someone deviates
  • Use the difference equation to make each player indifferent between deviating and following the equilibrium strategy (over a range)

slide-101
SLIDE 101

Organization Equil vs. Reconsideration-Proof Equil

  • Reconsideration-proof equilibria ⇒ value for all current and future players is independent of past history
  • Org Equil: same property, but only for the action payoff: U(k, a_0, a_1, a_2, . . .) ≡ v(k, V(a_0, a_1, a_2, . . .)); future players are still affected through the inherited state
  • Without state variables, Org Equil is the outcome of a Reconsideration-Proof Equil
  • Rationale for renegotiation/reconsideration-proofness: reject threats that are Pareto-dominated ex post
  • Similar spirit for the no-delay condition:
  • If agents coordinate on the Pareto-dominant equilibrium (s∗) right away...
  • ... then they should do the same next period (independent of past history)...
  • ⇒ no discipline for the first player

34

slide-102
SLIDE 102

Existence

  • Under separability and other weak conditions, an Organizational Equilibrium exists
  • (to be completed) Organizational Equilibrium admits a recursive structure with a fixed point a_{t+1} = q∗(a_t)

35

slide-103
SLIDE 103

A Class of Separable Economies

  • Most economies do not satisfy the separability condition
  • Our strategy: approximate the original economy by separable ones
  • A first-order approximation satisfies the separability property:

Ψ_t = u(k_t, a_t) + δ Σ_{τ=1}^∞ β^τ u(k_{t+τ}, a_{t+τ})

subject to

u(k_t, a_t) = Γ_{10} + Γ_{11} h(k_t) + Γ_{12} m(a_t)
h(k_{t+1}) = Γ_{20} + Γ_{21} h(k_t) + Γ_{22} g(a_t)

  • h(k), m(a), g(a) can be any monotonic functions

36

slide-104
SLIDE 104

Example

  • Original economy:

Ψ_t = log(c_t) + δ Σ_{τ=1}^∞ β^τ log(c_{t+τ})
s.t. c_t + i_t = k_t^α
     k_{t+1} = (1 − d) k_t + i_t

  • The approximated economy:

Ψ_t = log(c_t) + δ Σ_{τ=1}^∞ β^τ log(c_{t+τ})
s.t. c_t + i_t = k_t^α
     k_{t+1} = k̄ k_t^{1−d} i_t^d

37
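In the approximated economy the linear law of motion k_{t+1} = (1 − d)k_t + i_t is replaced by a Cobb-Douglas rule. A minimal numerical sketch (Python; the steady-state capital level is illustrative, not from the paper): with the constant k̄ chosen so the two rules agree at the steady state, the exponents 1 − d and d match the elasticities of the linear rule there, so the approximation is first-order accurate nearby.

```python
# Compare the linear accumulation rule k' = (1 - d) k + i with its
# Cobb-Douglas approximation k' = kbar * k**(1 - d) * i**d.
d = 0.08                                   # depreciation rate
k_ss = 2.0                                 # illustrative steady-state capital
i_ss = d * k_ss                            # steady-state investment (i = d k)
kbar = k_ss / (k_ss**(1 - d) * i_ss**d)    # constant matching the level at k_ss

def exact(k, i):
    return (1 - d) * k + i

def approx(k, i):
    return kbar * k**(1 - d) * i**d

print(exact(k_ss, i_ss), approx(k_ss, i_ss))            # identical at steady state
k, i = 1.05 * k_ss, 0.95 * i_ss
print(abs(exact(k, i) - approx(k, i)) / exact(k, i))    # small relative error nearby
```

Agreement is exact at the steady state by construction; away from it the two rules differ only at second order in the deviation.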

slide-105
SLIDE 105

Government Policy

slide-110
SLIDE 110

A Simple Version: Gov't Expenditure with Capital Income Taxation

  • Preferences: Σ_{t=0}^∞ β^t [γ_c log c_t + γ_g log g_t]
  • Technology: f(k_t) = k_t^α, k_{t+1} = f(k_t) − c_t − g_t
  • Consumers' budget constraint: c_t + k_{t+1} = (1 − τ_t) r_t k_t + π_t
  • Prices: r_t = f_k(k_t), π_t = f(k_t) − r_t k_t
  • Government budget constraint: g_t = τ_t r_t k_t

38
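As a concrete check of this setup, the competitive equilibrium can be simulated for a constant tax rate: with log utility and full depreciation (output is split between c, g, and next period's capital), the Euler equation implies a constant saving rate s = αβ(1 − τ) out of output. A sketch (Python; the tax rate and initial capital are illustrative):

```python
a, b = 0.36, 0.96          # alpha, beta (from the calibration)
tau = 0.25                 # illustrative constant capital-income tax
s = a * b * (1 - tau)      # saving rate implied by the Euler equation

k = 0.2                    # illustrative initial capital
for t in range(50):
    y = k**a
    g = tau * a * y                 # gov't budget: g = tau * r * k, with r * k = a * y
    c = (1 - s - a * tau) * y       # consumption is the residual share of output
    k_next = s * y
    assert abs(c + g + k_next - y) < 1e-12   # resource constraint holds each period
    k = k_next

print(k)   # converges to the steady state k* = s**(1/(1 - a))
```

Since capital income is r_t k_t = α k_t^α, constant shares of output go to g, c, and saving, and capital converges monotonically to s^{1/(1−α)}.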

slide-111
SLIDE 111

Difference from Previous Setup

  • In the quasi-geometric discounting problem, there is only one player per period
  • Here: gov't + private sector
  • Need to deal with the competitive-equilibrium component

39

slide-115
SLIDE 115

Payoffs

  • Given an arbitrary {τ_t}_{t=0}^∞, the Euler equation has to hold in equilibrium:

u′(c_t) = β (1 − τ_{t+1}) f′(k_{t+1}) u′(c_{t+1})

  • It induces a sequence of saving rates (and allocations) such that

s_t / (1 − s_t − α τ_t) = α β (1 − τ_{t+1}) / (1 − s_{t+1} − α τ_{t+1})

  • Total payoff with initial capital k:

U(k, τ_0, τ_1, τ_2, . . .) = [α(1 + γ) / (1 − αβ)] log k + V(τ_0, τ_1, τ_2, . . .)

  • Action payoff:

V({τ_t}_{t=0}^∞) = Σ_{t=0}^∞ β^t [ log(1 − ατ_t − s_t) + γ log(ατ_t) + (αβ(1 + γ) / (1 − αβ)) log s_t ]

40
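The saving-rate relation above can be solved backward for an arbitrary tax sequence. A minimal sketch (Python; α and β from the calibration, the tax path illustrative), assuming taxes are constant after the last given entry so the terminal saving rate is the steady-state value αβ(1 − τ):

```python
# Saving rates implied by the Euler equation for a given tax sequence,
# solved backward from a constant-tax tail.
a, b = 0.36, 0.96   # alpha, beta

def saving_rates(taus):
    s = [0.0] * len(taus)
    s[-1] = a * b * (1 - taus[-1])   # steady-state rate under the tail tax
    for t in range(len(taus) - 2, -1, -1):
        x = a * b * (1 - taus[t + 1]) / (1 - s[t + 1] - a * taus[t + 1])
        s[t] = x * (1 - a * taus[t]) / (1 + x)   # solves s/(1 - s - a*tau) = x
    return s

# Sanity check: the Euler relation holds period by period along the path.
taus = [0.40, 0.35, 0.30, 0.25, 0.25]
s = saving_rates(taus)
for t in range(len(taus) - 1):
    lhs = s[t] / (1 - s[t] - a * taus[t])
    rhs = a * b * (1 - taus[t + 1]) / (1 - s[t + 1] - a * taus[t + 1])
    assert abs(lhs - rhs) < 1e-12
```

A constant tax sequence is a fixed point of this recursion: it returns s_t = αβ(1 − τ) in every period.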
slide-118
SLIDE 118

Organizational Equilibrium in Government Taxation Problem

Definition. A sequence of tax rates {τ_t}_{t=0}^∞ is organizationally admissible if

  • V(τ_t, τ_{t+1}, τ_{t+2}, . . .) is (weakly) increasing in t
  • The implementability constraint is satisfied
  • The government has no incentive to delay the proposal:

V(τ_0, τ_1, τ_2, . . .) ≥ max_τ V(τ, τ_0, τ_1, τ_2, . . .)

Within organizationally admissible sequences, any sequence that attains the maximum of V(τ_0, τ_1, τ_2, . . .) is an organizational equilibrium.

41
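These admissibility conditions can be checked numerically. A sketch under stated assumptions (Python; α and β from the calibration, while γ, the candidate sequence, and the deviation grid are illustrative; taxes are held constant after the last given entry, and the no-delay maximization is taken over a grid rather than exactly):

```python
import math

a, b, gam = 0.36, 0.96, 1.0 / 3.0   # alpha, beta, gamma (gamma illustrative)

def V(taus, tail=2000):
    """Action payoff: saving rates solved backward from the Euler relation."""
    taus = list(taus) + [taus[-1]] * tail     # constant-tax tail
    s = [0.0] * len(taus)
    s[-1] = a * b * (1 - taus[-1])
    for t in range(len(taus) - 2, -1, -1):
        x = a * b * (1 - taus[t + 1]) / (1 - s[t + 1] - a * taus[t + 1])
        s[t] = x * (1 - a * taus[t]) / (1 + x)
    coef = a * b * (1 + gam) / (1 - a * b)
    return sum(b**t * (math.log(1 - a * taus[t] - s[t])
                       + gam * math.log(a * taus[t])
                       + coef * math.log(s[t]))
               for t in range(len(taus)))

def organizationally_admissible(taus, grid=None):
    grid = grid or [i / 100 for i in range(1, 100)]
    # (i) V(tau_t, tau_{t+1}, ...) weakly increasing in t
    monotone = all(V(taus[t:]) <= V(taus[t + 1:]) + 1e-9
                   for t in range(len(taus) - 1))
    # (ii) no incentive to delay: prepending any one-period tax cannot help
    no_delay = V(taus) >= max(V([t0] + list(taus)) for t0 in grid) - 1e-9
    return monotone and no_delay
```

For a constant candidate sequence the monotonicity condition holds trivially, so no-delay is the binding check; this is why the equilibrium tax path starts away from τ∗ and converges to it gradually instead of jumping there at once.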

slide-119
SLIDE 119

Proposal Function in Organizational Equilibrium

  • The equilibrium starts from τ_0 and monotonically converges to τ∗.

42

slide-120
SLIDE 120

43

slide-121
SLIDE 121

Comparison: Payoff in Transition

Total payoff: U(k_t, τ_t, τ_{t+1}, . . .) = [α(1 + γ) / (1 − αβ)] log k_t + V(τ_t, τ_{t+1}, . . .), where V is the action payoff.

44

slide-125
SLIDE 125

Quantitative Version: depreciation and leisure

  • Preferences: Σ_{t=0}^∞ β^t [γ_c log c_t + γ_g log g_t + γ_ℓ log(1 − ℓ_t)]
  • Consumers' budget constraint: c_t + k_{t+1} = k_t + (1 − τ^ℓ_t − τ_t) w_t ℓ_t + (1 − τ^k_t − τ_t)(r_t − δ) k_t
  • Technology: f(k_t) = k_t^α ℓ_t^{1−α}, k_{t+1} = (1 − δ) k_t + i_t
  • Government budget constraint: g_t = τ^k_t (r_t − δ) k_t + τ^ℓ_t w_t ℓ_t + τ_t (w_t ℓ_t + (r_t − δ) k_t)

45

slide-126
SLIDE 126

Labor Income Tax

Aggregate statistics, labor income tax:

          Pareto   Ramsey   Markov   Organization
  y        1.000    0.790    0.794      0.792
  k/y      2.959    2.959    2.959      2.959
  c/y      0.583    0.583    0.600      0.591
  g/y      0.180    0.180    0.164      0.172
  c/g      3.240    3.240    3.662      3.435
  ℓ        0.320    0.253    0.254      0.253
  τ          —      0.281    0.256      0.269

Parameters: α = 0.36, β = 0.96, δ = 0.08, γg = 0.09, γc = 0.27, γℓ = 0.64

46
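The Pareto column's capital-output ratio can be recovered directly from the stated parameters via the modified golden rule, α·(y/k) − δ = 1/β − 1 (the net return on capital equals the rate of time preference). A quick check:

```python
alpha, beta, delta = 0.36, 0.96, 0.08   # parameters from the slide

# Modified golden rule: alpha * (y/k) - delta = 1/beta - 1
k_over_y = alpha / (1 / beta - 1 + delta)
print(round(k_over_y, 3))   # 2.959, matching the Pareto column
```

That k/y = 2.959 appears in every column of this table is consistent with a labor income tax leaving the capital-accumulation margin undistorted.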

slide-127
SLIDE 127

Capital Income Tax

Aggregate statistics, capital income tax:

          Pareto   Ramsey   Markov   Organization
  y        1.000    0.685    0.570      0.660
  k/y      2.959    1.972    1.360      1.824
  c/y      0.583    0.722    0.697      0.716
  g/y      0.180    0.120    0.195      0.138
  c/g      3.240    6.018    3.580      5.188
  ℓ        0.320    0.275    0.282      0.277
  τ          —      0.594    0.774      0.645

Parameters: α = 0.36, β = 0.96, δ = 0.08, γg = 0.09, γc = 0.27, γℓ = 0.64

47

slide-128
SLIDE 128

Total Income Tax

Aggregate statistics, total income tax:

          Pareto   Ramsey   Markov   Organization
  y        1.000    0.764    0.769      0.767
  k/y      2.959    2.676    2.698      2.687
  c/y      0.583    0.601    0.612      0.606
  g/y      0.180    0.185    0.173      0.179
  c/g      3.240    3.240    3.542      3.379
  ℓ        0.320    0.259    0.259      0.259
  τ          —      0.236    0.220      0.228

Parameters: α = 0.36, β = 0.96, δ = 0.08, γg = 0.09, γc = 0.27, γℓ = 0.64

48

slide-136
SLIDE 136

Conclusion

  • Propose organizational equilibrium for economies with state variables
  • Three properties:
  • No one is special: "thank you for the idea, I will do it myself"
  • Goodwill has to be built gradually
  • Outcome is close to the Ramsey allocation, much better than the Markov equilibrium
  • Future agenda:
  • The idea can be used to generalize renegotiation-proofness in games with multiple players
  • Further analysis of approximation options

49