SLIDE 1

Optimal policy computation with Dynare

MONFISPOL workshop, Stresa. Michel Juillard¹, March 12, 2010

¹Bank of France and CEPREMAP

SLIDE 2

Introduction

Dynare currently implements two ways to compute optimal policy in DSGE models:

◮ optimal rule under commitment (Ramsey policy)
◮ optimal simple rules

SLIDE 3

A New Keynesian model

Thanks to Eleni Iliopulos! Utility function:

$$U_t = \ln C_t - \phi \frac{N_t^{1+\gamma}}{1+\gamma}$$

$$U_{C,t} = \frac{1}{C_t} \qquad U_{N,t} = -\phi N_t^{\gamma}$$

SLIDE 4

Recursive equilibrium

$$\frac{1}{C_t} = \beta E_t\left[\frac{1}{C_{t+1}}\frac{R_t}{\pi_{t+1}}\right]$$

$$\frac{\pi_t(\pi_t-1)}{C_t} = \beta E_t\left[\frac{\pi_{t+1}(\pi_{t+1}-1)}{C_{t+1}}\right] + \frac{\varepsilon A_t}{\omega}\frac{N_t}{C_t}\left(\frac{\phi C_t N_t^{\gamma}}{A_t} - \frac{\varepsilon-1}{\varepsilon}\right)$$

$$A_t N_t = C_t + \frac{\omega}{2}(\pi_t - 1)^2$$

$$\ln A_t = \rho \ln A_{t-1} + \epsilon_t$$

SLIDE 5

Ramsey problem

$$\max E_0 \sum_{t=0}^{\infty} \beta^t \Bigg\{ \ln C_t - \phi \frac{N_t^{1+\gamma}}{1+\gamma} - \mu_{1,t}\left[\frac{1}{C_t} - \beta E_t \frac{R_t}{C_{t+1}\pi_{t+1}}\right] - \mu_{2,t}\left[\frac{\pi_t(\pi_t-1)}{C_t} - \beta E_t \frac{\pi_{t+1}(\pi_{t+1}-1)}{C_{t+1}} - \frac{\varepsilon A_t}{\omega}\frac{N_t}{C_t}\left(\frac{\phi C_t N_t^{\gamma}}{A_t} - \frac{\varepsilon - 1}{\varepsilon}\right)\right] - \mu_{3,t}\left[A_t N_t - C_t - \frac{\omega}{2}(\pi_t-1)^2\right] - \mu_{4,t}\left(\ln A_t - \rho \ln A_{t-1} - \epsilon_t\right) \Bigg\}$$

where $\mu_{i,t} = \lambda_{i,t}/\beta^t$ and $\lambda_{i,t}$ is a Lagrange multiplier.
SLIDE 6

F.O.C.

With respect to $\pi_t$:
$$\frac{\mu_{1,t-1} R_{t-1}}{C_t \pi_t^2} + \frac{2\pi_t - 1}{C_t}\left(\mu_{2,t} - \mu_{2,t-1}\right) = \mu_{3,t}\,\omega\,(\pi_t - 1)$$

With respect to $C_t$:
$$\frac{1}{C_t} + \frac{\mu_{1,t}}{C_t^2} - \frac{\mu_{1,t-1} R_{t-1}}{C_t^2 \pi_t} + \frac{\mu_{2,t}}{C_t^2}\left[\pi_t(\pi_t - 1) + \frac{A_t N_t (\varepsilon - 1)}{\omega}\right] - \frac{\mu_{2,t-1}}{C_t^2}\,\pi_t(\pi_t - 1) + \mu_{3,t} = 0$$

With respect to $N_t$:
$$-\phi N_t^{\gamma} + \mu_{2,t}\,\frac{\varepsilon \phi (1+\gamma) N_t^{\gamma}}{\omega} - \mu_{2,t}\,\frac{A_t(\varepsilon - 1)}{\omega C_t} = \mu_{3,t} A_t$$

With respect to $A_t$:
$$\mu_{2,t}\,\frac{N_t(\varepsilon - 1)}{\omega C_t} + \mu_{3,t} N_t + \frac{\mu_{4,t}}{A_t} - \rho \beta E_t \frac{\mu_{4,t+1}}{A_t} = 0$$

With respect to $R_t$:
$$\mu_{1,t}\,\beta E_t\left[\frac{1}{C_{t+1}\pi_{t+1}}\right] = 0$$
SLIDE 7

Dynare code

var pai, c, n, r, a;
varexo u;
parameters beta, rho, epsilon, omega, phi, gamma;
beta=0.99;
gamma=3;
omega=17;
epsilon=8;
phi=1;
rho=0.95;

SLIDE 8

Dynare code (continued)

model;
a = rho*a(-1)+u;
1/c = beta*r/(c(+1)*pai(+1));
pai*(pai-1)/c = beta*pai(+1)*(pai(+1)-1)/c(+1)
                +epsilon*phi*n^(gamma+1)/omega
                -exp(a)*n*(epsilon-1)/(omega*c);
exp(a)*n = c+(omega/2)*(pai-1)^2;
end;

SLIDE 9

Dynare code (continued)

initval;
pai=1;
r=1/beta;
c=0.96717;
n=0.96717;
a=0;
end;

shocks;
var u;
stderr 0.008;
end;

planner_objective(ln(c)
    -phi*((n^(1+gamma))/(1+gamma)));

ramsey_policy(planner_discount=0.99,order=1);

SLIDE 10

Ramsey policy: General nonlinear case

$$\max_{\{y_\tau\}_{\tau=t}^{\infty}} E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} U(y_\tau)$$

s.t.

$$E_\tau f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau) = 0$$

$y_t \in \mathbb{R}^n$: endogenous variables; $\varepsilon_t \in \mathbb{R}^p$: stochastic shocks; $f : \mathbb{R}^{3n+p} \to \mathbb{R}^m$. There are n − m free policy instruments.

SLIDE 11

Lagrangian

The Lagrangian is written

$$\mathcal{L}(y_{t-1}, \varepsilon_t) = E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} \left( U(y_\tau) - [\lambda_\tau]_\eta \left[f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)\right]^\eta \right)$$

where $\lambda_\tau$ is a vector of m Lagrange multipliers. We adopt tensor notation because later on we will have to deal with the second order derivatives of $[\lambda(s_t)]_\eta [F(s_t)]^\eta_\gamma$.

It turns out that it is the discounted value of the Lagrange multipliers that is stationary, not the multipliers themselves. It is therefore handy to rewrite the Lagrangian as

$$\mathcal{L}(y_{t-1}, \varepsilon_t) = E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} \left( U(y_\tau) - [\mu_\tau]_\eta \left[f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)\right]^\eta \right)$$

with $\mu_\tau = \lambda_\tau/\beta^{\tau-t}$.

SLIDE 12

Optimization problem reformulated

The optimization problem becomes

$$\max_{\{y_\tau\}_{\tau=t}^{\infty}} \ \min_{\{\mu_\tau\}_{\tau=t}^{\infty}} \ E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} \left( U(y_\tau) - [\mu_\tau]_\eta \left[f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)\right]^\eta \right)$$

for $y_{t-1}$ given.

SLIDE 13

First order necessary conditions

Derivatives of the Lagrangian with respect to the endogenous variables:

$$\left[\frac{\partial \mathcal{L}}{\partial y_t}\right]_\alpha = E_t\left( [U_1(y_t)]_\alpha - [\mu_t]_\eta \left[f_2(y_{t+1}, y_t, y_{t-1}, \varepsilon_t)\right]^\eta_\alpha - \beta [\mu_{t+1}]_\eta \left[f_3(y_{t+2}, y_{t+1}, y_t, \varepsilon_{t+1})\right]^\eta_\alpha \right) \qquad \tau = t$$

$$\left[\frac{\partial \mathcal{L}}{\partial y_\tau}\right]_\alpha = E_t\left( [U_1(y_\tau)]_\alpha - [\mu_\tau]_\eta \left[f_2(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)\right]^\eta_\alpha - \beta [\mu_{\tau+1}]_\eta \left[f_3(y_{\tau+2}, y_{\tau+1}, y_\tau, \varepsilon_{\tau+1})\right]^\eta_\alpha - \beta^{-1} [\mu_{\tau-1}]_\eta \left[f_1(y_\tau, y_{\tau-1}, y_{\tau-2}, \varepsilon_{\tau-1})\right]^\eta_\alpha \right) \qquad \tau > t$$
SLIDE 14

First order conditions

The first order conditions of this optimization problem are

$$E_t\left( [U_1(y_\tau)]_\alpha - [\mu_\tau]_\eta \left[f_2(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)\right]^\eta_\alpha - \beta [\mu_{\tau+1}]_\eta \left[f_3(y_{\tau+2}, y_{\tau+1}, y_\tau, \varepsilon_{\tau+1})\right]^\eta_\alpha - \beta^{-1} [\mu_{\tau-1}]_\eta \left[f_1(y_\tau, y_{\tau-1}, y_{\tau-2}, \varepsilon_{\tau-1})\right]^\eta_\alpha \right) = 0$$

$$\left[f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)\right]^\eta = 0$$

with $\mu_{t-1} = 0$, and where $U_1()$ is the Jacobian of $U()$ with respect to $y_\tau$ and $f_i()$ is the first order partial derivative of $f()$ with respect to its ith argument.

SLIDE 15

Cautionary remark

The first order conditions for optimality are only necessary conditions for a maximum. Levine, Pearlman and Pierse (2008) propose algorithms to check a sufficient condition. I still need to adapt them to the present framework, where the dynamic conditions f() are given as implicit functions and variables can be both predetermined and forward looking.

SLIDE 16

Nature of the solution

The above system of equations is nothing but a larger system of nonlinear rational expectations equations. As such, it can be approximated either to first order or to second order. The solution takes the form

$$\begin{bmatrix} y_t \\ \mu_t \end{bmatrix} = \hat{g}\left(y_{t-2}, y_{t-1}, \mu_{t-1}, \varepsilon_{t-1}, \varepsilon_t\right)$$

The optimal policy is then directly obtained as part of the set of $\hat{g}()$ functions.

SLIDE 17

The steady state problem

The steady state is the solution of

$$U_1(\bar{y}) - \bar{\mu}'\left[ f_2(\bar{y}, \bar{y}, \bar{y}, 0) + \beta f_3(\bar{y}, \bar{y}, \bar{y}, 0) + \beta^{-1} f_1(\bar{y}, \bar{y}, \bar{y}, 0) \right] = 0$$

$$f(\bar{y}, \bar{y}, \bar{y}, 0) = 0$$

SLIDE 18

Computing the steady state

For a given value $\tilde{y}$, it is possible to use the first matrix equation above to obtain the value of $\tilde{\mu}$ that minimizes the sum of squared residuals e:

$$M = f_2(\tilde{y}, \tilde{y}, \tilde{y}, 0) + \beta f_3(\tilde{y}, \tilde{y}, \tilde{y}, 0) + \beta^{-1} f_1(\tilde{y}, \tilde{y}, \tilde{y}, 0)$$

$$\tilde{\mu}' = U_1(\tilde{y})\, M' \left(M M'\right)^{-1}$$

$$e' = U_1(\tilde{y}) - \tilde{\mu}' M$$

Furthermore, $\tilde{y}$ must satisfy the m equations

$$f(\tilde{y}, \tilde{y}, \tilde{y}, 0) = 0$$

It is possible to build a system of equations with only n unknowns $\tilde{y}$, but we must provide n − m independent measures of the residuals e. Independent in the sense that the derivatives of these measures with respect to $\tilde{y}$ must be linearly independent.
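The least-squares step above can be sketched numerically. This is a toy illustration, not Dynare code: M and U1 are random placeholder arrays standing in for the derivative matrices, and the function name is made up.

```python
import numpy as np

def multiplier_and_residual(U1, M):
    """Given U1 (1 x n) and M (m x n), compute the least-squares
    multiplier mu' = U1 M' (M M')^{-1} and the residual e' = U1 - mu' M."""
    mu = U1 @ M.T @ np.linalg.inv(M @ M.T)   # 1 x m
    e = U1 - mu @ M                           # 1 x n
    return mu, e

# Toy check: when U1 lies exactly in the row space of M (the steady state
# case U1 = mu' M), the residual vanishes and the true multiplier is recovered.
rng = np.random.default_rng(0)
m, n = 3, 5
M = rng.standard_normal((m, n))
mu_true = rng.standard_normal((1, m))
U1 = mu_true @ M
mu, e = multiplier_and_residual(U1, M)
print(np.allclose(mu, mu_true), np.allclose(e, 0.0))
```

Away from the steady state, e is generically nonzero and serves as the distance measure discussed below.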

SLIDE 19

A QR trick

At the steady state, the following must hold exactly:

$$U_1(\tilde{y}) = \bar{\mu}' M$$

This can only be the case if

$$M^{\star} = \begin{bmatrix} M \\ U_1(\tilde{y}) \end{bmatrix}$$

is of rank m. The reordered QR decomposition of $M^{\star}$ is such that

$$\underset{(m+1)\times n}{M^{\star}}\ \underset{n\times n}{E} = \underset{(m+1)\times(m+1)}{Q}\ \underset{(m+1)\times n}{R}$$

where E is a permutation matrix, Q an orthogonal matrix, and R a triangular matrix with diagonal elements ordered in decreasing size.

SLIDE 20

A QR trick (continued)

◮ When $U_1(\tilde{y}) = \bar{\mu}' M$ doesn't hold exactly, $M^{\star}$ has full rank (m + 1) and the n − m last elements of the last row of R may be different from zero.
◮ When $U_1(\tilde{y}) = \bar{\mu}' M$ holds exactly, $M^{\star}$ has rank m and the n − m last elements of the last row of R are zero.
◮ The last n − m elements of the last row of R provide the n − m independent measures of the residuals e.
◮ In practice, we build a nonlinear function with $\tilde{y}$ as input that returns the n − m last elements of the last row of R and $f(\tilde{y}, \tilde{y}, \tilde{y}, 0)$. At the solution, when $\tilde{y} = \bar{y}$, this function must return zeros.
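A minimal sketch of this rank test, assuming scipy is available: its column-pivoted QR (`scipy.linalg.qr` with `pivoting=True`) plays the role of the reordered decomposition $M^{\star} E = QR$. The data below are random placeholders, not model derivatives.

```python
import numpy as np
from scipy.linalg import qr

def qr_residual_measures(M, U1):
    """Stack M (m x n) and U1 (1 x n) into M* and return the last
    n - m elements of the last row of R from a column-pivoted QR.
    They vanish exactly when U1 lies in the row space of M."""
    m, n = M.shape
    Mstar = np.vstack([M, U1])            # (m+1) x n
    _, R, _ = qr(Mstar, pivoting=True)    # M* E = Q R
    return R[-1, m:]                      # last n - m entries of the last row

rng = np.random.default_rng(1)
m, n = 3, 5
M = rng.standard_normal((m, n))
mu = rng.standard_normal((1, m))

res_at_ss = qr_residual_measures(M, mu @ M)                 # U1 = mu' M: rank m
res_off_ss = qr_residual_measures(M, rng.standard_normal((1, n)))
print(np.allclose(res_at_ss, 0.0), np.allclose(res_off_ss, 0.0))
```

The first call mimics the steady state (residual measures vanish); the second mimics an arbitrary trial value of $\tilde{y}$ (they generically do not).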

SLIDE 21

When an analytical solution is available

Let's write $y_x$ for the n − m policy instruments and $y_y$ for the m remaining endogenous variables, such that

$$y^{\star} = \begin{bmatrix} y^{\star}_x \\ y^{\star}_y \end{bmatrix}.$$

The analytical solution is given by

$$y^{\star}_y = \tilde{f}\left(y^{\star}_x\right).$$

We then have

$$f\left(y^{\star}, y^{\star}, y^{\star}, 0\right) = 0$$

Combining this analytical solution with the n − m residuals computed above, we can reduce the steady state problem to a system of n − m nonlinear equations that is much easier to solve.

SLIDE 22

First order approximation of the FOCs

$$E_t\Bigg\{ [U_{11}]_{\alpha\gamma}\,[\hat y_t]^{\gamma} - [\hat\mu_t]_\eta\,[f_2]^{\eta}_{\alpha} - \beta\,[\hat\mu_{t+1}]_\eta\,[f_3]^{\eta}_{\alpha} - \beta^{-1}\,[\hat\mu_{t-1}]_\eta\,[f_1]^{\eta}_{\alpha} - [\bar\mu]_\eta\Big( \beta\,[f_{31}]^{\eta}_{\alpha\gamma}[\hat y_{t+2}]^{\gamma} + \beta^{-1}[f_{13}]^{\eta}_{\alpha\gamma}[\hat y_{t-2}]^{\gamma} + \left([f_{21}]^{\eta}_{\alpha\gamma} + \beta [f_{32}]^{\eta}_{\alpha\gamma}\right)[\hat y_{t+1}]^{\gamma} + \left([f_{23}]^{\eta}_{\alpha\gamma} + \beta^{-1}[f_{12}]^{\eta}_{\alpha\gamma}\right)[\hat y_{t-1}]^{\gamma} + \left([f_{22}]^{\eta}_{\alpha\gamma} + \beta [f_{33}]^{\eta}_{\alpha\gamma} + \beta^{-1}[f_{11}]^{\eta}_{\alpha\gamma}\right)[\hat y_{t}]^{\gamma} + [f_{24}]^{\eta}_{\alpha\delta}[\varepsilon_t]^{\delta} + \beta\,[f_{34}]^{\eta}_{\alpha\delta}[\varepsilon_{t+1}]^{\delta} + \beta^{-1}[f_{14}]^{\eta}_{\alpha\delta}[\varepsilon_{t-1}]^{\delta} \Big)\Bigg\} = 0$$

$$E_t\left\{ [f_1]^{\eta}_{\gamma}[\hat y_{t+1}]^{\gamma} + [f_2]^{\eta}_{\gamma}[\hat y_t]^{\gamma} + [f_3]^{\eta}_{\gamma}[\hat y_{t-1}]^{\gamma} + [f_4]^{\eta}_{\delta}[\varepsilon_t]^{\delta} \right\} = 0$$

where $\hat y_t = y_t - \bar y$, $\hat\mu_t = \mu_t - \bar\mu$, and $f_{ij}$ denotes the second order derivatives of f() with respect to its ith and jth arguments.

SLIDE 23

The first pitfall

A naive approach to linear-quadratic approximation, one that takes a linear approximation of the dynamics of the system and a second order approximation of the objective function, ignores the second order derivatives $f_{ij}$ that enter the first order approximation of the dynamics of the model under optimal policy.
SLIDE 24

The approximated solution function

$$y_t = \bar y + g_1 \hat y_{t-2} + g_2 \hat y_{t-1} + g_3 \hat\mu_{t-1} + g_4 \varepsilon_{t-1} + g_5 \varepsilon_t$$

$$\mu_t = \bar\mu + h_1 \hat y_{t-2} + h_2 \hat y_{t-1} + h_3 \hat\mu_{t-1} + h_4 \varepsilon_{t-1} + h_5 \varepsilon_t$$
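Once the coefficient matrices are known, this recursion is straightforward to iterate. The sketch below uses made-up matrices $g_i$, $h_i$ (not output of Dynare) just to show the state being carried: $(\hat y_{t-2}, \hat y_{t-1}, \hat\mu_{t-1}, \varepsilon_{t-1})$.

```python
import numpy as np

def simulate(ybar, mubar, g, h, eps, y0):
    """Iterate y_t = ybar + g1 yhat_{t-2} + g2 yhat_{t-1} + g3 muhat_{t-1}
                    + g4 eps_{t-1} + g5 eps_t  (and likewise mu_t with h)."""
    g1, g2, g3, g4, g5 = g
    h1, h2, h3, h4, h5 = h
    n, m, p = len(ybar), len(mubar), eps.shape[1]
    yhat = [np.zeros(n), y0 - ybar]      # yhat_{t-2}, yhat_{t-1}
    muhat = np.zeros(m)                   # muhat_{t-1}, zero in the initial period
    eps_lag = np.zeros(p)
    ys = []
    for t in range(eps.shape[0]):
        y = ybar + g1 @ yhat[0] + g2 @ yhat[1] + g3 @ muhat + g4 @ eps_lag + g5 @ eps[t]
        mu = mubar + h1 @ yhat[0] + h2 @ yhat[1] + h3 @ muhat + h4 @ eps_lag + h5 @ eps[t]
        yhat = [yhat[1], y - ybar]
        muhat = mu - mubar
        eps_lag = eps[t]
        ys.append(y)
    return np.array(ys)

# Degenerate 1-variable check: with g5 = I and all other coefficients zero,
# y_t = ybar + eps_t.
Z, I = np.zeros((1, 1)), np.eye(1)
eps = np.array([[0.5], [-0.2], [0.0]])
ys = simulate(np.array([1.0]), np.array([0.0]),
              (Z, Z, Z, Z, I), (Z, Z, Z, Z, Z), eps, y0=np.array([1.0]))
print(ys[:, 0])
```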

SLIDE 25

Dynare implementation

When the ramsey_policy instruction is given, Dynare forms the matrices of coefficients of the first order expansion of the first order necessary conditions, without writing explicitly the equations for the first order necessary conditions. In the future, we will write a *.mod file containing all the equations of the Ramsey problem.

SLIDE 26

Evaluating welfare under optimal policy

◮ Optimal policy under commitment is mostly interesting as a benchmark to evaluate other, more practical policies.
◮ The welfare evaluation must be consistent with the approximated solution. Ideally, if the approximated solution is the exact solution of an approximated problem, approximated welfare should be the objective function of the approximated problem.
◮ A first order approximation of the dynamics under optimal policy is the exact solution of some quadratic problem.
◮ Plugging the policy rule into the original model and computing an approximation of welfare should provide the same measure and avoid paradoxical rankings.
◮ We should aim at a second order approximation of welfare.

SLIDE 27

Conjectures

1. Under optimal policy, the second order approximation of welfare is equal to a second order approximation of the Lagrangian.
2. Second order derivatives of the policy function drop out of the second order approximation of welfare.

SLIDE 28

Intuition from the static case

Consider

$$\max_y U(y) \quad \text{s.t.} \quad f(y, s) = 0$$

The Lagrangian is

$$\mathcal{L}(s) = U(y) - [\lambda]_\eta [f(y, s)]^\eta$$

The solution takes the form

$$y = g(s), \qquad \lambda = h(s)$$

SLIDE 29

First order condition for optimality

$$[U_1]_\alpha - [\lambda]_\eta \left[f_1(y, s)\right]^\eta_\alpha = 0$$

$$f(y, s) = 0$$
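A toy numeric instance of this static problem (my own example, not from the slides): maximize $U(y) = \ln y_1 + \ln y_2$ subject to $f(y, s) = y_1 + 2 y_2 - s = 0$. The closed-form solution is $g(s) = (s/2, s/4)$ and $h(s) = 2/s$, and the first order conditions can be checked directly.

```python
def g(s):
    # optimal allocation: y1 = s/2, y2 = s/4
    return (s / 2.0, s / 4.0)

def h(s):
    # Lagrange multiplier: lambda = 2/s
    return 2.0 / s

s = 3.0
y1, y2 = g(s)
lam = h(s)

# U1 - lambda * f1 = 0, checked component by component
foc1 = 1.0 / y1 - lam * 1.0   # d/dy1: U1 = 1/y1, f1 = 1
foc2 = 1.0 / y2 - lam * 2.0   # d/dy2: U1 = 1/y2, f1 = 2
constraint = y1 + 2.0 * y2 - s
print(foc1, foc2, constraint)   # all zero at the optimum
```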

SLIDE 30

Second order approximation of the Lagrangian

Taking the Taylor expansion around $\bar s$:

$$\bar U - [\bar\lambda]_\eta [\bar f]^\eta - [\bar f]_\eta [h_1]^{\eta}_{\gamma} [\hat s]^{\gamma} + \left( [U_1]_\alpha - [\bar\lambda]_\eta [f_1]^{\eta}_{\alpha} \right) [g_1]^{\alpha}_{\gamma} [\hat s]^{\gamma} - [\bar\lambda]_\eta [f_2]^{\eta}_{\gamma} [\hat s]^{\gamma}$$
$$+ \frac{1}{2}\left\{ \left( [U_1]_\alpha - [\bar\lambda]_\eta [f_1]^{\eta}_{\alpha} \right) [g_{ss}]^{\alpha}_{\gamma\delta} + \left( [U_{11}]_{\alpha\beta} - [\bar\lambda]_\eta [f_{11}]^{\eta}_{\alpha\beta} \right) [g_1]^{\alpha}_{\gamma} [g_1]^{\beta}_{\delta} - [\bar\lambda]_\eta [f_{22}]^{\eta}_{\gamma\delta} - [\bar f]_\eta [h_{11}]^{\eta}_{\gamma\delta} - 2\left( [\bar\lambda]_\eta [f_{21}]^{\eta}_{\alpha\delta} [g_1]^{\alpha}_{\gamma} + [h_1]^{\eta}_{\gamma} [f_1]^{\eta}_{\alpha} [g_1]^{\alpha}_{\delta} \right) \right\} [\hat s]^{\gamma} [\hat s]^{\delta}$$

SLIDE 31

Simplifying

Taking the Taylor expansion of the Lagrangian around $\bar s$:

$$\bar U + [U_1]_\alpha [g_1]^{\alpha}_{\gamma} [\hat s]^{\gamma} + \frac{1}{2}\left\{ \left( [U_{11}]_{\alpha\beta} - [\bar\lambda]_\eta [f_{11}]^{\eta}_{\alpha\beta} \right) [g_1]^{\alpha}_{\gamma} [g_1]^{\beta}_{\delta} - [\bar\lambda]_\eta [f_{22}]^{\eta}_{\gamma\delta} - 2\left( [\bar\lambda]_\eta [f_{21}]^{\eta}_{\alpha\delta} [g_1]^{\alpha}_{\gamma} + [h_1]^{\eta}_{\gamma} [f_1]^{\eta}_{\alpha} [g_1]^{\alpha}_{\delta} \right) \right\} [\hat s]^{\gamma} [\hat s]^{\delta}$$

SLIDE 32

In addition . . .

From the steady state:

$$[U_1]_\alpha - [\bar\lambda]_\eta [f_1]^{\eta}_{\alpha} = 0$$

When we compute a second order approximation of the FOCs, we also have

$$[f_1]^{\eta}_{\alpha} [g_{ss}]^{\alpha}_{\gamma\delta} + [f_{11}]^{\eta}_{\alpha\beta} [g_1]^{\alpha}_{\gamma} [g_1]^{\beta}_{\delta} + [f_{22}]^{\eta}_{\gamma\delta} + 2 [f_{21}]^{\eta}_{\alpha\delta} [g_1]^{\alpha}_{\gamma} = 0$$

So

$$[U_1]_\alpha [g_{ss}]^{\alpha}_{\gamma\delta} = -[\bar\lambda]_\eta \left( [f_{11}]^{\eta}_{\alpha\beta} [g_1]^{\alpha}_{\gamma} [g_1]^{\beta}_{\delta} + [f_{22}]^{\eta}_{\gamma\delta} + 2 [f_{21}]^{\eta}_{\alpha\delta} [g_1]^{\alpha}_{\gamma} \right)$$

SLIDE 33

Extending the intuition to DSGE models

Steady state condition:

$$[U_1]_\alpha - [\bar\mu]_\eta \left( [f_2]^{\eta}_{\alpha} + \beta [f_3]^{\eta}_{\alpha} + \beta^{-1} [f_1]^{\eta}_{\alpha} \right) = 0$$

◮ One needs to properly select the timing of variables in the welfare summation over time.
◮ One needs to use the fact that the initial value of the Lagrange multipliers is zero.

SLIDE 34

Timeless perspective

◮ The time inconsistency problem stems from the fact that the initial value of the lagged Lagrange multipliers is set to zero.
◮ If, later on, the authorities re-optimize, they reset the Lagrange multipliers to zero.
◮ This mechanism reflects the fact that the authorities make their decision after private agents have formed their expectations (on the basis of the previous policy).
◮ When private agents expect the authorities to reoptimize, the economy switches to a Nash equilibrium (discretionary solution).
◮ For optimal policy in a timeless perspective, Svensson and Woodford suggest that authorities relinquish their first period advantage and act as if the Lagrange multipliers had been initialized to zero in the far away past.
◮ What should be the initial value of the Lagrange multipliers for optimal policy in a timeless perspective?
SLIDE 35

Eliminating the Lagrange multipliers

A linear approximation of the solution to a Ramsey problem takes the form:

$$\begin{bmatrix} y_t \\ \mu_t \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} y_{t-1} \\ \mu_{t-1} \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \varepsilon_t = A_1 y_{t-1} + A_2 \mu_{t-1} + B \varepsilon_t$$

where $A_1$ and $A_2$ are the conforming sub-matrices. The problem is to eliminate $\mu_{t-1}$ from the expression for $y_t$ by substituting values of $y_{t-1}$, $y_{t-2}$, $\varepsilon_{t-1}$.

SLIDE 36

A QR algorithm (I)

$$A_2 E = Q R = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix} \begin{bmatrix} R_1 \\ R_2 \end{bmatrix}$$

where E is a permutation matrix, $R_1$ is upper triangular and $R_2$ is empty if $A_2$ has full (column) rank. Replacing $A_2$ by its decomposition, one gets

$$Q' \begin{bmatrix} y_t \\ \mu_t \end{bmatrix} = Q' A_1 y_{t-1} + R E' \mu_{t-1} + Q' B \varepsilon_t$$

The bottom part of the above system can be written

$$Q_{12}'\, y_t + Q_{22}'\, \mu_t = \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} \left( A_1 y_{t-1} + B \varepsilon_t \right)$$

This system in turn necessarily contains more equations than elements in $\mu_t$.

SLIDE 37

A QR algorithm (II)

A new application of the QR decomposition, to $Q_{22}'$, gives:

$$Q_{22}'\, \bar{E} = \bar{Q} \bar{R} = \begin{bmatrix} \bar{Q}_{11} & \bar{Q}_{12} \\ \bar{Q}_{21} & \bar{Q}_{22} \end{bmatrix} \begin{bmatrix} \bar{R}_1 \\ \bar{R}_2 \end{bmatrix}$$

where $\bar{E}$ is a permutation matrix. When $\bar{R}_2$ isn't empty, the system is underdetermined and it is possible to choose some of the multipliers $\mu_t$, the ones corresponding to the columns of $\bar{R}_2$. We set them to zero, following a minimal state space type of argument. Note however that the QR decomposition isn't unique. Once the economy is managed according to optimal policy, all choices of decomposition are equivalent, but the choice may matter for the initial period. This is an issue that deserves further study.

SLIDE 38

A QR algorithm (III)

Then we have

$$\mu_t = \bar{E}\, \bar{R}_1^{-1} \begin{bmatrix} \bar{Q}_{11}' & \bar{Q}_{21}' \end{bmatrix} \left( \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} \left( A_1 y_{t-1} + B \varepsilon_t \right) - Q_{12}'\, y_t \right)$$

and

$$\mu_{t-1} = \bar{E}\, \bar{R}_1^{-1} \begin{bmatrix} \bar{Q}_{11}' & \bar{Q}_{21}' \end{bmatrix} \left( \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} \left( A_1 y_{t-2} + B \varepsilon_{t-1} \right) - Q_{12}'\, y_{t-1} \right)$$

SLIDE 39

A QR algorithm (IV)

Replacing $\mu_{t-1}$ in the original equation for $y_t$, one finally obtains

$$y_t = M_1 y_{t-1} + M_2 y_{t-2} + M_3 \varepsilon_t + M_4 \varepsilon_{t-1}$$

where

$$M_1 = A_{11} - A_{12}\, \bar{E}\, \bar{R}_1^{-1} \begin{bmatrix} \bar{Q}_{11}' & \bar{Q}_{21}' \end{bmatrix} Q_{12}'$$

$$M_2 = A_{12}\, \bar{E}\, \bar{R}_1^{-1} \begin{bmatrix} \bar{Q}_{11}' & \bar{Q}_{21}' \end{bmatrix} \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} A_1$$

$$M_3 = B_1$$

$$M_4 = A_{12}\, \bar{E}\, \bar{R}_1^{-1} \begin{bmatrix} \bar{Q}_{11}' & \bar{Q}_{21}' \end{bmatrix} \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} B$$
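The elimination can be checked numerically. The sketch below is not the exact two-stage pivoted QR of the slides: it recovers $\mu_{t-1}$ from the bottom block of the first QR by a least-squares solve (equivalent when $A_2$ has full column rank), then verifies that the reconstructed $y_t$ matches a direct simulation that carries the multipliers. All matrices are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, T = 4, 2, 2, 20

A = 0.5 * rng.standard_normal((n + m, n + m))
B = rng.standard_normal((n + m, p))
A1, A2 = A[:, :n], A[:, n:]
A11, A12, B1 = A[:n, :n], A[:n, n:], B[:n, :]

# First QR: the bottom n rows of Q' annihilate A2.
Q, _ = np.linalg.qr(A2, mode='complete')     # Q is (n+m) x (n+m)
Qbot = Q[:, m:].T                             # bottom block of Q': n x (n+m)
X, Y = Qbot[:, :n], Qbot[:, n:]               # multiply y_t and mu_t respectively

def recover_mu(y_now, y_prev, eps_now):
    """Solve Y mu = Qbot (A1 y_prev + B eps_now) - X y_now in least squares."""
    rhs = Qbot @ (A1 @ y_prev + B @ eps_now) - X @ y_now
    return np.linalg.lstsq(Y, rhs, rcond=None)[0]

# Direct simulation carrying mu explicitly.
eps = rng.standard_normal((T, p))
y, mu = np.zeros((T, n)), np.zeros((T, m))
for t in range(1, T):
    state = A1 @ y[t-1] + A2 @ mu[t-1] + B @ eps[t]
    y[t], mu[t] = state[:n], state[n:]

# Reconstruct y_t using only y_{t-1}, y_{t-2}, eps_{t-1}, eps_t.
ok = True
for t in range(2, T):
    mu_lag = recover_mu(y[t-1], y[t-2], eps[t-1])
    y_rec = A11 @ y[t-1] + A12 @ mu_lag + B1 @ eps[t]
    ok = ok and np.allclose(y_rec, y[t])
print(ok)
```

This confirms that, along trajectories of the model, the multipliers are redundant given $(y_{t-1}, y_{t-2}, \varepsilon_{t-1})$, which is the content of the elimination above.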
SLIDE 40

A linear–quadratic example (I)

var y inf r dr;
varexo e_y e_inf;
parameters delta sigma alpha kappa gamma1 gamma2;
delta = 0.44;
kappa = 0.18;
alpha = 0.48;
sigma = -0.06;

SLIDE 41

A linear–quadratic example (II)

model(linear);
y = delta*y(-1)+(1-delta)*y(+1)+sigma*(r-inf(+1))+e_y;
inf = alpha*inf(-1)+(1-alpha)*inf(+1)+kappa*y+e_inf;
dr = r - r(-1);
end;

shocks;
var e_y; stderr 0.63;
var e_inf; stderr 0.4;
end;

SLIDE 42

A linear–quadratic example (III)

planner_objective y^2 + inf^2 + 0.2*dr^2;
ramsey_policy(planner_discount=1);

Warning: the solution isn't necessarily minimum state!

SLIDE 43

Optimal simple rule

Example (Clarida, Gali, Gertler):

$$y_t = \delta y_{t-1} + (1-\delta) E_t y_{t+1} + \sigma \left(r_t - E_t\, \mathit{inf}_{t+1}\right) + e_{y,t}$$

$$\mathit{inf}_t = \alpha\, \mathit{inf}_{t-1} + (1-\alpha) E_t\, \mathit{inf}_{t+1} + \kappa y_t + e_{\mathit{inf},t}$$

$$r_t = \gamma_1\, \mathit{inf}_t + \gamma_2\, y_t$$

Objective:

$$\arg\min_{\gamma_1,\gamma_2}\ \mathrm{var}(y) + \mathrm{var}(\mathit{inf}) = \arg\min_{\gamma_1,\gamma_2}\ \lim_{\beta\to 1} E_0 \sum_{t=1}^{\infty} (1-\beta)\beta^t \left( y_t^2 + \mathit{inf}_t^2 \right)$$
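The limit on the right can be illustrated numerically (my own check, not from the slides): for an AR(1) $x_t = \rho x_{t-1} + e_t$ with $x_0 = 0$ and $\mathrm{Var}(e) = \sigma^2$, $E[x_t^2] = \sigma^2(1-\rho^{2t})/(1-\rho^2)$, so the discounted sum $(1-\beta)\sum_t \beta^t E[x_t^2]$ has a closed form that can be compared with the unconditional variance $\sigma^2/(1-\rho^2)$ as $\beta \to 1$.

```python
def discounted_second_moment(rho, sigma2, beta, T=200000):
    """(1 - beta) * sum_{t=1}^T beta^t E[x_t^2] for the AR(1) above."""
    total = 0.0
    bt = 1.0
    for t in range(1, T + 1):
        bt *= beta
        ex2 = sigma2 * (1.0 - rho ** (2 * t)) / (1.0 - rho ** 2)
        total += bt * ex2
    return (1.0 - beta) * total

rho, sigma2 = 0.9, 1.0
var_x = sigma2 / (1.0 - rho ** 2)          # unconditional variance
approx = discounted_second_moment(rho, sigma2, beta=0.9999)
print(abs(approx - var_x) < 1e-2 * var_x)  # close to var(x) for beta near 1
```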

SLIDE 44

DYNARE example

var y inf r;
varexo e_y e_inf;
parameters delta sigma alpha kappa gamma1 gamma2;
delta = 0.44;
kappa = 0.18;
alpha = 0.48;
sigma = -0.06;

SLIDE 45

(continued)

model(linear);
y = delta*y(-1)+(1-delta)*y(+1)+sigma*(r-inf(+1))+e_y;
inf = alpha*inf(-1)+(1-alpha)*inf(+1)+kappa*y+e_inf;
r = gamma1*inf+gamma2*y;
end;

shocks;
var e_y; stderr 0.63;
var e_inf; stderr 0.4;
end;

SLIDE 46

(continued)

optim_weights;
inf 1;
y 1;
end;

gamma1 = 1.1;
gamma2 = 0;

osr_params gamma1 gamma2;
osr;
SLIDE 47

Another example

$$y_t = \delta y_{t-1} + (1-\delta) E_t y_{t+1} + \sigma \left(r_t - E_t\, \mathit{inf}_{t+1}\right) + e_{y,t}$$

$$\mathit{inf}_t = \alpha\, \mathit{inf}_{t-1} + (1-\alpha) E_t\, \mathit{inf}_{t+1} + \kappa y_t + e_{\mathit{inf},t}$$

$$r_t = \gamma_1\, \mathit{inf}_t + \gamma_2\, y_t$$

$$dr_t = r_t - r_{t-1}$$

Objective:

$$\min_{\gamma_1,\gamma_2}\ \mathrm{var}(y) + \mathrm{var}(\mathit{inf}) + 0.2\, \mathrm{var}(dr)$$