

SLIDE 1

EC400 Part II, Math for Micro: Lecture 6

Leonardo Felli

NAB.SZT

16 September 2010

SLIDE 2

SOC with Two Variables and One Constraint

Consider the problem:

$$\max_{x_1, x_2} f(x_1, x_2) \quad \text{s.t.} \quad g(x_1, x_2) \leq b.$$

The bordered Hessian matrix is:

$$B = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} - \lambda^* \frac{\partial^2 g}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} - \lambda^* \frac{\partial^2 g}{\partial x_1 \partial x_2} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} - \lambda^* \frac{\partial^2 g}{\partial x_1 \partial x_2} & \frac{\partial^2 f}{\partial x_2^2} - \lambda^* \frac{\partial^2 g}{\partial x_2^2}
\end{pmatrix}$$

Leonardo Felli (LSE, NAB.SZT) EC400 Part II, Math for Micro: Lecture 6 16 September 2010 2 / 31

SLIDE 3

The leading principal submatrices are:

$$B_1 = (0), \qquad
B_2 = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} - \lambda^* \frac{\partial^2 g}{\partial x_1^2}
\end{pmatrix}, \qquad
B_3 = B.$$

The sufficient SOC are:

$$|B_2| < 0 \text{ (sign of } (-1)^1\text{), which is always satisfied, and } |B_3| > 0 \text{ (sign of } (-1)^2\text{).}$$
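To make the check concrete, here is a minimal numerical sketch (a hypothetical example, not from the lecture): for max $x_1 x_2$ subject to $x_1 + x_2 \leq 10$, the FOC give $x_1^* = x_2^* = 5$ and $\lambda^* = 5$, and the leading principal minors of the bordered Hessian can be evaluated directly:

```python
import numpy as np

# Hypothetical example: max x1*x2  s.t.  x1 + x2 <= 10.
# FOC give x1* = x2* = 5 and lambda* = 5.  With g = x1 + x2 all second
# derivatives of g vanish, and f11 = f22 = 0, f12 = 1, so the bordered
# Hessian at the optimum is:
B = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

det_B2 = np.linalg.det(B[:2, :2])  # leading principal minor of order 2
det_B3 = np.linalg.det(B)          # the full bordered Hessian

print(det_B2)  # approx -1.0: sign of (-1)^1, always satisfied
print(det_B3)  # approx  2.0: sign of (-1)^2, so the sufficient SOC for a max hold
```

The alternating signs of the minors, starting negative at $|B_2|$, are exactly the pattern the sufficient SOC require.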

SLIDE 4

Consider now the problem:

$$\max_{x_1, x_2, x_3} f(x_1, x_2, x_3) \quad \text{s.t.} \quad g(x_1, x_2, x_3) \leq b.$$

The bordered Hessian matrix is:

$$H = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} & \frac{\partial g}{\partial x_3} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} & \frac{\partial^2 L}{\partial x_1 \partial x_3} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} & \frac{\partial^2 L}{\partial x_2^2} & \frac{\partial^2 L}{\partial x_2 \partial x_3} \\
\frac{\partial g}{\partial x_3} & \frac{\partial^2 L}{\partial x_1 \partial x_3} & \frac{\partial^2 L}{\partial x_2 \partial x_3} & \frac{\partial^2 L}{\partial x_3^2}
\end{pmatrix}$$

SLIDE 5

The leading principal submatrices are:

$$H_1 = (0), \qquad
H_2 = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2}
\end{pmatrix}, \qquad
H_3 = \begin{pmatrix}
0 & \frac{\partial g}{\partial x_1} & \frac{\partial g}{\partial x_2} \\
\frac{\partial g}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} \\
\frac{\partial g}{\partial x_2} & \frac{\partial^2 L}{\partial x_1 \partial x_2} & \frac{\partial^2 L}{\partial x_2^2}
\end{pmatrix}, \qquad
H_4 = H.$$

The sufficient SOC are then $|H_2| < 0$ (always satisfied), $|H_3| > 0$ and $|H_4| < 0$.
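The same mechanics extend to three variables. A hypothetical sketch (not from the lecture): for max $x_1 x_2 x_3$ subject to $x_1 + x_2 + x_3 \leq 3$, the FOC give $x^* = (1, 1, 1)$ and $\lambda^* = 1$, and the minors can be checked in sequence:

```python
import numpy as np

# Hypothetical example: max x1*x2*x3  s.t.  x1 + x2 + x3 <= 3.
# FOC give x* = (1, 1, 1), lambda* = 1.  Since g is linear, the second
# derivatives of L reduce to those of f: L_ii = 0 and L_ij = x_k = 1
# for i != j at the optimum, so the bordered Hessian is:
H = np.array([[0., 1., 1., 1.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [1., 1., 1., 0.]])

# leading principal minors |H2|, |H3|, |H4|
minors = [np.linalg.det(H[:k, :k]) for k in (2, 3, 4)]
print(minors)  # approx [-1.0, 2.0, -3.0]: alternating signs, SOC for a max hold
```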

SLIDE 6

Maximum Value Functions

Profit functions and indirect utility functions are examples of maximum value functions, whereas cost functions and expenditure functions are minimum value functions.

Definition

Let x(b) be a solution to the problem of maximizing f(x) subject to g(x) ≤ b. The corresponding maximum value function is then

$$v(b) = f(x(b)).$$

A maximum value function is non-decreasing.

SLIDE 7

The Interpretation of the Lagrange Multiplier

Consider the problem:

$$\max_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g_1(x) \leq b_1^*, \; \ldots, \; g_k(x) \leq b_k^*.$$

Let $b^* = (b_1^*, \ldots, b_k^*)$, let $x_1^*(b^*), \ldots, x_n^*(b^*)$ denote the optimal solution, and let $\lambda_1(b^*), \ldots, \lambda_k(b^*)$ be the corresponding Lagrange multipliers.

SLIDE 8

Theorem

Assume that, as $b$ varies near $b^*$, $x_1^*(b^*), \ldots, x_n^*(b^*)$ and $\lambda_1(b^*), \ldots, \lambda_k(b^*)$ are differentiable functions and that $x^*(b^*)$ satisfies the constraint qualification. Then for each $j = 1, 2, \ldots, k$:

$$\frac{\partial}{\partial b_j} f(x^*(b^*)) = \lambda_j(b^*).$$
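A numerical illustration of the theorem (a hypothetical example, not from the lecture): for max $xy$ subject to $x + y \leq b$, the solution is $x^*(b) = y^*(b) = b/2$ with multiplier $\lambda(b) = b/2$, so the derivative of the value function should equal the multiplier:

```python
# Hypothetical example: max x*y  s.t.  x + y <= b.
# FOC give x*(b) = y*(b) = b/2 and lambda(b) = b/2.
def v(b):
    x = y = b / 2.0   # optimal solution from the FOC
    return x * y      # maximum value function, v(b) = b**2 / 4

b, eps = 10.0, 1e-6
lam = b / 2.0         # Lagrange multiplier at b = 10

# finite-difference derivative of the value function
dv_db = (v(b + eps) - v(b - eps)) / (2 * eps)
print(dv_db, lam)  # both approx 5.0: dv/db = lambda(b)
```

The multiplier is exactly the marginal value of relaxing the constraint, as the theorem states.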

SLIDE 9

Proof:

We prove it in the case of a single equality constraint, with $f$ and $h$ functions of two variables. The Lagrangian is

$$L(x, y, \lambda; b) = f(x, y) - \lambda(h(x, y) - b).$$

The solution satisfies, for all $b$:

$$0 = \frac{\partial L}{\partial x}(x^*(b), y^*(b), \lambda^*(b); b) = \frac{\partial f}{\partial x}(x^*(b), y^*(b)) - \lambda^*(b)\frac{\partial h}{\partial x}(x^*(b), y^*(b)),$$

SLIDE 10

and

$$0 = \frac{\partial L}{\partial y}(x^*(b), y^*(b), \lambda^*(b); b) = \frac{\partial f}{\partial y}(x^*(b), y^*(b)) - \lambda^*(b)\frac{\partial h}{\partial y}(x^*(b), y^*(b)).$$

Furthermore, since $h(x^*(b), y^*(b)) = b$ for all $b$:

$$\frac{\partial h}{\partial x}(x^*, y^*)\frac{dx^*(b)}{db} + \frac{\partial h}{\partial y}(x^*, y^*)\frac{dy^*(b)}{db} = 1.$$

SLIDE 11

Therefore, using the chain rule, we have:

$$\frac{d f(x^*(b), y^*(b))}{db} = \frac{\partial f}{\partial x}(x^*, y^*)\frac{dx^*(b)}{db} + \frac{\partial f}{\partial y}(x^*, y^*)\frac{dy^*(b)}{db} = \lambda^*(b)\left[\frac{\partial h}{\partial x}(x^*, y^*)\frac{dx^*(b)}{db} + \frac{\partial h}{\partial y}(x^*, y^*)\frac{dy^*(b)}{db}\right] = \lambda^*(b).$$

SLIDE 12

Economic Interpretation:

A Lagrange multiplier can be interpreted as a 'shadow price'. For example, in a firm's profit maximization problem, the Lagrange multiplier tells us how valuable another unit of input would be to the firm's profits. Alternatively, it tells us how much the firm's maximum profit changes when the constraint is relaxed. Finally, it identifies the maximum amount the firm would be willing to pay to acquire another unit of input.

SLIDE 13

Recall that

$$L(x(b), y(b), \lambda(b); b) = f(x(b), y(b)) - \lambda(b)\,(g(x(b), y(b)) - b) = f(x(b), y(b)),$$

so that

$$\frac{\partial}{\partial b} L(x(b), y(b), \lambda(b); b) = \frac{d}{db} f(x(b), y(b); b) = \lambda(b).$$

SLIDE 14

Envelope Theorem

What we have found is simply a particular application of the envelope theorem, which says that:

$$\frac{\partial}{\partial b} L(x(b), y(b), \lambda(b); b) = \frac{d}{db} f(x(b), y(b); b).$$

Consider the problem:

$$\max_{x_1, x_2, \ldots, x_n} f(x_1, x_2, \ldots, x_n) \quad \text{s.t.} \quad h_1(x_1, \ldots, x_n, c) = 0, \; \ldots, \; h_k(x_1, \ldots, x_n, c) = 0.$$

SLIDE 15

Let $x_1^*(c), \ldots, x_n^*(c)$ denote the optimal solution and let $\mu_1(c), \ldots, \mu_k(c)$ be the corresponding Lagrange multipliers.

Suppose that $x_1^*(c), \ldots, x_n^*(c)$ and $\mu_1(c), \ldots, \mu_k(c)$ are differentiable functions and that $x^*(c)$ satisfies the constraint qualification. Then:

$$\frac{\partial}{\partial c} L(x^*(c), \mu(c); c) = \frac{d}{dc} f(x^*(c); c).$$

SLIDE 16

Notice: in the case where constraint $j$, $h_j(x_1, x_2, \ldots, x_n, c) = 0$, takes the form $h_j'(x_1, x_2, \ldots, x_n) - c = 0$, we are back to the previous case:

$$\frac{\partial}{\partial c} L(x^*(c), \mu(c); c) = \frac{d}{dc} f(x^*(c); c) = \mu_j(c),$$

but the statement is more general.

SLIDE 17

Proof:

We prove it for the simpler case of an unconstrained problem. Let $\phi(x; a)$ be a continuous function of $x \in \mathbb{R}^n$ and a scalar $a$. For any $a$, consider the maximization problem $\max_x \phi(x; a)$. Let $x^*(a)$ be the solution of this problem, assumed to be a continuous and differentiable function of $a$. We will show that:

$$\frac{d}{da}\phi(x^*(a); a) = \frac{\partial}{\partial a}\phi(x^*(a); a).$$

SLIDE 18

By the chain rule we have:

$$\frac{d}{da}\phi(x^*(a); a) = \sum_i \frac{\partial \phi}{\partial x_i}(x^*(a); a)\frac{dx_i^*}{da}(a) + \frac{\partial \phi}{\partial a}(x^*(a); a).$$

Since, by the first order conditions:

$$\frac{\partial \phi}{\partial x_i}(x^*(a); a) = 0, \quad \forall i,$$

we then get:

$$\frac{d}{da}\phi(x^*(a); a) = \frac{\partial \phi}{\partial a}(x^*(a); a).$$
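A quick numerical check of this step (a hypothetical example, not from the lecture): take $\phi(x; a) = -x^2 + 2ax$, so the FOC give $x^*(a) = a$ and $\phi(x^*(a); a) = a^2$. The total derivative of the value function and the partial derivative of $\phi$ in $a$, both evaluated at the optimum, should coincide:

```python
# Hypothetical example: phi(x; a) = -x**2 + 2*a*x.
def phi(x, a):
    return -x**2 + 2*a*x

def x_star(a):
    return a   # solution of the FOC: -2x + 2a = 0

a, eps = 3.0, 1e-6

# total derivative d/da phi(x*(a); a), letting x*(a) adjust
total = (phi(x_star(a + eps), a + eps) - phi(x_star(a - eps), a - eps)) / (2*eps)
# partial derivative d phi/da, holding x fixed at x*(a)
partial = (phi(x_star(a), a + eps) - phi(x_star(a), a - eps)) / (2*eps)

print(total, partial)  # both approx 6.0 = 2a: the envelope theorem in action
```

The first-order effect through $x^*(a)$ vanishes because the FOC hold, which is exactly the content of the proof above.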

SLIDE 19

Intuitively, when we are already at a maximum, slightly changing the parameters of the problem or the constraints does not affect, to a first order, the value of the maximum function through changes in the solution $x^*(a)$, because of the first order conditions $\frac{\partial \phi}{\partial x_i}(x^*(a); a) = 0$. This is a local result: when we use the envelope theorem we have to make sure that we do not jump to another solution in a discrete manner.

SLIDE 20

Implicit Function Theorem

In economic theory, once we pin down an equilibrium or a solution to an optimization problem, we are interested in how the exogenous variables affect the values of the endogenous variables. The key tool used in this endeavor is the Implicit Function Theorem. We have been using the Implicit Function Theorem throughout without stating it or explaining why we can use it.

SLIDE 21

Implicit Function Theorem

The Implicit Function Theorem assures us of the conditions under which a set of simultaneous equations:

$$\begin{aligned}
F^1(y_1, \ldots, y_n; x_1, \ldots, x_m) &= 0 \\
F^2(y_1, \ldots, y_n; x_1, \ldots, x_m) &= 0 \\
&\;\vdots \\
F^n(y_1, \ldots, y_n; x_1, \ldots, x_m) &= 0
\end{aligned} \tag{1}$$

defines a set of implicit functions:

$$\begin{aligned}
y_1 &= f^1(x_1, \ldots, x_m) \\
y_2 &= f^2(x_1, \ldots, x_m) \\
&\;\vdots \\
y_n &= f^n(x_1, \ldots, x_m)
\end{aligned} \tag{2}$$

SLIDE 22

In other words, the conditions of the Implicit Function Theorem guarantee the existence of these implicit functions even though we may not be able to write them in explicit form. Consider (1). Assume that the functions $F^1, \ldots, F^n$ all have continuous partial derivatives with respect to all the $x$ and $y$ variables. Assume also that at a point $(y', x')$ satisfying (1) the determinant of the $(n \times n)$ Jacobian with respect to the $y$-variables is nonzero.

SLIDE 23

In other words, assume that:

$$|J| = \begin{vmatrix}
\frac{\partial F^1(y', x')}{\partial y_1} & \frac{\partial F^1(y', x')}{\partial y_2} & \cdots & \frac{\partial F^1(y', x')}{\partial y_n} \\
\frac{\partial F^2(y', x')}{\partial y_1} & \frac{\partial F^2(y', x')}{\partial y_2} & \cdots & \frac{\partial F^2(y', x')}{\partial y_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial F^n(y', x')}{\partial y_1} & \frac{\partial F^n(y', x')}{\partial y_2} & \cdots & \frac{\partial F^n(y', x')}{\partial y_n}
\end{vmatrix} \neq 0.$$

SLIDE 24

Theorem (Implicit Function Theorem)

Then there exists an $m$-dimensional neighborhood of $x'$ in which the variables $y_1, \ldots, y_n$ are functions of $x_1, \ldots, x_m$ according to the $f^i$ functions in (2) above. The equations in (2) are satisfied at $x'$ and $y'$. They also satisfy (1) for every vector $x$ in the neighborhood, thereby giving (1) the status of a set of identities. Finally, the functions $f^i$ are continuous and have continuous partial derivatives with respect to all the $x$ variables.
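A one-equation sketch of the theorem (a hypothetical example, not from the lecture): $F(y; x) = y^3 + y - x = 0$ defines $y(x)$ implicitly even though we cannot solve for $y$ in closed form. At $x = 2$ we have $y = 1$ and $F_y = 3y^2 + 1 = 4 \neq 0$, so the theorem applies and gives $dy/dx = -F_x/F_y = 1/4$, which a numerical solve confirms:

```python
# Hypothetical example: F(y; x) = y**3 + y - x = 0 implicitly defines y(x).
def solve_y(x, y0=1.0):
    # Newton's method on F(y; x) = 0, using F_y = 3*y**2 + 1
    y = y0
    for _ in range(50):
        y -= (y**3 + y - x) / (3*y**2 + 1)
    return y

eps = 1e-6
# numerical derivative of the implicit function y(x) at x = 2
dy_dx = (solve_y(2 + eps) - solve_y(2 - eps)) / (2 * eps)
print(dy_dx)  # approx 0.25, matching -F_x / F_y = 1 / (3*y**2 + 1) at y = 1
```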

SLIDE 25

Derivatives of Implicit Functions

It is then possible to find the partial derivatives of the implicit functions without having to solve explicitly for the $y$ variables. Since, in the neighborhood in question, the equations in (1) have the status of identities, we can take the total differential of each of them and write $dF^j = 0$.

SLIDE 26

When allowing $dx_1 \neq 0$ and setting the remaining $dx_i = 0$, the result, in matrix notation, is:

$$\begin{pmatrix}
\frac{\partial F^1}{\partial y_1} & \frac{\partial F^1}{\partial y_2} & \cdots & \frac{\partial F^1}{\partial y_n} \\
\frac{\partial F^2}{\partial y_1} & \frac{\partial F^2}{\partial y_2} & \cdots & \frac{\partial F^2}{\partial y_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial F^n}{\partial y_1} & \frac{\partial F^n}{\partial y_2} & \cdots & \frac{\partial F^n}{\partial y_n}
\end{pmatrix}
\begin{pmatrix}
\frac{\partial y_1}{\partial x_1} \\
\frac{\partial y_2}{\partial x_1} \\
\vdots \\
\frac{\partial y_n}{\partial x_1}
\end{pmatrix}
= -\begin{pmatrix}
\frac{\partial F^1}{\partial x_1} \\
\frac{\partial F^2}{\partial x_1} \\
\vdots \\
\frac{\partial F^n}{\partial x_1}
\end{pmatrix}$$

Finally, since $|J|$ is nonzero, there is a unique solution to this linear system, which by Cramer's rule can be expressed as:

$$\frac{\partial y_j}{\partial x_1} = \frac{|J_j|}{|J|},$$

where $J_j$ denotes $J$ with its $j$-th column replaced by the right-hand-side vector.
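A minimal numeric instance of this system (hypothetical, not from the lecture): with $F^1 = y_1^2 + y_2 - x_1$ and $F^2 = y_1 + y_2^3 - 2$, the point $(y_1, y_2; x_1) = (1, 1; 2)$ solves both equations, $|J| = 5 \neq 0$, and the derivatives $\partial y_j / \partial x_1$ follow from the linear system (equivalently, from Cramer's rule):

```python
import numpy as np

# Hypothetical system:
#   F1 = y1**2 + y2   - x1 = 0
#   F2 = y1   + y2**3 - 2  = 0
# At (y1, y2; x1) = (1, 1; 2) both equations hold, and the Jacobian
# with respect to y is:
#   J = [[2*y1, 1], [1, 3*y2**2]] = [[2, 1], [1, 3]],  |J| = 5 != 0.
J = np.array([[2.0, 1.0],
              [1.0, 3.0]])
rhs = -np.array([-1.0, 0.0])   # minus the column of dF/dx1: F1_x1 = -1, F2_x1 = 0

dy_dx1 = np.linalg.solve(J, rhs)
print(dy_dx1)  # [0.6, -0.2]: dy1/dx1 = |J1|/|J| = 3/5, dy2/dx1 = |J2|/|J| = -1/5
```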

SLIDE 27

Comparative Statics

This holds for general problems. Optimization problems have a unique feature: the condition $|J| \neq 0$ is indeed guaranteed. This is because the system of equations that defines the implicit functions is the set of FOC, so $J$ is simply the matrix of second partial derivatives of $L$, similar to the bordered Hessian. In other words, we can take the first order conditions, or a set of equilibrium conditions, totally differentiate them and find how the endogenous variables change with the exogenous ones in the neighborhood of the solution.

SLIDE 28

For example, in the case of optimization with one equality constraint, the system

$$F^1(\lambda, x, y; b) = 0, \quad F^2(\lambda, x, y; b) = 0, \quad F^3(\lambda, x, y; b) = 0$$

is given by:

$$b - g(x, y) = 0, \quad f_x - \lambda g_x = 0, \quad f_y - \lambda g_y = 0.$$

SLIDE 29

We need to ensure that the Jacobian determinant is nonzero; then we can use total differentiation. In other words, we need to ensure that:

$$|J| = \det \begin{pmatrix}
\frac{\partial F^1}{\partial \lambda} & \frac{\partial F^1}{\partial x} & \frac{\partial F^1}{\partial y} \\
\frac{\partial F^2}{\partial \lambda} & \frac{\partial F^2}{\partial x} & \frac{\partial F^2}{\partial y} \\
\frac{\partial F^3}{\partial \lambda} & \frac{\partial F^3}{\partial x} & \frac{\partial F^3}{\partial y}
\end{pmatrix} \neq 0.$$

SLIDE 30
That is:

$$|J| = \det \begin{pmatrix}
0 & -g_x & -g_y \\
-g_x & f_{xx} - \lambda g_{xx} & f_{xy} - \lambda g_{xy} \\
-g_y & f_{xy} - \lambda g_{xy} & f_{yy} - \lambda g_{yy}
\end{pmatrix} \neq 0.$$

But the determinant of $J$ is that of the bordered Hessian $B$. Whenever the sufficient second order conditions are satisfied, we know that the determinant of the bordered Hessian is nonzero (in fact it is positive).

SLIDE 31

Now we can totally differentiate the equations:

$$g_x\,dx + g_y\,dy - db = 0,$$
$$(f_{xx} - \lambda g_{xx})\,dx + (f_{xy} - \lambda g_{xy})\,dy - g_x\,d\lambda = 0,$$
$$(f_{yx} - \lambda g_{yx})\,dx + (f_{yy} - \lambda g_{yy})\,dy - g_y\,d\lambda = 0.$$

At the solution, we can then solve for $\frac{\partial x}{\partial b}$, $\frac{\partial y}{\partial b}$, $\frac{\partial \lambda}{\partial b}$.
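For a concrete sketch of this last step (a hypothetical example, not from the lecture): with $f = xy$ and $g = x + y$ at $b = 10$, so $x^* = y^* = \lambda^* = 5$, the totally differentiated system is linear in $(d\lambda/db,\, dx/db,\, dy/db)$ and can be solved directly:

```python
import numpy as np

# Hypothetical example: max x*y  s.t.  x + y <= b at b = 10, so x* = y* = lambda* = 5.
# Totally differentiating the three equations w.r.t. b, with g_x = g_y = 1,
# f_xx = f_yy = 0, f_xy = 1 and g's second derivatives zero:
#   row 1:                 dx + dy      = db
#   row 2:  -dlambda            + dy    = 0   (i.e. dy = dlambda)
#   row 3:  -dlambda  + dx              = 0   (i.e. dx = dlambda)
# Unknowns ordered as (dlambda/db, dx/db, dy/db):
A = np.array([[ 0., 1., 1.],
              [-1., 0., 1.],
              [-1., 1., 0.]])
rhs = np.array([1., 0., 0.])

dlam_db, dx_db, dy_db = np.linalg.solve(A, rhs)
print(dlam_db, dx_db, dy_db)  # 0.5 0.5 0.5, consistent with x*(b) = lambda(b) = b/2
```

The answer matches the closed-form solution $x^*(b) = y^*(b) = \lambda(b) = b/2$, whose derivatives in $b$ are all $1/2$.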
