Applications of the Stochastic Gradient Method

1. Two Exercises about Stochastic Gradient
2. Option Pricing Problem and Variance Reduction
3. Spatial Rendez-vous Under Probability Constraint

P. Carpentier

December 11, 2019

Master Optimization — Stochastic Optimization 2019-2020

Lecture Outline

1. Two Exercises about Stochastic Gradient
  • Two-Stage Recourse Problem
  • Trade-off Between Investment and Operation
2. Option Pricing Problem and Variance Reduction
  • Pricing Problem Modeling
  • Computing Efficiently the Price
3. Spatial Rendez-vous Under Probability Constraint
  • Satellite Model and Optimization Problem
  • Probability and Conditional Expectation Handling
  • Stochastic APP Algorithm
  • Numerical Results

A Basic Two-Stage Recourse Problem

We consider the management of a water reservoir. Water is drawn from the reservoir by way of random consumers. In order to ensure the water supply, two decisions are taken at two successive time steps. A first supply decision q1 is taken without any knowledge of the effective consumption, the associated cost being c1 q1^2, with c1 > 0. Once the consumption d (realization of a random variable D) has been observed, a second supply decision q2 is taken in order to maintain the reservoir at its initial level, that is, q2 = d − q1. The associated cost is c2 q2^2, with c2 > 0. The problem is to minimize the expected overall cost of operation.


Mathematical Formulation and Solution

Problem Formulation. q1 is a deterministic decision variable, whereas q2 is the realization of a random variable Q2:

  min_{(q1, Q2)}  c1 q1^2 + E[ c2 Q2^2 ]   s.t.  q1 + Q2 = D   P-a.s.

Equivalent Problem.

  min_{q1 ∈ R}  E[ c1 q1^2 + c2 (D − q1)^2 ]

Analytical solution:  q1♯ = ( c2 / (c1 + c2) ) E[D] .

Stochastic Gradient Algorithm

The stochastic gradient update writes

  Q1^(k+1) = Q1^(k) − ( α / (k + β) ) ( 2 (c1 + c2) Q1^(k) − 2 c2 D^(k+1) ) .

Algorithm (initialization)

  // Random generator
  rand('normal'); rand('seed',123);
  // Random consumption
  m = 10.; sd = 5.;
  // Criterion
  c1 = 3.; c2 = 1.;
  // Initialization
  x = []; y = [];

Algorithm (iterations)

  // Algorithm
  q1k = 10.;
  for k = 1:100
    dk = m + (sd*rand(1));
    gk = 2*((c1+c2)*q1k) - 2*(c2*dk);
    epsk = 1/(k+10);
    q1k = q1k - (epsk*gk);
    x = [x ; k]; y = [y ; q1k];
  end
  // Trajectory plot
  plot2d(x,y);
  xtitle('Stochastic Gradient','Iter.','q1');
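The Scilab loop above can be transcribed as the following Python sketch (not part of the original course material; numpy's generator replaces Scilab's). With c1 = 3, c2 = 1 and E[D] = 10, the iterates should approach the analytical solution q1♯ = c2/(c1 + c2) E[D] = 2.5:

```python
import numpy as np

rng = np.random.default_rng(123)

# Random consumption: D ~ N(m, sd^2)
m, sd = 10.0, 5.0
# Criterion coefficients
c1, c2 = 3.0, 1.0

q1k = 10.0           # initial guess
traj = []
for k in range(1, 2001):
    dk = m + sd * rng.standard_normal()          # draw a consumption sample
    gk = 2.0 * (c1 + c2) * q1k - 2.0 * c2 * dk   # stochastic gradient at q1k
    epsk = 1.0 / (k + 10.0)                      # decreasing step size
    q1k -= epsk * gk
    traj.append(q1k)

print(q1k)  # should be close to c2/(c1+c2)*m = 2.5
```

Running more iterations than the slide's 100 makes the residual noise around q1♯ small.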


A Realization of the Algorithm


More Realizations


Slight Modification of the Problem

As in the basic two-stage recourse problem, a first supply decision q1 is taken without any knowledge of the effective consumption, the associated cost being c1 q1^2; a second supply decision q2 is taken once the consumption d has been observed (realization of a r.v. D), the cost of this second decision being c2 q2^2. The difference between supply and demand is penalized through an additional cost term c3 (q1 + q2 − d)^2. The new problem is:

  min_{(q1, Q2)}  E[ c1 q1^2 + c2 Q2^2 + c3 (q1 + Q2 − D)^2 ] .

Question: how to solve it using a stochastic gradient algorithm?


Resolution of the Modified Problem

Idea: use the interchange theorem to solve the problem w.r.t. Q2.

  min_{(q1, Q2)}  E[ c1 q1^2 + c2 Q2^2 + c3 (q1 + Q2 − D)^2 ]
  ⇔  min_{q1}  c1 q1^2 + min_{Q2} E[ c2 Q2^2 + c3 (q1 + Q2 − D)^2 ]
  ⇔  min_{q1}  c1 q1^2 + E[ min_{q2} c2 q2^2 + c3 (q1 + q2 − D)^2 ] .

The optimal solution of the minimization problem w.r.t. q2 is

  Q2♯ = ( c3 / (c2 + c3) ) (D − q1) ,

so that the problem is equivalent to the open-loop problem

  min_{q1}  E[ c1 q1^2 + ( c2 c3 / (c2 + c3) ) (q1 − D)^2 ] .

The stochastic gradient algorithm applies!
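A sketch of the resulting stochastic gradient iteration in Python (hypothetical parameter values, not from the slides). Writing a = c2 c3/(c2 + c3), the open-loop problem has the closed-form solution q1♯ = a/(c1 + a) E[D], which the iterates should approach:

```python
import numpy as np

rng = np.random.default_rng(0)

m, sd = 10.0, 5.0            # consumption D ~ N(m, sd^2) (illustrative values)
c1, c2, c3 = 3.0, 1.0, 1.0   # cost coefficients (c3 chosen for illustration)
a = c2 * c3 / (c2 + c3)      # coefficient of the reduced open-loop cost

q1 = 10.0
for k in range(1, 3001):
    d = m + sd * rng.standard_normal()
    # stochastic gradient of c1*q1^2 + a*(q1 - d)^2 at the sampled d
    g = 2.0 * c1 * q1 + 2.0 * a * (q1 - d)
    q1 -= g / (k + 10.0)     # decreasing step size

q1_star = a / (c1 + a) * m   # analytical solution of the open-loop problem
print(q1, q1_star)
```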


Trade-off Investment/Operation – Problem Statement

A company owns N production units and has to meet a given non-stochastic demand d ∈ R. For each unit i, the decision maker first takes an investment decision ui ∈ R, the associated cost being Ii(ui). Then a discrete disturbance wi ∈ {wi,a, wi,b, wi,c} occurs. Knowing all noises, the decision maker selects for each unit i an operating point vi ∈ R, which leads to a cost Ci(ui, vi, wi) and a production Pi(vi, wi). The goal is to minimize the expected overall cost, subject to the following constraints:
  • investment limitation: Θ(u1, . . . , uN) ≤ 0,
  • operation limitation: vi ≤ ϕi(ui), i = 1, . . . , N,
  • demand satisfaction: Σ_{i=1}^N Pi(vi, wi) − d = 0.


Trade-off Investment/Operation – Questions

1. Write down the global optimization problem.
   Is it possible to solve the problem directly when N is large? Is it possible to apply the stochastic gradient algorithm?

2. Extract the optimization subproblem obtained when both the investment u = (u1, . . . , uN) and the noise w = (w1, . . . , wN) are fixed. The value of this subproblem is denoted f♯(u, w).
   Give assumptions for the resolution of this subproblem. Give assumptions for f♯ to be a smooth convex function. Compute the partial derivatives of f♯ w.r.t. u.

3. Reformulate the optimization problem using the function f♯ and apply the stochastic gradient algorithm in the following cases:
   • the investment limitation is decoupled: ∀i, ui ∈ [u̲i, u̅i],
   • the investment limitation is linear: u1 + . . . + uN ≤ u̅,
   • the investment limitation is convex: Θ(u1, . . . , uN) ≤ 0.

4. What if decision vi is based on the knowledge of wi only?


Trade-off Investment/Operation — Answer to Q1

We denote by W = (W1, . . . , WN) the global noise variable.

  min_{ui ∈ R, Vi measurable w.r.t. W}  E[ Σ_{i=1}^N Ii(ui) + Ci(ui, Vi, Wi) ]
  s.t.  Θ(u1, . . . , uN) ≤ 0 ,
        Σ_{i=1}^N Pi(Vi, Wi) − d = 0   P-a.s. ,
        Vi − ϕi(ui) ≤ 0   P-a.s. ,   i = 1, . . . , N .

For N = 21, the sizes of the problem are huge:
  • card(W) = 3^21 ≈ 10^10 possible noise values,
  • N + N × card(W) decision variables,
  • 1 + card(W) + N × card(W) constraints.
The SG algorithm does not apply: the decisions Vi are random variables.


Trade-off Investment/Operation — Answer to Q2

Thanks to the interchange theorem, the minimization w.r.t. Vi can be formulated independently for each realization of W. For a realization w of W, the inner minimization subproblem w.r.t. v is

  f♯(u, w) = min_{(v1, . . . , vN) ∈ R^N}  Σ_{i=1}^N Ci(ui, vi, wi)
  s.t.  Σ_{i=1}^N Pi(vi, wi) − d = 0 ,
        vi − ϕi(ui) ≤ 0 ,   i = 1, . . . , N .

Let λ and (µ1, . . . , µN) be the associated multipliers. Assuming that
  • the functions Ci are convex, continuous and coercive w.r.t. vi,
  • the functions Pi are linear w.r.t. vi,
the above problem admits a non-empty set of saddle points.


Trade-off Investment/Operation — Answer to Q2 (end)

If we moreover assume that
  • the functions Ci are jointly convex w.r.t. (ui, vi),
  • the functions Ci are differentiable w.r.t. ui,
  • the functions ϕi are concave and differentiable,
then the function f♯ is convex and subdifferentiable w.r.t. u, and

  ∇ui Ci(ui, vi♯, wi) − µi♯ ∇ϕi(ui) ∈ ∂ui f♯(u, w) .

Finally, if we assume that the subproblem admits a unique saddle point (v♯, λ♯, µ♯), then the function f♯ is differentiable w.r.t. u, and

  ∇ui f♯(u, w) = ∇ui Ci(ui, vi♯, wi) − µi♯ ∇ϕi(ui) .


Trade-off Investment/Operation — Answer to Q3

Using the function f♯ obtained when minimizing w.r.t. the variables vi, the global optimization problem is reformulated as

  min_{(u1, . . . , uN) ∈ R^N}  Σ_{i=1}^N Ii(ui) + E[ f♯(u1, . . . , uN, W) ]
  s.t.  Θ(u1, . . . , uN) ≤ 0 .

We assume that the function f♯ is convex and differentiable, that the functions Ii are convex, coercive and differentiable, and that the function Θ is convex and differentiable. We denote the gradient w.r.t. ui of the cost under the expectation by

  ∇ui j(u, w) = ∇Ii(ui) + ∇ui Ci(ui, vi♯, wi) − µi♯ ∇ϕi(ui) .


Trade-off Investment/Operation — Answer to Q3 (end)

The stochastic gradient method applies to the reformulated problem.

Decoupled investment limitation.
1. Draw a realization w^(k+1) of W.
2. Solve the inner minimization subproblem at (u^(k), w^(k+1)) and denote by v^(k+1) and µ^(k+1) its solution.
3. Update u using the standard stochastic gradient formula:

  ui^(k+1) = proj_[u̲i, u̅i]( ui^(k) − ε^(k) ∇ui j(u^(k), w^(k+1)) ) .

Linear investment limitation.
3. Compute ui^(k+1/2) = ui^(k) − ε^(k) ∇ui j(u^(k), w^(k+1)) for all i, and project the point u^(k+1/2) on the half-space u1 + . . . + uN ≤ u̅.

Convex investment limitation. Apply the stochastic Arrow-Hurwicz algorithm (with multiplier p).
3. ui^(k+1) = ui^(k) − ε^(k) ( ∇ui j(u^(k), w^(k+1)) + ( Θ'ui(u^(k)) )⊤ p^(k) ) .
4. p^(k+1) = max( 0, p^(k) + ε^(k) Θ(u^(k+1)) ) .
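The two explicit projections used in step 3 can be sketched in Python (hypothetical helpers, not from the slides); the half-space projection simply shifts the point along the normal direction (1, . . . , 1):

```python
import numpy as np

def proj_box(u, lo, hi):
    """Projection on the box [lo_i, hi_i], coordinate by coordinate (decoupled case)."""
    return np.clip(u, lo, hi)

def proj_halfspace(u, ubar):
    """Euclidean projection on the half-space {u : sum(u) <= ubar} (linear case)."""
    excess = u.sum() - ubar
    if excess <= 0.0:
        return u                              # already feasible
    n = u.size
    return u - (excess / n) * np.ones(n)      # shift along the normal (1, ..., 1)

u = np.array([2.0, 3.0, 4.0])
v = proj_halfspace(u, 6.0)                    # sum was 9 > 6: project on the boundary
print(v, v.sum())                             # [1. 2. 3.], sum equal to 6
```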

Trade-off Investment/Operation — Answer to Q4

We assume that the random variables (W1, . . . , WN) are independent. From the independence assumption, and since Vi is measurable w.r.t. Wi, we have

  Σ_{i=1}^N Pi(Vi, Wi) = d   ⇔   ∃ (d1, . . . , dN) s.t.  Pi(Vi, Wi) = di ,  Σ_{i=1}^N di = d .

The inner minimization subproblem w.r.t. v can then be decomposed unit by unit:

  gi♯(ui, di, wi) = min_{vi ∈ R}  Ci(ui, vi, wi)
  s.t.  Pi(vi, wi) − di = 0 ,   vi − ϕi(ui) ≤ 0 .

The global optimization problem is then reformulated as

  min_{(u1, . . . , uN) ∈ R^N, (d1, . . . , dN) ∈ R^N}  Σ_{i=1}^N Ii(ui) + E[ gi♯(ui, di, Wi) ]
  s.t.  Θ(u1, . . . , uN) ≤ 0 ,   Σ_{i=1}^N di − d = 0 ,

and thus can be solved by the stochastic Arrow-Hurwicz algorithm.


Option Pricing Problem — Modeling

The price of an option with payoff ψ(St, 0 ≤ t ≤ T) is given by

  P = E[ e^{−rT} ψ(St, 0 ≤ t ≤ T) ] ,

where the dynamics of the underlying n-dimensional asset S is described by the stochastic differential equation

  dSt = St ( r dt + σ(t, St) dWt ) ,   S0 = x ,

r being the interest rate and σ(t, S) the volatility function.


Option Pricing Problem — Discretization

Most of the time, the exact value of the price P is not available. To overcome the difficulty, one considers a discretized approximation (in time) S̄ of S, so that the price P is approximated by

  P̄ = E[ e^{−rT} ψ( S̄t1, . . . , S̄td ) ] .

In such cases, the discretized function can be expressed in terms of the Brownian increments, or equivalently using a normal random vector. A compact form for the discretized price is

  P̄ = E[ φ(ξ) ] ,

where ξ is a large (n × d)-dimensional Gaussian vector with zero mean and identity covariance matrix.
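For instance, in a constant-volatility Black-Scholes setting with a single time step, the terminal asset value is an explicit function of one standard normal variable, so φ can be written in closed form. A Python sketch (all parameter values illustrative, not from the slides) estimating P̄ = E[φ(ξ)] by crude Monte Carlo and comparing it with the closed-form Black-Scholes call price:

```python
import math
import numpy as np

# Illustrative Black-Scholes setting (constant volatility, exact one-step
# lognormal simulation): these parameter values are not from the slides.
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def phi(xi):
    """Discounted call payoff expressed through a standard normal variable xi."""
    sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * xi)
    return np.exp(-r * T) * np.maximum(sT - strike, 0.0)

rng = np.random.default_rng(42)
xi = rng.standard_normal(200_000)
p_mc = phi(xi).mean()                     # crude Monte Carlo estimate of the price

# Closed-form Black-Scholes price, for comparison
d1 = (math.log(s0 / strike) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
p_bs = s0 * N(d1) - strike * math.exp(-r * T) * N(d2)
print(p_mc, p_bs)
```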


Option Pricing Problem — Questions

Problem: compute P̄ = E[ φ(ξ) ] by Monte Carlo simulations.

1. Obtain the expression of P̄ when applying, for any given parameter θ ∈ R^{n×d}, the change of variables G = ξ − θ.
2. Obtain the expression of the variance V(θ) associated to the previously obtained parameterized expression of P̄.
3. Apply a change of variables in V(θ) so that the parameter θ no longer appears as an argument of φ.
4. Prove that, without any assumption on φ, V is a convex differentiable function of θ.
5. Obtain the expression of the gradient ∇V(θ).
6. Implement a stochastic gradient algorithm to minimize V(θ).
7. Compute the price P̄ by Monte Carlo.


Option Pricing Problem — Answers to Q1-Q4

With the change of variables G = ξ − θ, we obtain

  P̄ = E[ φ(G + θ) e^{−⟨θ, G⟩ − ‖θ‖²/2} ] ,
  V(θ) = E[ φ(G + θ)² e^{−2⟨θ, G⟩ − ‖θ‖²} ] − ( E[ φ(G) ] )² .

From this expression, using ξ = G + θ, we obtain

  V(θ) = E[ φ(ξ)² e^{−⟨θ, ξ⟩ + ‖θ‖²/2} ] − ( E[ φ(ξ) ] )² .

We deduce that, without any specific assumption on φ, the function V is strictly convex and differentiable.
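These formulas can be checked numerically on a toy one-dimensional payoff φ(ξ) = e^ξ (an illustrative choice, not from the slides): the shifted estimator φ(G + θ) e^{−θG − θ²/2} has the same mean for every θ, and for this particular φ its variance vanishes at θ = 1.

```python
import math
import numpy as np

phi = np.exp        # toy payoff phi(xi) = exp(xi), so P = E[phi(xi)] = e^{1/2}

rng = np.random.default_rng(7)
g = rng.standard_normal(100_000)

def shifted_samples(theta):
    """Samples of phi(G + theta) * exp(-theta*G - theta^2/2), G ~ N(0, 1)."""
    return phi(g + theta) * np.exp(-theta * g - 0.5 * theta**2)

for theta in (0.0, 0.5, 1.0):
    z = shifted_samples(theta)
    # same mean ~ e^{1/2} for every theta, decreasing variance toward theta = 1
    print(theta, z.mean(), z.var())
```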


Option Pricing Problem — Answers to Q5-Q6

Our goal is to obtain a value of θ such that the variance V(θ) associated to P̄ is as small as possible:

  min_{θ ∈ R^{n×d}}  E[ φ(ξ)² e^{−⟨θ, ξ⟩ + ‖θ‖²/2} ] .

A straightforward calculation gives the gradient of V, namely

  ∇V(θ) = E[ (θ − ξ) φ(ξ)² e^{−⟨θ, ξ⟩ + ‖θ‖²/2} ] ,

so that the stochastic gradient algorithm applies to the problem:

  θ^(k+1) = θ^(k) − ε^(k) ( θ^(k) − ξ^(k+1) ) φ( ξ^(k+1) )² e^{−⟨θ^(k), ξ^(k+1)⟩ + ‖θ^(k)‖²/2} ,

and converges to the unique solution, denoted θ♯.
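A sketch of this iteration on a one-dimensional toy payoff φ(ξ) = e^ξ (an illustrative choice, not from the slides; for this φ the zero-variance parameter is θ♯ = 1). A clipping safeguard and averaging of the late iterates are added for numerical stability; neither appears in the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = np.exp        # toy payoff; for phi = exp the minimizer is theta_sharp = 1

theta, avg, n_avg = 0.0, 0.0, 0
for k in range(1, 20_001):
    xi = rng.standard_normal()
    # stochastic gradient of V at theta, evaluated at the sample xi
    grad = (theta - xi) * phi(xi)**2 * np.exp(-theta * xi + 0.5 * theta**2)
    theta -= grad / (k + 100.0)
    theta = float(np.clip(theta, -2.0, 4.0))   # safeguard against heavy-tailed steps
    if k > 10_000:                             # average the late iterates
        avg += theta
        n_avg += 1
theta_avg = avg / n_avg
print(theta_avg)    # should lie near the zero-variance parameter theta_sharp = 1
```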


Option Pricing Problem — Answer to Q7

Finally, the computation of the price P̄ is done in two steps.
  • Using an N-sample of ξ, obtain an approximation θ^(N) of θ♯ by N iterations of the stochastic gradient algorithm.
  • Once θ^(N) has been obtained, use the standard Monte Carlo method to compute an approximation of the price P̄ based on another N-sample of ξ:

  P̄^(N) = (1/N) Σ_{k=1}^{N} φ( ξ^(N+k) + θ^(N) ) e^{−⟨θ^(N), ξ^(N+k)⟩ − ‖θ^(N)‖²/2} .

The computation requires 2N evaluations of φ, whereas a crude Monte Carlo method evaluates φ only N times. This method will be efficient if V(θ♯) ≪ V(0)/2.


Algorithm Improvement

It is possible to compute Monte Carlo approximations of both θ♯ and P̄ by using the same N-sample of ξ. The algorithm is

  θ^(k+1) = θ^(k) − ε^(k) ( θ^(k) − ξ^(k+1) ) φ( ξ^(k+1) )² e^{−⟨θ^(k), ξ^(k+1)⟩ + ‖θ^(k)‖²/2} ,
  P̄^(k+1) = P̄^(k) − ( 1/(k+1) ) ( P̄^(k) − φ( ξ^(k+1) + θ^(k) ) e^{−⟨θ^(k), ξ^(k+1)⟩ − ‖θ^(k)‖²/2} ) .

Note that the last relation is just the recursive form of

  P̄^(N) = (1/N) Σ_{k=0}^{N−1} φ( ξ^(k+1) + θ^(k) ) e^{−⟨θ^(k), ξ^(k+1)⟩ − ‖θ^(k)‖²/2} .

A Central Limit Theorem is available for this algorithm:

  √N ( P̄^(N) − P̄ )  →(D)  N( 0, V(θ♯) ) .
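The two coupled recursions can be run on a toy payoff φ(ξ) = e^ξ (illustrative, not from the slides), for which the exact price is E[φ(ξ)] = e^{1/2} ≈ 1.6487; each term of the recursive average is unbiased whatever the current θ^(k), so the price estimate is valid even while θ is still adapting. The clipping safeguard is an added stabilization:

```python
import math
import numpy as np

rng = np.random.default_rng(11)
phi = np.exp                     # toy payoff, exact price E[phi(xi)] = e^{1/2}

theta, pbar = 0.0, 0.0
for k in range(1, 20_001):
    xi = rng.standard_normal()
    # recursive Monte Carlo average for the price, using the current theta
    z = phi(xi + theta) * np.exp(-theta * xi - 0.5 * theta**2)
    pbar -= (pbar - z) / k
    # SGD step on theta (variance minimization), reusing the same sample
    grad = (theta - xi) * phi(xi)**2 * np.exp(-theta * xi + 0.5 * theta**2)
    theta -= grad / (k + 100.0)
    theta = float(np.clip(theta, -2.0, 4.0))   # safeguard, not in the slides

print(pbar, math.sqrt(math.e))   # the estimate should be close to e^{1/2}
```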

Mission to Mars


Satellite Model

  dr/dt = v ,   dv/dt = −µ r / ‖r‖³ + F κ / m ,   (6a)
  dm/dt = −( T / (g0 Isp) ) δ .   (6b)

(6a) is the dynamics of the 6-dimensional state (position r and velocity v); (6b) is the dynamics of the 1-dimensional state (mass m, including fuel). κ involves the direction cosines of the thrust and δ is the on-off switch of the engine (3 controls in all); µ, F, T, g0 and Isp are constants.

The deterministic control problem is to drive the satellite from the initial condition at ti to a known final position rf and velocity vf at tf (given) while minimizing the fuel consumption m(ti) − m(tf).
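A minimal Euler discretization of the dynamics (6a)-(6b) can be sketched as follows, with made-up normalized constants (µ = 1 and so on; none of these values come from the course). It shows the mass decreasing while the engine is on:

```python
import numpy as np

# Normalized, made-up constants (for illustration only, not the course's values)
mu, F, T_thrust, g0, Isp = 1.0, 0.05, 0.05, 1.0, 10.0

def step(r, v, m, kappa, delta, dt):
    """One Euler step of the dynamics (6a)-(6b)."""
    a = -mu * r / np.linalg.norm(r)**3 + F * kappa / m   # gravity + thrust
    r_new = r + dt * v
    v_new = v + dt * a
    m_new = m - dt * T_thrust / (g0 * Isp) * delta       # fuel burn when engine on
    return r_new, v_new, m_new

# Start on a circular orbit of radius 1 (speed sqrt(mu/r) = 1), thrust along velocity
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
m = 1.0
for _ in range(500):
    kappa = v / np.linalg.norm(v)      # direction cosines of the thrust
    r, v, m = step(r, v, m, kappa, delta=1.0, dt=0.01)
print(m, np.linalg.norm(r))            # mass has decreased; orbit has spiralled out
```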


Deterministic Optimization Problem

Using equinoctial coordinates for the position and velocity ❀ state vector x ∈ R^7, and cartesian coordinates for the thrust of the engine ❀ control vector u ∈ R^3, the optimization problem has the following expression.

Deterministic optimization problem:

  min_{u(·)}  K( x(tf) )
  subject to:
    x(ti) = xi ,
    ẋ(t) = f( x(t), u(t) ) ,
    ‖u(t)‖ ≤ 1   ∀t ∈ [ti, tf] ,
    C( x(tf) ) = 0 .

Engine Failure

Sometimes, the engine may fail to work when needed: the satellite then drifts away from the deterministic optimal trajectory, and after the engine control is recovered, it is not always possible to drive the satellite to the final target at tf.

By anticipating such possible failures and by modifying the trajectory followed before the failure occurs, one may increase the probability of eventually reaching the target. But such a deviation from the deterministic optimal trajectory results in a deterioration of the economic performance.

The problem is thus to balance the increased probability of eventually reaching the target despite possible failures against the expected economic performance, that is, to quantify the price one is ready to pay for safety.


Stochastic Formulation (1)

A failure is modeled using two random variables:
  • tp: random initial time of the failure,
  • td: random duration of the failure.

For any realization (tp^ξ, td^ξ) of a failure:
1. u(·) denotes the control used prior to the failure ❀ u is defined over [ti, tf] but implemented over [ti, tp^ξ], and corresponds to an open-loop control;
2. the control is equal to 0 during the failure (over [tp^ξ, tp^ξ + td^ξ]);
3. v^ξ(·) denotes the control used after the failure ❀ v^ξ is defined over [tp^ξ + td^ξ, tf] (if nonempty) and corresponds to a closed-loop strategy V depending on ξ.

The satellite dynamics in the stochastic formulation writes:

  x^ξ(ti) = xi ,   ẋ^ξ(t) = f^ξ( x^ξ(t), u(t), v^ξ(t) ) .

Stochastic Formulation (2)

The problem is to minimize the expected cost (fuel consumption) w.r.t. the open-loop control u and the closed-loop strategy V, the probability to hit the target at tf being at least equal to p.

Robust stochastic optimization problem:

  min_{u(·)} min_{V(·)}  E[ K( x^ξ(tf) ) | C( x^ξ(tf) ) = 0 ]
  subject to:
    x^ξ(ti) = xi ,
    ẋ^ξ(t) = f^ξ( x^ξ(t), u(t), v^ξ(t) ) ,
    ‖u(t)‖ ≤ 1   ∀t ∈ [ti, tf] ,
    ‖v^ξ(t)‖ ≤ 1   ∀t ∈ [tp^ξ + td^ξ, tf] ,
    P( C( x^ξ(tf) ) = 0 ) ≥ p .
  • P. Carpentier

Master Optimization — Stochastic Optimization 2019-2020 186 / 328


slide-40
SLIDE 40


Indicator Function

Consider the real-valued indicator function:

$$\mathbf{1}(y) = \begin{cases} 1 & \text{if } y = 0, \\ 0 & \text{otherwise.} \end{cases}$$

Then

$$\mathbb{P}\Big( C\big(x^\xi(t_f)\big) = 0 \Big) = \mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big],$$

and

$$\mathbb{E}\Big[ K\big(x^\xi(t_f)\big) \;\Big|\; C\big(x^\xi(t_f)\big) = 0 \Big]
= \frac{\mathbb{E}\Big[ K\big(x^\xi(t_f)\big) \times \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]}{\mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]}.$$
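The ratio identity above is easy to sanity-check by Monte Carlo; here is a minimal sketch on a toy discrete distribution (outcomes, costs and probabilities are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete model (hypothetical, for illustration only):
# four outcomes, a cost K(x) for each, and C(x) = 0 on two of them.
xs = np.array([0, 1, 2, 3])
probs = np.array([0.4, 0.3, 0.2, 0.1])
K = np.array([2.0, 5.0, 1.0, 7.0])        # cost of each outcome
hit = np.array([1.0, 0.0, 1.0, 0.0])      # 1(C(x)) for each outcome

# Exact conditional expectation E[K | C = 0]
cond_exact = (probs * K * hit).sum() / (probs * hit).sum()

# Monte Carlo estimate of the ratio E[K * 1(C)] / E[1(C)]
draws = rng.choice(xs, size=200_000, p=probs)
ratio = (K[draws] * hit[draws]).mean() / hit[draws].mean()

print(cond_exact, ratio)   # the two values agree up to sampling noise
```

The same ratio structure is what makes the formulation awkward numerically, as discussed on the next slide.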

slide-41
SLIDE 41


Problem Reformulation

Then, the robust stochastic optimization problem can be (compactly) reformulated as

$$\min_{u(\cdot)} \; \min_{\mathcal{V}(\cdot)} \;
\frac{\mathbb{E}\Big[ K\big(x^\xi(t_f)\big) \times \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]}{\mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]}
\quad \text{s.t.} \quad
\mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big] \ge p.$$

This formulation is not well-suited to a numerical implementation (e.g. the stochastic APP algorithm) for several reasons, first of all because a ratio of expectations is not an expectation!


slide-42
SLIDE 42


A Useful Lemma

The previous problem falls into the class of problems formulated as

$$\min_{u} \; \frac{J(u)}{\Theta(u)} \quad \text{s.t.} \quad \Theta(u) \ge p, \qquad (7)$$

where $J$ and $\Theta$ take positive values.

Lemma

1. If $u^\sharp$ is a solution of (7) and if $\Theta(u^\sharp) = p$, then $u^\sharp$ is also a solution of

$$\min_{u} \; J(u) \quad \text{s.t.} \quad \Theta(u) \ge p. \qquad (8)$$

2. Conversely, if $u^\sharp$ is a solution of (8), and if an optimal Kuhn-Tucker multiplier $\beta^\sharp$ satisfies the condition $\beta^\sharp \ge \dfrac{J(u^\sharp)}{\Theta(u^\sharp)}$, then $u^\sharp$ is also a solution of (7).
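A brute-force grid check on a toy instance gives intuition for the lemma ($J$, $\Theta$ and $p$ below are arbitrary choices, not taken from the satellite problem):

```python
import numpy as np

# Toy instance: J(u) = (u - 2)^2 + 1 and Theta(u) = u are positive on the
# grid, and p = 3 makes the constraint Theta(u) >= p active at the optimum.
u = np.linspace(0.1, 5.0, 5001)
J = (u - 2.0) ** 2 + 1.0
Theta = u
p = 3.0

feas = Theta >= p
u_ratio = u[feas][np.argmin((J / Theta)[feas])]   # solves (7): min J/Theta
u_plain = u[feas][np.argmin(J[feas])]             # solves (8): min J

print(u_ratio, u_plain)   # both minimizers sit at the same grid point, u ≈ 3
```

On this instance both restricted objectives are increasing on the feasible set, so both problems are solved at the boundary $\Theta(u) = p$, as part 1 of the lemma suggests.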


slide-43
SLIDE 43


Back to a Cost in Expectation

Using this lemma, the robust stochastic optimization problem is reformulated as a problem in which both the cost and the constraint functions are expectations:

$$\min_{u(\cdot)} \; \min_{\mathcal{V}(\cdot)} \;
\mathbb{E}\Big[ K\big(x^\xi(t_f)\big) \times \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]
\quad \text{s.t.} \quad
\mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big] \ge p.$$

Using the Interchange Theorem, this problem is equivalent to

$$\min_{u(\cdot)} \;
\mathbb{E}\Big[ \min_{v^\xi(\cdot)} K\big(x^\xi(t_f)\big) \times \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]
\quad \text{s.t.} \quad
\mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big] \ge p.$$
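The role of the Interchange Theorem can be seen on a tiny discrete toy (my own numbers): moving the min inside the expectation is valid precisely because $v$ may depend on $\xi$; restricting to a scenario-independent $v$ gives a larger value here.

```python
import numpy as np

# Toy: xi takes 3 values; v ranges over a fine grid (all numbers arbitrary).
probs = np.array([0.5, 0.3, 0.2])
xis = np.array([0.0, 1.0, 2.0])
v_grid = np.linspace(-2.0, 3.0, 501)

# Per-scenario cost f(v, xi) = (v - xi)^2 + sin(xi), one row per scenario.
f = (v_grid[None, :] - xis[:, None]) ** 2 + np.sin(xis)[:, None]

open_loop = (probs @ f).min()        # min_v E[f(v, xi)]: one v for all xi
recourse = probs @ f.min(axis=1)     # E[min_v f(v, xi)]: v may depend on xi

print(open_loop, recourse)   # recourse <= open_loop, strictly here
```

Minimizing over recourse maps $\xi \mapsto v(\xi)$ decouples scenario by scenario, which is exactly the value `recourse` computes.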


slide-45
SLIDE 45


Lagrangian Formulation

$$\min_{u(\cdot)} \;
\mathbb{E}\Big[ \min_{v^\xi(\cdot)} K\big(x^\xi(t_f)\big) \times \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big]
\quad \text{s.t.} \quad
p - \mathbb{E}\Big[ \mathbf{1}\big( C(x^\xi(t_f)) \big) \Big] \le 0 \quad (\text{multiplier } \mu).$$

Assume there exists a saddle point for the associated Lagrangian. In order to solve

$$\max_{\mu \ge 0} \; \min_{u(\cdot)} \;
\mu p + \mathbb{E}\Big[ \underbrace{\min_{v^\xi(\cdot)} \big( K(x^\xi(t_f)) - \mu \big) \times \mathbf{1}\big( C(x^\xi(t_f)) \big)}_{W(u,\,\mu,\,\xi)} \Big],$$

that is,

$$\max_{\mu \ge 0} \; \min_{u(\cdot)} \; \mu p + \mathbb{E}\big[ W(u, \mu, \xi) \big],$$

we use the stochastic APP algorithm with core $K(\cdot) = \frac{1}{2}\|\cdot\|^2$.


slide-46
SLIDE 46


Algorithm Overview

Stochastic APP algorithm. At iteration $k$:

1. draw a failure $\xi^k = (t_p^{\xi^k}, t_d^{\xi^k})$ according to its probability law;

2. compute the gradient of $W$ w.r.t. $u$ and update $u(\cdot)$:
$$u^{k+1} = \Pi_B\Big( u^k - \varepsilon^k \, \nabla_u W(u^k, \mu^k, \xi^k) \Big);$$

3. compute the gradient of $W$ w.r.t. $\mu$ and update $\mu$:
$$\mu^{k+1} = \max\Big( 0, \; \mu^k + \rho^k \big( p + \nabla_\mu W(u^{k+1}, \mu^k, \xi^k) \big) \Big).$$
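The three steps above can be sketched as a plain projected stochastic gradient loop. The $W$ below is a smooth toy surrogate (the true $W$ requires solving the inner optimal control problem in $v^\xi$); the sample distribution, the box $B$ and all constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Smooth toy surrogate W(u, mu, xi) = (K - mu) * g, with K = ||u - xi||^2
# and g = exp(-K) playing the role of the (mollified) indicator.
def g(u, xi):
    return np.exp(-np.sum((u - xi) ** 2))

def grad_u_W(u, mu, xi):
    K = np.sum((u - xi) ** 2)
    d = 2.0 * (u - xi)                       # gradient of K w.r.t. u
    return d * g(u, xi) * (1.0 - (K - mu))   # product rule on (K - mu) * g

def grad_mu_W(u, xi):
    return -g(u, xi)                         # d/dmu of (K - mu) * g

p = 0.9                                      # required probability level
u, mu = np.zeros(2), 0.0
for k in range(1, 20001):
    xi = rng.normal(0.0, 0.1, size=2)        # step 1: draw a failure sample
    eps, rho = 1.0 / (10 + k), 1.0 / (10 + k)
    # step 2: projected stochastic gradient step on u (projection on a box B)
    u = np.clip(u - eps * grad_u_W(u, mu, xi), -1.0, 1.0)
    # step 3: projected gradient ascent step on mu
    mu = max(0.0, mu + rho * (p + grad_mu_W(u, xi)))
```

On this toy instance the probability constraint is slack at the optimum, so the iterates stay near $u = 0$ and the multiplier near $0$; the point is only the shape of the primal-dual loop, not the values.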

slide-47
SLIDE 47


First Difficulty: 1 is not a Smooth Function

At every iteration $k$, we must evaluate the function $W$ as well as its derivatives w.r.t. $u(\cdot)$ and $\mu$. But $W$ is not differentiable! To overcome the difficulty, we implement a mollifier technique:

$$\mathbf{1}(y) = \begin{cases} 1 & \text{if } y = 0, \\ 0 & \text{otherwise,} \end{cases}
\qquad
\mathbf{1}_r(y) = \begin{cases} \Big( 1 - \dfrac{y^2}{r^2} \Big)^2 & \text{if } y \in [-r, r], \\ 0 & \text{otherwise.} \end{cases}$$

(Figure: graphs of $\mathbf{1}_r$ for decreasing values of $r$.)

There are rules for driving $r$ to $0$ as the iteration number $k \to +\infty$ [Andrieu et al., 2007].
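A direct transcription of the mollified indicator (a sketch; the function name is mine):

```python
import numpy as np

def ind_r(y, r):
    """Mollified indicator 1_r(y) = (1 - y^2/r^2)^2 on [-r, r], 0 elsewhere.

    It equals 1 at y = 0, vanishes with zero slope at y = +/- r (so it is
    C^1, usable in gradient computations), and converges pointwise to the
    exact indicator 1(y) as r -> 0.
    """
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) <= r, (1.0 - y**2 / r**2) ** 2, 0.0)

print(ind_r(0.0, 0.1))    # 1.0 for any r > 0
print(ind_r(0.5, 1.0))    # (1 - 0.25)^2 = 0.5625
print(ind_r(0.5, 0.1))    # 0.0: outside [-r, r]
```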


slide-48
SLIDE 48


Second Difficulty: Solving the Inner Problem

The mollified optimization problem to solve at each iteration is:

$$W_{r^k}(u^k, \xi^k, \mu^k) = \min_{v^\xi(\cdot)} \;
\big( K(x^\xi(t_f)) - \mu^k \big) \times \mathbf{1}_{r^k}\big( C(x^\xi(t_f)) \big).$$

In this setting, we have to check whether the target is reached up to $r^k$. Three cases have to be considered:

1. the target can be reached accurately;
2. the target can be reached up to $r^k$ only;
3. the target cannot be reached up to $r^k$.

Note that if reaching the target is possible but too expensive (that is, if $K(x^\xi(t_f)) \ge \mu^k$), the best thing to do is to stop the engine!

In practice, the solution of the approximated problem is derived from the resolution of two standard optimal control problems...


slide-49
SLIDE 49


Parameters Tuning

Gradient step lengths:

$$\varepsilon^k = \frac{a}{b + k}, \qquad \rho^k = \frac{c}{d + k},$$

as usual for a standard stochastic gradient algorithm.

Optimal choice of the smoothing parameter:

$$r^k = \Big( \frac{\alpha}{\beta + k} \Big)^{1/3},$$

so the mollifier coefficient $r^k$ decreases slowly: the stochastic APP algorithm will need a large number of iterations.
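These schedules are one line each; the constants $a, b, c, d, \alpha, \beta$ below are placeholders that would have to be tuned for the actual problem:

```python
def schedules(k, a=1.0, b=10.0, c=1.0, d=10.0, alpha=1.0, beta=10.0):
    """Step sizes and mollifier radius at iteration k (illustrative constants)."""
    eps_k = a / (b + k)                          # primal gradient step
    rho_k = c / (d + k)                          # dual gradient step
    r_k = (alpha / (beta + k)) ** (1.0 / 3.0)    # smoothing radius, slow decay
    return eps_k, rho_k, r_k

# At k = 10**6 the gradient steps are ~1e-6 while r_k is still ~1e-2:
# the radius shrinks far more slowly, hence the large iteration counts.
eps_k, rho_k, r_k = schedules(10**6)
print(eps_k, r_k)
```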



slide-51
SLIDE 51


Example: Interplanetary Mission

$t_i = 0.70$ and $t_f = 8.70$ (normalized units);
$t_p$: exponential distribution, with $\mathbb{P}(t_p \ge t_f) \approx 0.58 = \pi_f$;
$t_d$: exponential distribution, with $\mathbb{P}(0.035 \le t_d \le 0.125) \approx 0.80$.

(Figure: deterministic optimal control components: normal $w$, tangential $s$, radial $q$.)

The deterministic optimal control has a "bang-off-bang" shape. Along the optimal trajectory, the probability of recovering from a failure is $p_{\mathrm{det}} \approx 0.94$.


slide-52
SLIDE 52


Figure: probability level $p = 0.750$. (Plots: final mass vs. iterations, probability multiplier vs. iterations, and optimal control components: normal $w$, tangential $s$, radial $q$.)


slide-53
SLIDE 53


Figure: probability level $p = 0.960$. (Plots: final mass vs. iterations, probability multiplier vs. iterations, and optimal control components: normal $w$, tangential $s$, radial $q$.)


slide-54
SLIDE 54


Figure: probability level $p = 0.990$. (Plots: final mass vs. iterations, probability multiplier vs. iterations, and optimal control components: normal $w$, tangential $s$, radial $q$.)


slide-55
SLIDE 55


The Price of Safety...

(Figure: failure-free fuel consumption as a function of the required probability level $p$.)
