SLIDE 1

Stochastic Programming: the Scenario Tree Method
Stochastic Optimal Control and Discretization Puzzles
A General Convergence Result

Stochastic Optimization and Discretization

January 06, 2021

  • P. Carpentier

Master Optimization — Stochastic Optimization 2020-2021 204 / 328

SLIDE 2

A Change in the Point of View

During the first part of the course, we have studied open-loop stochastic optimization problems, that is, problems in which the decisions correspond to deterministic variables which minimize a cost function defined as an expectation:

min_{u ∈ U^ad} E[ j(u, W) ].

We now enter the realm of closed-loop stochastic optimization, that is, the case where on-line information is available to the decision maker. The decisions are thus functions of the information and correspond to random variables:

min_{U ∈ 𝒰^ad} E[ j(U, W) ].
SLIDE 3

Variables and Constraints

The decision variable U is now a random variable and belongs to a functional space 𝒰. A canonical example is 𝒰 = L²(Ω, 𝒜, ℙ; U). The constraints U ∈ 𝒰^ad on the r.v. U may be of different kinds:

point-wise constraints, dealing with the possible values of U:
𝒰^ad = { U ∈ 𝒰 : U(ω) ∈ U^ad ℙ-a.s. },

risk constraints, such as expectation or probability constraints:
𝒰^ad = { U ∈ 𝒰 : ℙ( Θ(U) ≤ θ ) ≥ π },

measurability constraints, which express the fact that a given amount of information Y is available to the decision maker:
𝒰^ad = { U ∈ 𝒰 : U measurable w.r.t. Y }.

We will mainly concentrate on measurability constraints.

SLIDE 4

Compact Formulation of a Closed-Loop Problem

Given a probability space (Ω, 𝒜, ℙ), the essential ingredients of a stochastic optimization problem are:
noise W: r.v. with values in a measurable space (𝕎, 𝒲),
decision U: r.v. with values in a measurable space (𝕌, 𝒰),
information Y: r.v. with values in a measurable space (𝕐, 𝒴),
cost function: a measurable mapping j : 𝕌 × 𝕎 → ℝ.
The σ-field generated by Y is denoted by B ⊂ 𝒜. With all these elements at hand, the problem is written as follows:

min_{U ⪯ Y} E[ j(U, W) ].

The notation U ⪯ Y (or equivalently U ⪯ B) is used to express that the r.v. U is measurable w.r.t. the σ-field generated by Y.

SLIDE 5

Representation of Measurability Constraints

Consider the information structure of the stochastic optimization problem in a compact form, that is, the measurability constraint U ⪯ Y. This information structure may be interpreted in different ways.

From the functional point of view, using Doob's theorem, the decision U is expressed as a measurable function of Y: U = ϕ(Y). In this setting, the decision variable becomes the function ϕ.

From the algebraic point of view, the constraint is expressed in terms of σ-fields, that is, σ(U) ⊂ σ(Y).

Question: how to take this last representation into account?
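On a finite sample space the algebraic condition σ(U) ⊂ σ(Y) is easy to check concretely: U must be constant on every atom of the partition generated by Y. Here is a minimal sketch (the toy sample space and the names partition_of, is_measurable, u_ok, u_bad are illustrative choices, not part of the lecture):

```python
def partition_of(rv, omega):
    """Atoms of sigma(rv) on a finite sample space: one cell of the
    partition per distinct value taken by the random variable rv."""
    cells = {}
    for w in omega:
        cells.setdefault(rv(w), set()).add(w)
    return list(cells.values())

def is_measurable(u, y, omega):
    """sigma(U) is included in sigma(Y) iff U is constant on every atom of sigma(Y)."""
    return all(len({u(w) for w in cell}) == 1 for cell in partition_of(y, omega))

# Toy sample space: omega = (w0, w1) with w0, w1 in {-1, +1}
omega = [(a, b) for a in (-1, 1) for b in (-1, 1)]
y = lambda w: w[0]              # information: first noise component only
u_ok = lambda w: -w[0]          # decision depending on Y only -> measurable
u_bad = lambda w: w[0] + w[1]   # decision peeking at w1 -> not measurable

print(is_measurable(u_ok, y, omega))   # True
print(is_measurable(u_bad, y, omega))  # False
```

The same refinement-of-partitions test reappears later when the σ-field B is approximated by a finite σ-field B_n generated by a partition.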

SLIDE 6

Dynamic Information Structure (DIS)

This is the situation when B = σ(Y) depends on U. For example, in the case where Y = h(U, W), the constraint expression is U ⪯ h(U, W), which yields a (seemingly) implicit measurability constraint. This is a source of huge complexity for stochastic optimization problems, known under the name of the dual effect of control. Indeed, the decision maker has to take care of a double effect:

on the one hand, the decision affects the cost E[ j(U, W) ];
on the other hand, it makes the information constraint more or less restrictive, that is, it enlarges or shrinks the admissible set for U.

SLIDE 7

Static Information Structure (SIS)

This is the case when B = σ(Y) is fixed, defined independently of U. The terminology "static" expresses that the information σ-field B constraining the decision U cannot be modified by the decision maker. It does not imply that no dynamics is present in the problem formulation.¹² The situation where the information Y is a function of an exogenous noise W, that is, Y = h(W), always induces a static information structure. Note that it may happen that Y functionally depends on U whereas the σ-field B generated by Y remains fixed.

¹² If time is involved in the problem, at each time t a decision Ut is taken based on the available information Yt, inducing a measurability constraint Ut ⪯ Yt. But the issue of dynamic information depends on the dependency of Yt w.r.t. the controls, and not on the presence of time t in the problem.
SLIDE 8

Position of the Problem. . .

We want to solve a closed-loop stochastic optimization problem, that is, a problem in which the decision variable U is a random variable satisfying measurability conditions imposed by the information structure defined by the random variable Y.

We assume that the problem is dual-effect free, that is, the σ-field generated by the information variable Y does not depend on the control variable U (static information structure).

We manipulate the measurability conditions from the algebraic point of view, that is, σ(U) ⊂ σ(Y) = B.

In order to numerically solve the optimization problem, we need to approximate it by a finite representation.

SLIDE 9

and Problem under Consideration

The standard form of the problem we are interested in is

V(W, B) = min_{U ∈ 𝒰} E[ j(U, W) ]  subject to  U is B-measurable,

where B = σ(Y) is a fixed σ-field. In order to obtain a numerically tractable approximation of this problem, we have to approximate
the noise W by a "finite" noise Wn (Monte Carlo, …),
the σ-field B by a "finite" σ-field Bn (partition, …).

Question: does V(Wn, Bn) → V(W, B)?

SLIDE 10

A Specific Instance of the Problem

A specific instance of the problem is the one which incorporates dynamical systems, that is, the stochastic optimal control problem:

min_{(U0,…,U_{T−1},X0,…,X_T)} E[ Σ_{t=0}^{T−1} Lt(Xt, Ut, Wt+1) + K(XT) ]

subject to
X0 = f−1(W0),
Xt+1 = ft(Xt, Ut, Wt+1), t = 0, …, T − 1,
Ut ⪯ Yt, t = 0, …, T − 1.

Assuming that the σ(Yt) are fixed σ-fields, a widely used approach to discretize this optimization problem is the so-called scenario tree method. We present it before considering the general case.
SLIDE 11

Lecture Outline

1. Stochastic Programming: the Scenario Tree Method
  • Scenario Tree Method Overview
  • Some Details about the Method
2. Stochastic Optimal Control and Discretization Puzzles
  • Working out an Example
  • Naive Monte Carlo-Based Discretization
  • Scenario Tree-Based Discretization
  • A Constructive Proposal
3. A General Convergence Result
  • Convergence of Random Variables
  • Convergence of σ-Fields
  • The Long-Awaited Convergence Theorem

SLIDE 14

A Standard Stochastic Optimal Control Problem

Consider the following stochastic optimal control problem with a static (non-anticipative) information structure:

min_{(U0,…,U_{T−1},X0,…,X_T)} E[ Σ_{t=0}^{T−1} Lt(Xt, Ut, Wt+1) + K(XT) ]

subject to
X0 = f−1(W0),
Xt+1 = ft(Xt, Ut, Wt+1), t = 0, …, T − 1,
Ut ⪯ ht(W0, …, Wt), t = 0, …, T − 1.

Almost sure constraints (e.g. bound constraints on Xt and Ut) may also be present in the formulation.

SLIDE 15

Scenario Tree Methodology

Obtain a finite-dimensional approximation of the problem.

1. Discretize the noise process {Wt} using a scenario tree.
2. Copy out the measurability constraints on this structure: Ut ⪯ ht(W0, …, Wt).
3. Write the dynamics and cost functions at the tree nodes: Xt+1 = ft(Xt, Ut, Wt+1).
4. Solve the problem using adequate mathematical programming techniques.

[Figure: a scenario tree branching over time t, with a control Ut and a pair (Xt, Wt) attached to each node.]

SLIDE 17

1. Discretize the Random Inputs

The tree architecture is characterized by the fact that each node of the tree corresponds to a unique past noise history but is generally followed by several possible future histories. The tree is obtained by repeatedly using a finite approximation of the conditional probability laws P(Wt | W0, …, Wt−1):

P(W0) ≈ {w0^1, …, w0^{n0}},  P(W1 | W0 = w0^i) ≈ {w1^{i,1}, …, w1^{i,n1}},  …

Note that this discretization scheme is much more sophisticated than the standard Monte Carlo sampling of (W0, …, WT). The starting point may be a given collection of scenarios from which one constructs a tree by grouping the scenarios according to their (approximate) common past.
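A tree can indeed be obtained from a flat collection of sampled scenarios by grouping those whose past components coincide after quantization. This is only a sketch of that grouping step: the quantization grid (step 0.5), the sample size and the helper names build_tree and quantize are illustrative assumptions, not the lecture's prescription.

```python
import random
from collections import defaultdict

def build_tree(scenarios, quantize):
    """Group scenarios by their quantized past: the node at depth t is the
    tuple of quantized values (w_0, ..., w_t). Returns, for each node, its
    set of children and its empirical probability."""
    children = defaultdict(set)
    prob = defaultdict(float)
    p = 1.0 / len(scenarios)
    for scen in scenarios:
        past = ()  # () is the root
        for w in scen:
            node = past + (quantize(w),)  # merge scenarios with close pasts
            children[past].add(node)
            prob[node] += p
            past = node
    return children, prob

random.seed(0)
scenarios = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
quantize = lambda w: round(2 * w) / 2   # crude grid {-1, -0.5, 0, 0.5, 1}
children, prob = build_tree(scenarios, quantize)
print(len(children[()]))  # 5 first-stage nodes, one per grid value of W0
```

By construction, the mass of every node equals the total mass of its children, which is exactly the consistency condition on π stated a few slides below.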

SLIDE 18

2. Copy out the Measurability Constraints

Assume that the information consists of the exact observation of all past noises: Yt = (W0, …, Wt). Then a different decision has to be attached to each node of the scenario tree. But the method can face more general situations by grouping nodes of the scenario tree in order to represent the information structure induced by the ht(W0, …, Wt)'s. In all cases, the information structure is entirely coded within the scenario tree by means of those groups of nodes (one decision for each group of nodes). For example, the so-called perfect memory information structure ht(W0, …, Wt) = (h0(W0), …, ht(Wt)) leads to a grouping of scenario tree nodes at each time step t, and ultimately produces a tree structure called the decision tree.

SLIDE 19

3. Write the Dynamics and Cost Functions

Consider a node ν ∈ N of the scenario tree at time t, and denote by
f(ν) the predecessor of node ν (= µ),
π(ν) the probability of node ν,
θ(ν) the time index of node ν (= t),
γ(ν) the control index of node ν.

[Figure: nodes η, ζ, µ, ν, ξ of the tree at times t − 1, t, t + 1.]

Note that the probability function π satisfies the following conditions:

π(ν) = Σ_{ξ ∈ f−1(ν)} π(ξ),   Σ_{ν ∈ θ−1(t)} π(ν) = 1.

Then the dynamic equation from node µ = f(ν) to node ν writes

xν = f_{θ(f(ν))}( x_{f(ν)}, u_{γ(f(ν))}, wν ).

The cost induced by the transition is L_{θ(f(ν))}( x_{f(ν)}, u_{γ(f(ν))}, wν ).
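In practice the mappings f, π, θ can be stored as flat arrays indexed by the nodes, and the two conditions on π checked directly. A minimal sketch (the flat-array layout and the tiny two-stage tree below are illustrative assumptions):

```python
# A tiny two-stage scenario tree stored as flat arrays:
# pred[v] -- predecessor f(v) of node v (None for the root)
# prob[v] -- probability pi(v)
# time[v] -- time index theta(v)
# Node 0 is the root (t=0); nodes 1,2 at t=1; nodes 3..6 at t=2.
pred = [None, 0, 0, 1, 1, 2, 2]
prob = [1.0, 0.5, 0.5, 0.25, 0.25, 0.3, 0.2]
time = [0, 1, 1, 2, 2, 2, 2]

def check_tree(pred, prob, time):
    """Check the two consistency conditions on pi: the probability of a
    node equals the sum over its children, and the probabilities at each
    time step sum to 1."""
    for v in range(len(pred)):
        kids = [x for x in range(len(pred)) if pred[x] == v]
        if kids and abs(prob[v] - sum(prob[x] for x in kids)) > 1e-9:
            return False
    return all(
        abs(sum(p for p, s in zip(prob, time) if s == t) - 1.0) < 1e-9
        for t in range(max(time) + 1)
    )

print(check_tree(pred, prob, time))  # True: 0.25+0.25 = 0.5 and 0.3+0.2 = 0.5
```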
SLIDE 20

4. Solve the Approximated Problem

The initial stochastic optimization problem boils down to

min  Σ_{ν ∈ N \ θ−1(0)} π(ν) L_{θ(f(ν))}( x_{f(ν)}, u_{γ(f(ν))}, wν ) + Σ_{ν ∈ θ−1(T)} π(ν) K(xν),

subject only to the dynamics constraints

xν = f−1(wν)  ∀ν ∈ θ−1(0),
xν = f_{θ(f(ν))}( x_{f(ν)}, u_{γ(f(ν))}, wν )  ∀ν ∈ N \ θ−1(0).

The initial infinite-dimensional stochastic optimization problem is thus approximated by a finite-dimensional deterministic problem, which can be solved using relevant mathematical programming tools.

Note that this approximation corresponds to an optimal control problem with an arborescent (rather than linear) time structure.

SLIDE 21

Facts and Questions about the Scenario Tree Method

Dual effect: it is mandatory that no dual effect holds true.
White noise: the noise process (W0, …, WT) may be correlated.
Perfect memory: this property is not required, although useful.
Complexity: the number of scenarios needed to achieve a given accuracy grows exponentially w.r.t. the number of time steps T of the problem (see [Shapiro, 2006]).

Tree structure: how to build a tree which is at the same time representative of the problem and numerically tractable?
Extrapolation: how to obtain feedback laws once the optimal decisions at the nodes of the scenario tree have been computed?

A huge literature is available on the scenario tree method…

SLIDE 22

(Very) Compact View of the Scenario Tree Approach

The stochastic optimal control problem under consideration depends on both a noise process W and a sequence of σ-fields B. It can thus be represented in the compact form

V(W, B) = min_{U ⪯ B} E[ j(U, W) ],  with B = σ( h(W) ).

The aim of the scenario tree method is to approximate the noise W by a "finite" noise Wn, and deduce the approximated information Bn = σ( h(Wn) ). In this framework, a unique approximation is performed to obtain the approximated value V(Wn, Bn), and it is possible to prove that V(Wn, Bn) → V(W, B) (see [Pennanen, 2005]).

But the noise has been discretized in a very specific way…

SLIDE 25

A simple SOC problem

min_{U ⪯ W0} E[ εU² + (W0 + U + W1)² ]

The noises W0 and W1 are independent random variables, each with a uniform probability distribution over [−1, 1]. The initial state is X0 = W0. The decision variable U is measurable w.r.t. W0: U ⪯ W0. The final state is X1 = X0 + U + W1. The goal is to minimize the expectation of εU² + X1², where ε is a "small" positive number (cheap control).

Note that this example matches the Markovian setting.

SLIDE 26

Exact Solution of the Problem

Expanding the square and using E[W0²] = E[W1²] = 1/3,

E[ εU² + (W0 + U + W1)² ] = 1/3 + 1/3 + E[ (1 + ε)U² + 2UW0 ] + 2E[UW1] + 2E[W0W1].

Since W1 is independent of (W0, U) (recall U ⪯ W0) and centered, the last two expectations vanish. The problem is thus equivalent to

min_{U ⪯ W0}  2/3 + E[ (1 + ε)U² + 2UW0 ].

By the first-order optimality condition, the optimal solution is

U♯ = − W0 / (1 + ε).

The associated optimal cost is readily calculated to be

J♯ = (1/3) × (1 + 2ε)/(1 + ε) = 1/3 + O(ε).
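The closed-form solution is easy to check numerically: plugging the policy U♯ = −W0/(1 + ε) into a Monte Carlo estimate of the expectation should reproduce J♯. A quick sketch (the seed, ε = 0.01 and the sample size are arbitrary choices of ours):

```python
import random

random.seed(1)
eps = 0.01
N = 200_000

def cost(u, w0, w1):
    """Integrand of the criterion: eps*u^2 + (w0 + u + w1)^2."""
    return eps * u**2 + (w0 + u + w1)**2

# Monte Carlo estimate of the cost of the closed-loop policy U = -W0/(1+eps)
total = 0.0
for _ in range(N):
    w0 = random.uniform(-1, 1)
    w1 = random.uniform(-1, 1)
    total += cost(-w0 / (1 + eps), w0, w1)
estimate = total / N

exact = (1 + 2 * eps) / (3 * (1 + eps))  # J# = (1/3)(1+2eps)/(1+eps)
print(estimate, exact)  # both close to 1/3 + O(eps)
```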

SLIDE 28

Noise Discretization "à la Monte Carlo"

We crudely sample the optimization problem. To that purpose, we first consider a realization of an N-sample of the couple (W0, W1), that is, points in the square Ω = [−1, 1]²:

{ (w0^i, w1^i) }_{i=1,…,N}.

This sample will be used to approximate the expectation by the Monte Carlo method.

[Figure: the N sample points (w0^i, w1^i) in the square [−1, 1]².]

SLIDE 29

Discretized Information Structure

We consider the N realizations {u^i}_{i=1,…,N} of the decision variable U, corresponding to the discretization of the noise, and we have to keep in mind that U should be measurable w.r.t. the first component W0 of the noise (W0, W1): U ⪯ W0. To translate this condition into our discrete framework issued from a Monte Carlo sample, we impose the constraint

∀(i, j) ∈ {1, …, N}²,  w0^i = w0^j  ⟹  u^i = u^j,

which prevents U from taking different values whenever two samples of the noise display the same value on the first component (corresponding to W0).

SLIDE 30

The Measurability Constraint is Not Effective!

The expression of the cost after discretization is

(1/N) Σ_{i=1}^{N} [ ε (u^i)² + ( w0^i + u^i + w1^i )² ],

and it is minimized w.r.t. (u^1, …, u^N) under the constraints u^i = u^j whenever w0^i = w0^j. Since the N sample trajectories (w0^i, w1^i) of (W0, W1) are produced by a Monte Carlo sampling over [−1, 1]², then, with probability 1,

w0^i ≠ w0^j  ∀(i, j) such that i ≠ j.

The constraints are in fact never effective, so that the discretized cost can be minimized independently for each individual sample i.

SLIDE 31

Something is Wrong. . .

The optimization problem associated with the i-th sample is

min_{u^i ∈ ℝ}  ε (u^i)² + ( w0^i + u^i + w1^i )²,

which yields the optimal value and the optimal cost

u♭^i = − (w0^i + w1^i)/(1 + ε),   j♭^i = ε (w0^i + w1^i)² / (1 + ε).

The averaged cost over the N samples is equal to

(1/N) Σ_{i=1}^{N} ε (w0^i + w1^i)²/(1 + ε)  →_{N→+∞}  2ε / (3(1 + ε)) = 0 + O(ε).

This cost is far below the true optimal cost J♯ = 1/3 + O(ε)! However, no admissible solution (a U such that U ⪯ W0) can achieve a cost better than the optimal cost J♯…
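The fake cost of the naive discretization is easy to reproduce: each per-sample minimizer anticipates w1, and the averaged cost collapses to O(ε), far below the true optimum. A short sketch (seed, ε and sample size are our own choices):

```python
import random

random.seed(2)
eps, N = 0.01, 100_000

# Naive discretization: one free decision u_i per sampled pair (w0_i, w1_i).
# With continuous sampling, the constraint "u_i = u_j when w0_i = w0_j"
# never binds, so each u_i can anticipate w1_i.
fake = 0.0
for _ in range(N):
    w0, w1 = random.uniform(-1, 1), random.uniform(-1, 1)
    u = -(w0 + w1) / (1 + eps)              # per-sample minimizer
    fake += eps * u**2 + (w0 + u + w1)**2
fake /= N

true_opt = (1 + 2 * eps) / (3 * (1 + eps))  # about 1/3
print(fake, true_opt)
# fake -> 2*eps/(3*(1+eps)): an order of magnitude below the true optimum
```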

SLIDE 32

Real Value of the Discretized Problem Solution

The resolution of the discretized problem derived from the Monte Carlo procedure yields N optimal values

u♭^i = − (w0^i + w1^i)/(1 + ε),

but not a random variable. The associated cost value of order ε is just a fake cost estimate, because we have not produced an admissible control for the initial problem, namely a random variable U♭ measurable with respect to W0. To evaluate the true cost of this naive approach, we must first derive an admissible control for the initial problem, that is, a random variable U♭ over [−1, 1]² with constant value along every vertical line of this square (since the horizontal axis corresponds to the first component W0 of the noise).

SLIDE 33

Construction of an Admissible Control (1)

We assume that the sample points have been renumbered so that the value of the sample first component w0^i is increasing with i. Divide the square into N vertical strips by drawing vertical lines at the middle of the segments [w0^i, w0^{i+1}]. The i-th strip is [a_{i−1}, a_i] × [−1, 1], with

a_i = (w0^i + w0^{i+1})/2  for i = 1, …, N − 1,  and a_0 = −1, a_N = 1.

[Figure: the square [−1, 1]² split into N vertical strips centered on the sampled w0^i.]

SLIDE 34

Construction of an Admissible Control (2)

We construct a solution U♭ as the function of (w0, w1) which is constant over each vertical strip of the square, the value of U♭ in strip i being equal to the optimal value u♭^i = −(w0^i + w1^i)/(1 + ε):

U♭(w0, w1) = Σ_{i=1}^{N} u♭^i 1_{[a_{i−1}, a_i] × [−1, 1]}(w0, w1),

where (w0, w1) ranges in the square [−1, 1]² and where 1_A(·) is the indicator function of the set A: 1_A(x) = 1 if x ∈ A, 0 otherwise.

Note that the control U♭ depends on the N samples (w0^i, w1^i) by means of the values of the mid-points a_i and of the controls u♭^i.

SLIDE 35

Evaluation of the Expected Cost (1)

The corresponding cost value E[ ε(U♭)² + (W0 + U♭ + W1)² ] can be evaluated analytically (integration w.r.t. (w0, w1) over the square [−1, 1]²), and is equal to

2/3 + Σ_{i=1}^{N} [ (1 + ε)(a_i − a_{i−1})/2 (u♭^i)² + ((a_i)² − (a_{i−1})²)/2 u♭^i ],

where the values a_i and u♭^i depend on the samples (w0^i, w1^i).

In order to assess the value of this estimate, we now compute its expectation when considering that the (w0^i, w1^i)'s are realizations of independent random variables (W0^i, W1^i). This calculation is not straightforward because the w0^i's have been reordered, so that we compute it numerically for different values of N.
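The construction and its evaluation can be sketched in a few lines: build the strips, hold the naive per-sample controls constant on them, evaluate the analytic cost formula above, and cross-check it against a direct numerical integration over w0 (the w1 integral is analytic). The seed, ε, N and the integration grid are our own choices:

```python
import bisect
import random

random.seed(3)
eps, N = 0.01, 50

# Sample pairs (w0, w1), reordered so that w0 is increasing
pairs = sorted((random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N))
w0 = [p[0] for p in pairs]
w1 = [p[1] for p in pairs]

# Strip boundaries a_0 = -1 < a_1 < ... < a_N = 1 (mid-points of the w0's)
a = [-1.0] + [(w0[i] + w0[i + 1]) / 2 for i in range(N - 1)] + [1.0]
# Naive per-sample controls, held constant on each strip
u = [-(w0[i] + w1[i]) / (1 + eps) for i in range(N)]

# Analytic expected cost of the piecewise-constant admissible control
cost = 2 / 3 + sum(
    (1 + eps) * (a[i + 1] - a[i]) / 2 * u[i] ** 2
    + (a[i + 1] ** 2 - a[i] ** 2) / 2 * u[i]
    for i in range(N)
)

# Cross-check by midpoint integration over w0 (density 1/2 on [-1,1]);
# integrating out w1 gives eps*u^2 + (w0 + u)^2 + 1/3 on each strip
M = 100_000
dx = 2 / M
num = 0.0
for k in range(M):
    x0 = -1 + (k + 0.5) * dx
    i = min(bisect.bisect_right(a, x0) - 1, N - 1)
    num += (eps * u[i] ** 2 + (x0 + u[i]) ** 2 + 1 / 3) / 2 * dx

print(cost)  # fluctuates around 2/3; never below the true optimum J# > 1/3
```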

SLIDE 36

Evaluation of the Expected Cost (2)

The cost provided by the admissible control U♭ is estimated at about 2/3.

Figure: Estimated cost as a function of the number N of samples (for N = 5 to 20, the estimate fluctuates between roughly 0.6 and 1.2).

This value corresponds neither to the true optimal cost (1/3) nor to the cost of the discrete problem (0). In fact, the value 2/3 is equal to the one given by the best open-loop control: U⋆ ≡ 0!

SLIDE 38

Scenario Tree Approach

[Figure: a two-level scenario tree; N0 nodes w_0^j for the first noise, each carrying N1 children w_1^{jk} for the second one.]

The scenario tree approach leads to N0 × N1 scenarios:

(w_0^j, w_1^{jk}),   j = 1, . . . , N0,   k = 1, . . . , N1.

Notice that the discretization w_0^j of the first noise W0 only depends on j = 1, . . . , N0, whereas the discretization w_1^{jk} of the noise W1 “hangs” from a given j and depends on k = 1, . . . , N1. From the measurability constraint, a different value u^j of the control U is associated with each value w_0^j of W0.


slide-39
SLIDE 39


Scenario Tree Optimal Solution

On the scenario tree, the original cost E[ ε U² + (W0 + U + W1)² ] is approximated by

(1/N0) Σ_{j=1..N0} [ ε (u^j)² + (1/N1) Σ_{k=1..N1} (u^j + w_0^j + w_1^{jk})² ].

The solution of this approximated problem is

u♮^j = − (w_0^j + w̄_1^j) / (1 + ε),   where   w̄_1^j = (1/N1) Σ_{k=1..N1} w_1^{jk},

to be compared with the naive Monte Carlo solution u♭^j = − (w_0^j + w_1^j) / (1 + ε).

Note that w̄_1^j is an estimate of the expectation E[ W1 ], since we assumed that W0 and W1 are independent random variables.
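The closed form for u♮^j can be checked numerically. A sketch under the example's assumptions (uniform independent noises; the tree sizes N0 and N1 are arbitrary), comparing the formula on one node against a brute-force grid search over that node's objective:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-2
N0, N1 = 50, 400                             # illustrative tree sizes

w0 = rng.uniform(-1.0, 1.0, N0)              # first-stage nodes w0^j
w1 = rng.uniform(-1.0, 1.0, (N0, N1))        # children w1^{jk} of each node j

w1_bar = w1.mean(axis=1)                     # per-node estimate of E[W1] = 0
u_tree = -(w0 + w1_bar) / (1.0 + eps)        # closed-form optimal controls

# Sanity check on node j = 0: the closed form against a grid search over
# the node objective  eps*u**2 + mean_k (u + w0 + w1k)**2
grid = np.linspace(-2.0, 2.0, 4001)
obj = eps * grid**2 + ((grid[:, None] + w0[0] + w1[0])**2).mean(axis=1)
print(abs(grid[obj.argmin()] - u_tree[0]))   # below the grid step (1e-3)
```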


slide-40
SLIDE 40


Scenario Tree Optimal Cost

Let (σ_1^j)² = (1/N1) Σ_{k=1..N1} (w_1^{jk})². The solution u♮^j yields the cost

(1 / (N0 (1 + ε))) Σ_{j=1..N0} [ ε (w_0^j)² + 2 ε w_0^j w̄_1^j − (w̄_1^j)² + (1 + ε) (σ_1^j)² ].

The two estimates w̄_1^j and (σ_1^j)² converge, as N1 goes to infinity, towards their asymptotic values 0 and 1/3, so that the scenario tree optimal cost satisfies

(1 / (N0 (1 + ε))) Σ_{j=1..N0} [ ε (w_0^j)² + (1 + ε)/3 ]  →  1/3 + O(ε)   as N0 → +∞.

This cost is of the same order as the “true” optimal cost! However, it does not correspond to an admissible solution. . .


slide-41
SLIDE 41


Admissible Control and Associated Cost

As in the naive Monte Carlo method, we derive from the u♮^j's an admissible solution U♮ for the initial problem (piecewise constant function over N0 strips of the square [−1, 1]²). The cost provided by U♮ is estimated at 1/3, corresponding to the true optimal cost.

Figure: Estimated cost on a tree with N_0² scenarios
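The reconstruction step can be sketched as follows, again assuming uniform independent noises and a nearest-node, piecewise-constant admissible control (tree sizes and sample sizes arbitrary); the estimated cost lands near 1/3 + O(ε):

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 1e-2
N0, N1 = 40, 500                                 # illustrative tree sizes

w0 = rng.uniform(-1.0, 1.0, N0)                  # first-stage nodes
w1 = rng.uniform(-1.0, 1.0, (N0, N1))            # children of each node
u_tree = -(w0 + w1.mean(axis=1)) / (1.0 + eps)   # tree-optimal controls

# Admissible control U_nat: piecewise constant over strips of [-1, 1],
# each fresh W0 inheriting the control of the nearest first-stage node
M = 50_000
W0 = rng.uniform(-1.0, 1.0, M)
W1 = rng.uniform(-1.0, 1.0, M)
U = u_tree[np.abs(W0[:, None] - w0[None, :]).argmin(axis=1)]

cost = np.mean(eps * U**2 + (W0 + U + W1)**2)
print(cost)          # close to 1/3 + O(eps): the true optimal cost is met
```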


slide-42
SLIDE 42


Where Do We Stand?

                   True Solution     Naive Monte Carlo          Scenario Tree
Discrete Cost      —                 O(ε)                       1/3 + O(ε)
Optimal Control    −W0/(1 + ε)       −(w_0^i + w_1^i)/(1 + ε)   −(w_0^j + w̄_1^j)/(1 + ε)
Induced Cost       1/3 + O(ε)        2/3 + O(ε)                 1/3 + O(ε)

1. The naive Monte Carlo method
   • discretizes the noise process as a whole,
   • deduces the discretization of the measurability constraint,
   • yields a cost no better than the open-loop solution. . .

2. The scenario tree approach
   • discretizes the noise in a clever way (forward process),
   • deduces the discretization of the measurability constraint,
   • yields the optimal cost!

Clue: the conditional probability laws are well estimated.


slide-43
SLIDE 43


Monte Carlo Interpretation of the Scenario Tree

W0 is discretized over {a, b, c, d}, and the samples of W1 are grouped per parent node: {1}, {2, 3, 4}, {5, 6}, {7, 8, 9, 10}.

[Figure: the ten scenarios drawn in the square, first as a tree, then as groups of sample points.]

In a scenario tree, groups of samples are naturally aligned vertically!


slide-44
SLIDE 44


Voronoi Quantization

However, other quantizations of Ω are possible.

[Figure: a Voronoi tessellation of the square built on the ten sample points.]

Given a set of points in the square [−1, 1]², the Voronoi tessellation minimizes the mean quadratic error among finite random variables taking the given values. We in fact consider a discretized version of the random variable (W0, W1), rather than a Monte Carlo sampling.
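One standard way to compute such a quantization is the Lloyd (k-means) iteration; the sketch below assumes the uniform law on the square, with arbitrary cell and iteration counts:

```python
import numpy as np

rng = np.random.default_rng(3)
cloud = rng.uniform(-1.0, 1.0, (20_000, 2))   # sample of (W0, W1) on the square

def lloyd(points, n_cells=10, n_iter=30):
    """Plain Lloyd iteration: the centres induce a Voronoi tessellation,
    and each centre is moved to the barycentre of its own cell."""
    centres = points[rng.choice(len(points), n_cells, replace=False)].copy()
    for _ in range(n_iter):
        labels = ((points[:, None] - centres[None])**2).sum(-1).argmin(1)
        for c in range(n_cells):
            cell = points[labels == c]
            if len(cell):
                centres[c] = cell.mean(axis=0)
    labels = ((points[:, None] - centres[None])**2).sum(-1).argmin(1)
    return centres, labels

centres, labels = lloyd(cloud)
mse = ((cloud - centres[labels])**2).sum(axis=1).mean()
print(mse)   # mean quadratic quantization error of the discretized (W0, W1)
```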


slide-45
SLIDE 45




slide-46
SLIDE 46


Independent Discretization of Noise and Information (1)

Choose a discretization of the noise (8 cells).

[Figure: the square [−1, 1]² partitioned into 8 noise cells, labeled 1–8.]


slide-47
SLIDE 47


Independent Discretization of Noise and Information (2)

Choose a discretization of the noise (8 cells). Choose a discretization of the information (5 cells).

[Figure: the noise cells 1–8 alongside the information cells a–e.]


slide-48
SLIDE 48


Independent Discretization of Noise and Information (3)

Choose a discretization of the noise (8 cells). Choose a discretization of the information (5 cells). Combine both discretizations (21 non-empty cells).

[Figure: superimposing the noise cells 1–8 and the information cells a–e yields the 21 non-empty combined cells 1a, 2a, 2b, . . . , 8e.]


slide-49
SLIDE 49


Independent Discretization of Noise and Information (4)

W0 is discretized over {a, b, c, d, e}, and W1 over {1, 2, . . . , 8}.

[Figure: the 21 non-empty combined cells in the square; the pairing between information cells and noise cells forms a general bipartite graph.]

This approach does not necessarily produce a tree structure!


slide-50
SLIDE 50


Discretized Optimization Problem

Using the notation j(u, w0, w1) = ε u² + (w0 + u + w1)², the discretized optimization problem is

min over {u^k}_{k ∈ {a,...,e}} of   Σ_{k ∈ {a,...,e}} Σ_{i=1..8} π^{ik} j(u^k, w_0^i, w_1^i),

where π^{ik} is the probability weight of the cell ik, u^k is the control value on the cell k and w^i = (w_0^i, w_1^i) the noise value on the cell i. Note that some of the weights π^{ik} are equal to zero. The solution of this discretized problem can be computed (finite-dimensional optimization). We expect the optimal cost of the discretized problem to converge to the true optimal cost J♯ as the numbers of points in the two discrete sets associated with information and noise ({a, . . . , e} and {1, . . . , 8} in our example) go to infinity.
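A sketch of this finite-dimensional problem, with assumed grid-based discretizations (a 4 × 2 grid giving the 8 noise cells, 5 strips of w0 giving the information cells) and empirical weights π^{ik}; since the cost is quadratic in each u^k, the per-cell minimization is in closed form:

```python
import numpy as np

rng = np.random.default_rng(4)
eps = 1e-2
sample = rng.uniform(-1.0, 1.0, (100_000, 2))    # Monte Carlo proxy for P

def j(u, w0, w1):                                # the example's cost
    return eps * u**2 + (w0 + u + w1)**2

# Noise discretization: 8 cells from a 4 x 2 grid; w^i = cell centre
e0, e1 = np.linspace(-1, 1, 5), np.linspace(-1, 1, 3)
c0, c1 = (e0[:-1] + e0[1:]) / 2, (e1[:-1] + e1[1:]) / 2
i0 = np.clip(np.digitize(sample[:, 0], e0) - 1, 0, 3)
i1 = np.clip(np.digitize(sample[:, 1], e1) - 1, 0, 1)
i_cell = 2 * i0 + i1                             # noise cell index i = 0..7
w0_rep = np.repeat(c0, 2)                        # w0 value of each noise cell
w1_rep = np.tile(c1, 4)                          # w1 value of each noise cell

# Information discretization: 5 strips of w0 (U must be W0-measurable)
ek = np.linspace(-1, 1, 6)
k_cell = np.clip(np.digitize(sample[:, 0], ek) - 1, 0, 4)

# Empirical weights pi[i, k] of the combined cells (some are zero)
pi = np.zeros((8, 5))
np.add.at(pi, (i_cell, k_cell), 1.0)
pi /= len(sample)

# Per-strip minimization: quadratic in u^k, hence
# u^k = -sum_i pi[i,k] (w0^i + w1^i) / ((1 + eps) sum_i pi[i,k])
u = -(pi * (w0_rep + w1_rep)[:, None]).sum(0) / ((1 + eps) * pi.sum(0))

value = (pi * j(u[None, :], w0_rep[:, None], w1_rep[:, None])).sum()
print(u)       # one control value per information cell a..e
print(value)   # discretized optimal cost (tends to 1/3 + O(eps) on refining)
```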


slide-51
SLIDE 51




slide-52
SLIDE 52


Problem and its Approximation

We consider the general form of a stochastic optimization problem:

V(W, B) = min_{U ∈ U} E[ j(U, W) ]   subject to   U is B-measurable.

We consider a sequence of random noises {W_n}_{n∈N} and another sequence of σ-fields {B_n}_{n∈N} such that the W_n's and the B_n's have “finite” representations, e.g.

W_n = Σ_{i=1..n} w^i 1_{Ω^i},   (Ω¹, . . . , Ωⁿ) being a partition of Ω,
B_n = σ(Ω¹, . . . , Ωⁿ),   (Ω¹, . . . , Ωⁿ) being a partition of Ω.

We are interested in the sequence of values {V(W_n, B_n)}_{n∈N}.

slide-53
SLIDE 53




slide-54
SLIDE 54


Convergence Notions for W

These are standard and well-known notions.

Convergence in distribution: W_n →_D W:

lim_{n→+∞} E[ f(W_n) ] = E[ f(W) ]   for all continuous bounded f.

This is the underlying concept in the Monte Carlo method: the empirical law defined by an n-sample (W^(1), . . . , W^(n)) of W, that is, (1/n) Σ_{i=1..n} δ_{W^(i)}, weakly converges to P_W.

Convergence in probability: W_n →_P W:

∀ ε > 0,   lim_{n→+∞} P( ‖W_n − W‖_W ≥ ε ) = 0.

This notion is much stronger than the previous one.
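The gap between the two notions shows up already on a standard counterexample: with W uniform on [−1, 1] and W_n = −W for every n, each W_n has exactly the law of W, so convergence in distribution is trivial, yet W_n never gets close to W:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.uniform(-1.0, 1.0, 100_000)   # W uniform on [-1, 1]: a symmetric law
Wn = -W                               # same distribution as W, for every n

# Convergence in distribution holds trivially (identical laws), but
# P(|Wn - W| >= 0.5) = P(|W| >= 0.25) = 3/4 does not vanish with n.
print(np.mean(np.abs(Wn - W) >= 0.5))   # about 0.75
```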


slide-55
SLIDE 55




slide-56
SLIDE 56


Convergence Notions for B

These notions and results are a little less well known. . .

Strong convergence of σ-fields: B_n → B:

E[ f | B_n ] → E[ f | B ] in L¹,   for all f ∈ L¹(R).

Main properties.
1. The topology of the strong convergence is metrizable, so that the space A⋆ of sub-σ-fields of A is a complete separable metric space.
2. The σ-fields generated by a finite partition of Ω are dense in A⋆ equipped with the previous metric.
3. Let {Y_n}_{n∈N} be a sequence of random variables such that Y_n →_P Y and σ(Y_n) ⊂ σ(Y) for all n. Then σ(Y_n) → σ(Y).
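For a σ-field generated by a finite partition, E[ f | B_n ] is simply the cell average of f, which makes the strong convergence tangible. A sketch on Ω = [0, 1] with Lebesgue measure, finer and finer uniform partitions, and an arbitrarily chosen integrable f:

```python
import numpy as np

rng = np.random.default_rng(6)
omega = rng.uniform(0.0, 1.0, 100_000)   # draws from (Omega, P) = ([0,1], Leb)
f = np.sin(2 * np.pi * omega)            # an integrable random variable

def cond_exp(values, cells):
    """E[f | B_n] for B_n = sigma(partition): replace f, on each cell of
    the partition, by its cell average."""
    out = np.empty_like(values)
    for c in np.unique(cells):
        mask = cells == c
        out[mask] = values[mask].mean()
    return out

errors = {}
for n in (4, 16, 64):                    # finer and finer partitions of [0,1]
    cells = np.minimum((omega * n).astype(int), n - 1)
    errors[n] = np.abs(cond_exp(f, cells) - f).mean()   # empirical L1 error
print(errors)    # decreasing: the partition sigma-fields approach sigma(omega)
```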


slide-57
SLIDE 57




slide-58
SLIDE 58


Convergence Theorem

Theorem. Let W = L^q(Ω, A, P; W) and U = L^r(Ω, A, P; U), with 1 ≤ q < +∞ and 1 ≤ r < +∞. Under the assumptions

H1: the sequence {B_n}_{n∈N} strongly converges to B, and B_n ⊂ B,
H2: the sequence {W_n}_{n∈N} converges to W in L^q-norm,
H3: the normal integrand j is such that

∀ (u, u′) ∈ U², ∀ (w, w′) ∈ W²,   | j(u, w) − j(u′, w′) | ≤ α ‖u − u′‖_U^r + β ‖w − w′‖_W^q,

the convergence of the approximated optimal costs holds true:

lim_{n→+∞} V(W_n, B_n) = V(W, B).

Using epi-convergence, it is possible to obtain the same result under weaker assumptions and to ensure the convergence of the sequence of solutions.


slide-59
SLIDE 59


Conclusions

In the discretization of a SOC problem, there are two issues:
• noise discretization,
• information discretization.

The naive Monte Carlo discretization provides too weak a convergence notion (in distribution, not in probability). The scenario tree methodology provides an effective way to discretize stochastic optimal control problems, but the discretizations of information and of noise are bundled. Independent discretizations of noise and information offer
• greater latitude to select discretization schemes,
• a way to obtain proper convergence results.
