Large Deviations for Multi-valued Stochastic Differential Equations (PowerPoint PPT Presentation)




SLIDE 2

Large Deviations for Multi-valued Stochastic Differential Equations

Joint work with Siyan Xu, Xicheng Zhang

SLIDE 10

We shall talk about the Freidlin-Wentzell Large Deviation Principle for the following multivalued stochastic differential equation

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) dW(t) − A(Xε(t)) dt, Xε(0) = x ∈ D(A), ε ∈ (0, 1].

I.e., we seek a result of the following type:

−I(F°) ≤ lim inf_{ε↓0} ε log P(Xε ∈ F) ≤ lim sup_{ε↓0} ε log P(Xε ∈ F) ≤ −I(F̄)

for F ⊂ Cx([0, 1], D(A)), where F° is the interior of F and F̄ is the closure of F.
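The two-sided bound above can be illustrated numerically in the simplest classical case A ≡ 0, b ≡ 0, σ ≡ 1, where Xε = √ε W and Schilder's theorem gives, for F = {φ : sup_t |φ(t)| ≥ 1}, the rate I(F) = 1/2. The following Monte Carlo sketch is ours and not part of the talk; the function name and all numerical parameters are illustrative assumptions.

```python
import numpy as np

def mc_rate(eps, level=1.0, n_paths=20000, n_steps=200, seed=0):
    """Monte Carlo estimate of eps * log P( sup_t |sqrt(eps) W(t)| >= level )
    on [0, 1], using a random-walk approximation of Brownian motion W."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    # fraction of paths whose running maximum of |sqrt(eps) W| reaches the level
    p = float(((np.sqrt(eps) * np.abs(W)).max(axis=1) >= level).mean())
    return float(eps * np.log(p)) if p > 0 else float("-inf")

# Schilder's theorem predicts eps * log P -> -level**2 / 2 = -0.5 as eps -> 0
rates = {eps: mc_rate(eps) for eps in (0.5, 0.2, 0.1)}
```

As ε decreases, the estimates should hover near −1/2, up to Monte Carlo and discretisation error.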

SLIDE 16

• I: a rate function on Cx([0, 1], D(A));
  (I takes values in [0, ∞], and the level set {I ≤ c} is compact ∀c > 0), and

I(A) := inf_{x∈A} I(x).

Let us begin by explaining the notion of

SLIDE 25

MSDE: MULTIVALUED STOCHASTIC DIFFERENTIAL EQUATIONS

dX(t) ∈ b(X(t)) dt + σ(X(t)) dW(t) − A(X(t)) dt

Difference: an additional A. What is it? A: a maximal monotone operator. Main features:

• A is multivalued: ∀x, A(x) is a set, not necessarily a single point.
• A is monotone: ∀x, y and all u ∈ A(x), v ∈ A(y), ⟨x − y, u − v⟩ ≥ 0.
• A is maximal: (x1, y1) ∈ Gr(A) ⇔ ⟨y1 − y2, x1 − x2⟩ ≥ 0, ∀(x2, y2) ∈ Gr(A).
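A concrete one-dimensional example of such an operator is A = ∂I_[0,1], the subdifferential of the indicator of [0, 1]: empty outside [0, 1], a half-line at each endpoint, {0} inside. The sketch below is ours (the function returns only a finite sample of each set, since the endpoint sets are unbounded) and checks the monotonicity inequality on sampled points.

```python
def A(x):
    """A finite sample of A(x) = the subdifferential of I_[0,1] at x,
    a multivalued monotone operator on R.  The true sets at 0 and 1
    are half-lines; only finitely many elements are returned."""
    if x < 0.0 or x > 1.0:
        return []                  # A(x) is the empty set
    if x == 0.0:
        return [-2.0, -1.0, 0.0]   # sample of (-inf, 0]
    if x == 1.0:
        return [0.0, 1.0, 2.0]     # sample of [0, +inf)
    return [0.0]                   # interior point: A(x) = {0}

# monotonicity: <x - y, u - v> >= 0 for all u in A(x), v in A(y)
pts = [0.0, 0.25, 0.5, 0.75, 1.0]
monotone = all((x - y) * (u - v) >= 0
               for x in pts for y in pts for u in A(x) for v in A(y))
```

This same operator reappears below: it is exactly the reflecting case O = [0, 1].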

SLIDE 38

Definition of Solution

Definition: A pair of continuous and Ft-adapted processes (X, K) is called a strong solution of

dX(t) ∈ b(X(t)) dt + σ(X(t)) dW(t) − A(X(t)) dt, X(0) = x ∈ D(A),

if
(i) X(0) = x and X(t) ∈ D(A) a.s., ∀t;
(ii) K = {K(t), Ft; t ∈ R+} is of finite variation and K(0) = 0 a.s.;
(iii) dX(t) = b(X(t)) dt + σ(X(t)) dW(t) − dK(t), t ∈ R+, a.s.;
(iv) for any continuous processes (α, β) satisfying (α(t), β(t)) ∈ Gr(A), ∀t ∈ R+, the measure ⟨X(t) − α(t), dK(t) − β(t) dt⟩ ≥ 0.

(Formally, this amounts to saying ⟨X(t) − α(t), K′(t) − β(t)⟩ ≥ 0, i.e.

∫_0^r ⟨X(t) − α(t), K′(t) − β(t)⟩ dt ≥ 0 for every r.)

SLIDE 45

Existence and Uniqueness

Theorem (Cépa, 1995): If σ and b are Lipschitz, then

dX(t) ∈ b(X(t)) dt + σ(X(t)) dW(t) − A(X(t)) dt, X(0) = x ∈ D(A),

admits a unique solution (X, K). The unique solution of

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) dW(t) − A(Xε(t)) dt, Xε(0) = x ∈ D(A), ε ∈ (0, 1],

will be denoted by (Xε, Kε).

slide-46
SLIDE 46

normally reflected SDE as a special case of MSDE

slide-47
SLIDE 47

normally reflected SDE as a special case of MSDE

Suppose

slide-48
SLIDE 48

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm,

slide-49
SLIDE 49

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O,

slide-50
SLIDE 50

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) =

slide-51
SLIDE 51

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O,

slide-52
SLIDE 52

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O.

slide-53
SLIDE 53

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by

slide-54
SLIDE 54

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O}

slide-55
SLIDE 55

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O} =   

slide-56
SLIDE 56

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O} =    ∅, if x / ∈ O,

slide-57
SLIDE 57

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O} =    ∅, if x / ∈ O, {0}, if x ∈ Int(O),

slide-58
SLIDE 58

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O} =    ∅, if x / ∈ O, {0}, if x ∈ Int(O), Λx, if x ∈ ∂O,

slide-59
SLIDE 59

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O} =    ∅, if x / ∈ O, {0}, if x ∈ Int(O), Λx, if x ∈ ∂O, where

slide-60
SLIDE 60

normally reflected SDE as a special case of MSDE

Suppose O: a closed convex subset of Rm, IO: the indicator function of O, i.e, IO(x) = 0, if x ∈ O, + ∞, if x / ∈ O. The subdifferential of IO is given by ∂IO(x) = {y ∈ Rm : y, x − zRm 0, ∀z ∈ O} =    ∅, if x / ∈ O, {0}, if x ∈ Int(O), Λx, if x ∈ ∂O, where Int(O) is the interior of O

SLIDE 61

normally reflected SDE as a special case of MSDE

Suppose
O: a closed convex subset of Rm,
IO: the indicator function of O, i.e.,

IO(x) = 0 if x ∈ O, and IO(x) = +∞ if x ∉ O.

The subdifferential of IO is given by

∂IO(x) = {y ∈ Rm : ⟨y, x − z⟩_{Rm} ≥ 0, ∀z ∈ O}
       = ∅ if x ∉ O; {0} if x ∈ Int(O); Λx if x ∈ ∂O,

where Int(O) is the interior of O and Λx is the exterior normal cone at x.

SLIDE 67

Then ∂IO is a multivalued maximal monotone operator. In this case, an MSDE is an SDE normally reflected at the boundary of O. More generally, the subdifferential of any convex and lower semi-continuous function ϕ is a maximal monotone operator:

∂ϕ(x) := {y ∈ Rm : ⟨y, z − x⟩_{Rm} ≤ ϕ(z) − ϕ(x), ∀z ∈ Rm}.
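In this reflecting special case the multivalued term can be simulated by projecting each Euler step back onto O; the projection supplies the bounded-variation term K. The following is a heuristic sketch of ours, assuming O = [0, ∞), b(x) = −x, σ ≡ 1 (it relies on the convexity of O, and is not a scheme for a general maximal monotone A).

```python
import numpy as np

def reflected_euler(x0=0.5, T=1.0, n=1000, seed=1):
    """Projected Euler-Maruyama for dX = -X dt + dW - dK on O = [0, inf):
    each step is projected back onto O, and the (nonnegative) pushes
    exerted at the boundary accumulate in K."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    K = 0.0
    for i in range(n):
        y = X[i] - X[i] * dt + np.sqrt(dt) * rng.normal()  # unconstrained step
        X[i + 1] = max(y, 0.0)   # projection onto O = [0, inf)
        K += X[i + 1] - y        # push from the boundary (zero off the boundary)
    return X, K

X, K = reflected_euler()
```

By construction the simulated path never leaves O and K is nondecreasing from 0, mirroring properties (i)-(ii) of the definition of a solution.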

SLIDE 70

Towards a Freidlin-Wentzell theory

We now look at the problem of small perturbation:

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) dW(t) − A(Xε(t)) dt, Xε(0) = x ∈ D(A), ε ∈ (0, 1].

SLIDE 77

Classical theory

Recall the classical case: A ≡ 0,

dXε(t) = b(Xε(t)) dt + √ε σ(Xε(t)) dW(t), Xε(0) = x, ε ∈ (0, 1].

The theory begins in the seminal paper

Wentzell-Freidlin: On small random perturbations of dynamical systems, 1969.

There are two standard and well-known approaches to this classical problem.

SLIDE 86

• 1. Use of the Euler scheme

D.W. Stroock: An introduction to the theory of large deviations, 1984.

You look at the Euler scheme

Xε_n(t) = x + ∫_0^t b(Xε_n(n⁻¹[sn])) ds + √ε ∫_0^t σ(Xε_n(n⁻¹[sn])) dW(s).

Then you have to estimate the quantity

P( max_{0≤t≤1} |Xε(t, x) − Xε_n(t)| ≥ δ ).

If you have, for any δ > 0,

lim_{n→∞} lim sup_{ε↓0} ε log P( max_{0≤t≤1} |Xε(t, x) − Xε_n(t)| ≥ δ ) = −∞,

then you are done.
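A minimal one-dimensional sketch of this scheme (our illustrative coefficients b(x) = −x, σ ≡ 1): the coefficients are frozen at the last grid point n⁻¹[sn], and for small ε the scheme should track the noiseless flow x e^{−t}.

```python
import math
import numpy as np

def euler(x, b, sigma, eps, n=100, seed=0):
    """Euler scheme X^eps_n on [0, 1]: coefficients are frozen on each
    interval [k/n, (k+1)/n), i.e. evaluated once per step of size 1/n."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    X = np.empty(n + 1)
    X[0] = x
    for k in range(n):
        dW = math.sqrt(dt) * rng.normal()
        X[k + 1] = X[k] + b(X[k]) * dt + math.sqrt(eps) * sigma(X[k]) * dW
    return X

path = euler(1.0, b=lambda x: -x, sigma=lambda x: 1.0, eps=0.001)
```

With ε = 0.001 the endpoint path[-1] should sit close to e^{−1} ≈ 0.368, the value of the unperturbed flow at t = 1.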

SLIDE 89

Unfortunately, this approach does NOT apply to the MSDE case, simply because the Euler scheme cannot be defined for an MSDE: either A(Xn(t)) is not defined or it may be a set.

SLIDE 96

• 2. Use of the Freidlin-Wentzell estimate

(R. Azencott: Grandes déviations et applications, 1980;
P. Priouret: Remarques sur les petites perturbations de systèmes dynamiques, 1982)

Consider an ODE

dg(t) = b(g(t)) dt + σ(g(t)) f′(t) dt, g(0) = x,

where f satisfies ∫_0^1 |f′(t)|² dt < ∞.
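This controlled "skeleton" ODE can be solved by any standard method. A sketch with our illustrative choices b(x) = −x, σ ≡ 1, f(t) = t (so f′ ≡ 1 and the finite-energy condition holds), for which the exact solution is g(t) = 1 + (x − 1)e^{−t}:

```python
import math

def skeleton(x, b, sigma, fprime, T=1.0, n=10000):
    """Forward Euler for the controlled skeleton ODE
       g'(t) = b(g(t)) + sigma(g(t)) * f'(t),  g(0) = x."""
    dt = T / n
    g = x
    for k in range(n):
        g += (b(g) + sigma(g) * fprime(k * dt)) * dt
    return g

# with x = 0 the exact value at t = 1 is g(1) = 1 - e^{-1}
g1 = skeleton(0.0, b=lambda x: -x, sigma=lambda x: 1.0, fprime=lambda t: 1.0)
```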

SLIDE 105

If you can prove: ∀R > 0, ρ > 0, ∃ε0 > 0, α > 0, r > 0 such that

ε log P( ‖Xε − g‖ > ρ, ‖√ε W − f‖ < α ) ≤ −R, ∀ε ≤ ε0,

then you have the large deviation principle.

SLIDE 109

But this approach does NOT apply either, since to obtain the Freidlin-Wentzell estimate directly,

• one has to suppose that b and σ are Lipschitz

(Priouret had to).

SLIDE 114

• 3. Use of weak convergence

This approach was developed a few years ago by

• P. Dupuis and R.S. Ellis: A weak convergence approach to the theory of large deviations, 1997

and was tested in several recent works to treat the Wentzell-Freidlin large deviation principle for SDEs with irregular coefficients. It has proved to be highly efficient. The hard core of this approach is the following observation.

SLIDE 120

• Xn: a sequence of r.v. with values in a complete metric space X;
• I: a rate function on X.

{Xn} is said to satisfy the LDP with rate function I if

lim sup_{n→∞} (1/n) log P(Xn ∈ F) ≤ −I(F), F closed,

lim inf_{n→∞} (1/n) log P(Xn ∈ G) ≥ −I(G), G open.

{Xn} is said to satisfy the Laplace Principle with rate function I if, for every bounded continuous h,

lim_{n→∞} (1/n) log E{exp[−n h(Xn)]} = − inf_{x∈X} {h(x) + I(x)}.

SLIDE 130

Theorem: LP ⇐⇒ LDP
(⇐: Varadhan, 1966; ⇒: Dupuis-Ellis, 1997)

Now let us return to our MSDE. Suppose D ⊂ Rd is an open set. Consider the following reflected SDE

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) ∘ dW(t) − v(Xε(t)) da(t), Xε(0) = x,

where a increases only when Xε is on the boundary of D, and v(x) is in the outward normal cone of D at x ∈ ∂D. For a result of the following type,

−I(F°) ≤ lim inf_{ε↓0} ε log P(Xε ∈ F) ≤ lim sup_{ε↓0} ε log P(Xε ∈ F) ≤ −I(F̄)

for F ⊂ Cx([0, 1], D(A)), the story is

SLIDE 135

(i) by Anderson and Orey in 1976, when D is (in addition to convex) smooth and σσ* > 0;
(ii) by Doss and Priouret, when σ may be degenerate;
(iii) by Dupuis, for general convex sets D.

SLIDE 139

Return to our original problem. If we manage to prove a Laplace principle, then we are done. But how to obtain the Laplace principle? Budhiraja and Dupuis pointed out the way for us.

SLIDE 148

Things to do

Set

B := C([0, 1], Rd),

H := {h ∈ B : h is absolutely continuous and ‖h‖²_H := ∫_0^1 |ḣ(s)|² ds < ∞},

µ := the standard Wiener measure on B.

Then (B, H, µ) is a classical Wiener space. ∀N > 0, let

DN := {h ∈ H : ‖h‖_H ≤ N}.

Then DN with the weak topology is a metrizable compact Polish space. Set

AN := {h : h is an adapted process and h(·) ∈ DN a.s.}.

SLIDE 153

The way pointed out by Budhiraja and Dupuis

Let Yn : B → X, n = 1, 2, ···. If there exists Y0 : H → X such that:

(A) For any N > 0, if a family {hn} ⊂ AN (as random variables in DN) converges in distribution to h ∈ AN, then for some subsequence nk, Ynk(w + √nk hnk(w)) converges in distribution to Y0(h) in X.

(B) For any N > 0, if {hn, n ∈ N} ⊂ AN converges weakly to h ∈ H, then for some subsequence hnk, Y0(hnk) converges weakly to Y0(h) in X.

SLIDE 154

Theorem: Under (A) and (B), the LP holds:

lim_{n→∞} (1/n) log E{exp[−n h(Yn)]} = − inf_{x∈X} {h(x) + I(x)}.

SLIDE 162

Verification of (A) and (B)

We replace n by ε⁻¹ and we take Y0 = X, Yε := Xε,hε.

Verification of (A). Now Yε satisfies:

Yε(t) = x + ∫_0^t b(Yε(s)) ds + ∫_0^t σ(Yε(s)) ḣε(s) ds + √ε ∫_0^t σ(Yε(s)) dW(s) − Kε(t),

Y0(t) = x + ∫_0^t b(Y0(s)) ds + ∫_0^t σ(Y0(s)) ḣ(s) ds − K0(t).
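The convergence needed for (A) can be eyeballed in a toy reflecting case: A = ∂I_[0,∞), b(y) = −y, σ ≡ 1, with the reflection term K realised by projection. This sketch, its control ḣ, and all parameters are our assumptions.

```python
import numpy as np

def controlled(eps, hdot, x=0.5, n=2000, seed=3):
    """Projected Euler for Y(t) = x + int b(Y) ds + int sigma*hdot ds
       + sqrt(eps) int sigma dW - K(t), with b(y) = -y, sigma = 1,
       reflected at 0 (A = the subdifferential of I_[0, inf))."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    Y = np.empty(n + 1)
    Y[0] = x
    for k in range(n):
        y = Y[k] + (-Y[k] + hdot(k * dt)) * dt \
            + np.sqrt(eps) * np.sqrt(dt) * rng.normal()
        Y[k + 1] = max(y, 0.0)   # reflection via projection onto [0, inf)
    return Y

hdot = lambda t: np.sin(2 * np.pi * t)   # a control with finite energy
Y0 = controlled(0.0, hdot)               # eps = 0: the limit equation for Y0
Yeps = controlled(1e-4, hdot)            # small-noise controlled process
gap = float(np.max(np.abs(Yeps - Y0)))   # sup-norm distance, small for small eps
```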

SLIDE 165

Note that (A) is equivalent to the apparently weaker condition:

(A′) For any N > 0, if a family {hn} ⊂ AN (as random variables in DN) converges almost surely to h ∈ AN in H, then for some subsequence nk, Ynk(w + √nk hnk(w)) converges in distribution to Y0(h) in X.

This equivalence can be easily achieved by using the Skorohod representation.

SLIDE 169

Hence, to prove (A), it suffices to prove that Yε → Y0 weakly if hε → h0 almost surely in H. This can be done by Itô calculus. Similarly, (B) can be verified.