SLIDE 1
Large Deviations for Multi-valued Stochastic Differential Equations
Joint work with Siyan Xu and Xicheng Zhang
SLIDE 10
We shall talk about the Freidlin-Wentzell Large Deviation Principle for the following multivalued stochastic differential equation:

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) dW(t) − A(Xε(t)) dt,
Xε(0) = x ∈ D(A), ε ∈ (0, 1].

That is, we seek a result of the following type:

−I(F°) ≤ lim inf_{ε↓0} ε log P(Xε ∈ F) ≤ lim sup_{ε↓0} ε log P(Xε ∈ F) ≤ −I(F̄)

for F ⊂ Cx([0, 1], D(A)), where F° is the interior of F and F̄ is the closure of F.
SLIDE 11
- I: a rate function on Cx([0, 1], D(A));
SLIDE 12
- I: a rate function on Cx([0, 1], D(A));
(I ∈ [0, ∞], {I c} is compact ∀c > 0).
SLIDE 13
- I: a rate function on Cx([0, 1], D(A));
(I ∈ [0, ∞], {I c} is compact ∀c > 0). and
SLIDE 14
- I: a rate function on Cx([0, 1], D(A));
(I ∈ [0, ∞], {I c} is compact ∀c > 0). and I(A) := inf
x∈A I(x)
SLIDE 15
- I: a rate function on Cx([0, 1], D(A));
(I ∈ [0, ∞], {I c} is compact ∀c > 0). and I(A) := inf
x∈A I(x)
Let us begin by explaining the notion of
SLIDE 16
- I: a rate function on Cx([0, 1], D(A)); that is, I takes values in [0, ∞] and the level set {I ≤ c} is compact for every c > 0; and

I(A) := inf_{x∈A} I(x).

Let us begin by explaining the notion of an MSDE.
SLIDE 25
MSDE: MULTIVALUED STOCHASTIC DIFFERENTIAL EQUATIONS

dX(t) ∈ b(X(t)) dt + σ(X(t)) dW(t) − A(X(t)) dt

Difference: an additional operator A. What is it? A is a maximal monotone operator. Main features:

- A is multivalued: for every x, A(x) is a set, not necessarily a single point;
- A is monotone: for all x, y and all u ∈ A(x), v ∈ A(y), ⟨x − y, u − v⟩ ≥ 0;
- A is maximal: (x₁, y₁) ∈ Gr(A) ⇔ ⟨y₁ − y₂, x₁ − x₂⟩ ≥ 0 for all (x₂, y₂) ∈ Gr(A).
SLIDE 29
Definition of Solution
Definition: A pair of continuous and Ft-adapted processes (X, K) is called a strong solution of

dX(t) ∈ b(X(t)) dt + σ(X(t)) dW(t) − A(X(t)) dt, X(0) = x ∈ D(A),

if
(i) X(0) = x and X(t) ∈ D(A) a.s., ∀t;
(ii) K = {K(t), Ft; t ∈ R+} is of finite variation and K(0) = 0 a.s.;
(iii) dX(t) = b(X(t)) dt + σ(X(t)) dW(t) − dK(t), t ∈ R+, a.s.;
(iv) for any continuous processes (α, β) satisfying (α(t), β(t)) ∈ Gr(A), ∀t ∈ R+, the measure ⟨X(t) − α(t), dK(t) − β(t) dt⟩ ≥ 0.

(Formally, this amounts to saying ⟨X(t) − α(t), K′(t) − β(t)⟩ ≥ 0.)
SLIDE 45
Existence and Uniqueness
Theorem (Cépa, 1995). If σ and b are Lipschitz, then

dX(t) ∈ b(X(t)) dt + σ(X(t)) dW(t) − A(X(t)) dt, X(0) = x ∈ D(A),

admits a unique solution (X, K). The unique solution of

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) dW(t) − A(Xε(t)) dt, Xε(0) = x ∈ D(A), ε ∈ (0, 1],

will be denoted by (Xε, Kε).
SLIDE 61
Normally reflected SDE as a special case of MSDE

Suppose O is a closed convex subset of Rm, and let IO be the indicator function of O, i.e.,

IO(x) = 0 if x ∈ O, IO(x) = +∞ if x ∉ O.

The subdifferential of IO is given by

∂IO(x) = {y ∈ Rm : ⟨y, x − z⟩ ≥ 0, ∀z ∈ O}
       = ∅ if x ∉ O; {0} if x ∈ Int(O); Λx if x ∈ ∂O,

where Int(O) is the interior of O and Λx is the exterior normal cone at x.
SLIDE 66
Then ∂IO is a multivalued maximal monotone operator. In this case, the MSDE is an SDE normally reflected at the boundary of O. More generally, the subdifferential of any convex and lower semi-continuous function ϕ is a maximal monotone operator, where

∂ϕ(x) := {y ∈ Rm : ⟨y, z − x⟩ ≤ ϕ(z) − ϕ(x), ∀z ∈ Rm}.
SLIDE 70
Towards a Freidlin-Wentzell theory
We now look at the problem of small perturbations:

dXε(t) ∈ b(Xε(t)) dt + √ε σ(Xε(t)) dW(t) − A(Xε(t)) dt, Xε(0) = x ∈ D(A), ε ∈ (0, 1].
SLIDE 77
Classical theory
Recall the classical case, A ≡ 0:

dXε(t) = b(Xε(t)) dt + √ε σ(Xε(t)) dW(t), Xε(0) = x, ε ∈ (0, 1].

The theory begins with the seminal paper

Wentzell-Freidlin: On small random perturbations of dynamical systems, 1969.

There are two standard and well-known approaches to this classical problem.
SLIDE 86
- 1. Use of the Euler scheme

D.W. Stroock: An introduction to the theory of large deviations, 1984.

You look at the Euler scheme

Xε_n(t) = x + ∫₀ᵗ b(Xε_n(n⁻¹[sn])) ds + √ε ∫₀ᵗ σ(Xε_n(n⁻¹[sn])) dW(s).

Then you have to estimate the quantity

P( max_{0≤t≤1} |Xε(t, x) − Xε_n(t)| ≥ δ ).

If you have, for any δ > 0,

lim_{n→∞} lim sup_{ε↓0} ε log P( max_{0≤t≤1} |Xε(t, x) − Xε_n(t)| ≥ δ ) = −∞,

then you are done.
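Not from the slides: a runnable sketch of the frozen-coefficient Euler scheme above, with assumed Lipschitz coefficients b(x) = −x and σ ≡ 1. For small ε the terminal value concentrates near the noiseless flow, here e⁻¹:

```python
import math
import random

def euler_scheme(x, b, sigma, eps, n, rng):
    """One sample of X^eps_n(1): on [k/n, (k+1)/n] the coefficients are
    frozen at X^eps_n(k/n), so each increment can be generated exactly."""
    X = x
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(1.0 / n))
        X += b(X) / n + math.sqrt(eps) * sigma(X) * dW
    return X

rng = random.Random(0)
samples = [euler_scheme(1.0, lambda x: -x, lambda x: 1.0, 0.01, 200, rng)
           for _ in range(500)]
mean = sum(samples) / len(samples)
assert abs(mean - math.exp(-1.0)) < 0.05   # small-noise concentration near e^{-1}
```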
SLIDE 89
Unfortunately, this approach does NOT apply to the MSDE case, simply because the Euler scheme cannot be defined for an MSDE: A(Xn(t)) either is not defined or may be a set.
SLIDE 96
- 2. Use of the Freidlin-Wentzell estimate

(R. Azencott: Grandes déviations et applications, 1980;
P. Priouret: Remarques sur les petites perturbations de systèmes dynamiques, 1982)

Consider the ODE

dg(t) = b(g(t)) dt + σ(g(t)) f′(t) dt, g(0) = x,

where f satisfies ∫₀¹ |f′(t)|² dt < ∞.
SLIDE 105
If you can prove: ∀R > 0, ρ > 0, ∃ε₀ > 0, α > 0, r > 0 such that

ε log P( ‖Xε − g‖ > ρ, ‖√ε W − f‖ < α ) ≤ −R, ∀ε ≤ ε₀,

then you have the large deviation principle.
SLIDE 109
But this approach does not apply either, since to obtain the Freidlin-Wentzell estimate directly

- one has to suppose that b and σ are Lipschitz

(Priouret had to).
SLIDE 114
- 3. Use of weak convergence

This approach was developed a few years ago by

- P. Dupuis and R.S. Ellis: A weak convergence approach to the theory of large deviations, 1997

and has been tested in several recent works to treat the Wentzell-Freidlin large deviation principle for SDEs with irregular coefficients. It has proved to be highly efficient. The hard core of this approach is the following observation.
SLIDE 120
- Xn: a sequence of random variables with values in a complete metric space X;
- I: a rate function on X.

{Xn} is said to satisfy the LDP with rate function I if

lim sup_{n→∞} (1/n) log P(Xn ∈ F) ≤ −I(F) for every closed F,
lim inf_{n→∞} (1/n) log P(Xn ∈ G) ≥ −I(G) for every open G.

{Xn} is said to satisfy the Laplace Principle with rate function I if, for every bounded continuous function h,

lim_{n→∞} (1/n) log E{exp[−n h(Xn)]} = − inf_{x∈X} {h(x) + I(x)}.
SLIDE 130
Theorem: LP ⇔ LDP (⇐: Varadhan, 1966; ⇒: Dupuis-Ellis, 1997).

Now let us return to our MSDE. Suppose D ⊂ Rd is an open set, and consider the following reflected SDE:

dXε(t) ∈ b(Xε(t)) dt + σ(Xε(t)) ∘ dw(t) − v(Xε(t)) da(t), Xε(0) = x,

where a increases only when Xε is on the boundary of D, and v(x) is in the outward normal cone of D at x ∈ ∂D. For a result of the type

−I(F°) ≤ lim inf_{ε↓0} ε log P(Xε ∈ F) ≤ lim sup_{ε↓0} ε log P(Xε ∈ F) ≤ −I(F̄)

for F ⊂ Cx([0, 1], D(A)), the story is as follows.
SLIDE 134
(i) by Anderson and Orey in 1976, when D is (in addition to convex) smooth and σσ* > 0;
(ii) by Doss and Priouret, when σ may be degenerate;
(iii) by Dupuis, for general convex sets D.
SLIDE 139
Return to our original problem. If we manage to prove a Laplace principle, then we are done. But how to obtain the Laplace principle? Budhiraja and Dupuis pointed out the way for us.
SLIDE 148
Things to do
Set

B := C([0, 1], Rd),
H := {h ∈ B : h is absolutely continuous and ‖h‖²_H := ∫₀¹ |ḣ(s)|² ds < ∞},
µ := the standard Wiener measure on B.

Then (B, H, µ) is a classical Wiener space. For every N > 0, let

DN := {h ∈ H : ‖h‖_H ≤ N}.

Then DN with the weak topology is a metrizable compact Polish space. Finally, set

AN := {h : h is an adapted process and h(·) ∈ DN a.s.}.
SLIDE 153
The way pointed out by Budhiraja and Dupuis

Let Yn : B → X, n = 1, 2, ···. If there exists Y0 : H → X such that:

(A) For any N > 0, if a family {hn} ⊂ AN (as random variables in DN) converges in distribution to h ∈ AN, then for some subsequence nk, Ynk(w + √nk hnk(w)) converges in distribution to Y0(h) in X.

(B) For any N > 0, if {hn, n ∈ N} ⊂ AN converges weakly to h ∈ H, then for some subsequence hnk, Y0(hnk) converges weakly to Y0(h) in X.
SLIDE 154
Theorem: Under (A) and (B), the LP holds:

lim_{n→∞} (1/n) log E{exp[−n h(Yn)]} = − inf_{x∈X} {h(x) + I(x)}.
SLIDE 162
Verification of (A) and (B)
We replace n by ε⁻¹ and take Y0 := X, Yε := Xε,hε.

Verification of (A). Now Yε satisfies

Yε(t) = x + ∫₀ᵗ b(Yε(s)) ds + ∫₀ᵗ σ(Yε(s)) ḣε(s) ds + √ε ∫₀ᵗ σ(Yε(s)) dW(s) − Kε(t),

while

Y0(t) = x + ∫₀ᵗ b(Y0(s)) ds + ∫₀ᵗ σ(Y0(s)) ḣ(s) ds − K0(t).
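To see what the limit equation computes, here is a minimal sketch (an illustration, not from the slides) of the controlled skeleton Y0 in the unconstrained case A ≡ 0 (so K0 ≡ 0), with assumed coefficients b(x) = −x, σ ≡ 1 and control h(t) = t:

```python
import math

def skeleton(x, b, sigma, hdot, n=10000):
    """Euler solve of Y0(t) = x + int_0^t b(Y0) ds + int_0^t sigma(Y0) hdot(s) ds
    (the case A = 0, hence K0 = 0)."""
    dt = 1.0 / n
    y = x
    for i in range(n):
        y += (b(y) + sigma(y) * hdot(i * dt)) * dt
    return y

# With b(x) = -x, sigma = 1, hdot = 1:  y' = 1 - y, y(0) = 0, so y(1) = 1 - e^{-1}.
y1 = skeleton(0.0, lambda x: -x, lambda x: 1.0, lambda s: 1.0)
assert abs(y1 - (1.0 - math.exp(-1.0))) < 1e-3
```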
SLIDE 165
Note that (A) is equivalent to the apparently weaker condition:

(A′) For any N > 0, if a family {hn} ⊂ AN (as random variables in DN) converges almost surely to h ∈ AN in H, then for some subsequence nk, Ynk(w + √nk hnk(w)) converges in distribution to Y0(h) in X.

This equivalence can easily be achieved by using the Skorohod representation.
SLIDE 168
Hence, to prove (A), it suffices to prove that Yε → Y0 weakly whenever hε → h0 almost surely in H. This can be done by Itô calculus.