A new non-Markovian approach to weak convergence for SPDEs

Adam Andersson

Joint work with Raphael Kruse and Stig Larsson

Mathematical Sciences, Chalmers University of Technology, Göteborg, Sweden

Sixth Workshop on Random Dynamical Systems, Bielefeld, 2 November 2013

Outline

◮ Stochastic integration in Hilbert space,
◮ Malliavin calculus,
◮ Weak convergence,
◮ Strong convergence in a dual Watanabe-Sobolev norm.

Cylindrical Q-Wiener process

H, separable Hilbert space (H = L²(D), D ⊂ R^d),
Q ∈ L(H) self-adjoint and positive semidefinite covariance operator,
U₀ = Q^{1/2}(H), Hilbert space with ⟨u, v⟩₀ = ⟨Q^{−1/2}u, Q^{−1/2}v⟩_H, u, v ∈ U₀.

An operator I : L²([0,T], U₀) → L²(Ω) is said to be an isonormal process on a probability space (Ω, F, P) if

◮ I(φ) ∼ N(0, ‖φ‖²_{L²([0,T],U₀)}), ∀φ ∈ L²([0,T], U₀),
◮ E[I(φ)I(ψ)] = ⟨φ, ψ⟩_{L²([0,T],U₀)}, ∀φ, ψ ∈ L²([0,T], U₀).

W : [0,T] × U₀ → L²(Ω), cylindrical Q-Wiener process:
W(t)u := I(χ_{[0,t]} ⊗ u) = Σ_{i=1}^∞ ⟨u, u_i⟩₀ β_i(t),
where (u_i)_{i∈N} ⊂ U₀ is an ON-basis and (β_i)_{i∈N} are independent standard Brownian motions.
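For trace-class Q the series above is the usual Karhunen–Loève representation and is directly simulable by truncation. A minimal sketch, assuming (purely for illustration) H = L²(0,1), eigenpairs q_j = j^{−2}, e_j(x) = √2 sin(jπx) of Q, and J retained modes; the coefficient ⟨W(t), e_j⟩_H = √q_j β_j(t) is then just a scaled Brownian motion:

```python
import numpy as np

def simulate_q_wiener(T, n_steps, J, rng):
    """Truncated Karhunen-Loeve expansion of a Q-Wiener process on H = L^2(0,1).

    Illustrative assumption: Q has eigenpairs q_j = j^{-2}, e_j(x) = sqrt(2) sin(j pi x).
    Returns the coefficient paths <W(t), e_j>_H = sqrt(q_j) * beta_j(t), j = 1..J.
    """
    dt = T / n_steps
    q = np.arange(1, J + 1) ** -2.0
    # independent standard Brownian motions beta_j, built from their increments
    increments = rng.standard_normal((n_steps, J)) * np.sqrt(dt)
    beta = np.cumsum(increments, axis=0)
    return np.sqrt(q) * beta           # shape (n_steps, J)

def eval_field(coeffs, x):
    """Evaluate W(t, x) = sum_j <W(t), e_j> e_j(x) on a grid x."""
    J = coeffs.shape[-1]
    basis = np.sqrt(2.0) * np.sin(np.outer(np.arange(1, J + 1), np.pi * x))
    return coeffs @ basis

rng = np.random.default_rng(0)
T, J, M = 1.0, 16, 5000
# M independent samples of the coefficient vector at time T
samples = np.stack([simulate_q_wiener(T, 50, J, rng)[-1] for _ in range(M)])
field = eval_field(samples[0], np.linspace(0.0, 1.0, 5))  # one realization of W(T, x)
var1 = samples[:, 0].var()
print(var1)  # sample variance of <W(T), e_1>, close to q_1 * T = 1.0
```

The sample variance of the first coefficient approximates q₁T, consistent with I(χ_{[0,T]} ⊗ u₁) ∼ N(0, T).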

The H-valued Wiener integral

Wiener integral for simple integrands:
∫₀ᵀ χ_{[s,t]} ⊗ (h ⊗ u) dW = [(W(t) − W(s))u] ⊗ h ∈ L²(Ω) ⊗ H = L²(Ω, H).
Extends directly to linear combinations.

Wiener's isometry:
E‖∫₀ᵀ φ dW‖²_H = ∫₀ᵀ ‖φ‖²_{L⁰₂} dt,
where L⁰₂ = L₂(U₀, H) denotes the Hilbert-Schmidt operators from U₀ to H.

By density the integral extends to all of L²([0,T], L⁰₂). For stochastic equations driven by additive noise this definition of the integral suffices.
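Wiener's isometry is easy to check by Monte Carlo in the simplest scalar case (H = R, Q = 1, so the L⁰₂-norm is just |φ(t)|). A sketch with the arbitrarily chosen deterministic integrand φ(t) = cos t:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, M = 1.0, 400, 40000
t = np.linspace(0.0, T, n + 1)[:-1]   # left endpoints of the time grid
dt = T / n
phi = np.cos(t)                       # deterministic integrand

# M samples of the Wiener integral int_0^T phi dW as a left-endpoint sum
dW = rng.standard_normal((M, n)) * np.sqrt(dt)
stoch_int = dW @ phi

lhs = np.mean(stoch_int ** 2)         # E |int_0^T phi dW|^2
rhs = np.sum(phi ** 2) * dt           # int_0^T phi(t)^2 dt
print(lhs, rhs)                       # the two sides agree up to sampling error
```

For deterministic integrands the left-endpoint sum is exact in distribution, so the two estimates differ only by Monte Carlo and quadrature error.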

Malliavin calculus

Let C^∞_p(Rⁿ) denote the space of all C^∞-functions over Rⁿ with polynomial growth. Define
S = {X = f(I(φ₁), …, I(φₙ)) : f ∈ C^∞_p(Rⁿ), φ₁, …, φₙ ∈ L²([0,T], U₀), n ≥ 1}
and
S(H) = {F = Σ_{k=1}^n X_k ⊗ h_k : X₁, …, Xₙ ∈ S, h₁, …, hₙ ∈ H, n ≥ 1}.

We define the Malliavin derivative of F = Σ_{k=1}^m f_k(I(φ₁), …, I(φₙ)) ⊗ h_k ∈ S(H) as the process
D_tF = Σ_{k=1}^m Σ_{i=1}^n ∂_i f_k(I(φ₁), …, I(φₙ)) ⊗ (h_k ⊗ φ_i(t)),
and let, for v ∈ U₀,
D^v_t F = D_tF v = Σ_{k=1}^m Σ_{i=1}^n ∂_i f_k(I(φ₁), …, I(φₙ)) ⟨φ_i(t), v⟩₀ ⊗ h_k.

Malliavin calculus: integration by parts

For all F ∈ S(H) and Φ ∈ L²([0,T], L⁰₂),
⟨DF, Φ⟩_{L²([0,T]×Ω, L⁰₂)} = ⟨F, ∫₀ᵀ Φ(t) dW(t)⟩_{L²(Ω,H)}.

Let D^{1,p}(H) be the closure of S(H) with respect to the norm
‖F‖_{D^{1,p}(H)} = (E[‖F‖^p_H] + E ∫₀ᵀ ‖D_tF‖^p_{L⁰₂} dt)^{1/p}.

Let (δ, D(δ)) be the adjoint of D : L²(Ω, H) → L²([0,T] × Ω, L⁰₂):
⟨DF, Φ⟩_{L²([0,T]×Ω, L⁰₂)} = ⟨F, δΦ⟩_{L²(Ω,H)}.

D(δ) ⊂ L²([0,T] × Ω, L⁰₂) is large and contains in particular all predictable L⁰₂-valued processes. In this case δ(Φ) = ∫₀ᵀ Φ(t) dW(t).
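The duality ⟨DF, Φ⟩ = ⟨F, δΦ⟩ can be sanity-checked by Monte Carlo in the simplest scalar setting. Assuming (for illustration only) F = sin(W_T) and the deterministic Φ ≡ 1 on [0, T], we have D_tF = cos(W_T) and δ(Φ) = W_T, so the identity reduces to Gaussian integration by parts, T·E[cos(W_T)] = E[sin(W_T)·W_T]:

```python
import numpy as np

# Scalar sanity check of <DF, Phi> = E[F * delta(Phi)]:
# F = f(W_T) with f = sin, Phi = 1 on [0, T], hence
#   D_t F = f'(W_T) on [0, T]  and  delta(Phi) = W_T,
# and the duality becomes  T * E[f'(W_T)] = E[f(W_T) * W_T].
rng = np.random.default_rng(2)
T, M = 1.0, 200000
WT = rng.standard_normal(M) * np.sqrt(T)

lhs = T * np.mean(np.cos(WT))    # <DF, Phi> side
rhs = np.mean(np.sin(WT) * WT)   # E[F * delta(Phi)] side
print(lhs, rhs)                  # both approx T * exp(-T/2) ~ 0.6065
```

The exact common value T e^{−T/2} follows from the characteristic function of W_T ∼ N(0, T).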

The stochastic equation

An easy equation for a difficult problem:
dX(t) + AX(t) dt = F(X(t)) dt + dW(t), t ∈ (0, T], X(0) = X₀.

◮ H = L²(D), where D is a bounded, convex and polygonal domain of R^d, d = 1, 2, 3.
◮ (A, D(A)) self-adjoint with compact inverse, −A the generator of an analytic semigroup (S(t))_{t≥0}.
◮ Q ∈ L(H), positive semidefinite and self-adjoint with ‖A^{(β−1)/2} Q^{1/2}‖_{L₂} < ∞ for some β ∈ [0, 1].
◮ (W(t))_{t∈[0,T]} cylindrical Q-Wiener process.
◮ F ∈ C²_b(H, H).
◮ X₀ ∈ H.

The stochastic equation

There exists for every p ≥ 2 a unique solution X ∈ C([0,T], L^p(Ω, H)) satisfying the integral equation
X(t) = S(t)X₀ + ∫₀ᵗ S(t−s)F(X(s)) ds + ∫₀ᵗ S(t−s) dW(s), t ∈ [0, T].

Spatial regularity [Kruse, Larsson]: X(t) ∈ D(A^{β/2}) a.s. for all t ∈ (0, T].

Regularity in the Malliavin sense [Fuhrman, Tessitore]: X(t) ∈ D^{1,p}(H) for almost all t ∈ [0, T] and p < 2/(1−β).

Approximation by the finite element method

A discretized equation:
dX_h(t) + [A_hX_h(t) − P_hF(X_h(t))] dt = P_h dW(t), t ∈ (0, T], X_h(0) = P_hX₀.

Finite element spaces (V_h)_{h∈(0,1]} of continuous piecewise linear functions corresponding to a quasi-uniform family of triangulations of D. A_h is the discrete Laplacian satisfying
⟨A_hψ, χ⟩_H = ⟨∇ψ, ∇χ⟩_H, ∀ψ, χ ∈ V_h.
P_h : H → V_h is the orthogonal projection w.r.t. ⟨·, ·⟩_H.
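In coefficient form the defining identity ⟨A_hψ, χ⟩_H = ⟨∇ψ, ∇χ⟩_H reads M A_h = K, with M the mass matrix and K the stiffness matrix, so A_h = M^{−1}K. A minimal sketch, assuming the simplest setting D = (0,1) with a uniform mesh and homogeneous Dirichlet conditions (a 1D stand-in for the talk's polygonal domains):

```python
import numpy as np

def p1_matrices(n):
    """P1 mass and stiffness matrices on a uniform mesh of (0,1) with
    homogeneous Dirichlet conditions; n interior nodes, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    off = np.ones(n - 1)
    K = (np.diag(np.full(n, 2.0)) - np.diag(off, 1) - np.diag(off, -1)) / h  # stiffness
    Mm = (np.diag(np.full(n, 4.0)) + np.diag(off, 1) + np.diag(off, -1)) * h / 6.0  # mass
    return Mm, K

# The discrete Laplacian A_h is defined by <A_h psi, chi>_H = <psi', chi'>_H,
# i.e. in coefficient form A_h = M^{-1} K.
Mm, K = p1_matrices(100)
Ah = np.linalg.solve(Mm, K)
lam_min = np.linalg.eigvals(Ah).real.min()
print(lam_min)  # approximates the first eigenvalue pi^2 ~ 9.8696 of A
```

The smallest eigenvalue of A_h converging to π² is one quick consistency check on the assembly.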

Mild solution of the spatially discretized equation

Let (S_h(t))_{t≥0} be the analytic semigroup generated by −A_h. For every h ∈ (0, 1] there exists a unique solution X_h ∈ C([0,T], L²(Ω, V_h)) to the mild equation
X_h(t) = S_h(t)P_hX₀ + ∫₀ᵗ S_h(t−s)P_hF(X_h(s)) ds + ∫₀ᵗ S_h(t−s)P_h dW(s), t ∈ (0, T].

Error estimate for E_h(t) = S(t) − S_h(t)P_h:
‖E_h(t)A^{̺/2}‖_L ≤ C t^{−(̺+θ)/2} h^θ, 0 ≤ θ ≤ 2, 0 ≤ ̺ ≤ 1, ̺ + θ ≤ 2.

Strong convergence:
‖X(T) − X_h(T)‖_{L^p(Ω,H)} ≤ C h^{β−ε}, h ∈ (0, 1].
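Strong convergence of a spatial discretization can be illustrated numerically. The sketch below deliberately swaps the talk's FEM for a spectral Galerkin method (which is diagonal in the eigenbasis of A and therefore fits in a few lines) and makes the simplifying assumptions F = 0, λ_j = (jπ)², q_j = j^{−2} (so β = 1 is admissible); time stepping is semi-implicit Euler and the coarse solutions are coupled to a fine reference through the same noise:

```python
import numpy as np

def heat_spectral(T, n_steps, N, dB, lam, q):
    """Semi-implicit Euler for the linear (F = 0) stochastic heat equation in the
    spectral basis: each mode solves  dx_j + lam_j x_j dt = sqrt(q_j) dbeta_j.
    dB holds increments for all reference modes; Galerkin keeps the first N."""
    dt = T / n_steps
    x = np.zeros(N)
    for k in range(n_steps):
        x = (x + np.sqrt(q[:N]) * dB[k, :N]) / (1.0 + lam[:N] * dt)
    return x

rng = np.random.default_rng(3)
T, n_steps, N_ref, M = 1.0, 200, 128, 200
j = np.arange(1, N_ref + 1)
lam = (np.pi * j) ** 2            # eigenvalues of A = -Laplacian on (0,1)
q = j ** -2.0                     # eigenvalues of Q

errs = {8: [], 32: []}
for _ in range(M):
    dB = rng.standard_normal((n_steps, N_ref)) * np.sqrt(T / n_steps)
    x_ref = heat_spectral(T, n_steps, N_ref, dB, lam, q)
    for N in errs:
        x_N = np.zeros(N_ref)
        x_N[:N] = heat_spectral(T, n_steps, N, dB, lam, q)
        errs[N].append(np.sum((x_ref - x_N) ** 2))

err8, err32 = (np.sqrt(np.mean(errs[N])) for N in (8, 32))
print(err8, err32)  # the L^2(Omega, H) error decreases as N grows
```

Because the modes decouple, the coupled error here is exactly the energy in the truncated tail, which shrinks as the Galerkin dimension N grows.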

Weak convergence: Results

Theorem
For every γ ∈ [0, β) the weak convergence
|E[ϕ(X(T)) − ϕ(X_h(T))]| ≤ C h^{2γ}, h ∈ (0, 1),
holds under either of the following assumptions:

◮ Additive noise, β ∈ [0, 1] and ϕ ∈ C²_b(H, R) (FEM) [A., Larsson, 2012], on arXiv.
◮ Additive noise, β ∈ [0, 1] and ϕ ∈ C²_p(H, R) (FEM + spectral) [A., Kruse, Larsson, 2013], soon on arXiv.
◮ Linear multiplicative noise, β ∈ [0, 1/2) and ϕ ∈ C²_b(H, R) (FEM) [A., Larsson, 2012], on arXiv.
◮ Linear multiplicative noise, β = 1 and ϕ ∈ C²_b(H, R) (spectral) [A., Jentzen, Larsson, Schwab, 2014], writing in progress.
◮ Linear multiplicative noise, β = 1 and ϕ = ‖·‖² (FEM + spectral) [A., Kruse, Larsson], theoretical development in progress.

Open question: Is the rate of weak convergence the same for all G ∈ C²_b(H, L⁰₂)?

Weak convergence: Techniques of proof

We know of three methods to prove weak convergence:

◮ by use of Itô's formula and the Kolmogorov equation,
◮ by duality and backward stochastic evolution equations,
◮ by strong error estimates in a dual Watanabe-Sobolev norm.

Here I present the third method!

Proof: Important spaces

Let p ≥ 2. We define the space M^{1,p}(H) = D^{1,p}(H) ∩ L^{2p}(Ω, H), with norm
‖X‖_{M^{1,p}(H)} = max(‖X‖_{D^{1,p}(H)}, ‖X‖_{L^{2p}(Ω,H)}).

The dual space M^{1,p}(H)* is equipped with the norm
‖X‖_{M^{1,p}(H)*} = sup_{Υ∈B} ⟨Υ, X⟩_{L²(Ω,H)},
where B denotes the unit ball in M^{1,p}(H).

Proof: Bound of the weak error

Linearization: by a first order Taylor expansion,
E[ϕ(X(T)) − ϕ(X_h(T))] = E⟨ϕ′(X(T)), X_h(T) − X(T)⟩
 + E ∫₀¹ (1 − ̺) ϕ″(X(T) + ̺(X_h(T) − X(T))) · (X(T) − X_h(T))² d̺.

For p < 2/(1−β): R = ‖ϕ′(X(T))‖_{M^{1,p}(H)} < ∞.

Therefore
|E[ϕ(X(T)) − ϕ(X_h(T))]|
 ≤ R |E⟨R^{−1}ϕ′(X(T)), X_h(T) − X(T)⟩| + ‖ϕ″(X(T))‖_{L²(Ω, L^{[2]}(H,R))} ‖X(T) − X_h(T)‖²_{L⁴(Ω,H)}
 ≤ R sup_{Υ∈B} |E⟨Υ, X_h(T) − X(T)⟩| + C ‖X(T) − X_h(T)‖²_{L⁴(Ω,H)}.

Thus,
|E[ϕ(X(T)) − ϕ(X_h(T))]| ≲ ‖X_h(T) − X(T)‖_{M^{1,p}(H)*} + ‖X(T) − X_h(T)‖²_{L⁴(Ω,H)}.

Proof: Key Lemma

Lemma
Let p, p′ ∈ (1, ∞) satisfy 1/p + 1/p′ = 1.

(i) For random variables Z : Ω → H we have ‖Z‖_{M^{1,p}(H)*} ≤ ‖Z‖_{L²(Ω,H)}.

(ii) If for Φ : [0,T] × Ω → L the map Υ ↦ Φ(t)*Υ is bounded in M^{1,p}(H) uniformly in t, then under mild assumptions on ψ,
‖∫₀ᵀ Φ(t)ψ(t) dt‖_{M^{1,p}(H)*} ≤ R ∫₀ᵀ ‖ψ(t)‖_{M^{1,p}(H)*} dt.

(iii) If Φ ∈ L²([0,T] × Ω, L⁰₂) is predictable, then
‖∫₀ᵀ Φ(t) dW(t)‖_{M^{1,p}(H)*} ≤ C ‖Φ‖_{L^{p′}([0,T]×Ω, L⁰₂)}.

Proof: Strong convergence in the M^{1,p}(H)*-norm

After a first order Taylor expansion the difference satisfies the equation
X(T) − X_h(T) = E_h(T)X₀ + ∫₀ᵀ E_h(T−t)F(X(t)) dt
 + ∫₀ᵀ S_h(T−t)P_h F′(X(t))(X(t) − X_h(t)) dt
 + ∫₀ᵀ S_h(T−t)P_h ∫₀¹ (1 − ̺) F″(X(t) + ̺(X_h(t) − X(t))) · (X(t) − X_h(t))² d̺ dt
 + ∫₀ᵀ E_h(T−t) dW(t).

Proof: Strong convergence in the M^{1,p}(H)*-norm

By the Key Lemma (i) and (iii),
‖X(T) − X_h(T)‖_{M^{1,p}(H)*} ≤ ‖E_h(T)X₀‖ + ∫₀ᵀ ‖E_h(T−t)F(X(t))‖_{L²(Ω,H)} dt
 + ‖∫₀ᵀ S_h(T−t)P_hF′(X(t))(X(t) − X_h(t)) dt‖_{M^{1,p}(H)*}
 + ∫₀ᵀ ‖X(t) − X_h(t)‖²_{L²(Ω,H)} dt
 + (∫₀ᵀ ‖E_h(T−t)‖^{p′}_{L⁰₂} dt)^{1/p′}.

To apply Key Lemma (ii) we need to check that
Υ ↦ F′(X(t))* S_h(T−t)P_h Υ is bounded in M^{1,p}(H).

Proof: Strong convergence in the M^{1,p}(H)*-norm

Clearly (F′(X(t)))* S_h(T−t)Υ ∈ L^{2p}(Ω, H). It remains to prove

‖(F′(X(t)))* S_h(T−t)Υ‖^p_{D^{1,p}(H)}
 ≲ ‖(F′(X(t)))* S_h(T−t)Υ‖^p_{L^p(Ω,H)}
 + ∫₀ᵀ ‖(F′(X(t)))* S_h(T−t)D_sΥ‖^p_{L^p(Ω,L⁰₂)} ds
 + ∫₀ᵀ E[(Σ_{k∈N} ‖(F″(X(t))D^{u_k}_s X(t))* S_h(T−t)Υ‖²)^{p/2}] ds
 ≤ |F|^p_{C¹_b(H,H)} (‖Υ‖^p_{L^p(Ω,H)} + ∫₀ᵀ ‖D_sΥ‖^p_{L^p(Ω,H)} ds)
 + |F|^p_{C²_b(H,H)} ∫₀ᵀ E[(Σ_{k∈N} ‖D^{u_k}_s X(t)‖²_H)^{p/2} ‖Υ‖^p] ds
 ≲ ‖Υ‖^p_{D^{1,p}(H)} + ‖Υ‖^p_{L^{2p}(Ω,H)} sup_{t∈[0,T]} ‖X(t)‖^p_{D^{1,2p}(H)} < ∞.

Proof: Strong convergence in the M^{1,p}(H)*-norm

Using Key Lemma (ii) we get
‖X(T) − X_h(T)‖_{M^{1,p}(H)*} ≤ ‖E_h(T)X₀‖ + ∫₀ᵀ ‖E_h(T−t)F(X(t))‖_{L²(Ω,H)} dt
 + ∫₀ᵀ ‖X(t) − X_h(t)‖²_{L²(Ω,H)} dt + (∫₀ᵀ ‖E_h(T−t)‖^{p′}_{L⁰₂} dt)^{1/p′}
 + ∫₀ᵀ ‖X(t) − X_h(t)‖_{M^{1,p}(H)*} dt.

If we fix γ ∈ [0, β) and let p = 2/(1−γ), then one can show that
‖X(T) − X_h(T)‖_{M^{1,p}(H)*} ≲ (T^{−γ} + 1)h^{2γ} + ∫₀ᵀ ‖X(t) − X_h(t)‖_{M^{1,p}(H)*} dt.

Gronwall's lemma applies and we are done!
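The final step uses the standard Gronwall mechanism: if e(t) ≤ a + C ∫₀ᵗ e(s) ds, then e(t) ≤ a e^{Ct}. A small discrete sanity check of this implication (with illustrative constants a = 1, C = 2, not tied to the estimate above):

```python
import numpy as np

def gronwall_bound_holds(a, C, T, n):
    """Build the sequence defined by equality in the discrete integral
    inequality  e_k = a + C * dt * sum_{j<k} e_j  (left-endpoint sum) and
    check it against the Gronwall bound  e(t) <= a * exp(C * t)."""
    dt = T / n
    e = np.empty(n + 1)
    for k in range(n + 1):
        e[k] = a + C * dt * e[:k].sum()   # worst case allowed by the inequality
    t = np.linspace(0.0, T, n + 1)
    return bool(np.all(e <= a * np.exp(C * t) + 1e-12))

print(gronwall_bound_holds(a=1.0, C=2.0, T=1.0, n=1000))  # True
```

The recursion gives e_k = a(1 + C·dt)^k ≤ a e^{C t_k}, which is exactly the discrete Gronwall bound.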

Path dependent test functions

Let µ be a Borel measure on [0, T] satisfying ∫₀ᵀ t^{−γ} dµ(t) < ∞, ∀γ ∈ [0, β).

Then, for ϕ ∈ C²_b(H, R) we compute
|E[ϕ(∫₀ᵀ X(t) dµ(t)) − ϕ(∫₀ᵀ X_h(t) dµ(t))]|
 ≤ |E⟨ϕ′(∫₀ᵀ X(s) dµ(s)), ∫₀ᵀ (X(t) − X_h(t)) dµ(t)⟩| + remainder
 ≤ ∫₀ᵀ |E⟨ϕ′(∫₀ᵀ X(s) dµ(s)), X(t) − X_h(t)⟩| dµ(t) + h^{2γ}
 ≲ ∫₀ᵀ sup_{Υ∈B} |E⟨Υ, X(t) − X_h(t)⟩| dµ(t) + h^{2γ}
 ≲ h^{2γ} ∫₀ᵀ t^{−γ} dµ(t) + h^{2γ} ≲ h^{2γ}.

Future work:

◮ Stochastic semilinear Volterra equations (non-Markovian) (joint work with Kovács and Larsson).
◮ More general multiplicative noise.
◮ Boundary control for SPDEs.
◮ Non-Gaussian noise.


Thank you for your attention!
