Local Convergence of the Lavrentiev Method for the Cauchy Problem - - PowerPoint PPT Presentation





1 Local Convergence of the Lavrentiev Method for the Cauchy Problem via a Carleman Inequality

DU Duc Thang (Vietnam National University, Hanoi), Faten JELASSI (LAMSIN, École Nationale d'Ingénieurs de Tunis, Tunisie). In collaboration with: Faker BEN BELGACEM (Université de Technologie de Compiègne, France)

Partially granted by:

  • NAFOSTED, Vietnam (for DU Duc Thang)
  • MERST, Tunisie (the LR99ES-20 contract for Faten JELASSI)

2 The Data Completion problem

Find u such that

  − div(a∇u) + bu = f   in Ω,
  u = g                 on Γ_C,
  a ∂_n u = ϕ           on Γ_C,
  u = ?                 on Γ_I.

The problem is ill-posed:

  • Uniqueness: TRUE (⇒ Holmgren's theorem)
  • Existence: not guaranteed
  • Stability: not valid (⇒ a big problem whenever computations are aimed at)
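The missing stability can be seen on a minimal sketch, a classical Hadamard-type example with a = 1, b = 0 on a half-plane; the geometry and depth d are illustrative assumptions, not the talk's setting:

```python
import numpy as np

# Hadamard's classic instability example for the Cauchy problem (a=1, b=0):
# u_k(x, y) = sin(k x) sinh(k y) / k is harmonic, with Cauchy data on y = 0:
#   u_k(x, 0) = 0,   d/dy u_k(x, 0) = sin(k x).
# The Cauchy data stay of size ~1 for every frequency k, but at depth y = d
# the solution is of size sinh(k d)/k: high-frequency data perturbations are
# amplified exponentially, so the solution does not depend stably on the data.

def amplification(k, d=0.5):
    """Size of u_k at depth d relative to the size of its Cauchy data."""
    return np.sinh(k * d) / k

for k in (1, 10, 50, 100):
    print(f"k = {k:4d}: amplification ~ {amplification(k):.3e}")
```

The exponential growth of this factor is exactly why naive discretizations blow up and a regularization (Lavrentiev, here) is needed.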


3 Many Motivations

  • 1. Geophysics / seismic prospection
  • 2. Identification of cracks, contact resistivity, corrosion factor, ...

—many examples in Inverse Problems and Engineering—

  • 3. Electrical activity in the brain cortex (EEG, MEG) and in the heart myocardium (ECG)
  • 4. Computed Tomography; Electrical Impedance Tomography; ...

4 Different Approaches

  • 1. Bi-Laplacian problem, Quasi-Reversibility: Klibanov, Santosa (1991), Cao, Pereverzev (2007), Bourgeois, Dardé (2010)
  • 2. Backus-Gilbert method, Moment problem: Cheng, Hon, Wei, Yamamoto (2001)
  • 3. Optimal Control Problem: Fursikov (1987), Kabanikhin, Karchevski (1995), Chakib, Nachaoui (2006), Ben Abda, Henry, Jday (2009)
  • 4. Variational Formulation via Holmgren's Theorem: Ben Belgacem, El Fekih, Azaïez (2005), Andrieux, Baranger, Ben Abda (2006)


5 The Variational Formulation (1)

Duplicate u into u_D(λ, g) and u_N(λ, ϕ):

  − div(a∇u_D) + bu_D = f   in Ω,
  u_D(λ, g) = g             on Γ_C,
  u_D(λ, g) = λ             on Γ_I,

and

  − div(a∇u_N) + bu_N = f   in Ω,
  a ∂_n u_N(λ, ϕ) = ϕ       on Γ_C,
  u_N(λ, ϕ) = λ             on Γ_I.

Cauchy problem (⇒) Steklov-Poincaré problem. By Holmgren's theorem, the right λ ∈ H^{1/2}(Γ_I) is the one for which

  a ∂_n u_D(λ, g) = a ∂_n u_N(λ, ϕ)   on Γ_I.

Then (again by Holmgren) u_D(λ, g) = u_N(λ, ϕ) = u in Ω.


6 The Variational Formulation (2)

Find λ ∈ H^{1/2}(Γ_I) such that

  ∫_Ω [(a∇u_D(λ)·∇u_D(µ) + b u_D(λ)u_D(µ)) − (a∇u_N(λ)·∇u_N(µ) + b u_N(λ)u_N(µ))] dx
    = − ∫_Ω (a∇ŭ_D(g)·∇u_D(µ) + b ŭ_D(g)u_D(µ)) dx − ∫_{Γ_C} ϕ u_N(µ) dγ,   ∀µ

(⇔) Find λ such that s(λ, µ) = (s_D(λ, µ) − s_N(λ, µ)) = ℓ(µ), ∀µ.

Steklov-Poincaré operator: find λ such that Sλ = (S_D − S_N)λ = ℓ, or, in the preconditioned form (S_D the preconditioner),

  Tλ = f.
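The reduced equation Sλ = (S_D − S_N)λ = ℓ can be mimicked in finite dimensions. A minimal sketch with assumed diagonal spectra (eigenvalues of S_D growing like k and eigenvalues of S decaying exponentially, as happens in a Fourier basis on an annulus; the numbers are illustrative, not from the talk):

```python
import numpy as np

# Finite-dimensional caricature of the Steklov-Poincare equation S lam = ell,
# S = S_D - S_N.  Assumed model spectra: S_D has eigenvalues ~ k, while the
# difference S = S_D - S_N has exponentially small eigenvalues -- this spectral
# decay is the discrete face of the ill-posedness.
k = np.arange(1, 21)
S_D = np.diag(k.astype(float))          # Dirichlet Steklov-Poincare part
S   = np.diag(k * np.exp(-2.0 * k))     # S = S_D - S_N: exponentially small
T   = np.linalg.solve(S_D, S)           # preconditioned operator T = S_D^{-1} S

print("cond(S) =", np.linalg.cond(S))   # huge: any discretization inherits it
print("first eigenvalues of T:", np.diag(T)[:5])
```

The preconditioned operator T has eigenvalues e^{-2k} in this toy: bounded, but accumulating at zero, which is why a regularization of Tλ = f is still required.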


7 Lavrentiev’s Regularization - Global Convergence

Find λ_ϱ such that

  ϱ s_D(λ_ϱ, µ) + s(λ_ϱ, µ) = ℓ(µ),   ∀µ.

If λ is an exact solution, then

  lim_{ϱ→0} ‖λ − λ_ϱ‖_{s_D} = 0,   and   ‖λ − λ_ϱ‖_s ≤ (ϱ/2) ‖λ‖_{s_D}.

Noisy data: (g_ε, ϕ_ε) = (g, ϕ) + (δg, δϕ), with size(δg, δϕ) = ε. Lavrentiev regularization problem for noisy data: find λ_ε such that

  ϱ s_D(λ_ε, µ) + s(λ_ε, µ) = ℓ_ε(µ).
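The global convergence of the bias can be watched on a minimal diagonal sketch (assumed toy spectra, not the talk's operators): solve (ϱ S_D + S)λ_ϱ = ℓ with exact data ℓ = S λ_exact and let ϱ → 0:

```python
import numpy as np

# Diagonal toy model of Lavrentiev regularization (assumed spectra):
# S_D ~ diag(k) and S ~ diag(k e^{-k}), so each mode solves
#   (rho*d_k + s_k) lam_k = s_k lam_exact_k.
k = np.arange(1, 31)
d = k.astype(float)                 # eigenvalues of S_D
s = k * np.exp(-k)                  # eigenvalues of S (decay: ill-posed)
lam_exact = 1.0 / k**2              # a smooth exact trace on Gamma_I
ell = s * lam_exact                 # exact right-hand side

def lavrentiev(rho):
    return ell / (rho * d + s)      # diagonal solve of (rho*S_D + S) lam = ell

errors = [np.linalg.norm(lavrentiev(rho) - lam_exact)
          for rho in (1e-2, 1e-4, 1e-6)]
print(errors)                       # decreasing: the bias vanishes as rho -> 0
```

Each modal error is lam_exact_k · ϱ d_k / (ϱ d_k + s_k), which decreases monotonically to zero with ϱ, mirroring the theorem for exact data.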


8 Extended-Domain Lavrentiev Regularization

Find λ♭ (∈ H^{1/2}(Γ♭_I)) such that

  s♭(λ♭, µ) = ℓ♭(µ),   ∀µ ∈ H^{1/2}(Γ♭_I).

Extended-Domain Lavrentiev Regularization: find λ♭_{ε,ϱ} such that

  ϱ s♭_D(λ♭_{ε,ϱ}, µ) + s♭(λ♭_{ε,ϱ}, µ) = ℓ♭_ε(µ),   ∀µ.

Retrieve the solution on the real domain:

  u_{ε,ϱ} = u♭_N(λ♭_{ε,ϱ}, ϕ_ε)|_Ω (∈ H¹(Ω)),   λ_{ε,ϱ} = u♭_N(λ♭_{ε,ϱ}, ϕ_ε)|_{Γ_I}.

9 Discrepancy Principle of Morozov

The Kohn-Vogelius functional on Ω♭:

  KV♭_ε(λ♭) = |u♭_D(λ♭, g_ε) − u♭_N(λ♭, ϕ_ε)|²_{H¹(Ω♭)} ≈ ε².

We have that

  KV♭_ε(λ♭) ≈ KV_ε(λ♭) = |u♭_D(λ♭, g_ε) − u♭_N(λ♭, ϕ_ε)|²_{H¹(Ω)} ≈ ε².

The Discrepancy Principle of Morozov: fix σ > 1 and find ϱ = ϱ(ε) verifying

  KV_ε(λ♭_{ε,ϱ}) = σε².
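In practice the discrepancy equation is solved by a one-dimensional search in ϱ. A self-contained sketch on a diagonal toy problem (assumed spectra and noise model, not the talk's operators), with the residual norm used as a stand-in for the Kohn-Vogelius functional:

```python
import numpy as np

# Morozov's discrepancy principle by geometric bisection on a diagonal toy.
# Assumed model: S_D ~ diag(k), S ~ diag(k e^{-k}), noisy data of size eps.
rng = np.random.default_rng(0)
k = np.arange(1, 31)
d, s = k.astype(float), k * np.exp(-k)      # eigenvalues of S_D and of S
lam_exact = 1.0 / k**2
eps = 1e-3
ell_eps = s * lam_exact + eps * rng.standard_normal(k.size) / np.sqrt(k.size)

def residual(rho):
    lam = ell_eps / (rho * d + s)           # Lavrentiev solve, noisy data
    return np.linalg.norm(s * lam - ell_eps)

sigma = 2.0
lo, hi = 1e-12, 1e2                         # residual(rho) increases with rho
for _ in range(200):                        # geometric bisection
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if residual(mid) < sigma * eps else (lo, mid)
rho_dp = np.sqrt(lo * hi)
print("discrepancy-principle rho:", rho_dp)
```

Bisection applies because the residual is continuous and monotonically increasing in ϱ, from nearly 0 to the full data norm, so it crosses σε exactly once.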


10 Variance? Bias?

  • Thm. 1 (Variance). There holds (ε = noise size)

      ‖λ_{ε,ϱ} − λ_ϱ‖_{H^{1/2}(Γ_I)} ≤ C ε ϱ^{−1/(2(1+2β))}.

  • Thm. 2 (Bias). If the Cauchy data are exact (noise free), then

      lim_{ϱ→0} ‖λ_ϱ − λ‖_{H^{1/2}(Γ_I)} = 0.

  • Rem. 1. The Lavrentiev regularization method converges if

      lim_{ε→0} ε ϱ^{−1/2} = 0.

    The Extended-Domain Lavrentiev method is more resistant to noise.

  • Rem. 2. The deduced analytical results hold only for particular geometries (circles, annuli, rectangles, etc.).


11 Harmonic Extension and the General Source Condition

The General Source Condition (GSC) for the problem (Tλ = f):

  λ ∈ R(T^p) ⇔ λ = T^p χ.

This condition is widely used in the analysis of regularization methods. Controversial! Rejected by some mathematicians (M. Klibanov, ...). A concrete meaning of it, for the Cauchy problem, is provided in

  • Thm. 3. Recall that λ = u|_{Γ_I}, where u is the solution of the Cauchy problem. Then

      λ ∈ R(T^p) ⇔ ∃ u♭ ∈ H¹(Ω♭), harmonic, such that u = (u♭)|_Ω.


12 Convergence Results

  • Thm. 4 (A-priori convergence). Assume that λ satisfies (GSC). The choice ϱ = ε^{2(1+2β)/(1+2p)} yields

      ‖λ_{ε,ϱ} − λ‖_{H^{1/2}(Γ_I)} ≤ C ε^{2p/(2p+1)}.

  • Thm. 5 (A-posteriori convergence). Assume that λ satisfies (GSC). The choice ϱ = ϱ(ε) from the Discrepancy Principle of Morozov provides

      ‖λ_{ε,ϱ} − λ‖_{H^{1/2}(Γ_I)} ≤ C ε^{2p/(2p+1)}.
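The a-priori rate can be checked numerically on model error terms. A minimal sketch assuming bias ~ ϱ^p and variance ~ ε ϱ^{−1/2} (Thm. 1 with β set to 0 for simplicity); the shapes and the value of p are illustrative assumptions:

```python
import numpy as np

# Model total error for Lavrentiev under the source condition (assumed shapes):
#   error(rho) ~ rho^p (bias) + eps * rho^{-1/2} (variance).
# The a-priori choice rho = eps^{2/(2p+1)} balances the two terms, and the
# resulting error should scale like eps^{2p/(2p+1)}.
p = 0.3

def total_error(eps):
    rho = eps ** (2.0 / (2 * p + 1))      # a-priori parameter choice (beta=0)
    return rho**p + eps * rho**(-0.5)     # bias + variance (model)

eps = np.array([1e-2, 1e-4, 1e-6])
err = np.array([total_error(e) for e in eps])
slope = np.polyfit(np.log(eps), np.log(err), 1)[0]
print("observed rate:", slope, " predicted:", 2 * p / (2 * p + 1))
```

With this parameter choice both terms equal ε^{2p/(2p+1)}, so the fitted log-log slope reproduces the rate of Thm. 4 exactly.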


13 A Carleman Inequality

Given θ ∈ C²(Ω̄) satisfying

  |∇θ(x)| > 0, ∀x ∈ Ω̄;   θ(x) > 0, ∀x ∈ Ω̄ \ Γ_I;   θ(x) = 0, ∀x ∈ Γ_I,

define the weight function ψ(x) = e^{θ(x)}, x ∈ Ω̄, and the subdomains

  Ω_τ = {x ∈ Ω, ψ(x) ≥ 1 + τ}.

[Figure: the domain Ω with boundaries Γ_I and Γ_C and the subdomain Ω_τ.]

Carleman's estimate with boundary condition: for all v ∈ H²(Ω),

  ∫_Ω [a(∇v)² + ζ²bv²] e^{2ζψ} dx ≤ C ( (1/ζ) ∫_Ω [− div(a∇v) + bv]² e^{2ζψ} dx + ∫_Γ [(a ∂_n v)² + ζ²v²] e^{2ζψ} dγ ).

14 Local Bias Estimate?

  • Thm. 6. Let β > 0 be a small parameter. There exist q = q(τ) ∈ [0, 1/2[ and C = C(τ) such that

      ‖u_N(λ_ϱ, ϕ) − u‖_{H¹(Ω_τ)} ≤ C ϱ^q ‖λ‖_{s_D}.

Proof (sketch). Set w_ϱ = u_N(λ_ϱ, ϕ) − u and consider two small parameters (τ, η) with β > τ > η > 0. A cut-off function ξ = ξ_{τ,η} is chosen such that

  0 ≤ ξ_{τ,η}(x) ≤ 1  ∀x ∈ Ω,   ξ_{τ,η}(x) = 1  ∀x ∈ Ω_τ,   ξ_{τ,η}(x) = 0  ∀x ∈ Ω \ Ω_η.


15 Proof (cont.)

The Carleman inequality is applied to v := w_ϱ ξ:

  ∫_{Ω_β} [a(∇(w_ϱξ))² + ζ²b(w_ϱξ)²] e^{2ζψ} dx
    ≤ C ( (1/ζ) ∫ [− div(a∇(w_ϱξ)) + b w_ϱξ]² e^{2ζψ} dx + ∫_Γ [(a ∂_n(w_ϱξ))² + ζ²(w_ϱξ)²] e^{2ζψ} dγ ),

which yields, after some calculations and simplifications,

  ∫_{Ω_β} [a(∇w_ϱ)² + ζ²b w_ϱ²] dx ≤ C ( (e^{2ζ(τ−β)}/ζ) ∫_{Ω_η\Ω_τ} [a(∇w_ϱ)² + b w_ϱ²] dx + ζ² e^{2ζ(σ−β)} ∫_{Γ_C} w_ϱ² dγ ),

where σ = max{ψ(x) − 1, x ∈ Γ_C}.


16 Proof (cont.)

An important inequality,

  ‖w_{N,ϱ}‖_{H^{1/2}(Γ_C)} + ‖a ∂_n w_{D,ϱ}‖_{H^{−1/2}(Γ_C)} ≤ C √ϱ ‖λ‖_{s_D},

leads to

  ‖w_ϱ‖_{H¹(Ω_β)} ≤ C ( ρ² ‖λ_ϱ − λ‖²_{s_D} + (ϱ/ρ^{2s}) ‖λ‖²_{s_D} )^{1/2},

where s = (σ−β)/(β−τ) and ρ = (1/√ϱ) e^{−ζ(β−τ)} tends to zero as ζ grows large. Balancing the two terms (taking ζ so that ρ² ≈ ϱ^{1/(1+s)}) yields

  ‖w_ϱ‖_{H¹(Ω_β)} ≤ C ϱ^{1/(2(1+s))} ‖λ‖_{s_D}.

The proof is complete with q = 1/(2(1+s)).


17 Some remarks

  • No use of the General Source Condition on the restricted area Ω_β.
  • Super-convergence result for the bias: under a smoothness assumption on λ we may have ‖λ_ϱ − λ‖_{s_D} ≤ C ϱ^p for some p ∈ [0, 1/2[, and an interpolation inequality gives

      ‖w_ϱ‖_{H¹(Ω_β)} ≤ C ϱ^{(1−µ)p+q} = C ϱ^{(1−µ)p+µ/2},   µ = 1/(1+s).

  • β → 0 ⇒ µ → 0.
  • β grows, then Ω_β reduces to a thin band concentrated around Γ_C ⇒ µ → 1.

18 Local Variance Estimate?

Noisy discrete data: (g_{ε,h}, ϕ_{ε,h}) = (g_h, ϕ_h) + ((δg)_h, (δϕ)_h), with size((δg)_h, (δϕ)_h) = ε. Lavrentiev regularization problem: find λ_ε such that

  ϱ s_D(λ_ε, µ) + s(λ_ε, µ) = ℓ_ε(µ).

  • Thm. 7 (Variance). Let β > 0 be small. There exist q = q(β) ∈ [0, 1/2[ and C > 0 such that

      ‖u_N(λ_ε, ϕ_ε) − u_N(λ_ϱ, ϕ)‖ ≤ C ε ϱ^{−1/2+q}.

Proof (sketch), inspired by the local bias estimate. Set

  (w_{N,ε}, w_{D,ε}) = (u_N(λ_ϱ, ϕ) − u_N(λ_ε, ϕ_ε), u_D(λ_ϱ, g) − u_D(λ_ε, g_ε)),

so that ‖w_{N,ε}‖_{H^{1/2}(Γ_C)} + ‖a ∂_n w_{D,ε}‖_{H^{−1/2}(Γ_C)} ≤ C ε. We have

  ‖w_ε‖_{H¹(Ω_β)} ≤ C ε ( ρ²/ϱ + 1/ρ^{2s} )^{1/2} ≤ C ε ϱ^{−s/(2(1+s))}.

The suitable choice of ρ (via ζ) completes the proof with q = 1/(2(1+s)).


19 Local Estimate?

  • Thm. 8. Let β > 0 be small. There exist q = q(β) ∈ [0, 1/2[ and C > 0 such that

      ‖u_N(λ_ε, ϕ_ε) − u‖ ≤ C ϱ^q ( ‖λ‖_{s_D} + ε ϱ^{−1/2} ).

Remark. Selecting the parameter ϱ = ϱ(ε) so as to obtain the best bound is the key issue for an effective use of the Lavrentiev strategy. A wide literature is dedicated to a-posteriori rules, such as the discrepancy principle or the balancing principle, which are a reliable means of realizing the trade-off between convergence rate and stability that an efficient computed solution requires. The discrepancy principle is the criterion adopted in our computations.
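The trade-off in Thm. 8 can be made concrete. A sketch, assuming the bound is minimized by equating its increasing and decreasing parts in ϱ:

```latex
\|u_N(\lambda_\epsilon,\varphi_\epsilon)-u\|
  \le C\bigl(\varrho^{\,q}\,\|\lambda\|_{s_D}+\epsilon\,\varrho^{\,q-1/2}\bigr),
  \qquad q<\tfrac12 .
% the first term increases in rho, the second decreases; equating them:
\varrho^{\,q}\,\|\lambda\|_{s_D}=\epsilon\,\varrho^{\,q-1/2}
\;\Longrightarrow\;
\varrho(\epsilon)=\Bigl(\frac{\epsilon}{\|\lambda\|_{s_D}}\Bigr)^{2},
\qquad
\text{giving the bound } C\,\epsilon^{2q}\,\|\lambda\|_{s_D}^{\,1-2q}.
```

This candidate a-priori choice ϱ ~ ε² requires the unknown ‖λ‖_{s_D}, which is exactly why an a-posteriori rule such as the discrepancy principle is preferred in computations.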


20 An Analytical Example

  • We take a = 1, b = 0, and Ω an annulus with radii r_C < r_T < r_I.
  • A Fourier computation yields the bias estimate

      ‖u_N(λ_ϱ, ϕ) − u‖_{H¹(Ω_T)} ≤ C ϱ^{q′} ‖λ‖_{s_D},   with q′ = 1/(2(1+s′)),   s′ = (log(r_T) − log(r_C)) / (log(r_I) − log(r_T)).

  • The choice ψ(x) = 1 + log(r_I/r), r = |x|, β = log(r_I/r_T), and the use of the Carleman estimate yield the value (here ν(r) = log(r_I/r) = ψ − 1)

      s = (σ − β)/(β − τ) = (ν(r_C) − ν(r_T)) / (ν(r_T) − ν(r_τ)) = (log(r_T) − log(r_C)) / (log(r_τ) − log(r_T)).

    As τ → 0, r_τ ≈ r_I, which means that s tends towards s′.

  • Similar observations are readily made on the variance. In this case, explicit Fourier computations lead to

      ‖u_N(λ_ϱ, ϕ) − u_N(λ_ε, ϕ_ε)‖_{H¹(Ω_β)} ≤ C ε ϱ^{−1/2+q′},

    with q turning out to be close to q′ as τ → 0.


21 Numerical Discussion


22 About the Bias

  u(x, y) = x³ − 3xy² + (x − 0.85) / ((x − 0.85)² + (y − 0.85)²)

[Figure: H¹-error versus ϱ in log-log scale (10⁻⁷ ≤ ϱ ≤ 10⁻¹), curves for R = 0.60, 0.75, 0.80, 1.00.]

[Figure: the annulus Ω with radii r_C, r_T, r_I and the subdomain Ω_T.]

  r_T                        0.6    0.75   0.80   1.00
  Theoretical rates (Ω_T)    0.40   0.28   0.25   0.13
  Computational rates (Ω_T)  0.41   0.32   0.27   0.12
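The "theoretical rates" row can be reproduced from q′ = 1/(2(1+s′)). The radii r_C = 0.5 and r_I = 1.28 are not stated on this slide; they are assumptions, chosen here because they reproduce the reported values:

```python
import numpy as np

# Reproduce the theoretical rates q' = 1/(2(1+s')) of the table, with
# s' = log(r_T/r_C) / log(r_I/r_T).  CAUTION: r_C and r_I below are assumed
# (they are consistent with the reported rates but not given on the slide).
r_C, r_I = 0.5, 1.28                       # assumed annulus radii
for r_T in (0.6, 0.75, 0.8, 1.0):
    s_prime = np.log(r_T / r_C) / np.log(r_I / r_T)
    q_prime = 1.0 / (2.0 * (1.0 + s_prime))
    print(f"r_T = {r_T}: q' = {q_prime:.2f}")
```

The computed values round to 0.40, 0.28, 0.25, 0.13, matching the theoretical row; the computational rates in the table agree with them to within roughly 0.04.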


23 A Complex Geometry

[Figure: a non-annular domain with four subdomains (a), (b), (c), (d); H¹-error versus ϱ in log-log scale, one curve per subdomain.]

  Ω_T                        (a)    (b)    (c)    (d)
  Computational rates (Ω_T)  0.43   0.29   0.21   0.13


24 About the Variance and the Discrepancy Principle

  u(x, y) = x³ − 3xy² + (1/5)(x − 0.025) / ((x − 0.25)² + (y − 0.09)²) + (1/5)(y − 0.02) / ((x + 0.08)² + (y − 0.02)²).

[Figure: the computational domain and its extension.]


25

[Figure: exact and computed potentials along the abscissa.]

  ε_∞                0.025              0.05              0.1
  ε′                 0.3196             0.6601            1.285
  ‖u − u_N‖_{∞,Ω}    0.313 (0.75×10⁻³)  0.325 (0.1×10⁻²)  0.357 (0.5×10⁻²)
  ‖u − u_N‖_{∞,Ω_T}  0.0655             0.0691            0.0859


26

[Figure: contour plots of the computed and exact solutions on the domain.]


27

THANK YOU FOR YOUR ATTENTION!!!


28 The main idea?

Cauchy data (g, ϕ) = (1 + 1% noise, 0). Exact solution u_ex = 1.

[Figure: contour plot of the computed solution on the real domain; values range from about 0.72 to 1.26.]


29 The main idea? Compute u in an Extended Domain (⇒) Retrieve u in the Real Domain.

[Figure: computed solutions for three extended domains; the contour values tighten around the exact value u = 1 as the extension grows.]

  0.8 ≤ r ≤ 1,   0.7 ≤ r ≤ 1,   0.5 ≤ r ≤ 1