The Sobolev gradient regularization strategy for optical tomography



SLIDE 1

The Sobolev gradient regularization strategy for optical tomography coupled with a finite element formulation of the radiative transfer equation

Fabien Dubot, Olivier Balima, Yann Favennec, Daniel Rousse

Université de Nantes – École de Technologie Supérieure (Québec)

– PICOF, Palaiseau, April 2-4, 2012 –

SLIDE 2

Optical Tomography

Main
- Penetration of light within tissues
- Dependent on optical properties: absorption and diffusion
- Needs: precise instrumentation, a large number of measurements, robust and reliable reconstruction algorithms

Features
- No need for an extra chemical agent: the contrast comes from variations of hemoglobin in cells
- Not harmful (compared with X-rays, nuclear imaging, etc.): low light intensity, no cumulative effect known so far
- No expensive technology: lasers, sensors, a computer

⊲ but still not fully operational!

SLIDE 3

Radiative Transfer Equation

Forms of the RTE in OT: transient, frequency and steady-state formulations; full RTE, or the diffuse approximation for highly scattering media.

Boltzmann equation [in frequency domain], transporting the light intensity (I is the radiant power per unit solid angle per unit area at spatial position r, in direction Ω, for test k):

$$\left(\frac{i\omega}{c} + \Omega \cdot \nabla + \kappa + \sigma\right) I(r, \Omega, \omega, k) = \sigma B(I)$$

The directional dependency of I makes this an integro-differential equation, with

$$B(I) := \frac{1}{4\pi} \int I(r, \Omega', \omega, k)\, \Phi(\Omega', \Omega)\, \mathrm{d}\Omega'$$

Scattering of light is modelled through the phase function Φ; reflection and refraction occur on the boundaries.

[figure: medium with source/detector configurations for tests k = 1 and k = 2]

SLIDE 4

Input and cost function definition

Physical model: find I(r, Ω, ω, k) satisfying

$$\left(\frac{i\omega}{c} + \Omega \cdot \nabla + \kappa + \sigma\right) I(r, \Omega, \omega, k) = \sigma B(I)$$

$$I(r, \Omega, \omega, k) = q_0(\omega)\, \mathbf{1}_{[r \in \partial D_0(k)]}\, \mathbf{1}_{[\Omega \cdot n < 0]}\, \delta(\Omega - \Omega_0)$$

Prediction:

$$P(I) = \int_{\partial D_d \times \{\Omega \cdot n > 0\}} I(r, \Omega, \omega, k)\, \Omega \cdot n \, \mathrm{d}\Omega$$

Cost function:

$$j(\kappa, \sigma) := \mathcal{J}(I) = \frac{1}{2} \sum_{k=1}^{K} \left\| P(I) - M \right\|_Y^2$$

[figure: configurations for tests k = 1 and k = 2]

SLIDE 5

Forward modelling

State divided into two components, I = I_c + I_s, satisfying:

$$\left(\frac{i\omega}{c} + \Omega_0 \cdot \nabla + \kappa + \sigma\right) I_c(r, \omega, k) = 0$$

$$I_c(r, \omega, k) = q_0(\omega)\, \mathbf{1}_{[r \in \partial D_0(k)]}\, \mathbf{1}_{[\Omega_0 \cdot n < 0]}$$

$$\left(\frac{i\omega}{c} + \Omega \cdot \nabla + \kappa + \sigma\right) I_s(r, \Omega, \omega, k) = \sigma B\!\left(I_c\, \delta(\Omega' - \Omega_0) + I_s\right)$$

$$I_s(r, \Omega, \omega, k) = 0 \quad \text{on } \{\Omega \cdot n < 0\}$$

SLIDE 6

Cost function derivative

Cost function derivative (in the direction α′ ∈ U):

$$j'(\kappa, \sigma; \alpha') := \lim_{\varepsilon \to 0} \frac{j\big((\kappa, \sigma) + \varepsilon \alpha'\big) - j(\kappa, \sigma)}{\varepsilon}$$

$$j'(\kappa, \sigma; \alpha') = \sum_{s=1}^{N_s} \left\langle P'(I_s)(P - M),\; I_s'(r, \Omega, \omega; \alpha') \right\rangle_Y$$

Linearized system (for test k):

$$\left(\frac{i\omega}{c} + \Omega_0 \cdot \nabla + \kappa + \sigma\right) I_c'(r, \omega, k; \alpha') + (\kappa' + \sigma')\, I_c(r, \omega, k) = 0$$

$$I_c'(r, \omega, k; \alpha') = 0 \quad \text{on } \partial D_0(k), \ \Omega_0 \cdot n < 0$$

$$\left(\frac{i\omega}{c} + \Omega \cdot \nabla + \kappa + \sigma\right) I_s'(r, \Omega, \omega, k; \alpha') + (\kappa' + \sigma')\, I_s(r, \Omega, \omega, k) = \sigma' B\!\left(I_c\, \delta(\Omega' - \Omega_0) + I_s\right) + \sigma B\!\left(I_c'\, \delta(\Omega' - \Omega_0) + I_s'\right)$$

$$I_s'(r, \Omega, \omega, k; \alpha') = 0 \quad \text{on } \{\Omega \cdot n < 0\}$$
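The Gâteaux-derivative limit above can be sanity-checked numerically for any differentiable functional. A minimal sketch with a quadratic stand-in cost (the matrix A and data m below are hypothetical placeholders for the RTE-based prediction and measurements, not the model from the slides):

```python
import numpy as np

# Stand-in cost functional j(x) = 0.5 * ||A x - m||^2 (hypothetical A, m).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
m = rng.standard_normal(5)

def j(x):
    r = A @ x - m
    return 0.5 * r @ r

def j_prime(x, d):
    # Analytic directional derivative: <A^T (A x - m), d>
    return (A.T @ (A @ x - m)) @ d

x = rng.standard_normal(3)
d = rng.standard_normal(3)

# Finite-difference approximation of (j((x) + eps*d) - j(x)) / eps
eps = 1e-6
fd = (j(x + eps * d) - j(x)) / eps
assert abs(fd - j_prime(x, d)) < 1e-4
```

The same check applies verbatim to the adjoint-based gradient of the real cost: the finite-difference quotient must converge to the inner product of the gradient with the direction.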

SLIDE 7

Cost derivative and adjoint problem

Proposition. The directional derivative of the cost function in the neighborhood of (κ, σ), in the direction α′, when the state is a solution of the system (S) = ∪_k (S(k)), is given by:

$$j'(\kappa, \sigma; \alpha') = \sum_k \left\langle (\kappa' + \sigma') I_c,\, I_c^* \right\rangle_X + \left\langle (\kappa' + \sigma') I_s,\, I_s^* \right\rangle_X - \left\langle \sigma' B\!\left(I_c\, \delta(\Omega' - \Omega_0) + I_s\right),\, I_s^* \right\rangle_X$$

if I_s^* and I_c^* are additional (adjoint) variables, solutions of the system (S(k))^* gathering the four relationships:

$$\left(-\frac{i\omega}{c} - \Omega \cdot \nabla + \kappa + \sigma\right) I_s^*(r, \Omega, \omega, k) = \sigma B(I_s^*)$$

$$I_s^*(r, \Omega, \omega, k) = -\,\Omega \cdot n\, (P - M)\, \mathbf{1}_{[r \in \partial D_d]}\, \mathbf{1}_{[\Omega \cdot n > 0]}$$

$$\left(-\frac{i\omega}{c} - \Omega_0 \cdot \nabla + \kappa + \sigma\right) I_c^*(r, \Omega, \omega, k) = \sigma B\!\left(I_s^*\, \delta(\Omega' - \Omega_0)\right)$$

$$I_c^*(r, \Omega, \omega, k) = 0 \quad \text{on } \{\Omega \cdot n < 0\}$$

SLIDE 8

Cost derivative and "ordinary" gradient

Let γ be either κ or σ, and ∇j(κ, σ) = (∇_κ j, ∇_σ j), so that

$$j'(\kappa, \sigma; \gamma') = \left\langle \nabla j(\kappa, \sigma),\, \gamma' \right\rangle$$

If j(κ, σ) is defined on the L²(D) space, γ ∈ L²(D), there exists a unique ∇^{L²}_γ j(κ, σ) ∈ L²(D) such that

$$j'(\kappa, \sigma; \gamma') = \left\langle \nabla^{L^2}_\gamma j(\kappa, \sigma),\, \gamma' \right\rangle_{L^2(D)} \quad \text{for all } \gamma' \in L^2(D)$$

SLIDE 9

Noise propagation from measurements to the cost gradient

Recall the adjoint system:

$$\left(-\frac{i\omega}{c} - \Omega \cdot \nabla + \kappa + \sigma\right) I_s^*(r, \Omega, \omega, k) = \sigma B(I_s^*)$$

$$I_s^*(r, \Omega, \omega, k) = -\,\Omega \cdot n\, (P - M)\, \mathbf{1}_{[r \in \partial D_d]}\, \mathbf{1}_{[\Omega \cdot n > 0]}$$

$$\left(-\frac{i\omega}{c} - \Omega_0 \cdot \nabla + \kappa + \sigma\right) I_c^*(r, \Omega, \omega, k) = \sigma B\!\left(I_s^*\, \delta(\Omega' - \Omega_0)\right)$$

$$I_c^*(r, \Omega, \omega, k) = 0 \quad \text{on } \{\Omega \cdot n < 0\}$$

Noise propagation: the measurements M are noisy, so noise propagates through I_s^*, then through I_c^*, and finally into the cost gradient ∇_γ j(κ, σ).

SLIDE 10

Sobolev gradient

Introduction of the Sobolev space:

$$\langle z_1, z_2 \rangle_{H^1(D)} := \langle z_1, z_2 \rangle_{L^2(D)} + \langle \nabla z_1, \nabla z_2 \rangle_{L^2(D)} \quad \forall z_1, z_2 \in H^1(D)$$

If j(κ, σ) is defined on the H¹(D) space, γ ∈ H¹(D), there exists a unique ∇^{H¹}_γ j(κ, σ) ∈ H¹(D) such that

$$j'(\kappa, \sigma; \gamma') = \left\langle \nabla^{H^1}_\gamma j(\kappa, \sigma),\, \gamma' \right\rangle_{H^1(D)} \quad \text{for all } \gamma' \in H^1(D)$$

Used in:
- solving PDEs through optimization (e.g. Danaila et al., SIAM J. Sci. Comput., 2010; Raza et al., J. Comput. Phys., 2009)
- image segmentation (e.g. Renka, Nonlinear Analysis, 2009)
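On a discretized 1-D domain, the H¹ inner product above can be approximated directly. A small sketch (the trapezoidal quadrature and np.gradient derivative are illustrative discretization choices, not taken from the slides):

```python
import numpy as np

def inner_l2(z1, z2, dx):
    # Trapezoidal approximation of the L2(D) inner product on a uniform grid
    prod = z1 * z2
    return dx * (prod.sum() - 0.5 * (prod[0] + prod[-1]))

def inner_h1(z1, z2, dx):
    # <z1, z2>_H1(D) = <z1, z2>_L2(D) + <z1', z2'>_L2(D)
    return inner_l2(z1, z2, dx) + inner_l2(np.gradient(z1, dx),
                                           np.gradient(z2, dx), dx)

x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
z = np.sin(np.pi * x)
# Analytically, <z, z>_H1 = 1/2 + pi^2/2 for z = sin(pi x) on (0, 1)
assert abs(inner_h1(z, z, dx) - 0.5 * (1.0 + np.pi**2)) < 1e-2
```

The extra gradient term is what penalizes high-frequency components, which is exactly why the H¹ Riesz representative of j′ is smoother than the L² one.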

SLIDE 11

Weighted Sobolev gradient

Introduction of the space (e.g. Protas, J. Comput. Phys., 2004):

$$\langle z_1, z_2 \rangle_{H^1(\ell)(D)} := \langle z_1, z_2 \rangle_{L^2(D)} + \ell^2\, \langle \nabla z_1, \nabla z_2 \rangle_{L^2(D)} \quad \forall z_1, z_2 \in H^1(D)$$

Cost gradient extraction:

$$\left(1 - \ell^2 \Delta\right) \nabla^{H^1(\ell)}_\kappa j(\kappa, \sigma) = I_c I_c^* + I_s I_s^*$$

$$\left(1 - \ell^2 \Delta\right) \nabla^{H^1(\ell)}_\sigma j(\kappa, \sigma) = I_c I_c^* + I_s I_s^* - I_s^* \int \left( I_s(r, \Omega', \omega) + I_c(r, \omega)\, \delta(\Omega' - \Omega_0) \right) \Phi(\Omega', \Omega)\, \mathrm{d}\Omega'$$

Two steps:
1. Extraction of ∇^{L²(D)}_γ j(κ, σ)
2. Solve (1 − ℓ²Δ) ∇^{H¹(ℓ)}_γ j(κ, σ) = ∇^{L²(D)}_γ j(κ, σ)
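The two-step extraction can be illustrated in 1-D with finite differences: take a noisy stand-in for the L² gradient, then solve the (1 − ℓ²Δ) system for the weighted Sobolev gradient. The grid, the fixed-endpoint boundary treatment, and the noise model are all illustrative assumptions, not the setup of the slides:

```python
import numpy as np

n = 200
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(1)

# Step 1 (stand-in): a smooth L2 gradient polluted by measurement noise
g_l2 = np.sin(2.0 * np.pi * x) + 0.5 * rng.standard_normal(n)

def sobolev_gradient(g, ell2, dx):
    # Step 2: solve (1 - l^2 Laplacian) g_S = g_L2 with a 1-D
    # finite-difference Laplacian; endpoints are kept fixed (illustrative BC).
    n = g.size
    c = ell2 / dx**2
    main = np.full(n, 1.0 + 2.0 * c)
    main[0] = main[-1] = 1.0
    A = (np.diag(main)
         + np.diag(np.full(n - 1, -c), 1)
         + np.diag(np.full(n - 1, -c), -1))
    A[0, 1] = A[-1, -2] = 0.0
    return np.linalg.solve(A, g)

g_s = sobolev_gradient(g_l2, 1e-3, dx)

# The weighted Sobolev gradient is a low-pass filtered version of the L2 one
def roughness(g):
    return np.abs(np.diff(g, 2)).sum()

assert roughness(g_s) < roughness(g_l2)
# l = 0 recovers the plain L2 gradient
assert np.allclose(sobolev_gradient(g_l2, 0.0, dx), g_l2)
```

This makes the regularization mechanism of the slides concrete: ℓ² controls how strongly high-frequency noise inherited from the adjoint variables is damped before the optimizer sees the gradient.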

SLIDE 12

Numerical schemes

Handling of the integrals over solid angles → Discrete Ordinate Method:

$$\left(\Omega_m \cdot \nabla + \frac{i\omega}{c} + \kappa + \sigma\right) I_s^m(r, \omega) = S_c^m(r, \omega) + \sigma \sum_{m'=1}^{M} I_s^{m'}(r, \omega)\, \Phi\!\left(\Omega_{m'}, \Omega_m\right) w_{m'}$$

Space approximation for the states and adjoint states: Discontinuous Galerkin formulation
Solver: UMFPACK on an ndof × M system
Optimizer: L-BFGS coupled with an inexact line search
Environment: FreeFem++
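The discrete-ordinate replacement of the angular scattering integral can be sketched with a toy quadrature. The eight-direction set and equal weights below are illustrative only, not the ordinate set used in the slides:

```python
import numpy as np

# DOM idea: B(I)(Omega_m) = (1/4pi) * integral of I(Omega') Phi(Omega', Omega_m)
#           is replaced by (1/4pi) * sum over m' of w_m' I^m' Phi(Omega_m', Omega_m)
M = 8
# One direction per octant (illustrative), equal weights summing to 4*pi
dirs = np.array([(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1)
                 for sz in (-1, 1)], dtype=float) / np.sqrt(3.0)
w = np.full(M, 4.0 * np.pi / M)

def B_discrete(I, phi, m):
    # Quadrature approximation of the in-scattering term at ordinate m
    return sum(w[mp] * I[mp] * phi(dirs[mp], dirs[m])
               for mp in range(M)) / (4.0 * np.pi)

def phi_iso(om_p, om):
    return 1.0               # isotropic phase function

I = np.ones(M)               # angularly uniform intensity

# For Phi = 1 and I = 1 the quadrature is exact: B(I) = 1 at every ordinate
assert all(abs(B_discrete(I, phi_iso, m) - 1.0) < 1e-12 for m in range(M))
```

In the actual scheme each ordinate m yields one transport equation coupled to the others only through this sum, which is what produces the ndof × M system handed to UMFPACK.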

SLIDE 13

Parameter setting for inversion

Test
- synthetic data, noise added
- different settings for data building and for the reconstruction
- Tikhonov regularization

Parameters
- γ approximated with P1 finite elements
- re-scaling of the parameters

Stop criteria: relative stabilization of the cost function to 10⁻³, or 100 iterations

SLIDE 14

Numerical results with: ℓ = 0, ℓ² = 10⁻⁴, ℓ² = 10⁻³, ℓ² = 5 × 10⁻³

[figure: reconstructed κ and σ maps; colour scales 0.15 to 0.35 for κ and 1.00 to 3.00 for σ]

SLIDE 15

Numerical results with: ℓ = 0, ℓ² = 10⁻⁴, ℓ² = 10⁻³, ℓ² = 5 × 10⁻³

[figure: reconstructed κ and σ maps; colour scales 0.15 to 0.35 for κ and 1.00 to 3.00 for σ]

SLIDE 16

Numerical results with: ℓ = 0, ℓ² = 10⁻⁴, ℓ² = 10⁻³, ℓ² = 5 × 10⁻³

[figure: reconstructed κ and σ maps; colour scales 0.15 to 0.35 for κ and 1.00 to 3.00 for σ]

SLIDE 17

Numerical results with: ℓ = 0, ℓ² = 10⁻⁴, ℓ² = 10⁻³, ℓ² = 5 × 10⁻³

[figure: reconstructed κ and σ maps; colour scales 0.15 to 0.35 for κ and 1.00 to 3.00 for σ]

SLIDE 18

Comparison with reality

[figure: target maps κ₀ and σ₀; colour scales 0.15 to 0.35 for κ and 1.00 to 3.00 for σ]

Error metrics:

$$e_2 = \left( \frac{\int_D (\theta_i^r - \theta_i^o)^2\, \mathrm{d}x}{\int_D (\theta_i^o)^2\, \mathrm{d}x} \right)^{1/2}, \qquad e_3 = \left( \int_D \left| \nabla \theta_i^r - \nabla \theta_i^o \right|^2 \mathrm{d}x \right)^{1/2}$$

           ℓ² = 10⁻⁴   ℓ² = 10⁻³   ℓ² = 5 × 10⁻³
e2,κ       0.0825      0.0744      0.0621
e2,σ       0.0832      0.0745      0.0699
e3,κ       1.36255     1.16738     0.877141
e3,σ       11.3714     9.77341     8.36711
jf/j0      0.029       0.0377      0.0751