

  1. The Sobolev gradient regularization strategy for optical tomography coupled with a finite element formulation of the radiative transfer equation
     Fabien Dubot, Olivier Balima, Yann Favennec, Daniel Rousse
     Université de Nantes – École de Technologie Supérieure (Québec)
     PICOF, Palaiseau, April 2-4, 2012

  2. Optical Tomography
     Main principle: penetration of light within tissues, dependent on the optical properties: absorption and diffusion (scattering).
     Needs: precise instrumentation; a large number of measurements; robust and reliable reconstruction algorithms.
     Features:
     - No extra chemical agent needed: the contrast comes from variations of hemoglobin in cells.
     - Not harmful (compared with X-ray, nuclear imaging, etc.): low light intensity, no cumulative effect known so far.
     - No expensive technology: lasers, sensors, computer.
     ⊲ But still not fully operational!

  3. Radiative Transfer Equation
     Forms of the RTE in OT: transient, frequency-domain, and steady-state formulations; full RTE, or the diffuse approximation for highly scattering media.
     Boltzmann equation in the frequency domain (transport of the light intensity: I is the radiant power per unit solid angle per unit area at spatial position r in direction Ω, for the test k):

$$\left( \frac{i\omega}{c} + \vec{\Omega}\cdot\nabla + \kappa + \sigma \right) I(\vec{r}, \vec{\Omega}, \omega, k) = \sigma B(I)$$

     with the scattering operator

$$B(I) := \frac{1}{4\pi} \int_{4\pi} I(\vec{r}, \vec{\Omega}', \omega, k)\, \Phi(\vec{\Omega}', \vec{\Omega})\, d\Omega'$$

     The directional dependency of I makes this an integro-differential equation; scattering of light enters through the phase function Φ, and reflection and refraction occur on the boundaries.
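The scattering operator B(I) above can be sketched numerically: with a quadrature rule on the unit sphere, B reduces to a weighted sum over directions. A minimal numpy sketch, assuming an isotropic phase function Φ ≡ 1 by default; the function names and the quadrature choice are mine, not the authors':

```python
import numpy as np

def sphere_quadrature(n_mu=8, n_phi=16):
    """Product quadrature on the unit sphere: Gauss-Legendre in mu = cos(theta),
    equally weighted points in azimuth phi (exact for the periodic direction).
    Returns flattened (mu, phi, weight) arrays with sum(weights) ~ 4*pi."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
    phi = np.arange(n_phi) * 2.0 * np.pi / n_phi
    Mu, Phi = np.meshgrid(mu, phi, indexing="ij")
    W = np.outer(w_mu, np.full(n_phi, 2.0 * np.pi / n_phi))
    return Mu.ravel(), Phi.ravel(), W.ravel()

def B(I_vals, w, phase=None):
    """Discrete scattering operator B(I) = (1/4pi) sum_m w_m Phi_m I_m;
    isotropic phase function Phi = 1 by default."""
    phase = np.ones_like(I_vals) if phase is None else phase
    return float((w * phase * I_vals).sum() / (4.0 * np.pi))
```

For a constant intensity and Φ ≡ 1, B returns that constant, since the quadrature weights sum to 4π.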

  4. Input and cost function definition
     Physical model: find I(r, Ω, ω, k) satisfying

$$\left( \frac{i\omega}{c} + \vec{\Omega}\cdot\nabla + \kappa + \sigma \right) I(\vec{r}, \vec{\Omega}, \omega, k) = \sigma B(I)$$
$$I(\vec{r}, \vec{\Omega}, \omega, k) = q_0(\omega)\, \mathbb{1}[\vec{r} \in \partial D_0(k)]\, \mathbb{1}[\vec{\Omega}\cdot\vec{n} < 0]\, \delta(\vec{\Omega} - \vec{\Omega}_0)$$

     Prediction (exitance on the detector boundary ∂D_d):

$$P(I) = \int_{\partial D_d \times \{\vec{\Omega}\cdot\vec{n} > 0\}} I(\vec{r}, \vec{\Omega}, \omega, k)\, \vec{\Omega}\cdot\vec{n}\, d\Omega$$

     Cost function:

$$j(\kappa, \sigma) := J(I) = \frac{1}{2} \sum_{k=1}^{K} \left\| P(I) - M \right\|^2_Y$$

  5. Forward modelling
     The state is divided into two components, I = I_c + I_s, satisfying:

$$\left( \frac{i\omega}{c} + \vec{\Omega}_0\cdot\nabla + \kappa + \sigma \right) I_c(\vec{r}, \omega, k) = 0, \qquad I_c(\vec{r}, \omega, k) = q_0(\omega)\, \mathbb{1}[\vec{r} \in \partial D_0(k)]\, \mathbb{1}[\vec{\Omega}_0\cdot\vec{n} < 0]$$

$$\left( \frac{i\omega}{c} + \vec{\Omega}\cdot\nabla + \kappa + \sigma \right) I_s(\vec{r}, \vec{\Omega}, \omega, k) = \sigma B\!\left( I_c\, \delta(\vec{\Omega}' - \vec{\Omega}_0) + I_s \right), \qquad I_s(\vec{r}, \vec{\Omega}, \omega, k) = 0 \;\;\text{for}\;\; \vec{\Omega}\cdot\vec{n} < 0$$

  6. Cost function derivative
     Directional derivative of the cost function (in the direction α' ∈ U):

$$j'(\kappa, \sigma; \alpha') := \lim_{\varepsilon \to 0} \frac{j\big((\kappa, \sigma) + \varepsilon\alpha'\big) - j(\kappa, \sigma)}{\varepsilon}$$

$$j'(\kappa, \sigma; \alpha') = \sum_{s=1}^{N_s} \Re \left\langle P'(I_s)(P - M),\, I_s'(\vec{r}, \vec{\Omega}, \omega; \alpha') \right\rangle_Y$$

     Linearized system (for the test k):

$$\left( \frac{i\omega}{c} + \vec{\Omega}_0\cdot\nabla + \kappa + \sigma \right) I_c'(\vec{r}, \omega, k; \alpha') + (\kappa' + \sigma')\, I_c(\vec{r}, \omega, k) = 0, \qquad I_c'(\vec{r}, \omega, k; \alpha') = 0 \;\;\text{on}\;\; \partial D_0(k),\ \vec{\Omega}_0\cdot\vec{n} < 0$$

$$\left( \frac{i\omega}{c} + \vec{\Omega}\cdot\nabla + \kappa + \sigma \right) I_s'(\vec{r}, \vec{\Omega}, \omega, k; \alpha') + (\kappa' + \sigma')\, I_s(\vec{r}, \vec{\Omega}, \omega, k) = \sigma' B\!\left( I_c\, \delta(\vec{\Omega}' - \vec{\Omega}_0) + I_s \right) + \sigma B\!\left( I_c'\, \delta(\vec{\Omega}' - \vec{\Omega}_0) + I_s' \right), \qquad I_s'(\vec{r}, \vec{\Omega}, \omega, k; \alpha') = 0 \;\;\text{for}\;\; \vec{\Omega}\cdot\vec{n} < 0$$
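The limit definition of j' can be sanity-checked on a toy problem: replace the OT cost by a small least-squares functional and compare the analytic directional derivative with the finite-difference quotient. The operator A and data m below are hypothetical stand-ins, not the authors' model:

```python
import numpy as np

# Toy least-squares cost j(gamma) = 0.5*||A gamma - m||^2, standing in
# for the OT cost functional (A and m are hypothetical).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
m = rng.standard_normal(20)

def j(gamma):
    r = A @ gamma - m
    return 0.5 * (r @ r)

def j_prime(gamma, alpha):
    """Analytic directional derivative <A^T (A gamma - m), alpha>."""
    return (A.T @ (A @ gamma - m)) @ alpha

# The limit definition, approximated with a small epsilon:
gamma = rng.standard_normal(10)
alpha = rng.standard_normal(10)
eps = 1e-6
fd = (j(gamma + eps * alpha) - j(gamma)) / eps
```

For a quadratic cost the finite-difference error is exactly (eps/2)·||Aα||², so the two values agree to a few decimals for small eps.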

  7. Cost derivative and adjoint problem
     Proposition. The directional derivative of the cost function in the neighborhood of (κ, σ) in the direction α', when the state is solution of the system (S) = ∪_k (S(k)), is given by:

$$j'(\kappa, \sigma; \alpha') = \sum_k \left[ \left\langle (\kappa' + \sigma')\, I_c,\, I_c^* \right\rangle_X + \left\langle (\kappa' + \sigma')\, I_s,\, I_s^* \right\rangle_X - \left\langle \sigma' B\!\left( I_c\, \delta(\vec{\Omega}' - \vec{\Omega}_0) + I_s \right),\, I_s^* \right\rangle_X \right]$$

     if I_c^* and I_s^* are additional (adjoint) variables, solutions of the system (S(k))^* gathering the four relationships:

$$\left( -\frac{i\omega}{c} - \vec{\Omega}\cdot\nabla + \kappa + \sigma \right) I_s^*(\vec{r}, \vec{\Omega}, \omega, k) = \sigma B(I_s^*)$$
$$I_s^*(\vec{r}, \vec{\Omega}, \omega, k) = -\vec{\Omega}\cdot\vec{n}\, (P - M)\, \mathbb{1}[\vec{r} \in \partial D_d]\, \mathbb{1}[\vec{\Omega}\cdot\vec{n} > 0]$$
$$\left( -\frac{i\omega}{c} - \vec{\Omega}_0\cdot\nabla + \kappa + \sigma \right) I_c^*(\vec{r}, \vec{\Omega}, \omega, k) = \sigma B\!\left( I_s^*\, \delta(\vec{\Omega}' - \vec{\Omega}_0) \right)$$
$$I_c^*(\vec{r}, \vec{\Omega}, \omega, k) = 0 \;\;\text{for}\;\; \vec{\Omega}\cdot\vec{n} < 0$$
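The adjoint trick of this slide (one extra adjoint solve yields the derivative in every direction at once) can be illustrated on a small discrete analogue. The state equation A(γ)u = b below is a hypothetical stand-in for the RTE, not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
# Hypothetical discrete state equation A(gamma) u = b, A = L + diag(gamma):
L = np.eye(n) * 4 - np.eye(n, k=1) - np.eye(n, k=-1)   # fixed transport-like part
b = rng.standard_normal(n)
m = rng.standard_normal(n)                              # "measurements"

def solve_state(gamma):
    return np.linalg.solve(L + np.diag(gamma), b)

def cost(gamma):
    u = solve_state(gamma)
    return 0.5 * np.sum((u - m) ** 2)

def gradient_adjoint(gamma):
    """One forward + one adjoint solve gives the full gradient:
    A^T lam = -(u - m); dj/dgamma_i = lam_i * u_i
    (since dA/dgamma_i = e_i e_i^T for a diagonal parameter)."""
    A = L + np.diag(gamma)
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, -(u - m))
    return lam * u
```

The gradient costs two linear solves regardless of the number of parameters, which is why the adjoint formulation is preferred over differentiating through one forward solve per parameter.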

  8. Cost derivative and “ordinary” gradient
     Let γ be either κ or σ, and ∇j(κ, σ) = (∇_κ j, ∇_σ j), so that j'(κ, σ; γ') = ⟨∇j(κ, σ), γ'⟩.
     If j(κ, σ) is defined on the L²(D) space, γ ∈ L²(D), there exists a unique ∇_γ^{L²} j(κ, σ) ∈ L²(D) such that

$$j'(\kappa, \sigma; \gamma') = \left\langle \nabla_\gamma^{L^2} j(\kappa, \sigma),\, \gamma' \right\rangle_{L^2(D)} \quad \text{for all } \gamma' \in L^2(D)$$
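On a discrete grid the L²(D) inner product carries quadrature weights, so the Riesz representer ∇_γ^{L²} j is the raw derivative vector rescaled by those weights. A minimal sketch with trapezoid weights (the derivative vector d is hypothetical):

```python
import numpy as np

# Uniform grid on D = [0, 1] with trapezoid quadrature weights W:
# the discrete L2 inner product is <u, v>_{L2} ~ sum_i W_i u_i v_i.
n = 50
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
W = np.full(n, h)
W[0] = W[-1] = h / 2

# Hypothetical vector of partial derivatives d, so that the directional
# derivative is j'(gamma; gamma') = d . gamma' (plain dot product).
d = np.sin(2.0 * np.pi * x)

# Riesz representer in the weighted inner product: <g, gamma'>_{L2} = d . gamma'
# forces W_i g_i = d_i, i.e. the raw derivative must be rescaled by 1/W.
g_L2 = d / W
```

The rescaling matters in FE codes: forgetting the mass-matrix (here, weight) factor yields a mesh-dependent "gradient".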

  9. Noise propagation from measurements to the cost gradient
     Recall the adjoint system:

$$\left( -\frac{i\omega}{c} - \vec{\Omega}\cdot\nabla + \kappa + \sigma \right) I_s^* = \sigma B(I_s^*), \qquad I_s^* = -\vec{\Omega}\cdot\vec{n}\,(P - M)\, \mathbb{1}[\vec{r} \in \partial D_d]\, \mathbb{1}[\vec{\Omega}\cdot\vec{n} > 0]$$
$$\left( -\frac{i\omega}{c} - \vec{\Omega}_0\cdot\nabla + \kappa + \sigma \right) I_c^* = \sigma B\!\left( I_s^*\, \delta(\vec{\Omega}' - \vec{\Omega}_0) \right), \qquad I_c^* = 0 \;\;\text{for}\;\; \vec{\Omega}\cdot\vec{n} < 0$$

     Noise propagation: the measurements M are noisy
     → the noise propagates through I_s^* (driven by P − M on the detector boundary)
     → then through I_c^*
     → and finally into ∇_γ j(κ, σ).

  10. Sobolev gradient
     Introduce the Sobolev space H¹(D) with inner product

$$\langle z_1, z_2 \rangle_{H^1(D)} := \langle z_1, z_2 \rangle_{L^2(D)} + \langle \nabla z_1, \nabla z_2 \rangle_{L^2(D)} \quad \forall z_1, z_2 \in H^1(D)$$

     If j(κ, σ) is defined on the H¹(D) space, γ ∈ H¹(D), there exists a unique ∇_γ^{H¹} j(κ, σ) ∈ H¹(D) such that

$$j'(\kappa, \sigma; \gamma') = \left\langle \nabla_\gamma^{H^1} j(\kappa, \sigma),\, \gamma' \right\rangle_{H^1(D)} \quad \text{for all } \gamma' \in H^1(D)$$

     Used in: solving PDEs through optimization (e.g. Danaila et al., SIAM J. Sci. Comput., 2010; Raza et al., J. Comput. Phys., 2009); image segmentation (e.g. Renka, Nonlinear Analysis, 2009).

  11. Weighted Sobolev gradient
     Introduce the space with inner product

$$\langle z_1, z_2 \rangle_{H^{1(\ell)}(D)} := \langle z_1, z_2 \rangle_{L^2(D)} + \ell^2 \langle \nabla z_1, \nabla z_2 \rangle_{L^2(D)} \quad \forall z_1, z_2 \in H^1(D)$$

     (e.g. Protas, J. Comput. Phys., 2004). Cost gradient extraction:

$$\left( 1 - \ell^2 \Delta \right) \nabla_\kappa^{H^{1(\ell)}} j(\kappa, \sigma) = I_c I_c^* + I_s I_s^*$$
$$\left( 1 - \ell^2 \Delta \right) \nabla_\sigma^{H^{1(\ell)}} j(\kappa, \sigma) = I_c I_c^* + I_s I_s^* - I_s^*\, \frac{1}{4\pi} \int_{4\pi} \left( I_s(\vec{r}, \vec{\Omega}', \omega) + I_c(\vec{r}, \omega)\, \delta(\vec{\Omega}' - \vec{\Omega}_0) \right) \Phi(\vec{\Omega}', \vec{\Omega})\, d\Omega'$$

     Two steps:
     1. Extract ∇_γ^{L²(D)} j(κ, σ).
     2. Solve (1 − ℓ²Δ) ∇_γ^{H^{1(ℓ)}} j(κ, σ) = ∇_γ^{L²(D)} j(κ, σ).
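The second step, solving the Helmholtz-type problem (1 − ℓ²Δ)g_H = g_L², acts as a low-pass filter on the L² gradient; this is how the weighted Sobolev gradient suppresses the measurement noise of slide 9. A 1D finite-difference sketch, assuming homogeneous Neumann ends (the boundary condition is my assumption; the slides do not specify it):

```python
import numpy as np

def sobolev_gradient(g_l2, h, ell):
    """Solve (1 - ell^2 * Laplacian) g_H = g_L2 on a uniform 1D grid
    (spacing h) with a first-order Neumann closure at the ends (assumed).
    ell = 0 returns g_L2 unchanged."""
    n = g_l2.size
    c = (ell / h) ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * c
        if i > 0:
            A[i, i - 1] = -c
        if i < n - 1:
            A[i, i + 1] = -c
    A[0, 0] = A[-1, -1] = 1.0 + c          # Neumann ends
    return np.linalg.solve(A, g_l2)

# A smooth "true" gradient polluted by measurement noise, as on slide 9:
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
clean = np.sin(2.0 * np.pi * x)
noisy = clean + 0.3 * rng.standard_normal(x.size)
smoothed = sobolev_gradient(noisy, x[1] - x[0], ell=0.05)
```

Larger ℓ damps short wavelengths more strongly: a Fourier mode of wavenumber k is attenuated by 1/(1 + ℓ²k²), so the noise shrinks while the smooth part of the gradient is nearly preserved.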

  12. Numerical schemes
     Angular discretization: the integrals over solid angles are handled with the Discrete Ordinate Method:

$$\left( \vec{\Omega}_m \cdot \nabla + \frac{i\omega}{c} + \kappa + \sigma \right) I_m^s(\vec{r}, \omega) = \frac{\sigma}{4\pi} \sum_{m'=1}^{M} w_{m'}\, I_{m'}^s(\vec{r}, \omega)\, \Phi(\vec{\Omega}_{m'}, \vec{\Omega}_m) + S_m(\vec{r}, \omega)$$

     Space approximation for the states and adjoint states: Discontinuous Galerkin formulation.
     Solver: UMFPACK on an ndof × M system.
     Optimizer: L-BFGS coupled with an inexact line search.
     Environment: FreeFem++.
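The discrete-ordinates formulation can be reduced to a toy version: a steady gray 1D slab, M = 2 Gauss ordinates, first-order upwinding, and a direct solve of the coupled system, in the spirit of the slides' UMFPACK solve on the ndof × M system. All simplifications here are mine, not the authors':

```python
import numpy as np

def solve_slab_dom(N=101, L=1.0, kappa=1.0, sigma=0.5):
    """Steady 1D slab RTE by discrete ordinates (M = 2 Gauss directions),
    first-order upwind differences, isotropic scattering.  The coupled
    (N*M) x (N*M) system is assembled and solved directly (toy version)."""
    h = L / (N - 1)
    mu, w = np.polynomial.legendre.leggauss(2)   # ordinates/weights on [-1, 1]
    M = mu.size
    A = np.zeros((N * M, N * M))
    b = np.zeros(N * M)
    for m in range(M):
        for i in range(N):
            row = m * N + i
            if (mu[m] > 0 and i == 0) or (mu[m] < 0 and i == N - 1):
                A[row, row] = 1.0                # inflow boundary condition
                b[row] = 1.0 if mu[m] > 0 else 0.0   # unit source on the left
                continue
            up = i - 1 if mu[m] > 0 else i + 1   # upwind neighbour
            A[row, row] += abs(mu[m]) / h + kappa + sigma
            A[row, m * N + up] -= abs(mu[m]) / h
            # isotropic in-scattering (sigma/2)*sum_m' w_m' I_m':
            # sum_m w_m = 2 plays the role of the 4*pi solid angle in 1D
            for mp in range(M):
                A[row, mp * N + i] -= 0.5 * sigma * w[mp]
    return np.linalg.solve(A, b).reshape(M, N), mu, w
```

With σ = 0 the forward ordinate reduces to the discrete Beer-Lambert recursion I_i = I_{i-1}/(1 + κh/μ), which gives a simple correctness check.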

  13. Parameter setting for inversion
     Test case: synthetic data, noise added; different settings for data building and for the reconstruction (so as not to commit the inverse crime). Tikhonov regularization.
     Parameters: γ approximated with P1 finite elements; re-scaling of the parameters.
     Stop criteria: relative stabilization of the cost function to 10⁻³, or 100 iterations.
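The stop criterion can be written as a small driver loop; plain gradient descent stands in here for the L-BFGS optimizer of slide 12, and the quadratic cost is hypothetical:

```python
import numpy as np

def minimize_with_stop(grad, j, x0, step=0.1, tol=1e-3, max_iter=100):
    """Iterate until the relative change of the cost falls below `tol`
    (the slides' relative stabilization to 1e-3) or until `max_iter`
    iterations (the slides' cap of 100) are reached."""
    x = np.asarray(x0, dtype=float)
    j_prev = j(x)
    for it in range(1, max_iter + 1):
        x = x - step * grad(x)
        j_new = j(x)
        if abs(j_prev - j_new) / max(abs(j_prev), 1e-30) < tol:
            return x, it
        j_prev = j_new
    return x, max_iter

# Toy cost 0.5*||x||^2 + 1: the offset makes the cost plateau at 1,
# so the relative-change criterion fires before the iteration cap.
x_opt, iters = minimize_with_stop(lambda x: x,
                                  lambda x: 0.5 * np.dot(x, x) + 1.0,
                                  np.ones(5))
```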

  14.–17. Numerical results: reconstructed κ and σ maps for ℓ = 0, ℓ² = 10⁻⁴, ℓ² = 10⁻³ and ℓ² = 5 × 10⁻³ (figures; colorbar ranges κ: 0.15–0.35, σ: 1.00–3.00).

  18. Comparison with reality
     Reference maps κ₀ and σ₀ (figures; colorbar ranges κ: 0.15–0.35, σ: 1.00–3.00).
     Error metrics:

$$e_2 = \left( \frac{\int_D (\theta_i^r - \theta_i^o)^2\, dx}{\int_D (\theta_i^o)^2\, dx} \right)^{1/2}, \qquad e_3 = \left( \int_D \left\| \nabla\theta_i^r - \nabla\theta_i^o \right\|^2 dx \right)^{1/2}$$

     |           | ℓ² = 10⁻⁴ | ℓ² = 10⁻³ | ℓ² = 5 × 10⁻³ |
     | e₂,κ      | 0.0825    | 0.0744    | 0.0621        |
     | e₂,σ      | 0.0832    | 0.0745    | 0.0699        |
     | e₃,κ      | 1.36255   | 1.16738   | 0.877141      |
     | e₃,σ      | 11.3714   | 9.77341   | 8.36711       |
     | j_f / j₀  | 0.029     | 0.0377    | 0.0751        |
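The e₂ metric of the table can be computed directly on gridded fields; the fields below are hypothetical illustrations, not the paper's reconstructions:

```python
import numpy as np

def e2(theta_r, theta_o, dx):
    """Relative L2 error between reconstructed (theta_r) and true (theta_o)
    fields: e2 = ( int (theta_r - theta_o)^2 dx / int theta_o^2 dx )^(1/2),
    with simple Riemann-sum quadrature on a uniform grid."""
    num = np.sum((theta_r - theta_o) ** 2) * dx
    den = np.sum(theta_o ** 2) * dx
    return float(np.sqrt(num / den))

# Hypothetical true absorption map and a slightly perturbed reconstruction:
x = np.linspace(0.0, 1.0, 400)
true_kappa = 0.2 + 0.1 * np.exp(-((x - 0.5) ** 2) / 0.01)
recon_kappa = true_kappa + 0.01 * np.sin(20.0 * x)
```

Because e₂ is normalized by the true field, it is dimensionless and comparable across the κ and σ rows of the table.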
