  1. The Mathematics of X-ray Tomography. Tatiana A. Bubba, Department of Mathematics and Statistics, University of Helsinki, tatiana.bubba@helsinki.fi. Summer School on Very Finnish Inverse Problems, Helsinki, June 3-7, 2019. Finnish Centre of Excellence in Inverse Modelling and Imaging 2018-2025.

  2. TSVD as Spectral Filtering. We can regard the TSVD also as the result of a filtering operation, namely
$$ f_{TSVD} = \sum_{i=1}^{\min(m,n)} \varphi_i^{TSVD} \, \frac{u_i^T y^\delta}{\sigma_i} \, v_i = \sum_{i=1}^{r} \frac{u_i^T y^\delta}{\sigma_i} \, v_i, $$
where $r$ is the truncation parameter and
$$ \varphi_i^{TSVD} = \begin{cases} 1 & i = 1, \dots, r \\ 0 & \text{elsewhere} \end{cases} $$
are the filter factors associated with the method. These are called spectral filtering methods because the SVD basis can be regarded as a spectral basis: the vectors $v_i$ and $u_i$ are the eigenvectors of $K^T K$ and $K K^T$, respectively.
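
A minimal numpy sketch of this filtering (not part of the slides; the names K, y_delta and the function signature are illustrative, assuming K is the discretized forward operator and y_delta the noisy data vector):

```python
import numpy as np

def tsvd_reconstruction(K, y_delta, r):
    """TSVD: apply the filter factors phi_i = 1 for i <= r, 0 elsewhere."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)  # K = U @ diag(s) @ Vt
    coeffs = U.T @ y_delta                            # the projections u_i^T y^delta
    # Keeping only the first r terms of the sum is the same as multiplying
    # each term by the TSVD filter factor.
    return Vt[:r].T @ (coeffs[:r] / s[:r])
```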

  3. The Tikhonov Method. Let us now consider the following filter factors:
$$ \varphi_i^{TIKH} = \begin{cases} \dfrac{\sigma_i^2}{\sigma_i^2 + \alpha} & i = 1, \dots, \min(m,n) \\ 0 & \text{elsewhere} \end{cases} $$
which yield the reconstruction method
$$ f_{TIKH} = \sum_{i=1}^{\min(m,n)} \varphi_i^{TIKH} \, \frac{u_i^T y^\delta}{\sigma_i} \, v_i = \sum_{i=1}^{\min(m,n)} \frac{\sigma_i \, (u_i^T y^\delta)}{\sigma_i^2 + \alpha} \, v_i. $$
This choice of filters results in a regularization technique called the Tikhonov method, and $\alpha > 0$ is the so-called regularization parameter. The parameter $\alpha$ plays the same role as the parameter $r$ in the TSVD method: it controls which SVD components we want to damp or filter out.
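
The corresponding spectral-filtering implementation differs from the TSVD sketch above only in the filter factors (again a hedged sketch with illustrative names):

```python
import numpy as np

def tikhonov_svd(K, y_delta, alpha):
    """Tikhonov via SVD filtering: phi_i = sigma_i^2 / (sigma_i^2 + alpha)."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coeffs = U.T @ y_delta
    # Each component is damped smoothly instead of being kept or discarded.
    return Vt.T @ (s * coeffs / (s**2 + alpha))
```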

  4. Tikhonov Regularization. Just as the SVD-based least squares solution minimizes the residual, Tikhonov regularization can be understood as the solution of a minimization problem:
$$ f_{TIKH} = \underset{f}{\operatorname{argmin}} \left\{ \| K f - y^\delta \|_2^2 + \alpha \| f \|_2^2 \right\}. $$
This problem is motivated by the fact that we clearly want $\| K f - y^\delta \|_2^2$ to be small, but we also wish to avoid it becoming zero. Indeed, by taking the Moore-Penrose solution $f^\dagger$ we would have
$$ \| f^\dagger \|_2^2 = \sum_{i=1}^{k} \frac{(u_i^T y^\delta)^2}{\sigma_i^2}, $$
which can become unrealistically large when the magnitude of the noise in some direction $u_i$ greatly exceeds the magnitude of the singular value $\sigma_i$. The minimization problem above ensures that both the norm of the residual $K f_{TIKH} - y^\delta$ and the norm of the solution $f_{TIKH}$ are somewhat small, and $\alpha$ balances the trade-off between the two terms.
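
A tiny synthetic illustration of this blow-up (a toy example, not from the slides): one small singular value is enough to make the naive solution explode, while the Tikhonov solution stays bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.diag([1.0, 0.5, 1e-6])                 # toy operator with one tiny singular value
f_true = np.ones(3)
y_delta = K @ f_true + 1e-3 * rng.standard_normal(3)

f_naive = np.linalg.pinv(K) @ y_delta         # Moore-Penrose solution f^dagger
f_tikh = np.linalg.solve(K.T @ K + 1e-2 * np.eye(3), K.T @ y_delta)

# The noise in the third direction is divided by sigma_3 = 1e-6 in f_naive,
# so its norm is orders of magnitude larger than that of f_tikh.
print(np.linalg.norm(f_naive), np.linalg.norm(f_tikh))
```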

  5. Normal Equations and Stacked Form for Tikhonov Regularization. The Tikhonov solution can also be formulated as a linear least squares problem:
$$ f_{TIKH} = \underset{f}{\operatorname{argmin}} \left\| \begin{bmatrix} K \\ \sqrt{\alpha}\, \mathbb{1} \end{bmatrix} f - \begin{bmatrix} y^\delta \\ 0 \end{bmatrix} \right\|_2^2, $$
where $\mathbb{1}$ denotes the identity matrix. This is called the stacked form. If we denote
$$ \widetilde{K} = \begin{bmatrix} K \\ \sqrt{\alpha}\, \mathbb{1} \end{bmatrix} \quad \text{and} \quad \widetilde{y}^\delta = \begin{bmatrix} y^\delta \\ 0 \end{bmatrix}, $$
then the least squares solution of the stacked form satisfies the normal equations
$$ \widetilde{K}^T \widetilde{K} f = \widetilde{K}^T \widetilde{y}^\delta. $$
It is easy to check that
$$ \widetilde{K}^T \widetilde{K} = K^T K + \alpha \mathbb{1} \quad \text{and} \quad \widetilde{K}^T \widetilde{y}^\delta = K^T y^\delta. $$
Hence we also have
$$ f_{TIKH} = (K^T K + \alpha \mathbb{1})^{-1} K^T y^\delta. $$
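
A short numpy check (with illustrative names) that the stacked least squares form and the closed form from the normal equations give the same solution:

```python
import numpy as np

def tikhonov_stacked(K, y_delta, alpha):
    """Least squares solution of the stacked system [K; sqrt(alpha) I] f ~ [y_delta; 0]."""
    n = K.shape[1]
    K_tilde = np.vstack([K, np.sqrt(alpha) * np.eye(n)])
    y_tilde = np.concatenate([y_delta, np.zeros(n)])
    f, *_ = np.linalg.lstsq(K_tilde, y_tilde, rcond=None)
    return f

def tikhonov_normal(K, y_delta, alpha):
    """Closed form from the normal equations: (K^T K + alpha I) f = K^T y_delta."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y_delta)

# For any test matrix K, data y_delta and alpha > 0, the two agree up to
# numerical precision: np.allclose(tikhonov_stacked(K, y, a), tikhonov_normal(K, y, a)).
```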

  6. Naive Reconstruction (Moore-Penrose Pseudoinverse). Figure: naive reconstruction $f^\dagger$ (relative error RE = 100%) next to the original phantom.

  7. Truncated SVD Regularization. Figure: $f_{TSVD}$ (RE = 35%) next to the original phantom.

  8. Tikhonov Regularization. Figure: $f_{TIKH}$ (RE = 32%) next to the original phantom.

  9. Where the Tikhonov Solution Stands in the Geometry of Ill-Conditioned Problems. Figure: object space $\mathbb{R}^n = \operatorname{span}\{v_1, \dots, v_n\}$ and data space $\mathbb{R}^m = \operatorname{span}\{u_1, \dots, u_m\}$; the true object $f_{true}$ is mapped to $y = K f_{true}$, the measured data is $y^\delta = y + \delta$, and the diagram shows where the naive solution $f^\dagger = K^\dagger y^\delta$, $f_{TSVD}$ and $f_{TIKH}$ land relative to $f_{true}$.

  10. About the Regularization Parameter. Looking at the minimization formulation of the Tikhonov solution,
$$ f_{TIKH} = \underset{f}{\operatorname{argmin}} \left\{ \| K f - y^\delta \|_2^2 + \alpha \| f \|_2^2 \right\}, $$
it is clear that a large $\alpha$ results in strong regularization and possible oversmoothing, while a small $\alpha$ yields a good fit to the data, with the risk of overfitting. In general, choosing the regularization parameter for an ill-posed problem is not a trivial task and there are no universal rules of thumb. Usually, it is a combination of good heuristics and prior knowledge of the noise in the observations. Delving into this is beyond our scope, but there are methods in the literature (Morozov's discrepancy principle, generalized cross validation, the L-curve criterion), as well as more recent approaches tailored to specific problems.
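
As an example of one of the rules mentioned above, here is a hedged sketch of a Morozov-type discrepancy principle for Tikhonov regularization (the candidate grid, the safety factor tau and the noise-norm estimate are assumptions of this illustration, not prescriptions from the slides):

```python
import numpy as np

def discrepancy_principle(K, y_delta, noise_norm, alphas, tau=1.1):
    """Return the largest candidate alpha whose residual is below tau * noise_norm."""
    n = K.shape[1]
    candidates = sorted(alphas, reverse=True)          # strongest regularization first
    for alpha in candidates:
        f_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y_delta)
        if np.linalg.norm(K @ f_alpha - y_delta) <= tau * noise_norm:
            return alpha, f_alpha
    return candidates[-1], f_alpha                     # fall back to the weakest regularization
```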

  11. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 10^3$ next to the original phantom.

  12. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 10^2$ next to the original phantom.

  13. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 10$ next to the original phantom.

  14. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 1$ next to the original phantom.

  15. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 10^{-1}$ next to the original phantom.

  16. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 10^{-2}$ next to the original phantom.

  17. Influence of the Choice of α in Tikhonov Regularization. Figure: $f_{TIKH}$ with $\alpha = 10^{-3}$ next to the original phantom.

  18. Generalized Tikhonov Regularization. Sometimes we have a priori information about the solution of the inverse problem. This can be incorporated in the minimization formulation of the Tikhonov method. For instance:
$f$ is close to a known $f^*$:
$$ f_{GTIKH} = \underset{f}{\operatorname{argmin}} \left\{ \| K f - y^\delta \|_2^2 + \alpha \| f - f^* \|_2^2 \right\} $$
$f$ is known to be smooth:
$$ f_{GTIKH} = \underset{f}{\operatorname{argmin}} \left\{ \| K f - y^\delta \|_2^2 + \alpha \| L f \|_2^2 \right\} $$
$f$ has similar smoothness properties to $f^*$:
$$ f_{GTIKH} = \underset{f}{\operatorname{argmin}} \left\{ \| K f - y^\delta \|_2^2 + \alpha \| L (f - f^*) \|_2^2 \right\} $$
where $L$ is a suitable operator. Each of these problems again has a linear closed form, as sketched below.
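
The closed forms are not spelled out on this slide, but they follow by setting the gradient of the (most general) objective to zero; the first two variants are the special cases $L = \mathbb{1}$ and $f^* = 0$, respectively:
$$ K^T (K f - y^\delta) + \alpha L^T L (f - f^*) = 0 \quad \Longrightarrow \quad (K^T K + \alpha L^T L)\, f = K^T y^\delta + \alpha L^T L f^*. $$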

  19. Generalized Tikhonov Regularization. A common choice for generalized Tikhonov regularization is to take $L$ as a discretized differential operator. For example, using forward differences (with periodic boundary conditions):
$$ L = \frac{1}{\Delta s} \begin{bmatrix} -1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \cdots & -1 & 1 & 0 \\ 0 & 0 & \cdots & 0 & -1 & 1 \\ 1 & 0 & \cdots & 0 & 0 & -1 \end{bmatrix}, $$
where $\Delta s$ is the length of the discretization interval. This choice promotes smoothness in the reconstruction.
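
A numpy sketch of this operator and of the resulting generalized Tikhonov solution via its normal equations (names are illustrative):

```python
import numpy as np

def forward_difference_matrix(n, ds=1.0):
    """Periodic forward-difference operator: -1 on the diagonal, +1 on the
    superdiagonal, wrapping around in the last row, scaled by 1/Delta s."""
    return (-np.eye(n) + np.roll(np.eye(n), 1, axis=1)) / ds

def generalized_tikhonov(K, y_delta, alpha, L):
    """Minimizer of ||K f - y||_2^2 + alpha ||L f||_2^2 via (K^T K + alpha L^T L) f = K^T y."""
    return np.linalg.solve(K.T @ K + alpha * (L.T @ L), K.T @ y_delta)
```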

  20. Variational Regularization. In general, a minimization problem of the form
$$ \Gamma_\alpha(y^\delta) = \underset{f}{\operatorname{argmin}} \left\{ \frac{1}{2} \| K f - y^\delta \|_2^2 + \alpha R(f) \right\} $$
is called a variational formulation: the data fidelity (or data fitting) term $\| K f - y^\delta \|_2^2$ keeps the estimate of the solution close to the data under the forward physical model; the regularization parameter $\alpha > 0$ controls the trade-off between a good fit and the requirements imposed by the regularizer; and $R(f)$ incorporates a priori information or assumptions on the unknown $f$. A non-exhaustive list of choices for $R(f)$: Tikhonov regularization, $\| f \|_2^2$; generalized Tikhonov regularization, $\| L f \|_2^2$; compressed sensing or sparse regularization, $\| f \|_0$ or $\| f \|_1$ or $\| L f \|_1$; indicator functions of constraint sets, $\iota_{\mathbb{R}_+}(f)$; or a combination of the above.

  21. $\ell^p$ Norms for $\mathbb{R}^n$. Let $f \in \mathbb{R}^n$. The $\ell^p$ norms for $1 \le p < \infty$ are defined by
$$ \| f \|_p = \left( \sum_{j=1}^{n} | f_j |^p \right)^{1/p}. $$
Also important, but not a norm:
$$ \| f \|_0 = \lim_{p \to 0} \| f \|_p^p = \big| \{ j : f_j \neq 0 \} \big|. $$
The $\ell^0$ "norm" counts the number of non-zero components of $f$: it is used to measure sparsity.
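
For a concrete feel of these quantities, a small numpy example (the vector is chosen arbitrarily):

```python
import numpy as np

f = np.array([0.0, 3.0, 0.0, -4.0])
print(np.linalg.norm(f, 1))    # l1 norm: 7.0
print(np.linalg.norm(f, 2))    # l2 norm: 5.0
print(np.count_nonzero(f))     # "l0 norm": number of non-zero entries, here 2
```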

  22. Sparse Regularization. Finding the sparsest solution,
$$ \underset{f}{\operatorname{argmin}} \left\{ \frac{1}{2} \| K f - y^\delta \|_2^2 + \alpha \| L f \|_0 \right\}, $$
is known as compressed sensing (CS). However, the problem above is NP-hard, since it requires a combinatorial search of exponential size over all possible supports. Under certain conditions on $L f$ and $K$, replacing $\ell^0$ with $\ell^1$ yields "similar" results. This relaxation leads to a convex problem,
$$ \underset{f}{\operatorname{argmin}} \left\{ \frac{1}{2} \| K f - y^\delta \|_2^2 + \alpha \| L f \|_1 \right\}, $$
which is at the basis of optimization-based methods for CS.
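
The slide does not name a specific algorithm, but one classical optimization-based method for the $\ell^1$-relaxed problem in the special case $L = \mathbb{1}$ is iterative soft-thresholding (ISTA); a minimal sketch with illustrative names:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(K, y_delta, alpha, n_iter=500):
    """Minimize 1/2 ||K f - y||_2^2 + alpha ||f||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2     # step 1/L, L = ||K||_2^2 is the gradient's Lipschitz constant
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ f - y_delta)         # gradient of the data fidelity term
        f = soft_threshold(f - step * grad, step * alpha)
    return f
```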
