

SLIDE 1

The Mathematics of X-ray Tomography Tatiana A. Bubba

Department of Mathematics and Statistics, University of Helsinki tatiana.bubba@helsinki.fi Summer School on Very Finnish Inverse Problems Helsinki, June 3-7, 2019

Finnish Centre of Excellence in Inverse Modelling and Imaging

2018-2025
SLIDE 2

What is Computed Tomography?

CT is a non-invasive imaging technique that provides information about the inside of an object by taking measurements from the outside (indirect information).

Tatiana Bubba The Mathematics of X-ray Tomography Very Finnish IP2019

SLIDE 4

What is Computed Tomography?

At the core: measurements are taken exploiting the X-ray absorption of tissues; the X-ray source and detector are rotated around the object to take measurements at different locations.

SLIDE 5

Toy example: a line inside homogeneous matter

An X-ray leaves the source with intensity I0, travels a path of length s through the matter, and exits with intensity

I1 = I0 e^(−µs)

I0: initial intensity of the X-ray
s: length of the path of the X-ray inside the body
µ > 0: X-ray attenuation coefficient

SLIDE 6

Toy example: two homogeneous blocks

The X-ray enters the first block (attenuation coefficient µ1, path length s1) with intensity I0 and exits with intensity

I1 = I0 e^(−µ1 s1)

SLIDE 7

Toy example: two homogeneous blocks

Passing through the second block (attenuation coefficient µ2, path length s2), the intensity I1 is attenuated further:

I2 = I1 e^(−µ2 s2) = I0 e^(−µ1 s1 − µ2 s2)

SLIDE 8

The Beer-Lambert Law

Homogeneous material:

I1 = I0 e^(−µs)

Non-homogeneous material (µ = µ(x) varies along the X-ray line ℓ):

I1 = I0 e^(−∫_ℓ µ(x) dx)

Rearranging we get a so-called line integral:

I1 = I0 e^(−∫_ℓ µ(x) dx)  ⇐⇒  −log(I1/I0) = ∫_ℓ µ(x) dx
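As a quick numerical sanity check (a sketch with made-up values of µ and s, not from the slides; NumPy assumed), the negative log of the intensity ratio recovers the line integral:

```python
import numpy as np

# Beer-Lambert law in homogeneous matter: I1 = I0 * exp(-mu * s)
I0 = 1000.0   # initial X-ray intensity (illustrative)
mu = 0.2      # attenuation coefficient, mu > 0 (illustrative)
s = 5.0       # path length inside the material (illustrative)

I1 = I0 * np.exp(-mu * s)

# Rearranging: -log(I1 / I0) recovers the integral of mu along the ray,
# which for homogeneous matter is simply mu * s
line_integral = -np.log(I1 / I0)
assert abs(line_integral - mu * s) < 1e-12
```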

SLIDE 9

Beer-Lambert Law and Radon Transform

The Beer-Lambert law connects the initial and final intensities of an X-ray:

I1 = I0 e^(−∫_ℓ µ(x) dx)  ⇐⇒  −log(I1/I0) = ∫_ℓ µ(x) dx

and it is connected to the Radon transform R(f) = ∫_ℓ f(x) dx through the identifications f(x) = µ(x) and R(f) = −log(I1/I0).

Usually ℓ = ℓ(θ, τ), where (θ, τ) ∈ [0, 2π) × R.

SLIDE 10

Beer-Lambert Law and Radon Transform

In general, during a tomographic scan: I0 is known from calibration and I1 from measurement. I1 is measured along many lines ℓ(θ,τ) to get many line integral values through the object, from which to determine f(x). The intensity I1 is called the transmission, while the corresponding −log(I1/I0) is called absorption or projection. A collection of projections is called a sinogram.

SLIDE 11

In Practice: Transmission vs Absorption

(Figure: X-ray source, a photon count of 1000 at each pixel, detector.)

A digital X-ray detector counts how many photons arrive at each pixel.

SLIDE 12

In Practice: Transmission vs Absorption

photons leaving the source:  1000  1000  1000
measured photon count:       1000   500   250

Adding material between the source and the detector reveals the exponential X-ray attenuation law.

SLIDE 13

In Practice: Transmission vs Absorption

photons leaving the source:  1000  1000  1000
measured photon count:       1000   500   250
log of the counts:            6.9   6.2   5.5

We take the logarithm of the photon counts to compensate for the exponential attenuation law.

SLIDE 14

In Practice: Transmission vs Absorption

photons leaving the source:  1000  1000  1000
measured photon count:       1000   500   250
log of the counts:            6.9   6.2   5.5
line integral:                0.0   0.7   1.4

The final calibration step is to subtract the logarithms from the empty-space value (here 6.9).
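The steps above can be reproduced directly; this sketch (NumPy assumed; the counts are the ones shown on the slide) converts photon counts into line-integral data:

```python
import numpy as np

# Photon counts: empty-space (flat-field) scan and scan with material
flat_field = np.array([1000.0, 1000.0, 1000.0])   # I0 per detector pixel
measured   = np.array([1000.0,  500.0,  250.0])   # I1 per detector pixel

# log of the counts: log(1000) ~ 6.9, log(500) ~ 6.2, log(250) ~ 5.5
logs = np.log(measured)

# Calibration: subtract from the empty-space value log(I0), giving
# the line integrals -log(I1/I0), here approximately [0.0, 0.7, 1.4]
line_integrals = np.log(flat_field) - logs
```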

SLIDE 15

In Practice: Discretization

To represent the scanned object and the projection data in a computer we need to discretize:

• Sinogram (measured data): already discrete (finite set of angles, finite set of detector pixels).
• Object: this discretization can be freely chosen, for instance pixels or voxels.
• Forward model (Radon transform): there are various approaches to discretize (compute or approximate) the line integrals.

SLIDE 16

Tomography as a Linear System

Object: target f ∈ R^n. Data: sinogram y ∈ R^m.

In the continuous model, the measurement along the line with angle θ and offset τ is

∫_{R²} δ(τ − ⟨x, ω_θ⟩) f(x) dx = y(θ, τ) = (Rf)(θ, τ)

with integration kernel K(x, θ, τ) = δ(τ − ⟨x, ω_θ⟩), where ω_θ is the unit vector encoding the angle θ. Discretizing gives the linear system

K f = y

where K ∈ R^{m×n} is the system matrix (discretization of the forward model).
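For intuition, here is a hypothetical miniature system matrix (made up for illustration, not from the slides): a 2×2 image probed by four axis-aligned rays, where entry (i, j) is the length of ray i inside pixel j (here simply 0 or 1).

```python
import numpy as np

# Pixels are numbered f = (f1, f2, f3, f4):  f1 f2
#                                            f3 f4
K = np.array([
    [1.0, 1.0, 0.0, 0.0],   # horizontal ray through the top row
    [0.0, 0.0, 1.0, 1.0],   # horizontal ray through the bottom row
    [1.0, 0.0, 1.0, 0.0],   # vertical ray through the left column
    [0.0, 1.0, 0.0, 1.0],   # vertical ray through the right column
])

f = np.array([1.0, 0.0, 0.0, 2.0])   # attenuation values in the pixels
y = K @ f                            # sinogram: one line integral per ray
assert np.allclose(y, [1.0, 2.0, 1.0, 2.0])
```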

SLIDE 17

Briefly About Noise in CT

The intensity I1 in the Beer-Lambert law can be understood as the number of transmitted photons in an interval of time. This can be modelled as a Poisson random variable, namely the probability that k photons are transmitted is:

P(N = k) = (λ^k / k!) e^(−λ)

where λ is the expected value of the number of transmitted photons. In practice, for a large number of photons a Poisson distribution can be closely approximated by a Gaussian distribution; its standard deviation grows only like √λ, so the relative noise decays with increasing intensity. This means that in the discrete linear model of CT we also have to take noise into account:

yδ = K f + δ

where δ ∈ R^m is the noise on the data.
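A sketch of this noise model (NumPy assumed; the 10 000-photon source intensity and the line integrals are illustrative values): measured counts are Poisson draws, and the log transform turns them into noisy line integrals.

```python
import numpy as np

rng = np.random.default_rng(0)

I0 = 10_000                                     # photons per detector pixel
true_line_integrals = np.array([0.0, 0.7, 1.4])
lam = I0 * np.exp(-true_line_integrals)         # expected counts (Poisson means)

counts = rng.poisson(lam)                       # measured photon counts
noisy_line_integrals = np.log(I0) - np.log(counts)

# Poisson std is sqrt(lam), so the *relative* noise shrinks as counts grow;
# here the recovered line integrals stay close to the true ones
assert np.all(np.abs(noisy_line_integrals - true_line_integrals) < 0.2)
```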

SLIDE 19

Direct and Inverse Problem

Direct problem (apply the forward map K): given object f, determine data y.
Inverse problem: given noisy data yδ, recover object f.

SLIDE 20

How to Solve the Inverse Problem?

Solving the inverse problem means reconstructing the object from the measured data. The naive idea: map the data back to the object by applying the inverse matrix K−1. Does this work?

SLIDE 23

Recall the Singular Value Decomposition (SVD)

If K ∈ R^{m×n}, there exist orthogonal matrices U ∈ R^{m×m} and V ∈ R^{n×n} such that

K = U Σ V^T, with Σ = diag(σ1, σ2, . . . , σ_min(m,n)) ∈ R^{m×n},

where σ1 ≥ σ2 ≥ . . . ≥ σ_min(m,n) ≥ 0 are the so-called singular values of K and, writing U = (u1, . . . , um) and V = (v1, . . . , vn), the ui and vi are, respectively, the left and right singular vectors associated with σi, for i = 1, . . . , min(m, n).

The SVD can also be written as a sum of matrices of rank equal to 1:

K = U Σ V^T = ∑_{i=1}^{min(m,n)} σi ui vi^T
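The rank-1 expansion is easy to verify numerically; in this sketch (NumPy, with a random matrix chosen purely for illustration) the sum of the σi ui vi^T terms rebuilds K:

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(K)      # K = U @ Sigma @ V^T; s holds the sigma_i

# Singular values come out sorted: sigma_1 >= sigma_2 >= ... >= 0
assert np.all(np.diff(s) <= 0) and np.all(s >= 0)

# Sum of rank-1 matrices sigma_i * u_i v_i^T reconstructs K
K_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(K, K_sum)
```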

SLIDE 24

SVD and Eigenvalue Decomposition

The matrices K^T K and K K^T are symmetric. As a consequence, their eigenvalues are real and their eigenvectors can be chosen real and orthonormal. A simple computation yields:

K K^T ui = σi² ui and K^T K vi = σi² vi

or equivalently

K^T ui = σi vi and K vi = σi ui

meaning that the squares of the singular values of K are the non-zero eigenvalues of K^T K and K K^T, and the left (right, resp.) singular vectors ui (vi, resp.) are the eigenvectors of K K^T (K^T K, resp.). In particular, the singular values of K are unique. The singular vectors ui (vi, resp.) are unique only when σi² is a simple eigenvalue of K K^T (K^T K, resp.).

For multiple singular values, the corresponding singular vectors can be chosen as any orthonormal basis for the subspace that they span.
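These relations can be checked in a few lines (NumPy sketch, random matrix for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(K)

for i in range(3):
    u, v, sigma = U[:, i], Vt[i, :], s[i]
    # K v_i = sigma_i u_i   and   K^T u_i = sigma_i v_i
    assert np.allclose(K @ v, sigma * u)
    assert np.allclose(K.T @ u, sigma * v)
    # hence K^T K v_i = sigma_i^2 v_i   and   K K^T u_i = sigma_i^2 u_i
    assert np.allclose(K.T @ (K @ v), sigma**2 * v)
    assert np.allclose(K @ (K.T @ u), sigma**2 * u)
```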

SLIDE 25

SVD and the Fundamental Subspaces of a Matrix

The SVD gives complete information about the four fundamental subspaces associated with K. Indeed, if k = rank(K) we have:

ker(K) = span{v_{k+1}, . . . , v_n},  range(K) = span{u_1, . . . , u_k},
range(K^T) = span{v_1, . . . , v_k},  ker(K^T) = span{u_{k+1}, . . . , u_m},

and the following well-known relations hold true:

ker(K)⊥ = range(K^T),  ker(K^T) = range(K)⊥.
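A rank-deficient example makes the four subspaces concrete (NumPy sketch; the rank-1 matrix is made up for illustration):

```python
import numpy as np

# m = 3, n = 2, and the two columns are proportional, so rank(K) = 1
K = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
U, s, Vt = np.linalg.svd(K)
k = int(np.sum(s > 1e-12))       # numerical rank
assert k == 1

# v_{k+1}, ..., v_n span ker(K): K sends them to zero
null_vecs = Vt[k:, :].T
assert np.allclose(K @ null_vecs, 0.0)

# u_1, ..., u_k span range(K): projecting K f onto them loses nothing
Kf = K @ np.array([1.0, 1.0])
proj = U[:, :k] @ (U[:, :k].T @ Kf)
assert np.allclose(Kf, proj)
```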

SLIDE 26

The Moore-Penrose Pseudoinverse

The Moore-Penrose pseudoinverse is defined to be the matrix

K† = V Σ† U^T, where Σ† = diag(1/σ1, . . . , 1/σk, 0, . . . , 0) and k = rank(K).

In particular we have K†y ⊥ ker(K) and

‖K‖₂ = σ1,  ‖K†‖₂ = 1/σk,  cond(K) = ‖K‖₂ ‖K†‖₂ = σ1/σk.
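The definition matches NumPy's built-in pseudoinverse; a sketch with a random full-rank matrix for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.standard_normal((6, 4))               # full column rank (generically)
U, s, Vt = np.linalg.svd(K, full_matrices=False)

# K_dagger = V Sigma_dagger U^T with the reciprocal singular values
K_dagger = Vt.T @ np.diag(1.0 / s) @ U.T
assert np.allclose(K_dagger, np.linalg.pinv(K))

# cond(K) = ||K||_2 * ||K_dagger||_2 = sigma_1 / sigma_k
assert np.isclose(np.linalg.cond(K), s[0] / s[-1])
```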

SLIDE 27

Can We Use the Pseudoinverse to Solve Our Inverse Problem?

Original phantom, values between 0 (black) and 1 (white). The naive reconstruction has minimum −14.9 and maximum 18.5 (data has 0.1% relative noise).

SLIDE 28

The Problem is Ill-posedness

Hadamard (1903): a problem is well-posed if the following conditions hold true:

1. a solution exists,
2. the solution is unique,
3. the solution depends continuously on the input.

If one of these conditions fails, the problem is said to be ill-posed.

SLIDE 29

Ill-posedness & SVD

Put simply, we have:

• Existence: this fails when m > n, since then rank(K) < m and range(K)⊥ is nontrivial.
• Uniqueness: this fails when m < n, since then dim(ker(K)) > 0.
• Stability: this fails when the condition number of K is large.

The Moore-Penrose pseudoinverse takes care only of conditions 1 and 2; that is, there always exists a minimum-norm solution given by:

f† = K†yδ = ∑_{i=1}^{k} (u_i^T yδ / σi) vi.

Condition 3 is trickier and requires a deeper understanding of its connection to the SVD.
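The minimum-norm formula can be evaluated componentwise and compared against K†yδ (NumPy sketch; the random invertible system stands in for a real CT matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
K = rng.standard_normal((5, 5))
y_delta = rng.standard_normal(5)              # stand-in for noisy data

U, s, Vt = np.linalg.svd(K)
k = int(np.sum(s > 1e-12))

# f_dagger = sum_i (u_i^T y_delta / sigma_i) v_i
f_dagger = sum((U[:, i] @ y_delta) / s[i] * Vt[i, :] for i in range(k))
assert np.allclose(f_dagger, np.linalg.pinv(K) @ y_delta)
```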

SLIDE 30

Ill-posedness & SVD

The singular values decay to zero, with no gap in the spectrum. The decay rate determines how difficult the problem is to solve. The discretization of the Radon transform (with dense angle sampling) gives a mildly ill-posed problem.

SLIDE 31

Ill-posedness & SVD

(Plots: singular values σi and coefficients |u_i^T y|, without and with noise.)

When there is no noise, the singular values σi and the coefficients |u_i^T y| both level off at the machine precision. When there is noise, the coefficients |u_i^T yδ| level off at the noise level, and only a few SVD components are reliable.
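This behaviour can be reproduced on a classic ill-conditioned test matrix (a Hilbert matrix standing in for a CT system matrix; the 1e-6 noise level is an illustrative choice, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 10
# Hilbert matrix: severely ill-conditioned, entries 1/(i + j + 1)
K = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
f_true = np.ones(n)
y = K @ f_true
y_delta = y + 1e-6 * rng.standard_normal(n)   # noise at level ~1e-6

U, s, Vt = np.linalg.svd(K)
coeff_clean = np.abs(U.T @ y)
coeff_noisy = np.abs(U.T @ y_delta)

# Without noise the coefficients keep decaying with sigma_i;
# with noise they level off near the noise level for small sigma_i
assert coeff_clean[-1] < 1e-10
assert np.max(coeff_noisy[-3:]) > 1e-8
```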

SLIDE 32

Ill-Conditioned Problems

Discrete ill-posed problems are characterized by system matrices with a large condition number: these problems are called ill-conditioned. This translates into the solution being very sensitive to errors in the data. If y are the noiseless data and yδ = y + δ are the noisy data, classical perturbation theory leads to the bound

‖f_true − f†‖₂ / ‖f_true‖₂ ≤ cond(K) ‖δ‖₂ / ‖y‖₂.

Since cond(K) = σ1/σk is large, the pseudoinverse solution f† = K†yδ can be very far from f_true.
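A tiny diagonal example (made up for illustration) shows the bound in action: a perturbation hitting the smallest singular value is amplified by a factor close to cond(K).

```python
import numpy as np

# Diagonal K makes everything explicit: cond(K) = sigma_1 / sigma_k = 1e6
K = np.diag([1.0, 1e-3, 1e-6])
f_true = np.array([1.0, 1.0, 1.0])
y = K @ f_true

delta = np.array([0.0, 0.0, 1e-4])            # small data perturbation
f_dagger = np.linalg.solve(K, y + delta)      # solution from noisy data

rel_data_err = np.linalg.norm(delta) / np.linalg.norm(y)
rel_sol_err = np.linalg.norm(f_dagger - f_true) / np.linalg.norm(f_true)

# Perturbation theory: rel_sol_err <= cond(K) * rel_data_err ...
assert rel_sol_err <= np.linalg.cond(K) * rel_data_err * (1 + 1e-9)
# ... and since delta hits the smallest singular value, the blow-up is real
assert rel_sol_err > 1e3 * rel_data_err
```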

SLIDE 33

The Geometry of Ill-Conditioned Problems

(Diagram: object space R^n = span{v1, . . . , vn} and data space R^m = span{u1, . . . , um}. The true object f_true is mapped to y = K f_true; noise gives yδ = y + δ; the pseudoinverse sends the noisy data to f† = K†yδ, which can land far from f_true.)

SLIDE 34

Illustration of the Ill-posedness of Tomography

(Figure: K applied to two objects; the difference between the resulting data is 0.00254.)

SLIDE 35

Illustration of the Ill-posedness of Tomography

(Figure: K applied to two objects; the difference between the resulting data is 0.00124.)

SLIDE 36

Illustration of the Ill-posedness of Tomography

(Figure: K applied to two objects; the difference between the resulting data is 0.00004.)

SLIDE 38

How to Cure Ill-posedness? Robust Solution: Regularization

(Diagram: as before, object space R^n = span{v1, . . . , vn} and data space R^m = span{u1, . . . , um}, with y = K f_true and yδ = y + δ; now a reconstruction map Γα(yδ) brings the noisy data back close to f_true.)

We need to define a family of continuous functions Γα : R^m → R^n so that the reconstruction error ‖Γ_{α(δ)}(yδ) − f_true‖₂ vanishes asymptotically as δ → 0.

SLIDE 39

SVD & Noisy Data

Recall that the Moore-Penrose pseudoinverse solution is given by

f† = ∑_{i=1}^{k} (u_i^T yδ / σi) vi = argmin_f ‖K f − yδ‖₂²,

where the last step states that it is equivalent to finding the (minimum-norm) solution of the least squares problem. When dealing with noisy data, we have:

u_i^T yδ = u_i^T y + u_i^T δ ≈ u_i^T y when |u_i^T y| > |u_i^T δ|, and ≈ u_i^T δ when |u_i^T y| < |u_i^T δ|.

Notice that the “noisy” components |u_i^T yδ| are those for which |u_i^T y| is small, and they correspond to the smallest singular values σi.

SLIDE 40

Truncated SVD (TSVD)

A simple approach is to discard the SVD coefficients corresponding to the smallest singular values. This turns out to be a regularization technique, called truncated SVD:

f_TSVD = ∑_{i=1}^{r} (u_i^T yδ / σi) vi

where the truncation parameter r is dictated by the coefficients |u_i^T yδ|, not the singular values. In practice, r has to be chosen as the index i where the |u_i^T yδ| start to “level off” due to the noise. It is clear that the condition number for the TSVD solution satisfies

σ1/σr < σ1/σk = cond(K).

SLIDE 41

Naive Reconstruction (Moore-Penrose Pseudoinverse)

Original phantom vs. the naive reconstruction f†: relative error RE = 100%.

SLIDE 42

Truncated SVD

Original phantom vs. the TSVD reconstruction f_TSVD: relative error RE = 35%.

SLIDE 43

Where TSVD Stands in The Geometry of Ill-Conditioned Problems

(Diagram: in the same geometry as before, with y = K f_true and yδ = y + δ, the TSVD reconstruction f_TSVD lands much closer to f_true than the pseudoinverse solution f† = K†yδ.)
