Numerical Methods for Large-Scale Ill-Posed Inverse Problems


SLIDE 1

Sections: Regularization for Least Squares Systems · High Performance Implementation · Polyenergetic Tomosynthesis · Concluding Remarks

Numerical Methods for Large-Scale Ill-Posed Inverse Problems

Julianne Chung, University of Maryland

Collaborators: James G. Nagy (Emory), Eldad Haber (Emory), Dianne O'Leary (University of Maryland), Ioannis Sechopoulos (Emory), and Chao Yang (Lawrence Berkeley National Laboratory)

Julianne Chung Numerical Methods for Large-Scale Ill-Posed Inverse Problems

SLIDES 2–3

What is an inverse problem?

Forward problem: an input signal passes through a physical system to produce an output signal (the forward model). The inverse problem reverses this: given the output signal and the forward model, recover the input signal.

SLIDES 4–5

Application: Image Deblurring

Given: a blurred image and some information about the blurring. Goal: compute an approximation of the true image.

SLIDES 6–11

Application: Super-Resolution Imaging

Given: a sequence of low-resolution (LR) images (the 1st, 8th, 15th, 22nd, and 29th frames are shown) and some information about the motion parameters. Goal: improve the motion parameters and approximate the high-resolution (HR) image.

SLIDES 12–13

Application: Tomographic Imaging

Given: 2D projection images. Goal: approximate the 3D volume.

SLIDES 14–15

What is an Ill-Posed Inverse Problem?

Hadamard (1923): a problem is ill-posed if a solution does not exist, is not unique, or does not depend continuously on the data.

Example (image deblurring): the forward problem takes the true image to a blurred and noisy image; the naive inverse solution is completely corrupted with noise!

SLIDE 16

Outline

1. Regularization for Least Squares Systems
2. High Performance Implementation
3. Polyenergetic Tomosynthesis
4. Concluding Remarks


SLIDE 18

The Linear Problem

b = Ax + ε, where
- x ∈ R^n: true data
- A ∈ R^{m×n}: large, ill-conditioned matrix
- ε ∈ R^m: noise, whose statistical properties may be known
- b ∈ R^m: known, observed data

Goal: given b and A, compute an approximation of x.
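A small numerical illustration (not from the talk) of why a naive inverse fails here: a Gaussian blurring matrix is severely ill-conditioned, so even 0.1% noise in b is amplified enormously by A⁻¹. The sizes, blur width, and noise level below are made up for the sketch.

```python
import numpy as np

# Toy 1D deblurring problem: Gaussian blurring matrix, severely
# ill-conditioned, so the naive solve x = A^{-1} b amplifies tiny noise.
n = 64
t = np.arange(n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 20.0)   # Gaussian blur
A /= A.sum(axis=1, keepdims=True)                      # normalize rows

x_true = np.where((t >= 20) & (t < 40), 1.0, 0.0)      # simple "true" signal
rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.standard_normal(n)         # blurred + 0.1% noise

x_naive = np.linalg.solve(A, b)                        # naive inverse solution
rel_err = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
# rel_err is enormous even though the data error is tiny
```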

SLIDE 19

Regularization

Tikhonov regularization:

  min_x ||b − Ax||_2^2 + λ^2 ||Lx||_2^2   ⟺   min_x || [A; λL] x − [b; 0] ||_2^2

Selecting a good regularization parameter λ is difficult. Standard methods, such as the discrepancy principle, generalized cross-validation (GCV), and the L-curve, are difficult to apply to large-scale problems.
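The stacked form above maps directly to an ordinary least-squares solve. A minimal sketch (the function name and defaults are illustrative, assuming a dense A small enough to stack explicitly):

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve min_x ||b - A x||^2 + lam^2 ||L x||^2 via the stacked
    least-squares system [A; lam*L] x ~= [b; 0]."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)                     # standard-form Tikhonov (L = I)
    K = np.vstack([A, lam * L])           # augmented operator [A; lam*L]
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x
```

For A = I and L = I this shrinks the data toward zero by the factor 1/(1 + λ²), which is a quick sanity check on the formulation.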

SLIDES 20–25

Illustration of Semi-convergence Behavior

  min_x ||b − Ax||^2

[Figure: relative error vs. iteration for CG, the typical behavior for ill-posed problems; reconstructions shown at iterations 0, 10, 28, 85, and 150.]

At first the solution gets better, but eventually noise corrupts the iterates. Either find a good stopping criterion, or ...

SLIDE 26

Motivation to use a Hybrid Method

... avoid the semi-convergence behavior altogether!

[Figure: relative error vs. iteration for CGLS and HyBR; the hybrid method stabilizes the error. Reconstructions shown at iterations 0 and 150.]
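The CG error curve of the preceding figures can be reproduced in a few lines with CGLS (CG applied to the normal equations) on a small simulated deblurring problem. The test problem below is illustrative, not the talk's test problem:

```python
import numpy as np

def cgls(A, b, iters):
    """CGLS (CG on the normal equations A^T A x = A^T b), returning
    every iterate so the error history can be inspected."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    iterates = [x.copy()]
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        iterates.append(x.copy())
    return iterates

# Ill-posed test problem: Gaussian blur of a box signal, plus noise
t = np.arange(64)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 20.0)
A /= A.sum(axis=1, keepdims=True)
x_true = np.where((t >= 20) & (t < 40), 1.0, 0.0)
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(64)

errs = [np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        for x in cgls(A, b, 150)]
# errs first drops, then grows again as noise-dominated directions enter
```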

SLIDE 27

Previous Work on Hybrid Methods

Regularization embedded in the iterative method:
- O'Leary and Simmons, SISSC, 1981
- Björck, BIT, 1988
- Björck, Grimme, and Van Dooren, BIT, 1994
- Larsen, PhD thesis, 1998
- Hanke, BIT, 2001
- Kilmer and O'Leary, SIMAX, 2001
- Kilmer, Hansen, and Español, 2006

Use an iterative method to solve the regularized problem:
- Golub and von Matt, Numer. Math., 1991
- Calvetti, Golub, and Reichel, BIT, 1999
- Frommer and Maass, SISC, 1999

SLIDE 28

Lanczos Bidiagonalization (LBD)

Given A and b, for k = 1, 2, ..., compute

  W = [w1  w2  ···  wk  wk+1],  with w1 = b/||b||,
  Y = [y1  y2  ···  yk],

      [ α1                ]
      [ β2   α2           ]
  B = [      ⋱    ⋱       ]
      [           βk   αk ]
      [               βk+1]

where W and Y have orthonormal columns and AY = WB.
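A compact sketch of the recurrence (Golub-Kahan bidiagonalization, without the reorthogonalization a production code would need), so the relation AY = WB can be checked numerically:

```python
import numpy as np

def lanczos_bidiag(A, b, k):
    """Golub-Kahan (Lanczos) bidiagonalization: after k steps,
    A @ Y = W @ B, with W (m x (k+1)) and Y (n x k) having orthonormal
    columns and B ((k+1) x k) lower bidiagonal."""
    m, n = A.shape
    W = np.zeros((m, k + 1))
    Y = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    W[:, 0] = b / np.linalg.norm(b)       # w1 = b / ||b||
    beta, y_prev = 0.0, np.zeros(n)
    for j in range(k):
        y = A.T @ W[:, j] - beta * y_prev # alpha_j y_j = A^T w_j - beta_j y_{j-1}
        alpha = np.linalg.norm(y)
        y /= alpha
        Y[:, j], B[j, j] = y, alpha
        w = A @ y - alpha * W[:, j]       # beta_{j+1} w_{j+1} = A y_j - alpha_j w_j
        beta = np.linalg.norm(w)
        w /= beta
        W[:, j + 1], B[j + 1, j] = w, beta
        y_prev = y
    return W, Y, B
```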

SLIDE 29

The Projected Problem

After k steps of LBD, we solve the projected LS problem:

  min_{x ∈ R(Y)} ||b − Ax||^2  =  min_f ||W^T b − B f||^2,  where x = Y f.

Remarks:
- Ill-posed problem ⇒ B may be very ill-conditioned.
- B is much smaller than A.
- Standard techniques (e.g., GCV) can be used to find λ and the stopping point.

SLIDES 30–32

Lanczos Hybrid Method in Action: Satellite

[Figure: relative error vs. iteration on the Satellite test problem for LSQR (no regularization), Tikhonov with the optimal λ_k, and Tikhonov with the GCV-chosen λ_k.]

SLIDE 33

A Novel Approach: Weighted GCV

For the projected problem min_f ||W^T b − B f||^2, GCV tends to over-smooth, so use the weighted GCV function with ω < 1:

  G(ω, λ) = n ||(I − B B†_λ) W^T b||^2 / [trace(I − ω B B†_λ)]^2

with a new adaptive approach to select ω. MATLAB implementation: >> x = HyBR(A, b);
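A self-contained sketch of the hybrid idea: bidiagonalize, then apply Tikhonov regularization to the small projected problem. For brevity this sketch picks λ on the projected problem with the discrepancy principle (which assumes a noise-norm estimate) rather than the talk's adaptive weighted-GCV rule; it is illustrative and is not the released HyBR code.

```python
import numpy as np

def hybrid_sketch(A, b, noise_norm, k=30, tau=1.4):
    """Hybrid regularization sketch: k steps of Golub-Kahan
    bidiagonalization, then Tikhonov on the (k+1) x k projected problem,
    with lambda chosen by the discrepancy principle (tau slightly > 1)."""
    m, n = A.shape
    W = np.zeros((m, k + 1)); Y = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    W[:, 0] = b / beta1
    beta, y_prev = 0.0, np.zeros(n)
    for j in range(k):
        y = A.T @ W[:, j] - beta * y_prev
        alpha = np.linalg.norm(y); y /= alpha
        Y[:, j], B[j, j] = y, alpha
        w = A @ y - alpha * W[:, j]
        beta = np.linalg.norm(w); w /= beta
        W[:, j + 1], B[j + 1, j] = w, beta
        y_prev = y
    c = np.zeros(k + 1); c[0] = beta1          # projected data W^T b = beta1 e1
    U, s, Vt = np.linalg.svd(B)                # SVD of the small matrix B
    d = U.T @ c
    f = None
    for lam in np.logspace(1, -8, 60):         # sweep lambda downward
        f = Vt.T @ ((s / (s**2 + lam**2)) * d[:k])   # Tikhonov filter factors
        resid = np.sqrt(np.sum((lam**2 / (s**2 + lam**2) * d[:k])**2)
                        + d[k]**2)             # ||c - B f|| at this lambda
        if resid <= tau * noise_norm:          # discrepancy principle: stop
            break
    return Y @ f
```

On the toy deblurring problem used earlier, this stabilizes the reconstruction where the unregularized solve is useless.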

SLIDE 34

Results for Satellite

[Figure: relative error vs. iteration on the Satellite test problem for LSQR (no regularization), Tikhonov with the optimal λ_k, Tikhonov with the GCV λ_k, and WGCV with adaptive ω.]

SLIDE 35

The Nonlinear Problem

b = A(y)x + ε, where
- x: true data
- A(y): large, ill-conditioned matrix defined by parameters y (registration, blur, etc.)
- ε: additive noise
- b: known, observed data

Goal: approximate x and improve the parameters y.

SLIDE 36

Mathematical Representation

We want to find x and y so that b = A(y)x + ε. With Tikhonov regularization, solve

  min_{x,y} || [A(y); λI] x − [b; 0] ||_2^2

Some considerations:
- The problem is linear in x, nonlinear in y.
- y ∈ R^p, x ∈ R^n, with p ≪ n.

SLIDE 37

Separable Nonlinear Least Squares

Variable projection method: implicitly eliminate the linear term, then optimize over the nonlinear term. Some general references:
- Golub and Pereyra, SINUM, 1973 (also IP, 2003)
- Kaufman, BIT, 1975
- Osborne, SINUM, 1975 (also ETNA, 2007)
- Ruhe and Wedin, SIREV, 1980

SLIDES 38–39

Variable Projection Method

Instead of optimizing over both x and y:

  min_{x,y} φ(x, y) = min_{x,y} || [A(y); λI] x − [b; 0] ||_2^2

minimize the reduced cost functional

  min_y ψ(y),  where ψ(y) = φ(x(y), y)

and x(y) is the solution of min_x φ(x, y).

SLIDE 40

Gauss-Newton Algorithm

  choose initial y0
  for k = 0, 1, 2, ...
    x_k = arg min_x || [A(y_k); λ_k I] x − [b; 0] ||^2
    r_k = b − A(y_k) x_k
    d_k = arg min_d ||J_ψ d − r_k||^2
    y_{k+1} = y_k + d_k
  end

SLIDE 41

Gauss-Newton Algorithm with HyBR

  choose initial y0
  for k = 0, 1, 2, ...
    x_k = arg min_x || [A(y_k); λ_k I] x − [b; 0] ||^2   ⇒   x_k = HyBR(A(y_k), b)
    r_k = b − A(y_k) x_k
    d_k = arg min_d ||J_ψ d − r_k||^2
    y_{k+1} = y_k + d_k
  end
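The loop above can be exercised on a toy separable model, b ≈ x·exp(−y t), with one linear parameter x and one nonlinear parameter y. This is illustrative only: the talk uses an imaging operator A(y) with HyBR as the inner linear solver, while here the inner solve is a trivially well-posed 1D least-squares problem and J_ψ is approximated by a finite difference.

```python
import numpy as np

# Variable projection + Gauss-Newton on a toy separable model.
t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(1)
y_true, x_true = 2.0, 3.0
b = x_true * np.exp(-y_true * t) + 1e-4 * rng.standard_normal(t.size)

def model_col(y):
    return np.exp(-y * t)              # the single column of A(y)

def x_of_y(y):
    a = model_col(y)                   # inner solve: eliminate x given y
    return (a @ b) / (a @ a)

y = 1.0                                # initial guess y0
for _ in range(20):                    # Gauss-Newton on the reduced problem
    r = b - x_of_y(y) * model_col(y)   # reduced residual r_k
    h = 1e-6                           # finite-difference Jacobian of the model
    r_h = b - x_of_y(y + h) * model_col(y + h)
    J = (r - r_h) / h                  # = d(model)/dy, a single column
    y = y + (J @ r) / (J @ J)          # least-squares step d_k
```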

SLIDES 42–43

Numerical Results: Super-resolution

Given: LR images and some information about the motion parameters. Goal: improve the parameters and approximate the HR image.

Gauss-Newton iterations:

  Iteration | Error of y_k | λ_k
  0         | 0.5810       | 0.2519
  1         | 0.3887       | 0.2063
  2         | 0.2495       | 0.1765
  3         | 0.1546       | 0.1476
  4         | 0.1077       | 0.1254
  5         | 0.0862       | 0.1139
  6         | 0.0763       | 0.1102
  7         | 0.0706       | 0.1077
  8         | 0.0667       | 0.1067

[Figure: reconstructed image.]


SLIDE 45

Mathematical Model

  min_x (1/2) ||Ax − b||^2,  where A = [A1; ...; Am] and b = [b1; ...; bm] are stacked by blocks.

Some applications: super-resolution; tomography (cryo-electron microscopy).

SLIDES 46–47

An Application: Cryo-EM

Given: 2D projection images b_i. Goal: approximate the 3D electron density map x.

  min_x ρ(x) ≡ (1/2) Σ_{i=1}^{m} ||A_i x − b_i||^2

where
- x ∈ R^{n³} represents the 3D electron density map
- b_i ∈ R^{n²} (i = 1, 2, ..., m) represent the 2D projection images
- A_i = A(y_i) ∈ R^{n²×n³} represents the projection
- y_i: translation parameters and Euler angles

SLIDE 48

Parallelization using 1D Data Distribution

[Diagram: the EM 2D images and angles are distributed across processors; each processor computes an independent partial reconstruction; the back-projected volume is combined with an all-reduce.]

SLIDE 49

New Parallelization using 2D Data Distribution

Distribute the images along processor rows and the volume along processor columns.

[Diagram: a 2 × 3 processor grid p_{1,1}, ..., p_{2,3}; row groups (e.g., g_{r1}) hold image blocks b1, ..., b6, and column groups (e.g., g_{c2}) hold volume blocks x^{(1)}, x^{(2)}, x^{(3)}.]

SLIDE 50

Forward and Back Projection on the 2D Topology

Partition each A_i by columns, and x and ∇ρ conformally by blocks:

  A_i = [A_i^{(1)}  A_i^{(2)}  ···  A_i^{(nc)}],  x = [x^{(1)}; x^{(2)}; ...; x^{(nc)}],  ∇ρ = [∇ρ_{x^{(1)}}; ∇ρ_{x^{(2)}}; ...; ∇ρ_{x^{(nc)}}]

Forward projection:  A_i x = Σ_{j=1}^{nc} A_i^{(j)} x^{(j)}   ⇒ all-reduce along rows

Back projection:  ∇ρ_{x^{(j)}} = Σ_{i=1}^{m} (A_i^{(j)})^T r^{(i)}   ⇒ all-reduce along columns
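The block arithmetic behind the two all-reduces can be checked serially with numpy (the sizes and matrices below are made up; the real code distributes the blocks over MPI ranks):

```python
import numpy as np

# Serial sketch of the 2D data-distribution arithmetic: each A_i is split
# into nc column blocks; the forward projection sums per-block products
# (the "all-reduce along rows"), and each gradient block sums transposed
# products over all images (the "all-reduce along columns").
rng = np.random.default_rng(0)
m, nrows, n, nc = 4, 10, 12, 3              # images, rows/image, volume size, blocks
A_list = [rng.standard_normal((nrows, n)) for _ in range(m)]
x = rng.standard_normal(n)
blocks = np.array_split(np.arange(n), nc)   # column partition of the volume

def forward(i):
    # A_i x as a sum of partial products over the column blocks
    return sum(A_list[i][:, blk] @ x[blk] for blk in blocks)

r = [forward(i) - rng.standard_normal(nrows) for i in range(m)]  # residuals

def grad_block(j):
    # gradient block j sums contributions from every image i
    blk = blocks[j]
    return sum(A_list[i][:, blk].T @ r[i] for i in range(m))

grad = np.concatenate([grad_block(j) for j in range(nc)])
```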

SLIDE 51

New MPI Parallel Performance

Good for very large problems. Adenovirus data set: 500 × 500 pixels, 959 (×60) images; SPARX software package.

  nr  | nc | Wall clock (s) | Speedup
  137 | 7  | 9635           | 1
  959 | 2  | 4841           | 2
  959 | 4  | 2406           | 4
  959 | 8  | 1335           | 7.2
  959 | 16 | 609            | 15.8


SLIDE 53

Digital Tomosynthesis

[Images: X-ray mammography, digital tomosynthesis, and computed tomography systems.]

SLIDE 54

An Inverse Problem

Given: 2D projection images. Goal: reconstruct a 3D volume. [True images shown.]

SLIDE 55

Simulated Problem

- Original object: 300 × 300 × 200 voxels (7.5 × 7.5 × 5 cm)
- 21 projection images: 200 × 300 pixels (10 × 15 cm), from −30° to 30° in 3° steps
- Reconstruction: 150 × 150 × 50 voxels (7.5 × 7.5 × 5 cm)

[Diagrams: front view and side view (X-ray tube at 0°) of the geometry: X-ray tube, compression and support plates, compressed breast, chest wall, detector, and center of rotation.]

SLIDES 56–57

Polyenergetic Model

The incident X-ray has a distribution of energies (43 energy levels, 5 keV to 26 keV). Consequences:
- Beam hardening: low-energy photons are preferentially absorbed
- Unnecessary radiation
- The linear attenuation coefficient depends on energy

SLIDE 58

Monoenergetic Algorithm

Lange and Fessler's convex MLEM algorithm. [Image: monoenergetic reconstruction, showing beam-hardening artifacts.]

SLIDE 59

Previous Methods

Methods for eliminating beam-hardening artifacts:
- Dual-energy methods: Alvarez and Macovski (1976), Fessler et al. (2002)
- FBP + segmentation: Joseph and Spital (1978)
- Filter function based on density: De Man et al. (2001), Elbakri and Fessler (2003)

SLIDE 60

A Polyenergetic Mathematical Representation

Energy-dependent attenuation coefficient of voxel j:

  µ^(e)(j) = s(e) x(j) + z(e)

where x(j) represents the unknown glandular fraction of the jth voxel, and s(e) and z(e) are known linear-fit coefficients.

SLIDES 61–63

Computing Image Projections

Ray trace:

  ∫_{L_i} µ^(e) dl ≈ Σ_{j=1}^{N} µ^(e)(j) a(i,j)

In vector notation, µ^(e) = s(e)x + z(e) gives the projection  s(e) A_θ x + z(e) A_θ 1.

Polyenergetic projection:

  Σ_{e=1}^{ne} ϱ(e) exp(−[s(e) A_θ x_true + z(e) A_θ 1])
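The polyenergetic projection is a spectrum-weighted sum of attenuated monochromatic projections, which vectorizes directly. The geometry, spectrum ϱ, and fit coefficients s(e), z(e) below are made-up illustrative values:

```python
import numpy as np

# Sketch of the polyenergetic forward projection on a toy geometry.
rng = np.random.default_rng(0)
n_rays, n_vox, ne = 8, 16, 43
A_theta = rng.random((n_rays, n_vox)) * 0.1    # ray-trace weights a(i, j)
x = rng.random(n_vox)                          # glandular fractions in [0, 1]
s = np.linspace(0.5, 0.2, ne)                  # fit slopes s(e) (illustrative)
z = np.linspace(0.3, 0.1, ne)                  # fit offsets z(e) (illustrative)
rho = np.ones(ne) / ne                         # normalized incident spectrum

# Monochromatic line integrals for every energy at once: shape (ne, n_rays)
line_ints = (s[:, None] * (A_theta @ x)
             + z[:, None] * (A_theta @ np.ones(n_vox)))

# Polyenergetic projection: spectrum-weighted sum of attenuated intensities
b_bar = rho @ np.exp(-line_ints)
```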

SLIDE 64

Statistical Model

Given x, define for pixel i the expected value

  b̄_θ^(i) = Σ_{e=1}^{ne} ϱ(e) exp(−[s(e) A_θ x + z(e) A_θ 1]).

Let η̄^(i) be the statistical mean of the noise. Then b̄_θ^(i) + η̄^(i) ∈ R is the expected (average) observation, and the observed data satisfy

  b_θ^(i) ∼ Poisson(b̄_θ^(i) + η̄^(i)).

SLIDE 65

Statistical Model

Likelihood function:

  p(b_θ, x) = Π_{i=1}^{M} e^{−(b̄_θ^(i) + η̄^(i))} (b̄_θ^(i) + η̄^(i))^{b_θ^(i)} / b_θ^(i)!

Negative log-likelihood function (up to an additive constant):

  −L_θ(x) = −log p(b_θ, x) = Σ_{i=1}^{M} (b̄_θ^(i) + η̄^(i)) − b_θ^(i) log(b̄_θ^(i) + η̄^(i))
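The negative log-likelihood above (with the constant log b! term dropped) is a one-liner, and its defining property, minimization when the model mean matches the counts, is easy to check:

```python
import numpy as np

def neg_log_likelihood(b, b_bar, eta):
    """Poisson negative log-likelihood of the slide, constant dropped:
    sum_i (b_bar_i + eta_i) - b_i * log(b_bar_i + eta_i)."""
    mean = b_bar + eta
    return np.sum(mean - b * np.log(mean))
```

Term by term, m − b log m is minimized at m = b, so the NLL is smallest when the predicted mean equals the observed counts.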

SLIDE 66

Volume Reconstruction

Maximum likelihood estimate:

  x_MLE = arg min_x Σ_θ −L_θ(x)

Numerical optimization:
- Gradient descent: x_{k+1} = x_k − α_k ∇L(x_k), where ∇L(x_k) = A^T v_k
- Newton approach: x_{k+1} = x_k − α_k H_k^{−1} ∇L(x_k), where H_k = A^T W_k A
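A gradient-descent sketch on the Poisson NLL for a simple linear mean model, mean = Ax + η (a stand-in for the slides' polyenergetic model; A, b, η, and the fixed step size are illustrative). For this model ∇(NLL) = Aᵀ(1 − b/mean), matching the Aᵀv form on the slide:

```python
import numpy as np

# Gradient descent on a Poisson negative log-likelihood with a toy
# linear mean model mean = A x + eta (illustrative problem data).
rng = np.random.default_rng(0)
A = rng.random((20, 5))                     # nonnegative "projection" matrix
x_true = rng.random(5) + 0.5
eta = 0.1                                   # mean background counts
b = rng.poisson(A @ x_true + eta).astype(float)

def nll(x):
    mean = A @ x + eta
    return np.sum(mean - b * np.log(mean))

x = np.ones(5)                              # initial guess
alpha = 1e-3                                # fixed step size for simplicity
history = [nll(x)]
for _ in range(200):
    mean = A @ x + eta
    grad = A.T @ (1.0 - b / mean)           # gradient of the NLL
    x = np.maximum(x - alpha * grad, 0.0)   # keep x >= 0 so mean stays > 0
    history.append(nll(x))
```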

SLIDE 67

Numerical Results

Initial guess: 50% glandular tissue. Newton-CG inner stopping criteria: max 50 inner iterations, residual tolerance < 0.1.

  Gradient Descent        |  Newton
  Iter | Relative Error   |  Iter | Relative Error | CG Iterations
  0    | 1.7691           |  0    | 1.7691         | -
  1    | 1.0958           |  1    | 1.1045         | 3
  5    | 0.8752           |  2    | 0.8630         | 2
  10   | 0.8320           |  3    | 0.8403         | 2
  25   | 0.8024           |  4    | 0.7925         | 16

SLIDE 68

Compare Images

[Figure: slices of the true object and the mono-convex, gradient, and Newton-CG reconstructions, shown on a common gray scale of roughly 0.5 to 0.9.]

SLIDE 69

Some Considerations

Convexity: severe nonlinearities ⇒ the cost function is not convex.

Regularization:

  x_MAP = arg min_x { −L(x) + λ R(x) }

We need a good regularizer R(x) (Huber penalty, Markov random fields, total variation) and good methods for choosing λ.


SLIDE 71

Concluding Remarks

- Inverse problems arise in many imaging applications.
- Hybrid methods are efficient solvers for large-scale LS problems and effective linear solvers within nonlinear problems.
- Separable nonlinear LS models exploit high-level structure.
- A high-performance implementation allows reconstruction of large volumes at high resolution.
- Polyenergetic tomosynthesis: a novel mathematical framework, standard optimization made feasible, and better reconstructed images.

SLIDE 72

References

Linear LS (HyBR):
- Chung, Nagy, and O'Leary, ETNA (2008). http://www.cs.umd.edu/~jmchung/Home/HyBR.html

Nonlinear LS:
- Chung, Haber, and Nagy, Inverse Problems (2006)
- Chung and Nagy, Journal of Physics: Conference Series (2008)
- Chung and Nagy, SISC (accepted 2009)

High Performance Computing:
- Chung, Sternberg, and Yang, Int. J. High Perf. Computing (accepted 2009)
- Project featured in the DOE publication DEIXIS, 2009

Digital Tomosynthesis:
- Chung, Nagy, and Sechopoulos (submitted 2009)

Thank you!