

SLIDE 1

Numerical Methods for Ill-Posed Problems III
Lothar Reichel
Summer School on Applied Analysis, TU Chemnitz, Oct. 4-8, 2010

SLIDE 2

Outline of Lecture 3:

  • Tikhonov regularization of large-scale problems
    – The discrepancy principle
    – Solution norm constraint
    – Nonnegativity constraint
  • Truncated iteration
  • Multilevel methods
  • Alternating iterative methods

SLIDE 3

Tikhonov regularization and the discrepancy principle

SLIDE 4

Write the Tikhonov minimization problem in the form
  $\min_x \{\|Ax - b\|^2 + \tfrac{1}{\mu}\|x\|^2\}, \qquad \mu > 0.$
Define the discrepancy associated with $x_\mu$,
  $d_\mu = b - Ax_\mu.$
Assume that an upper bound $\epsilon$ for the norm of the error $e$ in $b$ is known. The discrepancy principle prescribes that $\mu$ be chosen so that $\|d_\mu\| = \epsilon$.
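As a concrete illustration (a minimal numpy sketch, not from the slides; the test matrix and noise level are assumptions), $x_\mu$ solves the normal equations $(A^TA + \mu^{-1}I)x_\mu = A^Tb$, and $\mu \mapsto \|d_\mu\|$ decreases, so the discrepancy principle selects a unique $\mu$:

```python
import numpy as np

def tikhonov(A, b, mu):
    """Minimizer of ||Ax - b||^2 + (1/mu)||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + np.eye(n) / mu, A.T @ b)

# Toy ill-conditioned problem (illustrative assumption).
rng = np.random.default_rng(0)
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(0.5 ** np.arange(n)) @ Q.T   # rapidly decaying singular values
x_hat = np.ones(n)
b = A @ x_hat + 1e-3 * rng.standard_normal(n)

for mu in [1e0, 1e2, 1e4, 1e6]:
    print(mu, np.linalg.norm(b - A @ tikhonov(A, b, mu)))  # ||d_mu|| decreases in mu
```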

SLIDE 5

To avoid underregularization, choose $\hat\mu$ so that
  $\epsilon \le \|d_{\hat\mu}\| \le \eta\epsilon$
for some $\eta > 1$, and compute an approximation of $x_{\hat\mu}$.

SLIDE 6

Define $\phi(\mu) := \|b - Ax_\mu\|^2$. After $\ell$ Lanczos bidiagonalization steps, we can evaluate the $\ell$-point Gauss rule
  $\phi_\ell(\mu) := \|b\|^2\, e_1^T(\mu C_\ell C_\ell^T + I_\ell)^{-2}e_1$
and the $(\ell+1)$-point Gauss-Radau rule
  $\bar\phi_\ell(\mu) := \|b\|^2\, e_1^T(\mu \bar C_\ell \bar C_\ell^T + I_{\ell+1})^{-2}e_1.$
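A sketch of where $C_\ell$ comes from: $\ell$ steps of Golub-Kahan (Lanczos) bidiagonalization of $A$ with starting vector $b$ yield a small bidiagonal matrix, from which the Gauss rule above is evaluated cheaply. Dense arrays and the absence of reorthogonalization or breakdown handling are simplifying assumptions:

```python
import numpy as np

def golub_kahan(A, b, ell):
    """ell steps of Golub-Kahan bidiagonalization with starting vector b.
    Returns C_bar of size (ell+1) x ell with A V = U C_bar; the leading
    ell x ell block plays the role of C_ell."""
    m, n = A.shape
    U = np.zeros((m, ell + 1)); V = np.zeros((n, ell))
    C = np.zeros((ell + 1, ell))
    U[:, 0] = b / np.linalg.norm(b)
    beta = 0.0
    for k in range(ell):
        r = A.T @ U[:, k] - (beta * V[:, k - 1] if k > 0 else 0.0)
        alpha = np.linalg.norm(r); V[:, k] = r / alpha; C[k, k] = alpha
        p = A @ V[:, k] - alpha * U[:, k]
        beta = np.linalg.norm(p); U[:, k + 1] = p / beta; C[k + 1, k] = beta
    return U, V, C

def gauss_phi(mu, C, bnorm, ell):
    """phi_ell(mu) = ||b||^2 e1^T (mu C_ell C_ell^T + I)^{-2} e1."""
    Cl = C[:ell, :ell]
    B = mu * (Cl @ Cl.T) + np.eye(ell)
    y = np.linalg.solve(B, np.eye(ell)[:, 0])   # B^{-1} e1
    return bnorm**2 * (y @ y)                   # e1^T B^{-2} e1 = ||B^{-1} e1||^2
```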

SLIDE 7

Recall that $\phi_\ell(\mu) < \phi(\mu) < \bar\phi_\ell(\mu)$. We want to determine $\ell$ and $\hat\mu$ such that
  $\epsilon^2 \le \phi_\ell(\hat\mu), \qquad \bar\phi_\ell(\hat\mu) \le \eta^2\epsilon^2,$
from which it follows that $\epsilon^2 \le \phi(\hat\mu) \le \eta^2\epsilon^2$. Let $\mu^*$ solve $\phi(\mu^*) = \epsilon^2$.

SLIDE 8

[Figure: log-log plot, vertical axis $10^2$ to $10^5$, horizontal axis $10^{-11}$ to $10^{-4}$; presumably $\phi(\mu)$ together with its Gauss and Gauss-Radau bounds as functions of $\mu$.]

SLIDE 9

Selection of $\hat\mu$ according to the Morozov discrepancy principle:

  1. Choose $\mu < \mu^*$, e.g., $\mu = 0$.
  2. Solve $\phi_\ell(\mu) = \epsilon^2$, e.g., by Newton's method; let $\hat\mu$ denote the solution.
  3. If $\bar\phi_\ell(\hat\mu) \le \eta^2\epsilon^2$, then accept $\hat\mu$. Done!
  4. Set $\ell := \ell + 1$.
  5. Go to 2), using $\hat\mu$ as the starting value for Newton's method.

The Newton iterates converge monotonically to the solution from the left, thus avoiding underregularization. (A sketch of step 2 follows below.)
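A minimal sketch of the Newton step 2, assuming $M = C_\ell C_\ell^T$ from the bidiagonalization (see the sketch after Slide 6) is available; since $\phi_\ell$ is decreasing and convex in $\mu$, starting to the left of the root gives monotonically increasing iterates:

```python
import numpy as np

def phi_and_dphi(mu, M, bnorm2):
    """phi_ell(mu) = ||b||^2 e1^T (mu M + I)^{-2} e1 and its mu-derivative;
    M = C_ell C_ell^T is symmetric and commutes with mu*M + I."""
    ell = M.shape[0]
    e1 = np.zeros(ell); e1[0] = 1.0
    B = mu * M + np.eye(ell)
    y = np.linalg.solve(B, e1)            # B^{-1} e1
    z = np.linalg.solve(B, y)             # B^{-2} e1
    return bnorm2 * (y @ y), -2.0 * bnorm2 * (y @ (M @ z))

def newton_mu(M, bnorm2, eps, mu=0.0, tol=1e-10, maxit=50):
    """Newton's method for phi_ell(mu) = eps^2, started at mu < mu^*."""
    for _ in range(maxit):
        phi, dphi = phi_and_dphi(mu, M, bnorm2)
        mu_new = mu - (phi - eps**2) / dphi
        if abs(mu_new - mu) <= tol * max(abs(mu_new), 1.0):
            return mu_new
        mu = mu_new
    return mu
```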

SLIDE 10

Baart test problem, $n = 100$, $\|e\|/\|\hat b\| = 10^{-3}$; $\eta = 2$, $\ell = 3 \Rightarrow$ 6 mat-vec; $\|x_{\mu_3,3} - \hat x\| = 1.2\cdot10^{-1}$.

[Figure: $\hat x$ (blue), $x_{\mu_3,3}$ (red), and the Tikhonov solution $x_{\mu_3}$ (green) on 100 grid points.]

SLIDE 11

Shaw test problem, $n = 100$, $\|e\|/\|\hat b\| = 10^{-3}$; $\eta = 2$, $\ell = 5 \Rightarrow$ 10 mat-vec; $\|x_{\mu_5,5} - \hat x\| = 7.3\cdot10^{-2}$.

[Figure: $\hat x$ (blue), $x_{\mu_5,5}$ (red), and the Tikhonov solution $x_{\mu_5}$ (green) on 100 grid points.]

SLIDE 12

Phillips test problem, $n = 200$, $\|e\|/\|\hat b\| = 10^{-3}$; $\eta = 2$, $\ell = 4 \Rightarrow$ 8 mat-vec.

[Figure: $\hat x$ (blue), $x_{\mu_4,4}$ (red), and the Tikhonov solution $x_{\mu_4}$ (green) on 200 grid points.]

SLIDE 13

Tikhonov regularization with a solution norm constraint

SLIDE 14

Assume that $\|\hat x\| = \Delta$ is known. Solve
  $\min_{\|x\|=\Delta} \|b - Ax\|$
(a so-called trust-region subproblem). Denote the solution by $x^*$.

Theorem: Under suitable conditions, there exists a unique $\mu^* > 0$ such that
  $x^* = (A^TA + \mu^*I)^{-1}A^Tb.$

  • Cf. Tikhonov regularization.

SLIDE 15

Thus $\|x_\mu\|^2 = b^TA(A^TA + \mu I)^{-2}A^Tb$. We use Gauss and Gauss-Radau quadrature rules to obtain lower and upper bounds for $\|x_\mu\|^2$. Determine $\ell$ and $\mu$ so that
  $\eta^2\Delta^2 \le \hat G_\ell(\phi), \qquad \hat R_\ell(\phi) \le \Delta^2$
for some $0 < \eta < 1$. Then $\eta^2\Delta^2 \le \|x_\mu\|^2 \le \Delta^2$. Generally, $\eta \approx 1$. (A sketch follows below.)
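A dense stand-in for this step (on large problems the norm would instead be bounded by the quadrature rules): since $\mu \mapsto \|x_\mu\|$ decreases, the $\mu$ with $\|x_\mu\| = \Delta$ can be bracketed and found by bisection. This sketch assumes $\Delta < \|A^\dagger b\|$ so that a positive root exists:

```python
import numpy as np

def xnorm(A, b, mu):
    """||x_mu|| with x_mu = (A^T A + mu I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.norm(np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b))

def mu_for_norm(A, b, delta, tol=1e-12):
    """Bisection for the mu > 0 with ||x_mu|| = delta (norm decreases in mu)."""
    lo, hi = 0.0, 1.0
    while xnorm(A, b, hi) > delta:      # grow the bracket until ||x_mu|| <= delta
        hi *= 10.0
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xnorm(A, b, mid) > delta else (lo, mid)
    return 0.5 * (lo + hi)
```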

SLIDE 16

Phillips; $n = 300$, $\eta = 0.999$, $\|e\|/\|\hat b\| = 6.5\cdot10^{-3}$; $\Delta = 2.9999$, $\ell = 8 \Rightarrow$ 16 mat-vec.

[Figure: exact and computed solutions on 300 grid points.]

SLIDE 17

Tikhonov regularization with nonnegativity constraint

SLIDE 18

Assume $\|\hat x\| = \Delta$ is known and $\hat x \ge 0$. Consider
  $\min_{\|x\|=\Delta,\ x\ge0} \|b - Ax\|.$
Introduce the barrier function
  $f_\gamma(x) = \tfrac12\|b - Ax\|^2 - \gamma\sum_{i=1}^n \log\xi_i, \qquad x = [\xi_1, \xi_2, \dots, \xi_n]^T,$
and the minimization problem
  $\min_{\|x\|=\Delta} f_\gamma(x).$

SLIDE 19

Quadratic model:
  $q_\gamma(x + h) = f_\gamma(x) + \nabla f_\gamma(x)^Th + \tfrac12 h^T\nabla^2f_\gamma(x)h.$
Trust-region subproblem:
  $\min_{h,\ \|x+h\|=\Delta} q_\gamma(x + h).$
The solution $x_\mu = x + h$ is of the form
  $x_\mu = (A^TA + \gamma X^{-2} + \mu I)^{-1}(A^Tb + 2\gamma X^{-1}e),$
where $X = \mathrm{diag}[x_\mu]$, $e = [1, 1, \dots, 1]^T$, and $\mu \ge 0$ is chosen so that $\|x_\mu\| = \Delta$.

SLIDE 20

  0. Solve the problem without the nonnegativity constraint for an initial approximate solution $x$ of the constrained problem. Let $x := \max\{x, \delta\}$ for some $\delta > 0$, and let $X = \mathrm{diag}[x]$.

Until convergence do:

  1. Apply $\ell$ Lanczos steps to the matrix $A^TA + \gamma X^{-2}$ with initial vector $A^Tb + 2\gamma X^{-1}e$. Use Gauss rules to determine $\ell$. This gives $x_{\mu,\ell} \approx x_\mu$ with $\mu$ such that $\|x_{\mu,\ell}\| = \Delta$.
  2. Update $\gamma$; let $X = \mathrm{diag}[x_{\mu,\ell}]$.
  3. Go to 1. (A sketch of this loop follows below.)
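A minimal dense sketch of this barrier iteration; direct solves stand in for the Lanczos/Gauss-rule machinery, and the barrier-weight reduction $\gamma \leftarrow \gamma/2$, the clipping floor, and all parameter values are assumptions, not taken from the slides:

```python
import numpy as np

def nonneg_tikhonov(A, b, Delta, gamma=1e-2, floor=1e-6, outer=20):
    """Barrier iteration sketch: each sweep forms
    x_mu = (A^T A + gamma X^{-2} + mu I)^{-1}(A^T b + 2 gamma X^{-1} e)
    with mu picked by bisection so that ||x_mu|| = Delta."""
    n = A.shape[1]
    # Step 0: unconstrained start, clipped away from zero.
    x = np.maximum(np.linalg.solve(A.T @ A + np.eye(n), A.T @ b), floor)
    for _ in range(outer):
        Xinv = 1.0 / x
        H0 = A.T @ A + np.diag(gamma * Xinv**2)
        rhs = A.T @ b + 2.0 * gamma * Xinv
        x_mu = lambda mu: np.linalg.solve(H0 + mu * np.eye(n), rhs)
        lo, hi = 0.0, 1.0
        while np.linalg.norm(x_mu(hi)) > Delta:   # bracket ||x_mu|| = Delta
            hi *= 10.0
        for _ in range(60):                       # bisection (norm decreases in mu)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if np.linalg.norm(x_mu(mid)) > Delta else (lo, mid)
        x = np.maximum(x_mu(hi), floor)
        gamma *= 0.5                              # assumed barrier reduction
    return x
```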

SLIDE 21

Phillips, $n = 300$, $\epsilon = \|e\|/\|\hat b\| = 5\cdot10^{-3}$.

a) Linear problem (in $x$): $\ell = 8$, $\|x_{\mu,\ell} - \hat x\|/\|\hat x\| = 1.9\cdot10^{-2}$; with $x_{\mu,\ell,0} = \max\{x_{\mu,\ell}, 0\}$ componentwise, $\|x_{\mu,\ell,0} - \hat x\|/\|\hat x\| = 1.4\cdot10^{-2}$.

b) Nonlinear problem: $21+8$ Lanczos steps give $\tilde x_\mu$ with $\|\tilde x_\mu - \hat x\|/\|\hat x\| = 5.4\cdot10^{-3}$.

Total number of mat-vec products: 79.

SLIDE 22

[Figure: exact and computed approximate solutions on 300 grid points; legend: exact, (2.20), projected (2.20), (3.16).]

SLIDE 23

Blow-up

[Figure: blow-up of the exact and computed approximate solutions (vertical scale $\times10^{-3}$); legend: exact, (2.20), projected (2.20), (3.16).]

SLIDE 24

Hemispheres, $n = 256^2$, $\epsilon = \|e\|/\|\hat b\| = 10^{-3}$; nonsymmetric blurring matrix that models Gaussian blur.

a) Linear problem (in $x$): $\ell = 26$, $\|x_{\mu,\ell} - \hat x\|/\|\hat x\| = 8.3\cdot10^{-2}$; with $x_{\mu,\ell,0} = \max\{x_{\mu,\ell}, 0\}$ componentwise, $\|x_{\mu,\ell,0} - \hat x\|/\|\hat x\| = 8.1\cdot10^{-2}$.

b) Nonlinear problem: 27 Lanczos steps give $\tilde x_\mu$ with $\|\tilde x_\mu - \hat x\|/\|\hat x\| = 7.3\cdot10^{-2}$.

Total number of mat-vec products: 110.

SLIDE 25

Blur- and noise-free image

SLIDE 26

Blurred and noisy image

SLIDE 27

Computed solution without nonnegativity constraint

SLIDE 28

Solution without nonnegativity constraint after projection

SLIDE 29

Computed solution with nonnegativity constraint

SLIDE 30

Regularization by truncated iteration

SLIDE 31

CGNR: CG applied to $A^*Ax = A^*b$. Define the Krylov subspace
  $K_m(A^*A, A^*b) = \mathrm{span}\{A^*b, (A^*A)A^*b, \dots, (A^*A)^{m-2}A^*b, (A^*A)^{m-1}A^*b\}.$
Then $x_m \in K_m(A^*A, A^*b)$ and
  $\|Ax_m - b\| = \min_{x\in K_m(A^*A, A^*b)} \|Ax - b\|.$
Therefore the discrepancies $d_j = b - Ax_j$ satisfy
  $\|b\| \ge \|d_1\| \ge \dots \ge \|d_m\|.$

SLIDE 32

Stopping criterion (discrepancy principle): let $\alpha > 1$ be fixed and $\|e\| = \|\hat b - b\| = \delta$. The iterate $x_m$ satisfies the discrepancy principle if
  $\|Ax_m - b\| \le \alpha\delta.$
Stopping rule: terminate the iterations as soon as the iterate $x_m$ satisfies
  $\|Ax_m - b\| \le \alpha\delta, \qquad \|Ax_{m-1} - b\| > \alpha\delta.$
Denote the termination index by $m(\delta)$. (A sketch follows below.)
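A minimal CGLS-style sketch of CGNR with the stopping rule above; dense numpy arrays and the iteration cap are simplifying assumptions:

```python
import numpy as np

def cgnr(A, b, alpha, delta, maxit=200):
    """CG on the normal equations A^T A x = A^T b, terminated at the first
    iterate with ||A x_m - b|| <= alpha * delta (discrepancy principle)."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                 # residual b - A x
    s = A.T @ r                  # residual of the normal equations
    p = s.copy()
    snorm2 = s @ s
    for k in range(1, maxit + 1):
        Ap = A @ p
        a = snorm2 / (Ap @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) <= alpha * delta:
            return x, k          # termination index m(delta)
        s = A.T @ r
        snorm2_new = s @ s
        p = s + (snorm2_new / snorm2) * p
        snorm2 = snorm2_new
    return x, maxit
```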

SLIDE 33

An iterative method is a regularization method if
  $\lim_{\delta\searrow0}\ \sup_{\|e\|\le\delta}\|x_{m(\delta)} - \hat x\| = 0.$
CGNR is a regularization method; see Nemirovskii, Hanke.

SLIDE 34

Other iterative methods: range-restricted minimal residual methods for symmetric problems. Define the Krylov subspace
  $K_m(A, Ab) = \mathrm{span}\{Ab, A^2b, \dots, A^mb\}.$
Then $x_m \in K_m(A, Ab)$ and
  $\|Ax_m - b\| = \min_{x\in K_m(A, Ab)} \|Ax - b\|.$
Hanke showed that these are regularization methods.

SLIDE 35

GMRES: a minimal residual method for nonsymmetric problems. Define the Krylov subspace
  $K_m(A, b) = \mathrm{span}\{b, Ab, \dots, A^{m-1}b\}.$
Then $x_m \in K_m(A, b)$ and
  $\|Ax_m - b\| = \min_{x\in K_m(A, b)} \|Ax - b\|.$
This is a regularization method under stronger conditions than CGNR.

SLIDE 36

RRGMRES (Range-Restricted GMRES): a minimal residual method for nonsymmetric problems. Define the Krylov subspace
  $K_m(A, Ab) = \mathrm{span}\{Ab, A^2b, \dots, A^mb\}.$
Then $x_m \in K_m(A, Ab)$ and
  $\|Ax_m - b\| = \min_{x\in K_m(A, Ab)} \|Ax - b\|.$
This is a regularization method under the same conditions as GMRES.

SLIDE 37

PDE methods for image restoration

SLIDE 38

Continuous image degradation model:
  $f(x) = \int_\Omega h(x - y)\,\hat u(y)\,dy + \eta(x), \qquad x\in\Omega,$
with $h$ a (for now) symmetric point spread function. The integral equation can be expressed as $f = h * \hat u + \eta$. Discretization yields $b = Au$ with the matrix $A$ symmetric block Toeplitz with Toeplitz blocks.
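For illustration (kernel width and sizes are assumptions), a 1D Gaussian blur kernel discretizes to a symmetric Toeplitz matrix, and for a separable 2D point spread function the Kronecker product of two such matrices is block Toeplitz with Toeplitz blocks:

```python
import numpy as np
from scipy.linalg import toeplitz

def gaussian_blur_1d(n, sigma=2.0):
    """Symmetric Toeplitz matrix discretizing 1D Gaussian blur."""
    t = np.arange(n)
    col = np.exp(-0.5 * (t / sigma) ** 2)
    return toeplitz(col / col.sum())

A1 = gaussian_blur_1d(64)
A2 = np.kron(A1, A1)   # block Toeplitz with Toeplitz blocks (separable PSF)
```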

SLIDE 39

Nonlinear smoothing operators. Return to Tikhonov regularization:
  $\min_u \int_\Omega \big\{(h * u - f)^2 + \mu R(u)\big\}\,dx.$
Euler-Lagrange equations with gradient descent:
  $\frac{\partial u}{\partial t} = -h * (h * u - f) + \mu D(u), \qquad u_0 = f.$

SLIDE 40

Examples:
  $R(u) = |\nabla u|^2 \ \Rightarrow\ D(u) = \Delta u;$
  $R(u) = |\nabla u| \ \Rightarrow\ D(u) = \mathrm{div}\big(\nabla u/|\nabla u|\big)$ (TV operator).
Perona-Malik operator:
  $D(u) = \nabla\cdot\big(g(|\nabla u|^2)\nabla u\big), \qquad g(s) = \frac{1}{1 + s/\rho},\ \rho > 0,$
where $g$ is the diffusivity.
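A minimal 1D sketch of explicit Perona-Malik integration; the step count and $\Delta\tau$ follow the later examples (Slides 53-57), while $\rho$, the unit grid spacing, and the zero-flux boundary treatment are assumptions:

```python
import numpy as np

def perona_malik_1d(x, steps=10, dtau=0.3, rho=1e-4):
    """Explicit integration of du/dtau = d/ds( g(|u_s|^2) u_s ) with
    g(s) = 1/(1 + s/rho), on a uniform grid with zero-flux ends."""
    u = np.array(x, dtype=float)
    for _ in range(steps):
        d = np.diff(u)                      # forward differences on interfaces
        g = 1.0 / (1.0 + d**2 / rho)        # diffusivity: small across edges
        flux = g * d
        u[1:-1] += dtau * np.diff(flux)     # discrete divergence of the flux
        u[0] += dtau * flux[0]              # homogeneous Neumann boundaries
        u[-1] -= dtau * flux[-1]
    return u
```

Since $g \le 1$, the explicit step is stable for $\Delta\tau \le 1/2$ on a unit grid, which the value $0.3$ satisfies.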

SLIDE 41

Semi-discretization of the Euler-Lagrange equation:
  $\frac{du}{dt} = (\mu L(u) - A^2)u + Ab,$
where $L(u)$ is a nonlinear operator.

  • Explicit integration method (Euler): the CFL condition imposes tiny time steps $\Rightarrow$ method expensive.
  • Semi-implicit integration method:
  $[I - \tau(\mu L(u_{j-1}) - A^2)]u_j = u_{j-1} + \tau Ab,$
where $u_j \approx u(j\tau)$. The system is expensive to solve and small time steps $\tau$ are required $\Rightarrow$ method expensive.

SLIDE 42

Use the Perona-Malik operator $D$ to define a nonlinear prolongation operator for the multilevel scheme. Discretization of $\partial u/\partial t = D(u)$ gives
  $\frac{du}{dt} = L(u)u, \qquad u\in W_i,\ u(0) = P_iu_{i-1}.$
One may integrate for a few, say $\le 10$, (non-tiny) time steps by an explicit method.

SLIDE 43

Cascadic multilevel methods

SLIDE 44

Consider
  $\int_\Omega k(s, t)\,x(s)\,ds = b(t), \qquad t\in\Omega.$
Let $W_1 \subset W_2 \subset \dots \subset W_\ell \subset L^2(\Omega)$ be nested subspaces, and define
  $R_i : L^2(\Omega) \to W_i$, a restriction operator;
  $Q_i^* : W_i \to L^2(\Omega)$, a prolongation operator, e.g., $Q_i^* = R_i^*$;
  $\hat b_i = R_i\hat b$, $b_i = R_ib$, $A_i = R_iAQ_i^*$;
  $P_i : W_{i-1} \to W_i$, a prolongation operator.

SLIDE 45

Algorithm 1: Multilevel Algorithm
  Input: $A$, $b$, $\delta$, $\ell \ge 1$ (number of levels);
  Output: approximate solution $\tilde x \in W_\ell$;
  Determine $A_i$ and $b_i$ for $1 \le i \le \ell$;
  $x_0 := 0$;
  for $i := 1, 2, \dots, \ell$ do
    $x_{i,0} := P_ix_{i-1}$;
    $\Delta x_{i,m_i} := \mathrm{IM}(A_i, b_i - A_ix_{i,0})$;
    Correction step: $x_i := x_{i,0} + \Delta x_{i,m_i}$;
  endfor
  $\tilde x := x_\ell$;
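A Python skeleton of Algorithm 1; the level operators, data, prolongations, and the truncated inner iteration IM (e.g., the CGNR sketch after Slide 32) are caller-supplied stand-ins:

```python
import numpy as np

def multilevel(As, bs, Ps, inner_solve):
    """Algorithm 1 skeleton. As[i], bs[i] are the level-(i+1) operator and
    data, Ps[i] prolongs level i to level i+1 (Ps[0] is unused since the
    initial iterate is zero), and inner_solve(A, r) is the truncated inner
    iteration IM, e.g., CGNR stopped by the discrepancy principle."""
    x = np.zeros(As[0].shape[1])          # x_0 := 0 on the coarsest level
    for i, (A_i, b_i) in enumerate(zip(As, bs)):
        x0 = x if i == 0 else Ps[i] @ x   # x_{i,0} := P_i x_{i-1}
        dx = inner_solve(A_i, b_i - A_i @ x0)
        x = x0 + dx                       # correction step
    return x                              # tilde{x} := x_ell
```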

SLIDE 46

Noise-reducing restrictions $R_i$: solve weighted local least-squares problems (inspired by Buades et al.).

1D problems: define $M_i : W_i \to W_{i-1}$ by solving, for each $j$,
  $\min_{a_0,a_1} \sum_{s\in\{0,\pm1\}} \big(x_i^{(2j+s)} - (a_0 + a_1s)\big)^2\,\omega_i^{(2j)}(s),$
where
  $\omega_i^{(2j)}(s) := \exp\big(-\gamma\,\big(x_i^{(2j+s)} - x_i^{(2j)}\big)^2\big), \qquad \gamma > 0.$

SLIDE 47

Denote the solution by $\{\hat a_0, \hat a_1\}$ and let $x_{i-1}^{(j)} := \hat a_0$ for all $j$. Define
  $R_i = M_{i+1}M_{i+2}\cdots M_\ell, \qquad 1 \le i < \ell.$
For 2D problems, an analogous fit is used over the pixel mask shown on the next slide. (A sketch of the 1D restriction follows below.)
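A minimal sketch of one 1D restriction step $M_i$; the boundary clipping and the value of $\gamma$ are assumptions:

```python
import numpy as np

def restrict_1d(x, gamma=1.0):
    """For each coarse index j, fit a0 + a1*s to x[2j-1], x[2j], x[2j+1]
    by weighted least squares and keep a0 (clip indices at the ends)."""
    n2 = len(x) // 2
    s = np.array([-1.0, 0.0, 1.0])
    V = np.vstack([np.ones(3), s]).T          # model columns: 1 and s
    coarse = np.empty(n2)
    for j in range(n2):
        idx = np.clip([2 * j - 1, 2 * j, 2 * j + 1], 0, len(x) - 1)
        vals = x[idx]
        w = np.exp(-gamma * (vals - x[2 * j]) ** 2)   # similarity weights
        sw = np.sqrt(w)
        a, *_ = np.linalg.lstsq(V * sw[:, None], vals * sw, rcond=None)
        coarse[j] = a[0]
    return coarse
```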

SLIDE 48

Pixel mask for 2D problems: the $3\times3$ neighborhood of pixel $(2j, 2k)$, i.e., the pixels $(2j+s, 2k+t)$ with $s, t \in \{0, \pm1\}$.

SLIDE 49

Assume that $\|\hat b - b\| = \delta$. Then, under some simplifying assumptions, such as $\gamma = 0$, one can show that for 1D problems the projected errors satisfy the bounds
  $\|\hat b_i - b_i\| \le c_i\delta, \qquad c_i = \tfrac{1}{\sqrt3}\,c_{i+1}, \quad 1 \le i < \ell, \quad c_\ell = 1.$
For 2D problems, $c_i = \tfrac13 c_{i+1}$. (For $\gamma = 0$ the fit reduces to averaging three fine-grid values in 1D, or nine in 2D, which shrinks the noise standard deviation by $1/\sqrt3$ and $1/3$, respectively.)

SLIDE 50

Edge-preserving nonlinear prolongation. The operator $P_i$ consists of two parts:
  • $x_{i-1}\in W_{i-1} \to L_ix_{i-1}\in W_i$: piecewise linear interpolation;
  • $L_ix_{i-1}$ is smoothed by solving an IBVP for a (discretized) Perona-Malik diffusion equation,
  $\frac{\partial x}{\partial\tau} = \frac{\partial}{\partial s}\Big(\psi'\big(|\partial_sx|^2\big)\,\partial_sx\Big), \qquad x = x(\tau, s),\ a \le s \le b,$
over a short $\tau$-interval, with $\psi'(s) = \rho/(s + \rho)$, $\rho > 0$.

This removes noise and preserves edges.

SLIDE 51

Theorem: Let $\|\hat b_i - b_i\| \le c_i\delta_i$, $c_i > 1$, $1 \le i \le \ell$. Assume that the $P_i$ are linear(ized) and that $R(P_i) \subset R(A_i^*)$, $2 \le i \le \ell$. Then
  $\lim_{\delta_i\searrow0}\ \sup_{\|\hat b_i - b_i\|\le c_i\delta_i} \|\hat x_i - x_{i,m_i(\delta_i)}\| = 0, \qquad 1 \le i \le \ell,$
where $\hat x_i = A_i^\dagger\hat b_i$. Thus, the multilevel method is a regularization method.

SLIDE 52

Examples: noise reduction by Perona-Malik integration.

Example 2: Signal contaminated by 1% Gaussian noise. The noise-free signal is smooth.

[Figure: noisy signal on 500 grid points.]

SLIDE 53

Example 2: Signal denoised by integrating the Perona-Malik equation, 10 steps of size $\Delta\tau = 0.3$.

[Figure: denoised signal.]

SLIDE 54

Example 3: Signal contaminated by 1% Gaussian noise. The noise-free signal is piecewise constant.

[Figure: noisy signal on 600 grid points.]

SLIDE 55

Example 3: Signal denoised by integrating the Perona-Malik equation, 10 steps of size $\Delta\tau = 0.3$.

[Figure: denoised signal.]

SLIDE 56

Example 4: Signal contaminated by 10% Gaussian noise. The noise-free signal is piecewise constant.

[Figure: noisy signal.]

SLIDE 57

Example 4: Signal denoised by integrating the Perona-Malik equation.

[Figure: denoised signal.]

SLIDE 58

Example 5: Fredholm integral equation of the 1st kind,
  $\int_0^\pi \exp(-st)\,x(t)\,dt = \frac{2\sinh(s)}{s}, \qquad 0 \le s \le \frac{\pi}{2};$
same as in Example 1. Discretization by a Galerkin method using 512 piecewise constant test and trial functions. Relative error $\nu$. Matrix sizes for the 5-level method: $A_1\in\mathbb{R}^{32\times32}$, $A_2\in\mathbb{R}^{64\times64}$, $A_3\in\mathbb{R}^{128\times128}$, $A_4\in\mathbb{R}^{256\times256}$, $A = A_5\in\mathbb{R}^{512\times512}$. Iterative method: minimal residual iterative method for symmetric problems.

SLIDE 59

Results for Example 5. The two error/iteration column pairs presumably correspond to the Perona-Malik-based prolongation $P_i$ and linear interpolation $L_i$; for $\ell = 1$ there is no prolongation, and # iter lists the iterations per level:

  ℓ   ν         Pi: ‖x̃−x̂‖/‖x̂‖   # iter           Li: ‖x̃−x̂‖/‖x̂‖   # iter
  1   1·10^-2   3.51·10^-2        3
  2   1·10^-2   3.26·10^-2        3, 1             3.41·10^-2        3, 1
  3   1·10^-2   3.10·10^-2        3, 1, 1          3.41·10^-2        3, 1, 1
  5   1·10^-2   2.51·10^-2        3, 2, 1, 1, 1    3.35·10^-2        3, 2, 1, 1, 1
  1   1·10^-3   3.53·10^-2        3
  2   1·10^-3   3.34·10^-2        3, 1             3.57·10^-2        3, 1
  3   1·10^-3   3.08·10^-2        3, 2, 1          3.56·10^-2        3, 2, 1
  5   1·10^-3   2.00·10^-2        3, 2, 2, 3, 1    3.50·10^-2        3, 2, 2, 2, 1

SLIDE 60

Example 6: Signal deblurring by solution of a Fredholm integral equation of the 1st kind with a Gaussian convolution kernel. Discretization gives symmetric Toeplitz matrices. Relative error 10%. Matrix sizes for the 3-level method: $A_1\in\mathbb{R}^{128\times128}$, $A_2\in\mathbb{R}^{256\times256}$, $A = A_3\in\mathbb{R}^{512\times512}$; $\kappa(A) = 1\cdot10^{17}$. Iterative method: MR-II.

SLIDE 61

Example 6: Signal contaminated by 10% Gaussian noise and blur. The noise-free signal is piecewise constant.

[Figure: contaminated signal on 600 grid points.]

SLIDE 62

Example 6: Restored signal by the multilevel method with 3 levels.

[Figure: restored signal.]

SLIDE 63

Results for Example 6 (same presumed column layout as the table for Example 5):

  ℓ   Pi: ‖x̃−x̂‖/‖x̂‖   # iter     Li: ‖x̃−x̂‖/‖x̂‖   # iter
  1   1.02·10^-1        1
  2   8.70·10^-2        1, 1       8.78·10^-2        1, 1
  3   8.08·10^-2        4, 1, 1    8.88·10^-2        4, 1, 1

SLIDE 64

Application to image restoration

SLIDE 65

Example 7: Blur- and noise-free $512\times512$-pixel image.

SLIDE 66

Restoration of the image with 5% noise and Gaussian blur, determined by 2 RRGMRES iterations. Quantitative measure:
  $\mathrm{PSNR} = 20\log_{10}\frac{255}{\|u_\ell - \hat u\|} = 27.98$ dB.

SLIDE 67

Restoration results (# iter lists iterations per level):

  ℓ   ν         RRGMRES: PSNR   # iter   CGNR: PSNR   # iter
  1   1·10^-2   29.77           3        32.36        9
  2   1·10^-2   31.14           3, 2     34.09        5, 7
  1   5·10^-2   27.98           2        28.74        3
  2   5·10^-2   28.80           2, 1     29.90        2, 2

SLIDE 68

Example 8: Original lizard image and image contaminated by 10% noise and motion blur.

SLIDE 69

Lizard restorations: CGNR, PSNR = 25.11; CGNR-based 3-level method, PSNR = 26.11.

SLIDE 70

Example 9: Original $256\times256$ image.

SLIDE 71

Image contaminated by Gaussian blur and 1% noise.

SLIDE 72

Restoration by solving the Euler-Lagrange equation with the Perona-Malik regularization operator: 500 time steps, 1000 mat-vec products, PSNR = 28.84.

SLIDE 73

Restoration by the 3-level method with the Perona-Malik-based prolongation operator: 4 mat-vec products, PSNR = 35.85.

SLIDE 74

Carrying out too many iterations may give rise to instability and produce unexpected results ...

SLIDE 75

such as ...

SLIDE 76

or ...

SLIDE 77

Application to image segmentation

SLIDE 78

Algorithm 2: Enhanced Multilevel Algorithm
  Input: $A$, $b$, $\delta$, $\ell \ge 1$ (number of levels);
  Output: approximate solution $\tilde x := x_\ell \in W_\ell$;
  Determine $A_i$ and $b_i$ for $1 \le i \le \ell$;
  $x_0 := 0$; $\phi_0 :=$ initial contour;
  for $i := 1, 2, \dots, \ell$ do
    $x_{i,0} := P_ix_{i-1}$; $\phi_{i,0} := S_i\phi_{i-1}$;
    $\Delta x_{i,m_i} := \mathrm{IM}(A_i, b_i - A_ix_{i,0})$;
    Correction step: $x_i := x_{i,0} + \Delta x_{i,m_i}$;
    Segmentation step: $\phi_i := \mathrm{GAC}(\phi_{i,0}, x_i)$;
  endfor

SLIDE 79

Example 10: Segmentation of the noise- and blur-free image.

SLIDE 80

Available image with 10% noise and Gaussian blur, restored by CGNR and then segmented.

SLIDE 81

Available contaminated images restored and segmented by the 3-level CGNR-based method. Segmentation on level 2.

SLIDE 82

Segmentation on level 3.

SLIDE 83

Wavelet-based multilevel methods

SLIDE 84

Let $\phi$ and $\Psi$ be scaling and wavelet functions, respectively. Define
  $\Psi_{j,k} := 2^{j/2}\Psi(2^j\cdot - k), \qquad \phi_{j,k} := 2^{j/2}\phi(2^j\cdot - k),$
where $j$ and $k$ are the scale and space parameters, respectively. Then
  $x = \sum_{k\in\mathbb{Z}}(x, \phi_{0,k})\phi_{0,k} + \sum_{j\ge0}\sum_{k\in\mathbb{Z}}(x, \Psi_{j,k})\Psi_{j,k}, \qquad x\in L^2(\mathbb{R}).$
Let $\Psi_{-1,k} := \phi_{0,k}$ and define the subspaces $W_j = \mathrm{span}\{\Psi_{j,k}\}_{|k|\le k(j)}$. Here $k(j)$ is finite since we consider a finite interval $[a, b]$.

SLIDE 85

Define the subspace
  $U_i := \bigoplus_{l=-1}^{i} W_l \subset L^2([a, b]),$
the orthogonal projector $Q_i : L^2([a, b]) \to U_i$, and the restricted operator $A_i : U_i \to U_i$,
  $A_i := Q_iAQ_i, \qquad i = -1, 0, 1, \dots$

SLIDE 86

Prolongation operator $P_i : U_{i-1} \to U_i$:
  $P_ix := \sum_{j=-1}^{i}\ \sum_{|k|\le k(j)} \tilde x_{j,k}\Psi_{j,k}$
with
  $\tilde x_{j,k} = \begin{cases} x_{j,k}, & \text{for } j \le i-1,\ |k|\le k(i),\\ 0, & \text{for } j = i,\ |k|\le k(i).\end{cases}$
Then $R(P_i|_{R(A_{i-1}^*)}) \subset R(A_i^*) = N(A_i)^\perp$.

SLIDE 87

Theorem: Apply the multilevel method based on the CGNR or MR-II iterative methods. Then the computed approximate solution on level $i$ lives in $N(A_i)^\perp$.

Example 11: Consider the integral equation
  $\int_0^{\pi/2}\kappa(s, t)\,x(s)\,ds = \sin^2(3t) - 0.9t^3 + t^2, \qquad 0 \le t \le \pi/2,$
with $\kappa(s, t) := \exp(s\cos(t))$.

SLIDE 88

Order $n_i$ of the matrices $A_i$:

  Submatrix   A1   A2   A3   A4    A5    A6    A7     A8
  n_i         28   50   88   158   292   554   1071   2098

SLIDE 89

Right-hand sides without noise and with 10% noise

SLIDE 90

Exact and computed approximate solutions for 10% noise

SLIDE 91

Exact and computed approximate solutions for 1% noise

SLIDE 92

Exact and computed approximate solutions for 0.001% noise

SLIDE 93

  noise     CGNR: k   ‖x_k−x̂‖/‖x̂‖   ML-CGNR: k_i             ‖x_{8,k_8}−x̂‖/‖x̂‖   Accel.
  1·10^-1   2         5.78·10^-1     2, 0, 0, 0, 0, 0, 0, 0   5.66·10^-1           17.6
  1·10^-2   3         5.35·10^-1     3, 0, 0, 0, 0, 0, 0, 0   5.28·10^-1           23.5
  1·10^-3   5         3.41·10^-1     5, 0, 0, 0, 0, 0, 0, 0   3.24·10^-1           35.2
  1·10^-4   5         3.32·10^-1     5, 0, 0, 0, 0, 0, 0, 0   3.14·10^-1           35.2
  1·10^-5   8         3.49·10^-2     8, 0, 0, 0, 0, 0, 0, 0   3.46·10^-2           52.6

SLIDE 94

Alternating iterative methods

SLIDE 95

Continuous image degradation model:
  $f(x) = \int_\Omega h(x, y)\,\hat u(y)\,dy + \eta(x), \qquad x\in\Omega,$
with $h$ the point spread function and $f$ the available blur- and noise-contaminated image. Determine the blur- and noise-free image $\hat u$ by solving
  $\min_{u,w} J(u, w),$
where

SLIDE 96

  $J(u, w) := \int_\Omega\Big\{\Big(\int_\Omega h(x, y)u(y)\,dy - f(x)\Big)^2 + \alpha\,\big(L(u - w)(x)\big)^2 + \beta\,|\nabla w(x)|\Big\}\,dx,$

using an alternating iterative method. Here $L$ is a differential operator, e.g., the Perona-Malik operator, and $\alpha > 0$, $\beta > 0$ are regularization parameters.

SLIDE 97

Discrete setting:
  $u^{(i)} = S_h(w^{(i-1)}) := \operatorname{argmin}_{u\in K_\ell}\{\|Hu - f^\delta\|^2 + \alpha\|L^{(i-1)}(u - w^{(i-1)})\|^2\},$
  $w^{(i)} = S_{tv}(u^{(i)}) := \operatorname{argmin}_{w\in K_\ell}\{\|L^{(i-1)}(w - u^{(i)})\|^2 + \beta\|w\|_{tv}\},$
for $i = 1, 2, 3, \dots$, where $\|\cdot\|_{tv}$ is a discrete TV-norm, $L^{(i-1)}$ is a discretization of the operator $L$, and $K_\ell$ is a Krylov subspace. (A sketch follows below.)
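A minimal sketch of the alternating iteration with plainly named substitutions: $L^{(i-1)}$ is replaced by the identity, the $u$-subproblem is solved densely rather than in a Krylov subspace, and the TV subproblem is handled by Chambolle's algorithm from scikit-image; all parameter values are assumptions:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def alternating_restore(H, f, alpha=0.1, beta=1e-3, sweeps=5):
    """Alternating sketch for min_{u,w} ||Hu - f||^2 + alpha||u - w||^2
    + beta||w||_tv, a simplified stand-in for the scheme on this slide."""
    n = H.shape[1]
    w = np.zeros(n)
    for _ in range(sweeps):
        # u-step: min ||H u - f||^2 + alpha ||u - w||^2 (normal equations)
        u = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ f + alpha * w)
        # w-step: min ||w - u||^2 + beta ||w||_tv (TV denoising of u)
        w = denoise_tv_chambolle(u, weight=beta)
    return u, w
```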

SLIDE 98

Available blur- and noise-contaminated image (Gaussian blur, 50% noise).

SLIDE 99

Restored image.

SLIDE 100

Available blur- and noise-contaminated image (Gaussian blur, 10% noise).

SLIDE 101

Restored image.

SLIDE 102

Thank you very much!