

slide-1
SLIDE 1

A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator

Ismael Rodrigo Bleyer

  • Prof. Dr. Ronny Ramlau

Johannes Kepler Universität, Linz

Florianópolis, September 2011

supported by

Doctoral Program

Computational Mathematics

Numerical Analysis and Symbolic Computation

Bleyer, Ramlau JKU Linz 1 / 27

slide-2
SLIDE 2

Overview

  • Introduction
  • Proposed method: DBL-RTLS
  • Computational aspects
  • Numerical illustration
  • Outline and future work

Bleyer, Ramlau JKU Linz 2 / 27

slide-3
SLIDE 3

Introduction

Overview

  • Introduction
  • Proposed method: DBL-RTLS
  • Computational aspects
  • Numerical illustration
  • Outline and future work

Bleyer, Ramlau JKU Linz 2 / 27

slide-4
SLIDE 4

Introduction

Inverse problems

“Inverse problems are concerned with determining causes for a desired or an observed effect” [Engl, Hanke, and Neubauer, 2000].

Consider a linear operator equation Ax = y.

Inverse problems most often do not fulfill Hadamard’s postulate [1902] of well-posedness (existence, uniqueness and stability).

Computational issue: the observed effect carries measurement errors or perturbations caused by noise.

Bleyer, Ramlau JKU Linz 3 / 27

slide-5
SLIDE 5

Introduction

1st Case: noisy data

Solve Ax = y0 from the measurement yδ with ‖y0 − yδ‖ ≤ δ.

We need to apply a regularization technique:

    minimize over x:   ‖Ax − yδ‖² + α ‖Lx‖².

Tikhonov regularization: fidelity term (based on least squares); regularization parameter α; quadratic stabilization term. [Tikhonov, 1963; Phillips, 1962]
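This Tikhonov step amounts to a linear solve; a minimal numerical sketch is given below (the operator, noise level and value of α are illustrative, not taken from the talk).

```python
import numpy as np

def tikhonov(A, y_delta, L, alpha):
    """Solve min_x ||A x - y_delta||^2 + alpha ||L x||^2 via the normal
    equations (A^T A + alpha L^T L) x = A^T y_delta."""
    lhs = A.T @ A + alpha * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ y_delta)

# Small illustration: a smoothing operator with noisy data.
rng = np.random.default_rng(0)
n = 50
A = np.tril(np.ones((n, n))) / n                  # discrete integration operator
x_true = np.sin(np.linspace(0.0, np.pi, n))
y_delta = A @ x_true + 1e-3 * rng.standard_normal(n)
x_alpha = tikhonov(A, y_delta, np.eye(n), alpha=1e-4)
```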

Bleyer, Ramlau JKU Linz 4 / 27

slide-6
SLIDE 6

Introduction

1st Case: noisy data

Solve Ax = y0 from the measurement yδ with ‖y0 − yδ‖ ≤ δ.

We need to apply a regularization technique:

    minimize over x:   ‖Ax − yδ‖² + α R(x).

Tikhonov-type regularization: fidelity term (based on least squares); regularization parameter α; R is a proper, convex and weakly lower semicontinuous functional. [Burger and Osher, 2004; Resmerita, 2005]

Bleyer, Ramlau JKU Linz 4 / 27

slide-7
SLIDE 7

Introduction

Subgradient

The Fenchel subdifferential of a functional R : U → [0, +∞] at ū ∈ U is the set

    ∂F R(ū) = { ξ ∈ U* | R(v) − R(ū) ≥ ⟨ξ, v − ū⟩  for all v ∈ U }.

Introduced in 1960 by Moreau and Rockafellar and extended by Clarke in 1973.

Optimality condition: if ū minimizes R, then 0 ∈ ∂F R(ū).

Bleyer, Ramlau JKU Linz 5 / 27

slide-8
SLIDE 8

Introduction

Example

Consider the function R(u) = |u|


Figure: Function (left) and its subdifferential (right).
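For reference, the subdifferential pictured on the right is the standard one for the absolute value:

```latex
\[
\partial |u| =
\begin{cases}
  \{-1\},   & u < 0,\\
  [-1,\,1], & u = 0,\\
  \{+1\},   & u > 0.
\end{cases}
\]
```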

Bleyer, Ramlau JKU Linz 6 / 27

slide-9
SLIDE 9

Introduction

2nd Case: inexact operator and noisy data

Solve A0x = y0 under the assumptions

(i) noisy data: ‖y0 − yδ‖ ≤ δ;

(ii) inexact operator: ‖A0 − Aǫ‖ ≤ ǫ.

What has been done so far? Linear case, based on TLS [Golub and Van Loan, 1980]:

  • R-TLS: regularized TLS [Golub et al., 1999];
  • D-RTLS: dual regularized TLS [Lu et al., 2007].

Nonlinear case: no publication (?)

LS (given yδ and A0):   minimize over y:  ‖y − yδ‖²   subject to y ∈ R(A0).
TLS (given yδ and Aǫ):  minimize over [A, y]:  ‖[A, y] − [Aǫ, yδ]‖_F   subject to y ∈ R(A).

Bleyer, Ramlau JKU Linz 7 / 27

slide-11
SLIDE 11

Introduction

Illustration

Solve 1D problem: am = b, find the slope m. Given:

  • 1. bδ, aǫ (red)

Solution:

  • 1. LS solution (blue)
  • 2. TLS solution (green)

[Figure: “TLS vs LS” — LS solution, TLS solution, noisy data, true data; computed slope 45.9078.]

Example: arctan(1) = 45° [Van Huffel and Vandewalle, 1991]
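A small sketch of how the two slope estimates in this illustration can be computed (the data generation below is made up for demonstration and is not the data used on the slide):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
a_true = np.linspace(0.1, 2.5, n)
b_true = a_true.copy()                                # true slope m = 1, i.e. 45 degrees

a_eps = a_true + 0.05 * rng.standard_normal(n)        # inexact "operator" entries
b_delta = b_true + 0.05 * rng.standard_normal(n)      # noisy data

# LS: errors assumed only in b.
m_ls = (a_eps @ b_delta) / (a_eps @ a_eps)

# TLS: errors in both a and b, via the SVD of the augmented matrix [a, b].
_, _, Vt = np.linalg.svd(np.column_stack([a_eps, b_delta]))
v = Vt[-1]                                            # right singular vector for the smallest singular value
m_tls = -v[0] / v[1]

print(np.degrees(np.arctan(m_ls)), np.degrees(np.arctan(m_tls)))
```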

Bleyer, Ramlau JKU Linz 8 / 27

slide-12
SLIDE 12

Introduction

R-TLS

The R-TLS method [Golub, Hansen, and O’Leary, 1999]:

    minimize    ‖A − Aǫ‖² + ‖y − yδ‖²
    subject to  Ax = y,  ‖Lx‖² ≤ M.

If the inequality constraint is active, then

    (Aǫᵀ Aǫ + α LᵀL + β I) x̂ = Aǫᵀ yδ   and   ‖L x̂‖² = M,

with

    α = µ (1 + ‖x̂‖²),   β = − ‖Aǫ x̂ − yδ‖² / (1 + ‖x̂‖²),

and µ > 0 the Lagrange multiplier.

Difficulty: requires a reliable bound M for the norm ‖L x†‖².
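To make the structure of this condition concrete: for fixed α and β it is just a regularized normal-equation solve, as in the sketch below; the coupling of α, β to x̂ and to µ is what makes R-TLS nontrivial in practice (the actual parameter search of the cited papers is not shown here).

```python
import numpy as np

def rtls_solve(A_eps, y_delta, L, alpha, beta):
    """For fixed alpha and beta, the first-order condition above is the linear
    system (A^T A + alpha L^T L + beta I) x = A^T y. In R-TLS, alpha and beta
    depend on x_hat and on the multiplier mu, so they must be determined
    iteratively or via a parameter search."""
    n = A_eps.shape[1]
    lhs = A_eps.T @ A_eps + alpha * (L.T @ L) + beta * np.eye(n)
    return np.linalg.solve(lhs, A_eps.T @ y_delta)
```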

Bleyer, Ramlau JKU Linz 9 / 27

slide-13
SLIDE 13

Proposed method: DBL-RTLS

Overview

  • Introduction
  • Proposed method: DBL-RTLS
  • Computational aspects
  • Numerical illustration
  • Outline and future work

Bleyer, Ramlau JKU Linz 9 / 27

slide-14
SLIDE 14

Proposed method: DBL-RTLS

Consider the operator equation B(k, f) = g0, where B is a bilinear (hence nonlinear) operator

    B : U × V → H,   (k, f) ↦ B(k, f),

and B is characterized by a function k0:

  • K· = B(k̃, ·) is a compact linear operator for fixed k̃ ∈ U;
  • F· = B(·, f̃) is a linear operator for fixed f̃ ∈ V;
  • ‖B(k0, ·)‖_{V→H} ≤ C ‖k0‖_U;
  • ‖B(k, f)‖_H ≤ C ‖k‖_U ‖f‖_V.

Example: B(k, f)(s) := ∫ k(s, t) f(t) dt.
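A minimal discrete sketch of this integral-operator example (the Gaussian kernel below is illustrative and not the kernel used later in the talk):

```python
import numpy as np

def B(k_grid, f, h):
    """Discretization of the bilinear operator B(k, f)(s) = ∫ k(s, t) f(t) dt:
    k_grid holds kernel values k(s_i, t_j), f the samples f(t_j), h the mesh width."""
    return h * (k_grid @ f)

# B is linear in f for fixed k and linear in k for fixed f,
# but nonlinear (bilinear) as a map of the pair (k, f).
n = 200
h = 1.0 / n
t = np.linspace(0.0, 1.0, n)
k_grid = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2)   # illustrative smooth kernel
f = np.maximum(0.0, 1.0 - np.abs(4.0 * t - 2.0))          # hat function
g = B(k_grid, f, h)
```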

Bleyer, Ramlau JKU Linz 10 / 27

slide-16
SLIDE 16

Proposed method: DBL-RTLS

We want to solve B(k0, f) = g0 out of the measurements kǫ and gδ with

(i) noisy data: ‖g0 − gδ‖_H ≤ δ;

(ii) inexact operator: ‖k0 − kǫ‖_U ≤ ǫ.

We introduce the DBL-RTLS method:

    minimize over (k, f):   J(k, f) := T(k, f; kǫ, gδ) + R(k, f),

where T measures accuracy (closeness/discrepancy) and R promotes stability.

Bleyer, Ramlau JKU Linz 11 / 27

slide-17
SLIDE 17

Proposed method: DBL-RTLS

DBL-RTLS:

    minimize over (k, f):   J(k, f) := T(k, f; kǫ, gδ) + R(k, f)          (1)

where

    T(k, f; kǫ, gδ) = ½ ‖B(k, f) − gδ‖²_H + (γ/2) ‖k − kǫ‖²_U,
    R(k, f) = (α/2) ‖Lf‖²_V + β R(k).

T is based on the TLS method and measures the discrepancy in both data and operator; L : V → V is a bounded linear operator; α, β are the regularization parameters and γ is a scaling parameter; double regularization [You and Kaveh, 1996]; R : U → [0, +∞] is a proper, convex and weakly lower semicontinuous functional.
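For concreteness, a sketch of how the discretized objective (1) could be evaluated for the integral-operator example; the discretization and the ℓ1-type choice for R shown here are assumptions made for illustration.

```python
import numpy as np

def dbl_rtls_objective(k_grid, f, k_eps, g_delta, L, h, alpha, beta, gamma, R):
    """Discrete sketch of J(k, f) = T(k, f; k_eps, g_delta) + R(k, f):
    T penalizes the misfit in both data and operator, R stabilizes."""
    Bkf = h * (k_grid @ f)                                  # B(k, f) for the integral operator
    T = 0.5 * np.linalg.norm(Bkf - g_delta) ** 2 \
        + 0.5 * gamma * np.linalg.norm(k_grid - k_eps) ** 2
    reg = 0.5 * alpha * np.linalg.norm(L @ f) ** 2 + beta * R(k_grid)
    return T + reg

# Example convex term R, e.g. an l1-type penalty on the kernel values.
R_l1 = lambda k: np.abs(k).sum()
```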

Bleyer, Ramlau JKU Linz 12 / 27

slide-18
SLIDE 18

Proposed method: DBL-RTLS

Main theoretical results

Assumption (A1): B is strongly continuous, i.e., if (k^n, f^n) ⇀ (k̄, f̄), then B(k^n, f^n) → B(k̄, f̄).

Proposition

Let J be the functional defined in (1) and let L be a bounded and positive operator. Then J is a positive, weakly lower semicontinuous and coercive functional.

Theorem (existence)

Let the assumptions of Proposition 1 hold. Then there exists a global minimum of the problem minimize J(k, f).

Bleyer, Ramlau JKU Linz 13 / 27

slide-19
SLIDE 19

Proposed method: DBL-RTLS

Theorem (stability)

Assume:
  • δj → δ and ǫj → ǫ;
  • gδj → gδ and kǫj → kǫ;
  • α, β > 0;
  • (kj, fj) is a minimizer of J with gδj and kǫj.
Then there exists a convergent subsequence of (kj, fj)j with (kjm, fjm) → (k̄, f̄), where (k̄, f̄) is a minimizer of J with gδ, kǫ, α and β.

Bleyer, Ramlau JKU Linz 14 / 27

slide-21
SLIDE 21

Proposed method: DBL-RTLS

Consider the convex functional

    Φ(k, f) := ½ ‖Lf‖² + η R(k),

where the parameter η represents the different scaling of f and k. For the convergence results we need the following definition.

Definition

We call (k†, f†) a Φ-minimizing solution if

    (k†, f†) = arg min over (k, f) of { Φ(k, f) | B(k, f) = g0 }.

Bleyer, Ramlau JKU Linz 15 / 27

slide-22
SLIDE 22

Proposed method: DBL-RTLS

Theorem (convergence)

Assume:
  • δj → 0 and ǫj → 0;
  • ‖gδj − g0‖ ≤ δj and ‖kǫj − k0‖ ≤ ǫj;
  • αj = α(ǫj, δj) and βj = β(ǫj, δj), such that αj → 0, βj → 0,
    lim_{j→∞} (δj² + γ ǫj²)/αj = 0 and lim_{j→∞} βj/αj = η;
  • (kj, fj) is a minimizer of J with gδj, kǫj, αj and βj.
Then there exists a convergent subsequence of (kj, fj)j with (kjm, fjm) → (k†, f†), where (k†, f†) is a Φ-minimizing solution.

Bleyer, Ramlau JKU Linz 16 / 27

slide-24
SLIDE 24

Computational aspects

Overview

  • Introduction
  • Proposed method: DBL-RTLS
  • Computational aspects
  • Numerical illustration
  • Outline and future work

Bleyer, Ramlau JKU Linz 16 / 27

slide-25
SLIDE 25

Computational aspects

Optimality condition

If the pair (k̄, f̄) is a minimizer of J(k, f), then (0, 0) ∈ ∂J(k̄, f̄).

Theorem

Let J : U × V → R be a nonconvex functional, J(u, v) = ϕ(u) + Q(u, v) + ψ(v), where Q is a nonlinear differentiable term and ϕ, ψ are lower semicontinuous convex functions. Then

    ∂J(u, v) = {∂ϕ(u) + DuQ(u, v)} × {∂ψ(v) + DvQ(u, v)} = ∂uJ(u, v) × ∂vJ(u, v).

Bleyer, Ramlau JKU Linz 17 / 27

slide-26
SLIDE 26

Computational aspects

Remark: it is difficult to minimize J with respect to (k, f) jointly. Since B is bilinear (linear in each variable), J is biconvex (convex in each variable separately), so we apply an alternating minimization method.

Alternating minimization algorithm

Require: gδ, kǫ, L, γ, α, β

1:  n = 0
2:  repeat
3:      f^{n+1} ∈ arg min_f J(k, f | k^n)
4:      k^{n+1} ∈ arg min_k J(k, f | f^{n+1})
5:  until convergence
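A generic sketch of this alternating scheme; the inner minimizations are delegated to a black-box solver here, which is an assumption made for brevity (the talk's setting would exploit the specific structure of J, e.g. a proximal step for the nonsmooth term in k).

```python
import numpy as np
from scipy.optimize import minimize

def alternating_minimization(J, k0, f0, n_sweeps=20):
    """Generic alternating minimization: minimize J over f with k fixed,
    then over k with f fixed, and repeat."""
    k = np.asarray(k0, dtype=float).copy()
    f = np.asarray(f0, dtype=float).copy()
    for _ in range(n_sweeps):
        f = minimize(lambda f_: J(k, f_), f).x     # f^{n+1} ∈ arg min_f J(k^n, f)
        k = minimize(lambda k_: J(k_, f), k).x     # k^{n+1} ∈ arg min_k J(k, f^{n+1})
    return k, f
```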

Bleyer, Ramlau JKU Linz 18 / 27

slide-28
SLIDE 28

Computational aspects

Proposition

The sequence of functional values J(k^n, f^n) generated by the algorithm is non-increasing:

    J(k^{n+1}, f^{n+1}) ≤ J(k^n, f^{n+1}) ≤ J(k^n, f^n).

Assumptions:

(A1) B is strongly continuous, i.e., if (k^n, f^n) ⇀ (k̄, f̄), then B(k^n, f^n) → B(k̄, f̄);
(A2) B is weakly sequentially closed, i.e., if (k^n, f^n) ⇀ (k̄, f̄) and B(k^n, f^n) ⇀ g, then B(k̄, f̄) = g;
(A3) the adjoint of B′ is strongly continuous, i.e., if (k^n, f^n) ⇀ (k̄, f̄), then B′(k^n, f^n)*z → B′(k̄, f̄)*z for all z ∈ D(B′).

Bleyer, Ramlau JKU Linz 19 / 27

slide-30
SLIDE 30

Computational aspects

Theorem

Given regularization parameters 0 < α ≤ α and β, run the AM algorithm. The sequence {(k^{n+1}, f^{n+1})}_n has a weakly convergent subsequence, say (k^{n_j+1}, f^{n_j+1}) ⇀ (k̄, f̄), and the limit has the property J(k̄, f̄) ≤ J(k̄, f) and J(k̄, f̄) ≤ J(k, f̄) for all f ∈ V and all k ∈ U.

Proposition

Let {(k^n, f^n)}_n be a weakly convergent sequence generated by the AM algorithm, with k^n ⇀ k̄ and f^n ⇀ f̄. Then there exists a subsequence {k^{n_j}} such that k^{n_j} → k̄, and there exist ξ_k^{n_j} ∈ ∂_k J(k^{n_j}, f^{n_j}) such that ξ_k^{n_j} → 0.

Bleyer, Ramlau JKU Linz 20 / 27

slide-32
SLIDE 32

Computational aspects

Proposition

Let {n} be a subsequence of N such that the sequence {(k^n, f^n)}_n generated by the AM algorithm satisfies k^n → k̄ and f^n ⇀ f̄. Then f^{n_j} → f̄, and there exist ξ_f^{n_j} ∈ ∂_f J(k^{n_j}, f^{n_j}) such that ξ_f^{n_j} → 0.

Remark: the graph of the subdifferential mapping is strong-weak closed, i.e., if v^n → v̄ and ξ^n ⇀ ξ̄ with ξ^n ∈ ∂ϕ(v^n), then ξ̄ ∈ ∂ϕ(v̄).

Theorem

Let {(k^n, f^n)}_n be the sequence generated by the AM algorithm. Then there exists a subsequence converging to a critical point of J, i.e., a point (k̄, f̄) with (0, 0) ∈ ∂J(k̄, f̄).

Bleyer, Ramlau JKU Linz 21 / 27

slide-34
SLIDE 34

Numerical illustration

Overview

  • Introduction
  • Proposed method: DBL-RTLS
  • Computational aspects
  • Numerical illustration
  • Outline and future work

Bleyer, Ramlau JKU Linz 21 / 27

slide-35
SLIDE 35

Numerical illustration

First numerical result

Convolution in 1D

    ∫ k(s − t) f(t) dt = g(s)

  • characteristic kernel and hat function;
  • space: Ω = [0, 1]; discretization: N = 2048 points;
  • R(k) = ‖k‖_{w,p} with p = 1;
  • Haar wavelet basis {φλ} with J = 10;
  • initial guess: k0 = kǫ; τ = 1.0;
  • 1st experiment: relative errors of 10% and 10%;
  • 2nd experiment: relative errors of 0.1% and 0.1%.
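One way this test problem could be set up in code, as a sketch under stated assumptions: the kernel support, the periodic grid convention and the noise realization are guesses made for illustration, not the exact setup of the experiments.

```python
import numpy as np
from scipy.linalg import circulant

N = 2048
h = 1.0 / N
t = np.linspace(0.0, 1.0, N, endpoint=False)

k0 = ((t >= 0.4) & (t <= 0.6)).astype(float)           # characteristic (indicator) kernel
f0 = np.maximum(0.0, 1.0 - np.abs(4.0 * t - 2.0))      # hat function
g0 = h * (circulant(k0) @ f0)                          # g(s) = ∫ k(s - t) f(t) dt, periodic

def add_relative_noise(x, level, rng):
    """Perturb x so that ||x_noisy - x|| / ||x|| equals `level`."""
    noise = rng.standard_normal(x.shape)
    return x + level * np.linalg.norm(x) * noise / np.linalg.norm(noise)

rng = np.random.default_rng(0)
g_delta = add_relative_noise(g0, 0.10, rng)            # 10% relative error in the data
k_eps = add_relative_noise(k0, 0.10, rng)              # 10% relative error in the kernel
```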

Bleyer, Ramlau JKU Linz 22 / 27

slide-36
SLIDE 36

Numerical illustration

[Figure: noisy kernel, solution, and noisy data (top row); approximated kernel, solution function, and computed data (middle row); true function, true kernel, and convolution data (bottom row).]

Bleyer, Ramlau JKU Linz 23 / 27

slide-37
SLIDE 37

Numerical illustration

[Figure: noisy kernel, solution, and noisy data (top row); approximated kernel, solution function, and computed data (middle row); true function, true kernel, and convolution data (bottom row).]

Bleyer, Ramlau JKU Linz 24 / 27

slide-38
SLIDE 38

Outline and future work

Overview

  • Introduction
  • Proposed method: DBL-RTLS
  • Computational aspects
  • Numerical illustration
  • Outline and future work

Bleyer, Ramlau JKU Linz 24 / 27

slide-39
SLIDE 39

Outline and future work

Outline and future work

So far:
  • introduced a method for a nonlinear equation (bilinear operator) with noisy data and inexact operator;
  • proved existence, stability and convergence;
  • studied source conditions and convergence rates (for k and f);
  • suggested an iterative implementation;
  • proved convergence of the AM algorithm to a critical point.

For further work:
  • study variational inequalities;
  • how to choose the best regularization parameter? a priori and a posteriori choices;
  • implementations and numerical experiments (2D).

Bleyer, Ramlau JKU Linz 25 / 27

slide-41
SLIDE 41

Outline and future work

  • M. Burger and S. Osher. Convergence rates of convex variational regularization. Inverse Problems, 20(5):1411–1421, 2004. doi: 10.1088/0266-5611/20/5/005.
  • H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, 2000.
  • G. H. Golub and C. F. Van Loan. An analysis of the total least squares problem. SIAM J. Numer. Anal., 17(6):883–893, 1980.
  • G. H. Golub, P. C. Hansen, and D. P. O’Leary. Tikhonov regularization and total least squares. SIAM J. Matrix Anal. Appl., 21:185–194, 1999.
  • S. Lu, S. V. Pereverzev, and U. Tautenhahn. Regularized total least squares: computational aspects and error bounds. Technical Report 30, RICAM, Linz, Austria, 2007. URL http://www.ricam.oeaw.ac.at/publications/reports/07/rep07-30.pdf.
  • D. L. Phillips. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach., 9:84–97, 1962.
  • E. Resmerita. Regularization of ill-posed problems in Banach spaces: convergence rates. Inverse Problems, 21(4):1303–1314, 2005. doi: 10.1088/0266-5611/21/4/007.
  • A. N. Tikhonov. On the solution of incorrectly put problems and the regularisation method. In Outlines Joint Sympos. Partial Differential Equations (Novosibirsk, 1963), pages 261–265. Acad. Sci. USSR Siberian Branch, Moscow, 1963.
  • S. Van Huffel and J. Vandewalle. The Total Least Squares Problem: Computational Aspects and Analysis, volume 9 of Frontiers in Applied Mathematics. SIAM, Philadelphia, PA, 1991.
  • Y.-L. You and M. Kaveh. A regularization approach to joint blur identification and image restoration. IEEE Transactions on Image Processing, 5(3):416–428, 1996.

Bleyer, Ramlau JKU Linz 26 / 27

slide-42
SLIDE 42

Outline and future work

Thank you for your kind attention!

Bleyer, Ramlau JKU Linz 27 / 27