
SLIDE 1

Regularization of Inverse Problems

Matthias J. Ehrhardt January 28, 2019


SLIDE 6

What is an Inverse Problem?

◮ A : U → V mapping between Hilbert spaces U, V, A ∈ L(U, V)
◮ physical model A, cause u and effect A(u) = Au.
Direct / Forward Problem: given u, calculate Au.
◮ Example 1: ray transform (used in CT, PET, ...)
A : L²(Ω) → L²([0, 2π] × [−1, 1]),  Au(θ, s) = ∫_ℝ u(sθ + tθ⊥) dt

Inverse Problem: Given v, calculate u with Au = v. Infer the cause from the effect.
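The ray transform above admits a direct numerical sketch (an illustration, not from the talk; `ray_transform` is a hypothetical helper): sample u(sθ + tθ⊥) along each ray over the square [−1, 1]² and sum, here with nearest-neighbour interpolation.

```python
import numpy as np

def ray_transform(u, n_angles=60, n_offsets=64):
    """Nearest-neighbour discretization of Au(theta, s) = int u(s*theta + t*theta_perp) dt
    for a square image u on [-1, 1]^2."""
    n = u.shape[0]
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    offsets = np.linspace(-1.0, 1.0, n_offsets)
    ts = np.linspace(-1.5, 1.5, 2 * n)       # integration variable t along the ray
    dt = ts[1] - ts[0]
    sino = np.zeros((n_angles, n_offsets))
    for i, phi in enumerate(angles):
        theta = np.array([np.cos(phi), np.sin(phi)])
        theta_perp = np.array([-np.sin(phi), np.cos(phi)])
        for j, s in enumerate(offsets):
            pts = s * theta + ts[:, None] * theta_perp   # points s*theta + t*theta_perp
            idx = np.round((pts + 1.0) / 2.0 * (n - 1)).astype(int)
            inside = ((idx >= 0) & (idx < n)).all(axis=1)
            sino[i, j] = u[idx[inside, 0], idx[inside, 1]].sum() * dt
    return sino

# demo: for a disc of radius 0.5, the line integral at offset s is the chord length
n = 64
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
disc = (xs**2 + ys**2 <= 0.25).astype(float)
sino = ray_transform(disc, n_angles=4, n_offsets=9)
```

At the central offset s = 0 the chord through the disc has length 1, and rays missing the disc integrate to 0, which the discrete sinogram reproduces up to the step size dt.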

SLIDE 9

What is the problem with Inverse Problems?

Examples: A solution may
◮ not exist: Au = 0, v ≠ 0.
◮ not be unique: Au = 0, v = 0.
◮ be sensitive to noise.

Positron Emission Tomography (PET)
◮ Data: PET scanner in London
◮ Model: ray transform, Au(L) = ∫_L u(r) dr
◮ Find u such that Au = v
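Sensitivity to noise can be seen already in finite dimensions. A small illustrative example (mine, not from the slides): an almost-singular 2×2 system amplifies a data error of size 10⁻³ by roughly four orders of magnitude.

```python
import numpy as np

# An almost-singular 2x2 system: the third Hadamard condition fails in
# practice. A data error of size 1e-3 changes the solution by about 14.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
v = np.array([2.0, 2.0001])              # exact data; the solution is u = (1, 1)
v_noisy = v + np.array([0.0, 1e-3])      # small measurement error

u = np.linalg.solve(A, v)                # approximately (1, 1)
u_noisy = np.linalg.solve(A, v_noisy)    # approximately (-9, 11)
```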


SLIDE 12

What is the problem with Inverse Problems?

Definition (Jacques Hadamard, 1865-1963): An inverse problem “Au = v” is called well-posed, if the solution
(1) exists,
(2) is unique,
(3) depends continuously on the data: “small errors in v lead to small errors in u.”
Otherwise, we call it ill-posed. Almost all interesting inverse problems are ill-posed.

SLIDE 14

Generalized Solutions

Definition: Let v ∈ V. The set of all approximate solutions of “Au = v” is
L := { u ∈ U | ‖Au − v‖ ≤ ‖Az − v‖ ∀z ∈ U }.
If a solution z ∈ U exists, i.e. ‖Az − v‖ = 0, then L = { u ∈ U | Au = v }.

Definition: An approximate solution u† ∈ L is called minimal-norm-solution, if ‖u†‖ ≤ ‖u‖ ∀u ∈ L.
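In finite dimensions the minimal-norm-solution is exactly what `numpy.linalg.pinv` computes. A small illustration (the matrix is my own example): an underdetermined system has infinitely many solutions, and the pseudoinverse picks the one of smallest norm.

```python
import numpy as np

# A maps R^3 -> R^2, so "Au = v" has infinitely many solutions;
# np.linalg.pinv(A) @ v returns the minimal-norm-solution.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
v = np.array([1.0, 1.0])

u_dagger = np.linalg.pinv(A) @ v    # = (1/3, 1/3, 2/3)

# u_dagger solves Au = v ...
assert np.allclose(A @ u_dagger, v)
# ... and has smaller norm than another particular solution, e.g. (1, 1, 0):
assert np.linalg.norm(u_dagger) < np.linalg.norm([1.0, 1.0, 0.0])
```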

SLIDE 19

Properties of Minimal-Norm-Solutions

Recall:
◮ Range / image of A: R_A := {v ∈ V | ∃u ∈ U : Au = v}
◮ Orthogonal complement: A⊥ := {v ∈ V | ⟨v, z⟩ = 0 ∀z ∈ A}
◮ Minkowski sum: A + B := {u + v | u ∈ A, v ∈ B}

Proposition: R_A is closed if and only if R_A + R_A⊥ = V.

Example: A : ℓ² → ℓ², (Au)_j = u_j / j. The range R_A is not closed.

Theorem: Let v ∈ R_A + R_A⊥. Then there exists a unique minimal-norm-solution u of “Au = v”. We write A†v = u.

Theorem: If R_A is not closed, then u does not depend continuously on v, i.e. A† is not continuous.
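The operator (Au)_j = u_j / j from the example can be truncated to N components to see the discontinuity of A† numerically (an illustration, not from the talk): the pseudoinverse acts as (A†v)_j = j·v_j, so an error in the j-th data component is amplified by the factor j, which is unbounded.

```python
import numpy as np

# (Au)_j = u_j / j, truncated to N components, hence (A^† v)_j = j * v_j.
N = 1000
j = np.arange(1, N + 1)

v = 1.0 / j**2                # data in the range of A; u_j = 1/j solves Au = v
noise = np.zeros(N)
noise[-1] = 1e-3              # tiny error in the highest component j = N

u = j * v                     # A^† v
u_noisy = j * (v + noise)     # A^† (v + noise)

amplification = np.linalg.norm(u_noisy - u) / np.linalg.norm(noise)
print(amplification)          # approximately N = 1000
```

The amplification factor grows with the truncation level N, which is the finite-dimensional shadow of A† being unbounded on ℓ².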

SLIDE 23

Regularization

[Figure: spaces U and V with the map A; exact data v with minimal-norm-solution u = A†v; noisy data vδ with A†vδ and Rα vδ.]

Definition: A family {Rα}α>0 is called a regularization of A†, if
◮ for all α > 0 the mapping Rα : V → U is continuous.
◮ for all v ∈ R_A + R_A⊥: lim_{α→0} Rα v = A†v.
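In finite dimensions, one family satisfying both conditions is Rα = (AᵀA + αI)⁻¹Aᵀ, the Tikhonov regularization discussed on the later slides. A quick numerical check (the matrix is chosen for illustration):

```python
import numpy as np

# Each R_alpha = (A^T A + alpha I)^{-1} A^T is linear and bounded, hence
# continuous, and R_alpha v -> A^† v as alpha -> 0.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])     # rank 2, so A^T A (3x3) is singular
v = np.array([1.0, 1.0])
u_dagger = np.linalg.pinv(A) @ v    # minimal-norm-solution A^† v

errors = []
for alpha in [1e-1, 1e-3, 1e-6]:
    R_alpha_v = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ v)
    errors.append(np.linalg.norm(R_alpha_v - u_dagger))
print(errors)   # decreasing towards 0 as alpha -> 0
```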



SLIDE 30

Popular examples of regularization

Tikhonov regularization (Andrey Tikhonov, 1906-1993)
Rα vδ = arg min_u { ‖Au − vδ‖² + α‖u‖² } = (A*A + αI)⁻¹ A* vδ

Proposition: (A*A + αI)⁻¹ ∈ L(U, U) for all α > 0.

Variational regularization
Rα vδ = arg min_u { D(Au, vδ) + α J(u) }
◮ data fit D: a “divergence” with D(x, y) ≥ 0 and D(x, y) = 0 iff x = y.
  Examples: D(x, y) = ‖x − y‖², ‖x − y‖₁, ∫ x − y + y log(y/x)
◮ regularizer J: penalizes unwanted features; ensures stability.
  Examples: J(u) = ‖u‖², ‖u‖₁, TV(u) = ‖∇u‖₁
◮ decouples the solution of the inverse problem into 2 steps:
  1. Modelling: choose D, J, A, α.
  2. Optimization: connection to statistics, machine learning, ...
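A minimal worked instance of variational regularization (my example, assuming A = I, D(x, y) = ‖x − y‖² and J(u) = ‖u‖₁, i.e. pure denoising): the minimizer is known in closed form and is given componentwise by soft-thresholding at α/2.

```python
import numpy as np

def soft_threshold(v, alpha):
    """Minimizer of ||u - v||^2 + alpha * ||u||_1 (variational regularization
    with A = I): componentwise u_i = sign(v_i) * max(|v_i| - alpha/2, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha / 2.0, 0.0)

v_delta = np.array([1.0, -0.2, 0.05, 3.0])   # two small "noise" entries
u = soft_threshold(v_delta, alpha=0.5)
print(u)   # components 0.75, 0, 0, 2.75: small entries removed, large ones shrunk
```

This illustrates why J = ‖·‖₁ penalizes unwanted features: components smaller than the threshold are set exactly to zero, so the reconstruction is sparse.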
SLIDE 31

Summary

◮ Inverse problems
  ◮ forward / direct problem
  ◮ ill-posedness; interesting inverse problems are ill-posed
  ◮ generalized solutions, minimal-norm-solution
◮ Regularization
  ◮ stable approximation of the minimal-norm-solution
  ◮ Tikhonov regularization
  ◮ variational regularization