Regularization of Inverse Problems
Matthias J. Ehrhardt January 28, 2019
What is an Inverse Problem?
◮ A : U → V mapping between Hilbert spaces U, V, A ∈ L(U, V)
◮ physical model A, cause u and effect A(u) = Au.
Direct / Forward Problem: given u, calculate Au.
◮ Example 1: ray transform (used in CT, PET, ...)
A : L2(Ω) → L2([0, 2π] × [−1, 1]), Au(θ, s) = ∫ u(sθ + tθ⊥) dt
[Figure: a ray through the object u with offset sθ and direction θ⊥]
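Not on the slides, but as a concrete illustration: the ray transform can be discretized by summing, for each angle θ, the pixel values of u along rays, binning each pixel into the detector cell that its projection coordinate s falls into. A minimal numpy sketch (nearest-neighbour binning; real CT codes use proper interpolation and ray weights):

```python
import numpy as np

def ray_transform(u, thetas, pixel_size=1.0):
    """Crude discrete ray transform: for each angle theta, project every
    pixel onto the detector coordinate s = <x, theta> and sum along rays."""
    n = u.shape[0]
    c = (n - 1) / 2.0                       # grid centred at the origin
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    sino = np.zeros((len(thetas), n))
    for i, th in enumerate(thetas):
        # s-coordinate of each pixel w.r.t. direction theta = (cos th, sin th)
        s = xs * np.cos(th) + ys * np.sin(th)
        # nearest detector bin for each pixel; clip to the detector range
        bins = np.clip(np.round(s + c).astype(int), 0, n - 1)
        np.add.at(sino[i], bins.ravel(), u.ravel() * pixel_size)
    return sino

u = np.zeros((32, 32))
u[12:20, 12:20] = 1.0                        # square phantom
sino = ray_transform(u, thetas=[0.0, np.pi / 4])
# every pixel is counted once per angle, so the total mass of u
# is preserved in each row of the sinogram
```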
Inverse Problem: given v, calculate u with Au = v. Infer the cause from the effect.
Examples: A solution may
◮ not exist: Au = 0 ∀u, but v ≠ 0.
◮ not be unique: Au = 0 ∀u and v = 0, so every u is a solution.
◮ be sensitive to noise.
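A toy 2×2 matrix (not from the slides) exhibits the first two failure modes at once: a rank-deficient A whose range misses some data v, while other data are hit by infinitely many u.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 0.0]])          # rank 1: range of A is span{(1, 1)}

# No solution: v = (0, 1) is not in the range of A.
v_bad = np.array([0.0, 1.0])
u_ls, *_ = np.linalg.lstsq(A, v_bad, rcond=None)   # best least-squares u
# even the least-squares minimizer does not reproduce v_bad exactly

# Non-unique: v = (1, 1) is solved by every u = (1, t), t arbitrary.
v_ok = np.array([1.0, 1.0])
u1 = np.array([1.0, 0.0])
u2 = np.array([1.0, 5.0])
# both u1 and u2 solve Au = v_ok
```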
Definition (Jacques Hadamard, 1865-1963): An Inverse Problem “Au = v” is called well-posed, if the solution
(1) exists.
(2) is unique.
(3) depends continuously on the data. “Small errors in v lead to small errors in u.”
Otherwise, we call it ill-posed. Almost all interesting inverse problems are ill-posed.
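The third Hadamard condition is the one that usually fails. Using the operator (Au)j = uj/j that appears on a later slide, a finite-dimensional numpy sketch shows how a tiny data error explodes under naive inversion uj = j·vj:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
j = np.arange(1, n + 1)
u_true = np.ones(n)
v = u_true / j                                # (Au)_j = u_j / j
v_noisy = v + 1e-3 * rng.standard_normal(n)   # small data error

u_naive = j * v_noisy                         # naive inversion u_j = j * v_j

err_data = np.linalg.norm(v_noisy - v)        # small
err_sol = np.linalg.norm(u_naive - u_true)    # much larger: noise at index j
                                              # is amplified by the factor j
```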
Definition: Let v ∈ V. The set of all approximate solutions of “Au = v” is
L := {u ∈ U | ‖Au − v‖ ≤ ‖Az − v‖ ∀z ∈ U}.
If a solution z ∈ U exists, i.e. ‖Az − v‖ = 0, then L = {u ∈ U | Au = v}.
u† ∈ L is called the minimal-norm-solution, if ‖u†‖ ≤ ‖u‖ ∀u ∈ L.
Recall:
◮ Range / image of A: RA := {v ∈ V | ∃u ∈ U : Au = v}
◮ Orthogonal complement: A⊥ := {v ∈ V | ⟨v, z⟩ = 0 ∀z ∈ A}
◮ Minkowski sum: A + B := {u + v | u ∈ A, v ∈ B}
Proposition: RA is closed if and only if RA + RA⊥ = V.
Example: A : ℓ2 → ℓ2, (Au)j = uj/j. Range RA not closed.
Theorem: Let v ∈ RA + RA⊥. Then there exists a unique minimal-norm-solution u† of “Au = v”. We write A†v = u†.
Theorem: If RA is not closed, then u† does not depend continuously on v, i.e. A† is not continuous.
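In finite dimensions A† is the Moore-Penrose pseudoinverse, and numpy's `pinv` returns exactly the minimal-norm solution. A small sketch with an underdetermined system (so that “Au = v” has infinitely many exact solutions):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0]])       # 1x3: many exact solutions of Au = v
v = np.array([6.0])

u_dagger = np.linalg.pinv(A) @ v      # minimal-norm solution A† v
# u_dagger solves Au = v exactly ...

u_other = np.array([6.0, 0.0, 0.0])   # another exact solution
# ... but among all exact solutions, u_dagger has the smallest norm
```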
[Figure: A maps u = A†v ∈ U to v ∈ V; the noisy data vδ gives an unstable reconstruction A†vδ, while the regularized reconstruction Rαvδ stays close to u]

Definition: A family {Rα}α>0 is called a regularization of A†, if
◮ for all α > 0 the mapping Rα : V → U is continuous.
◮ for all v ∈ RA + RA⊥: limα→0 Rαv = A†v.
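For a diagonal operator with singular values σj, the Tikhonov family Rα = (A∗A + αI)−1A∗ (introduced on the next slide) acts componentwise as multiplication by σj/(σj² + α). A numpy sketch of the convergence Rαv → A†v on exact data as α → 0:

```python
import numpy as np

n = 10
sigma = 1.0 / np.arange(1, n + 1)     # singular values of a diagonal operator
u_true = np.ones(n)
v = sigma * u_true                    # exact data v = A u_true

def R(alpha, v):
    # Tikhonov family R_alpha = (A*A + alpha I)^{-1} A*, diagonal case:
    # componentwise multiplication by sigma_j / (sigma_j^2 + alpha)
    return sigma / (sigma**2 + alpha) * v

errs = [np.linalg.norm(R(a, v) - u_true) for a in (1e-1, 1e-3, 1e-6)]
# errs decreases monotonically: R_alpha v -> A† v = u_true as alpha -> 0
```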
Tikhonov regularization (Andrey Tikhonov, 1906-1993)
Rαvδ = arg minu { ‖Au − vδ‖² + α‖u‖² } = (A∗A + αI)−1A∗vδ
Proposition: (A∗A + αI)−1 ∈ L(U, U) for all α > 0
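A finite-dimensional sketch of the normal-equations formula, using a Hilbert matrix (not from the slides) as a severely ill-conditioned stand-in for A: naive inversion of slightly noisy data fails badly, while Tikhonov with a small α stabilizes the reconstruction.

```python
import numpy as np

n = 12
i, j = np.mgrid[0:n, 0:n]
A = 1.0 / (i + j + 1.0)               # Hilbert matrix, cond(A) ~ 1e16

rng = np.random.default_rng(0)
u_true = np.ones(n)
v_delta = A @ u_true + 1e-8 * rng.standard_normal(n)   # tiny data noise

u_naive = np.linalg.solve(A, v_delta)                  # noise blows up

alpha = 1e-10                                          # Tikhonov parameter
u_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n),
                        A.T @ v_delta)                 # (A*A + aI)^-1 A* v

err_naive = np.linalg.norm(u_naive - u_true)           # huge
err_tik = np.linalg.norm(u_tik - u_true)               # small
```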
Variational regularization
Rαvδ = arg minu { D(Au, vδ) + αJ(u) }
◮ data fit D: measures the misfit between Au and the data vδ
Examples: D(x, y) = ‖x − y‖², ‖x − y‖₁
◮ regularizer J: penalizes unwanted features; ensures stability
Examples: J(u) = ‖u‖², ‖u‖₁, TV(u) = ‖∇u‖₁
◮ decouples solution of the inverse problem into 2 steps: modelling and optimization
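For A = I (denoising) and the sparsity regularizer J(u) = ‖u‖₁, the variational problem has a closed-form minimizer: soft-thresholding. A numpy sketch on a hypothetical sparse test signal:

```python
import numpy as np

def denoise_l1(v, alpha):
    """Minimizer of 0.5*||u - v||^2 + alpha*||u||_1 (the A = I case):
    componentwise soft-thresholding of v at level alpha."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

rng = np.random.default_rng(0)
u_true = np.zeros(100)
u_true[[10, 40, 70]] = [5.0, -3.0, 4.0]        # sparse signal: three spikes
v_delta = u_true + 0.1 * rng.standard_normal(100)

u_rec = denoise_l1(v_delta, alpha=0.5)
# small noise entries are set exactly to 0 (stability);
# the large spikes survive, shrunk by alpha (bias of the regularizer)
```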
◮ Inverse problems
◮ forward / direct problem ◮ ill-posedness; interesting inverse problems are ill-posed ◮ generalized solutions, minimal-norm-solution
◮ Regularization
◮ stable approximation of minimal-norm-solution ◮ Tikhonov regularization ◮ variational regularization