SLIDE 1

Compressive Strategies for Inverse Problems

Gerd Teschke

joint work with C. Borries and R. Ramlau, M. Zhariy

Institute for Computational Mathematics in Science and Technology, Neubrandenburg University of Applied Sciences, Germany
Konrad-Zuse-Institute Berlin (ZIB), Germany

AIP 2009, Wien

Gerd Teschke (23. Juli 2009) 1/29

SLIDE 2

1. Compressive strategies
2. Compression by adaptive recovery
3. Compression by acceleration
4. Nonlinear sensing and sparse recovery


SLIDE 3

Compressive strategies


SLIDE 4

Inverse problems and sparse recovery: a growing research topic for about a decade, with breakthroughs in many applications; several approaches exist, statistical and deterministic, with numerous generalizations and extensions on the theme.

  • Observation: in practice, numerical schemes are often rather slow

compressive algorithm = reduction of operations and iterations

  • steepest descent, domain decomposition, semi-smooth Newton methods, projection methods [Bredies, Daubechies, Fornasier, Lorenz, Loris, ...]
  • adaptive approximation [Dahmen, Dahlke, DeVore, Fornasier, Raasch, Stevenson, ...]
  • compressive sampling techniques [Candès, Donoho, DeVore, Eldar, Rauhut, ...]


SLIDE 5

Compression by adaptive recovery


SLIDE 6

Solve F x = y; minimize D(x) = ‖F x − y‖².
Often only noisy data y^δ with ‖y^δ − y‖ ≤ δ are available.
Solve the normal equation F*F x = F*y^δ.
Landweber iteration:

x^δ_{m+1} = x^δ_m − γ F*(F x^δ_m − y^δ)
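For a finite-dimensional linear F, the Landweber iteration above is only a few lines of code. In this minimal sketch the operator, the data, and the step size γ are illustrative choices, not the talk's example:

```python
import numpy as np

# Minimal sketch of the Landweber iteration
#   x_{m+1} = x_m - gamma * F^T (F x_m - y^delta)
# for a small linear least-squares problem; F, the data and gamma are
# illustrative choices.
rng = np.random.default_rng(0)
F = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y_delta = F @ x_true                     # noise-free data, for simplicity

gamma = 1.0 / np.linalg.norm(F, 2) ** 2  # gamma <= 1/||F||^2 keeps the iteration stable
x = np.zeros(5)
for _ in range(5000):
    x = x - gamma * F.T @ (F @ x - y_delta)

print(np.linalg.norm(F @ x - y_delta))   # residual is driven towards zero
```

With noisy data one would stop the iteration early (e.g. by a discrepancy principle) instead of running it to convergence.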


SLIDE 7

Preassigned system of functions (frame) {φ_λ : λ ∈ Λ} ⊂ H.
Associated analysis and synthesis operators:

A : X → ℓ2 via x ↦ x = {⟨x, φ_λ⟩}_{λ∈Λ}
A* : ℓ2 → X via x ↦ Σ_{λ∈Λ} x_λ φ_λ

Discretized normal equation: with S = A F*F A*, x = A*x and y^δ = A F*y^δ,

S x = y^δ

Corresponding sequence-space Landweber iteration:

x^δ_{m+1} = x^δ_m − β (S x^δ_m − y^δ)


SLIDE 8

Assume that we have the following three routines at our disposal:

RHS_ε[g] → g_ε. This routine determines a finitely supported g_ε ∈ ℓ2 satisfying ‖g_ε − A F*g‖ ≤ ε.

APPLY_ε[f] → w_ε. This routine determines, for a finitely supported f ∈ ℓ2 and the (infinite) matrix S, a finitely supported w_ε satisfying ‖w_ε − S f‖ ≤ ε.

COARSE_ε[f] → f_ε. This routine creates, for a finitely supported f ∈ ℓ2, a vector f_ε by replacing all but N coefficients of f by zeros such that ‖f_ε − f‖ ≤ ε.
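As an illustration, a COARSE-type routine can be sketched in a few lines: it zeroes the smallest-magnitude coefficients as long as the discarded tail has norm at most ε. This is a minimal sketch; the adaptive-wavelet versions of COARSE add complexity bookkeeping that is omitted here:

```python
import numpy as np

# Illustrative sketch of a COARSE-type routine: zero out the
# smallest-magnitude coefficients of a finitely supported f as long as the
# discarded tail has norm at most eps.
def coarse(f, eps):
    order = np.argsort(np.abs(f))           # indices, ascending by magnitude
    tail = np.cumsum(f[order] ** 2)         # energy of the smallest entries
    n_drop = np.searchsorted(tail, eps ** 2, side="right")
    f_eps = f.copy()
    f_eps[order[:n_drop]] = 0.0
    return f_eps

f = np.array([0.01, 3.0, -0.02, 1.5, 0.005])
f_eps = coarse(f, 0.1)                      # only the large coefficients survive
```

By construction ‖f_ε − f‖ ≤ ε, and all remaining coefficients are the largest ones in magnitude.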


SLIDE 9

This leads to an inexact/approximate variant of the sequence-space iteration:

x̃^δ_{m+1} = COARSE_{r^δ(m)}[ x̃^δ_m − β APPLY_{r^δ(m)}[x̃^δ_m] + β RHS_{r^δ(m)}[y^δ] ].

The subscript r^δ(m) relates the accuracy to the iteration index through a refinement strategy r^δ : N → N. A proper choice of the refinement strategy r^δ(m) enables convergence and regularization results.
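The inexact iteration can be mimicked on a toy finite-dimensional problem. In the sketch below, APPLY and RHS are emulated by exact matrix-vector products, and the coarsening is a simple hard thresholding with a tolerance that shrinks with the iteration index, standing in for a refinement strategy; all sizes and constants are illustrative:

```python
import numpy as np

# Toy sketch of the inexact sequence-space Landweber step
#   x_{m+1} = COARSE[ x_m - beta*APPLY[x_m] + beta*RHS[y] ],
# with APPLY/RHS emulated by exact products and the compression realized
# by hard thresholding with a shrinking tolerance.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
S = A.T @ A                        # plays the role of S = A F*F A*
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.0]       # sparse coefficient sequence
y = S @ x_true                     # plays the role of the discretized data

beta = 1.0 / np.linalg.norm(S, 2)
x = np.zeros(8)
for m in range(500):
    z = x - beta * (S @ x) + beta * y   # inexact Landweber step
    tol = 1.0 / (m + 1)                 # refinement: tolerance shrinks with m
    z[np.abs(z) < tol / 8] = 0.0        # COARSE: drop tiny coefficients
    x = z

print(np.linalg.norm(x - x_true))
```

Because the tolerances decrease along the iteration, the perturbation introduced by the coarsening vanishes and the iterates still approach the exact solution.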


SLIDE 10

A-posteriori parameter rules are preferable. Several rules require a frequent evaluation of the residual discrepancy ‖F x^δ_m − y^δ‖.

Definition (approximate residual). For some x ∈ ℓ2 and some integer m ≥ 0 the approximate residual discrepancy RES is defined by

(RES_m[x, y])² := ⟨APPLY_m[x], x⟩ − 2 ⟨RHS_m[y], x⟩ + ‖y‖².   (1)
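A quick finite-dimensional sanity check of definition (1): when APPLY and RHS are evaluated exactly (here with A = I, so S = F^T F and RHS[y] = F^T y) and the last term is read as the data norm ‖y‖², the right-hand side of (1) reproduces ‖F x − y‖² exactly. Sizes and data below are illustrative:

```python
import numpy as np

# Sanity check of (1): with exact APPLY[x] = S x and RHS[y] = F^T y (A = I),
#   <S x, x> - 2 <F^T y, x> + ||y||^2  ==  ||F x - y||^2.
rng = np.random.default_rng(2)
F = rng.standard_normal((15, 6))
x = rng.standard_normal(6)
y = rng.standard_normal(15)

S = F.T @ F
res_sq = (S @ x) @ x - 2 * (F.T @ y) @ x + y @ y
print(res_sq - np.linalg.norm(F @ x - y) ** 2)   # zero up to rounding
```

The point of (1) is that the same quantity can be evaluated with the inexact routines APPLY_m and RHS_m, avoiding a full application of F.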


SLIDE 11

Lemma (monotone decay). Let δ > 0, 0 < c < 1, 0 < β < 2/(3‖S‖) and m_0 ≥ 1. If there exists for 0 ≤ m ≤ m_0 a refinement strategy r^δ(m) such that the approximate discrepancies RES_{r^δ(m)}[x̃^δ_m, y^δ] fulfill

c (RES_{r^δ(m)}[x̃^δ_m, y^δ])² > (δ² + C_{r^δ(m)}(x̃^δ_m)) / (1 − (3/2) β ‖S‖),   (2)

then, for 0 ≤ m ≤ m_0, the approximation errors ‖x̃^δ_m − x†‖ decrease monotonically.


SLIDE 12

Updating rule for the refinement strategy:

U(i) Let r(0) be the smallest integer ≥ 0 with

c (RES_{r(0)}[f̃_0, g])² ≥ C_{r(0)}(f̃_0) / (1 − (3/2) β ‖S‖);   (3)

if no r(0) with (3) exists, stop the iteration and set m_0 = 0.

U(ii) If for m ≥ 1

c (RES_{r(m−1)}[f̃_m, g])² ≥ C_{r(m−1)}(f̃_m) / (1 − (3/2) β ‖S‖),   (4)

set r(m) = r(m − 1).

U(iii) If

c (RES_{r(m−1)}[f̃_m, g])² < C_{r(m−1)}(f̃_m) / (1 − (3/2) β ‖S‖),   (5)

set r(m) = r(m − 1) + j, where j is the smallest integer with

c (RES_{r(m−1)+j}[f̃_m, g])² ≥ C_{r(m−1)+j}(f̃_m) / (1 − (3/2) β ‖S‖).   (6)

U(iv) If there is no integer j with (6), stop the iteration and set m_0 = m.

SLIDE 13

Theorem (regularization result). Let x† denote the solution of the inverse problem and let x† be the sequence of associated frame coefficients, i.e. x† = A*x†. Suppose x̃_m is computed by the inexact Landweber iteration with exact data y in combination with the above updating rule for the refinement strategy r. Then, for x̃_0 arbitrarily chosen, the sequence x̃_m converges in norm towards x†, i.e.

lim_{m→∞} ‖x̃_m − x†‖ = 0.


SLIDE 14

Compression by acceleration


SLIDE 15

So far: sparse recovery by an adaptive variant of

x_{n+1} = x_n + γ F*(y − F x_n)

Directly involving sparsity (soft shrinkage S_α):

x_{n+1} = S_α( x_n + γ F*(y − F x_n) )

Extension to nonlinear operators:

x_{n+1} = S_α( x_n + γ F′(x_{n+1})*(y − F(x_n)) )

This is a rather slow and numerically intensive iteration. Consider instead

min { D(x) = ‖F(x) − y‖² } on B_R := {x ∈ ℓ2 : ‖x‖_1 ≤ R}

with the projected iteration

x_{n+1} = P_R( x_n + γ_n F*(y − F x_n) ).
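The thresholded Landweber iteration for a linear F can be sketched as follows; S_α is componentwise soft shrinkage, and the operator, the sparse ground truth and the parameters α, γ are illustrative choices:

```python
import numpy as np

# Minimal sketch of the thresholded Landweber iteration
#   x_{n+1} = S_alpha(x_n + gamma * F^T (y - F x_n)),
# with S_alpha the componentwise soft shrinkage.
def soft_shrink(x, alpha):
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

rng = np.random.default_rng(3)
F = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]              # sparse truth
y = F @ x_true

gamma = 1.0 / np.linalg.norm(F, 2) ** 2
alpha = 0.01
x = np.zeros(10)
for _ in range(2000):
    x = soft_shrink(x + gamma * F.T @ (y - F @ x), gamma * alpha)

print(np.linalg.norm(x - x_true))          # close to x_true for small alpha
```

The shrinkage pushes small coefficients to exactly zero, which is what makes the recovered vector sparse.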


SLIDE 16

Lemma (necessary condition). If the vector x̃_R ∈ ℓ2 is a minimizer of D(x) = ‖F(x) − y‖² on B_R, then for any γ > 0,

P_R( x̃_R + γ F′(x̃_R)*(y − F(x̃_R)) ) = x̃_R,

which is equivalent to

⟨F′(x̃_R)*(y − F(x̃_R)), w − x̃_R⟩ ≤ 0 for all w ∈ B_R.
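The projection P_R onto the ℓ1-ball B_R can be realized by soft thresholding at a data-dependent level, e.g. via the standard sorting-based scheme of Duchi et al.; this is one common realization, not necessarily the authors' routine:

```python
import numpy as np

# Projection onto the l1-ball {x : ||x||_1 <= R} via soft thresholding at a
# data-dependent level tau (sorting-based scheme, cf. Duchi et al. 2008).
def project_l1(x, R):
    if np.abs(x).sum() <= R:
        return x.copy()                     # already inside the ball
    u = np.sort(np.abs(x))[::-1]            # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - R)[0][-1]
    tau = (css[k] - R) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

p = project_l1(np.array([3.0, -1.0, 0.5]), 2.0)
# p lies on the boundary of the ball: ||p||_1 == 2
```

For x = (3, −1, 0.5) and R = 2 the threshold becomes τ = 1 and the projection is (2, 0, 0), which satisfies the optimality conditions of the projection problem.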


SLIDE 17

Define r := max{ 2 sup_{x∈B_R} ‖F′(x)‖², 2L √D(x_0) }.

Reason: ‖F(x_{n+1}) − F(x_n)‖² ≤ (r/2) ‖x_{n+1} − x_n‖².

The sequence {β_n}_{n∈N} satisfies Condition (B) if there exists n_0 such that:

(B1) β̄ := sup{β_n : n ∈ N} < ∞ and inf{β_n : n ∈ N} ≥ 1
(B2) β_n ‖F(x_{n+1}) − F(x_n)‖² ≤ (r/2) ‖x_{n+1} − x_n‖² for all n ≥ n_0
(B3) β_n L √D(x_n) ≤ r/2.


SLIDE 18

Lemma (surrogate functional technique, γ = β/r). Assume F to be twice Fréchet differentiable and β ≥ 1. For arbitrary fixed x ∈ B_R assume β L √D(x) ≤ r/2 and define the functional Φ_β(·, x) by

Φ_β(w, x) := ‖F(w) − y‖² − ‖F(w) − F(x)‖² + (r/β) ‖w − x‖².   (7)

Then there exists a unique w ∈ B_R that minimizes the restriction of Φ_β(·, x) to B_R. This minimizer, denoted ŵ, is given by

ŵ = P_R( x + (β/r) F′(ŵ)*(y − F(x)) ).


SLIDE 19

Lemma (contraction). Assume β L √D(x) ≤ r/2. Then the map

Ψ(ŵ) := P_R( x + (β/r) F′(ŵ)*(y − F(x)) )

is contractive; hence the associated fixed point iteration converges to a unique fixed point.
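The contraction property suggests computing ŵ by a plain fixed point iteration. A toy sketch with a mildly nonlinear map; the map F, the parameters β, r, R and the projection routine are illustrative assumptions, not the talk's setting:

```python
import numpy as np

# Sketch of the inner fixed-point iteration for the implicit step
#   w <- P_R( x + (beta/r) * F'(w)^T (y - F(x)) )
# with a toy nonlinear map F(x) = A(x + 0.1 x^2).
def project_l1(x, R):
    # projection onto the l1-ball of radius R (sorting-based scheme)
    if np.abs(x).sum() <= R:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - R)[0][-1]
    tau = (css[k] - R) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(4)
A = rng.standard_normal((12, 4)) / 4.0

def F(x):
    return A @ (x + 0.1 * x ** 2)

def Fprime_T(x, v):                 # action of F'(x)^T on v
    return (1.0 + 0.2 * x) * (A.T @ v)

x = np.array([0.5, -0.2, 0.1, 0.0])
y = F(np.array([0.3, 0.0, -0.1, 0.2]))
beta, r, R = 1.0, 10.0, 1.0

w = x.copy()
for _ in range(50):                 # contractive: w stabilizes quickly
    w = project_l1(x + (beta / r) * Fprime_T(w, y - F(x)), R)
```

Since P_R is nonexpansive and the inner map depends on ŵ only through F′(ŵ)*, the small factor β/r makes the composition a contraction here.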


SLIDE 20

Lemma (monotone decay). Let the iteration now be given by

x_{n+1} = P_R( x_n + (β_n/r) F′(x_{n+1})*(y − F(x_n)) ),

where the β_n satisfy Condition (B) with respect to {x_n}_{n∈N}. Then the sequence D(x_n) is monotonically decreasing and lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.


SLIDE 21

Theorem (weak convergence). If x⋆ is a weak accumulation point of {x_n}_{n∈N}, then it fulfills the necessary condition for a minimum of D(x) on B_R, i.e. for all w ∈ B_R,

⟨F′(x⋆)*(y − F(x⋆)), w − x⋆⟩ ≤ 0.


SLIDE 22

Theorem (norm convergence of subsequences). Under the same technical assumptions as in the last theorem, there exists a subsequence {x_{n_l}}_{l∈N} ⊂ {x_n}_{n∈N} that converges in norm towards the weak accumulation point x⋆, i.e.

lim_{l→∞} ‖x_{n_l} − x⋆‖ = 0.

SLIDE 23

Projected Steepest Descent Method for nonlinear inverse problems

Given: operator F, its derivative F′(x), data y, some initial guess x_0, and R (sparsity constraint: ℓ1-ball B_R)

Initialization: r = max{ 2 sup_{x∈B_R} ‖F′(x)‖², 2L √D(x_0) }; set q = 0.9 (as an example)

Iteration: for n = 0, 1, 2, . . . until a preassigned precision / maximum number of iterations is reached:

1. β_n = max{ sup_{x∈B_R} ‖F′(x)‖² / (L √D(x_n)), √(D(x_0) / D(x_n)) }

2. x_{n+1} = P_R( x_n + (β_n/r) F′(x_{n+1})*(y − F(x_n)) ), computed by the fixed point iteration

3. verify (B2): β_n ‖F(x_{n+1}) − F(x_n)‖² ≤ (r/2) ‖x_{n+1} − x_n‖²;
   if (B2) is satisfied, increase n and go to 1; otherwise set β_n = q · β_n and go to 2.

end
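For a linear operator F the implicit step 2 becomes explicit and the scheme reduces to a projected gradient iteration. A minimal sketch with constant β_n = 1; the toy operator, the radius R and the omission of the adaptive step-size control in steps 1 and 3 are all illustrative simplifications:

```python
import numpy as np

# Projected steepest descent for a *linear* F with beta_n = 1:
#   x_{n+1} = P_R( x_n + (1/r) * F^T (y - F x_n) ),  r = 2 sup ||F'||^2.
def project_l1(x, R):
    # projection onto the l1-ball of radius R (sorting-based scheme)
    if np.abs(x).sum() <= R:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - R)[0][-1]
    tau = (css[k] - R) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(5)
F = rng.standard_normal((25, 6))
x_true = np.zeros(6)
x_true[[0, 3]] = [0.6, -0.3]
y = F @ x_true
R = np.abs(x_true).sum()           # radius chosen so x_true is feasible

r = 2 * np.linalg.norm(F, 2) ** 2  # r = 2 sup ||F'||^2 in the linear case
x = np.zeros(6)
for n in range(500):
    x = project_l1(x + (1.0 / r) * F.T @ (y - F @ x), R)

print(np.linalg.norm(x - x_true))
```

With noiseless data and x_true feasible, the constrained minimizer is x_true itself, and the iterates converge to it.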

SLIDE 24

Nonlinear sensing and sparse recovery


SLIDE 25

Nonlinear sensing and recovery:

Reconstruction space A ⊂ X spanned by Ψ = {a_λ : λ ∈ Λ}, with associated analysis and synthesis operators A : f ↦ {⟨f, a_λ⟩}_{λ∈Λ} and A* : x ↦ Σ_{λ∈Λ} x_λ a_λ.

Assume f has a sparse expansion in A.

Nonlinear sensing model: M : f ↦ M(f) = |f|_ε := √(f² + ε²) and

S M(f) = {⟨s(· − n T_s), M(f)⟩_Y}_{n∈Z},

where Σ = {s(· − n T_s) : n ∈ Z} forms a frame as well.

Goal: reconstruct f from its samples y = (S ∘ M ∘ A*)(x).
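The smoothed modulus M(f) = |f|_ε can be illustrated directly; the pointwise subsampling below merely stands in for the frame sampling operator S, and the signal, grid and ε are illustrative choices:

```python
import numpy as np

# Toy sketch of the nonlinear sensing map M(f) = |f|_eps = sqrt(f^2 + eps^2),
# sampled pointwise as a stand-in for the frame sampling operator S.
eps = 0.05
t = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * 3 * t)          # a signal with sign changes

Mf = np.sqrt(f ** 2 + eps ** 2)        # smooth surrogate of |f|
samples = Mf[::10]                     # plays the role of S M(f)
# M is differentiable everywhere, unlike |f|, yet deviates from it by at
# most eps (the maximum deviation occurs at the zeros of f).
print(np.max(np.abs(Mf - np.abs(f))))
```

The smoothing is what makes derivative-based recovery schemes, such as the projected steepest descent above, applicable to this modulus-type sensing model.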


SLIDE 26 - SLIDE 28

(Figure-only slides on nonlinear sensing and sparse recovery; no text content is recoverable.)

SLIDE 29

Thank You for Your Attention!
