CoSaMP: Iterative signal recovery from incomplete and inaccurate samples (PowerPoint presentation)



SLIDE 1

CoSaMP

Iterative signal recovery from incomplete and inaccurate samples

Joel A. Tropp
Applied & Computational Mathematics
California Institute of Technology
jtropp@acm.caltech.edu

Joint with D. Needell (UC-Davis).

Research supported in part by NSF and DARPA

SLIDE 2

The Sparsity Heuristic

A sparse signal has fewer degrees of freedom than its nominal dimension

[Figures: a sparse signal and a nearly sparse signal]

CoSaMP (DIMACS, 26 March 2009) 2

SLIDE 3

Example: Wavelet Sparsity

Courtesy of J. Romberg

SLIDE 4

Example: Time–Frequency Sparsity

[Figure: time–frequency plane; axes Time (µs) vs. Frequency (MHz)]

Data provided by L3 Communications

SLIDE 5

Quantifying Sparsity

❧ Let {ψ_k : k = 1, 2, . . . , N} be an orthobasis for R^N
❧ The coefficients of x with respect to the basis are f_k = ⟨x, ψ_k⟩ for k = 1, 2, . . . , N
❧ The signal is s-sparse when #{k : f_k ≠ 0} ≤ s
❧ Generalization: the signal is p-compressible with magnitude R if |f|_(k) ≤ R · k^(−1/p) for k = 1, 2, . . . , N, where |f|_(k) denotes the k-th largest coefficient magnitude
❧ p-compressible is slightly weaker than “in ℓ_p” for each p > 0
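These definitions are easy to spot-check numerically. A minimal pure-Python sketch (the helper names are illustrative, not from the talk):

```python
# Spot-check the two definitions on this slide (helper names are
# illustrative, not from the talk).

def is_s_sparse(f, s):
    """s-sparse: #{k : f_k != 0} <= s."""
    return sum(1 for fk in f if fk != 0) <= s

def is_p_compressible(f, R, p):
    """p-compressible with magnitude R: |f|_(k) <= R * k**(-1/p),
    where |f|_(k) is the k-th largest coefficient magnitude."""
    mags = sorted((abs(fk) for fk in f), reverse=True)
    return all(m <= R * k ** (-1.0 / p) for k, m in enumerate(mags, start=1))

f = [0.0, 3.0, 0.0, -1.5, 0.0, 0.25]
print(is_s_sparse(f, 3))                  # True: only three nonzero entries
print(is_p_compressible(f, R=3.0, p=1))   # True: magnitudes decay like 3/k
```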

SLIDE 6

Approximating Compressible Signals

❧ Consider a signal p-compressible w.r.t. the standard basis:
      |x|_(k) ≤ R · k^(−1/p) for k = 1, 2, 3, . . .
❧ Approximating x by its s largest terms gives error
      ‖x − x_s‖_2 ≤ R · ( Σ_{k>s} k^(−2/p) )^(1/2)
                  ≈ R · ( ∫_s^∞ u^(−2/p) du )^(1/2)
                  ≈ R · s^(1/2 − 1/p)
❧ Compressible signals are well approximated by sparse signals
❧ Fundamental idea behind transform coding
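The chain of estimates can be verified numerically for the worst-case signal |x|_(k) = R · k^(−1/p). A small sketch (parameters are illustrative; the integral estimate carries the constant (2/p − 1)^(−1/2), which the ≈ on the slide absorbs):

```python
import math

# Worst-case p-compressible decay |x|_(k) = R * k**(-1/p); parameters are
# illustrative only.
R, p, s, N = 1.0, 0.5, 10, 100_000

# Exact tail: ||x - x_s||_2 = R * (sum_{k>s} k**(-2/p)) ** 0.5
exact = R * sum(k ** (-2.0 / p) for k in range(s + 1, N + 1)) ** 0.5

# Integral estimate: R * (int_s^inf u**(-2/p) du)^(1/2)
#                  = R * s**(1/2 - 1/p) / sqrt(2/p - 1)
estimate = R * s ** (0.5 - 1.0 / p) / math.sqrt(2.0 / p - 1.0)

print(abs(exact / estimate - 1) < 0.1)  # True: the two agree to within ~10%
```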

SLIDE 7

Counting Bits

❧ Consider the class of 0–1 signals in R^N with exactly s ones
❧ Clearly need at least log_2 (N choose s) bits to distinguish signals
❧ By Stirling’s approximation, about s log(N/s) bits
❧ When s ≪ N, signals contain much less information than the ambient dimension suggests
❧ A simple adaptive coding scheme can achieve this rate
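The count is easy to verify with exact binomial coefficients. A quick sketch (N and s are arbitrary illustrative values):

```python
import math

# Exact bit count vs. the Stirling-based rate (N and s are illustrative).
N, s = 10_000, 100
exact_bits = math.log2(math.comb(N, s))   # log2 of (N choose s)
stirling_rate = s * math.log2(N / s)      # s * log2(N/s)

# Both are far below the N bits suggested by the ambient dimension.
print(exact_bits < N, stirling_rate < exact_bits)
```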

SLIDE 8

What is a Sample?

❧ A sample is the value of a linear functional applied to the signal
❧ Examples:
    ❧ CCD: point intensity of an image
    ❧ ADC: voltage of an electrical signal at a point in time
    ❧ MRI: frequency in the 2D Fourier transform of an image
    ❧ CAT: line integral of density in one direction
❧ Some of these technologies acquire samples in batches
❧ We wish to acquire signals with as few samples as possible

SLIDE 9

Compressive Sampling and Signal Recovery

❧ Design linear sampling operator Φ : C^N → C^m
❧ Suppose x is an unknown (compressible) signal in C^N
❧ Collect noisy samples u = Φx + e
❧ Problem: Given samples u, approximate x
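In code, the sampling model is a single matrix–vector product plus noise. A pure-Python sketch with a Gaussian Φ, real-valued for simplicity (dimensions and noise level are illustrative):

```python
import math, random

random.seed(0)
N, m, s = 256, 64, 4   # ambient dimension, number of samples, sparsity

# Sampling operator Phi : R^N -> R^m (Gaussian entries, scaled by 1/sqrt(m))
Phi = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(N)] for _ in range(m)]

# Unknown s-sparse signal x
x = [0.0] * N
for k in random.sample(range(N), s):
    x[k] = random.gauss(0, 1)

# Noisy samples u = Phi x + e
e = [0.01 * random.gauss(0, 1) for _ in range(m)]
u = [sum(Phi[i][j] * x[j] for j in range(N)) + e[i] for i in range(m)]

print(len(u))  # 64 samples -- far fewer than the ambient dimension N = 256
```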

SLIDE 10

Restricted Isometries

❧ Abstract property of the sampling operator supports efficient sampling
❧ Φ has the restricted isometry property of order 2s when
      (1 − c) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + c) ‖x‖_2^2  whenever ‖x‖_0 ≤ 2s
❧ Φ preserves the geometry of s-sparse signals (take x = y − z)
❧ W.h.p., a Gaussian sampling operator has RIP(2s) when m ≥ Cs log(N/s)
❧ Gaussian matrices are practically useless: dense and unstructured, so storage and matrix–vector multiplies do not scale

References: [Candès–Tao 2006, Rudelson–Vershynin 2006]
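The RIP can be probed empirically, though not certified this way, since the property quantifies over all 2s-sparse vectors. A pure-Python spot check on random sparse inputs (dimensions are illustrative):

```python
import math, random

random.seed(1)
N, m, s = 128, 60, 3

# Gaussian sampling operator, scaled so that E ||Phi x||^2 = ||x||^2
Phi = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(N)] for _ in range(m)]

def energy_ratio(x):
    """||Phi x||_2^2 / ||x||_2^2 -- near 1 when Phi acts like an isometry on x."""
    y = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(m)]
    return sum(v * v for v in y) / sum(v * v for v in x)

ratios = []
for _ in range(50):
    x = [0.0] * N
    for k in random.sample(range(N), 2 * s):   # random 2s-sparse input
        x[k] = random.gauss(0, 1)
    ratios.append(energy_ratio(x))

# Empirically (1 - c) <= ratio <= (1 + c) for a modest constant c
print(min(ratios), max(ratios))
```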

SLIDE 11

Practical Sampling Operators

❧ Partial Fourier matrices [CRT 2006]
    ❧ Each row of Φ is chosen at random from the rows of the unitary DFT F_N
❧ Random demodulator [Rice DSP 2006]
      Φ = H · D · F_N
   where H is the m × N matrix whose rows of 1s sum consecutive blocks of entries, D is the N × N diagonal matrix of random signs ±1, and F_N is the unitary DFT
❧ W.h.p., both have RIP(2s) when m ≥ Cs log^α N
❧ Certain technologies can acquire these samples efficiently
❧ Fast matrix–vector multiplies!
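The random demodulator can be assembled directly from its three factors: a block-summing matrix of 1s, a diagonal ±1 matrix, and the unitary DFT F_N. A tiny pure-Python sketch (sizes are illustrative, and N is taken as a multiple of m):

```python
import cmath, random

random.seed(2)
N, m = 8, 4                 # illustrative sizes; N a multiple of m
block = N // m

# F: unitary N x N DFT
F = [[cmath.exp(-2j * cmath.pi * j * k / N) / N ** 0.5 for k in range(N)]
     for j in range(N)]

# Diagonal random-sign matrix, kept as a vector of +/-1
signs = [random.choice((-1, 1)) for _ in range(N)]

# Summing matrix of 1s: row i adds the entries j in block i, so the product
# (sum matrix) * (sign diagonal) * F collapses to:
Phi = [[sum(signs[j] * F[j][k] for j in range(i * block, (i + 1) * block))
        for k in range(N)]
       for i in range(m)]

print(len(Phi), len(Phi[0]))  # 4 8 -- an m x N operator with fast structure
```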

SLIDE 12

Desiderata for Recovery Algorithm

❧ Works for general sampling schemes
❧ Succeeds with minimal number of samples
❧ Tolerates noise in samples
❧ Produces approximations with optimal error bound
❧ Yields rigorous guarantees on resource requirements
❧ Exploits structured sampling matrices

SLIDE 13

CoSaMP(Φ, u, s)
Input: sampling operator Φ, noisy sample vector u, sparsity level s
Output: an s-sparse approximation a of the target signal

k = 0, a^0 = 0                        { Initialization }
while halting criterion false
    v ← u − Φ a^k                     { Update samples }
    y ← Φ* v                          { Form signal proxy }
    Ω ← supp(y_{2s})                  { Identification }
    T ← Ω ∪ supp(a^k)                 { Merge supports }
    b|_T ← Φ_T† u                     { Signal estimation by least squares }
    b|_{T^c} ← 0
    a^{k+1} ← b_s                     { Prune to obtain next approximation }
    k ← k + 1
end while
a ← a^k                               { Return final approximation }
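The pseudocode translates almost line for line into code. A minimal pure-Python sketch for real signals, using a fixed iteration count as the halting criterion and solving the least-squares step via the normal equations (dimensions are illustrative; this is a sketch, not the talk's implementation):

```python
import math, random

random.seed(3)
N, m, s = 64, 40, 2

Phi = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(N)] for _ in range(m)]
x = [0.0] * N
x[5], x[40] = 2.0, -1.0                  # 2-sparse target signal
u = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(m)]  # noiseless samples

def solve(A, rhs):
    """Gauss-Jordan elimination for the small normal equations A z = rhs."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

a = [0.0] * N
for _ in range(10):                                         # halting: fixed count
    v = [u[i] - sum(Phi[i][j] * a[j] for j in range(N)) for i in range(m)]  # update samples
    y = [sum(Phi[i][j] * v[i] for i in range(m)) for j in range(N)]         # proxy Phi* v
    omega = sorted(range(N), key=lambda j: -abs(y[j]))[:2 * s]              # identification
    T = sorted(set(omega) | {j for j in range(N) if a[j] != 0.0})           # merge supports
    cols = [[Phi[i][j] for j in T] for i in range(m)]
    A = [[sum(cols[i][r] * cols[i][c] for i in range(m)) for c in range(len(T))]
         for r in range(len(T))]                             # Phi_T^T Phi_T
    rhs = [sum(cols[i][r] * u[i] for i in range(m)) for r in range(len(T))]
    bT = solve(A, rhs)                                       # least squares on T
    b = [0.0] * N
    for j, val in zip(T, bT):
        b[j] = val
    keep = set(sorted(range(N), key=lambda j: -abs(b[j]))[:s])  # prune to s terms
    a = [b[j] if j in keep else 0.0 for j in range(N)]

err = math.sqrt(sum((xi - ai) ** 2 for xi, ai in zip(x, a)))
print(err)  # small: the noiseless 2-sparse signal is recovered
```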

SLIDE 14

Cost per Iteration

❧ Update samples and form signal proxy: v ← u − Φ a^k and y ← Φ* v
    ❧ One matrix–vector multiplication each
❧ Signal approximation by least squares: b|_T ← Φ_T† u
    ❧ Use conjugate gradient to apply the pseudoinverse
    ❧ Each CG iteration requires two matrix–vector multiplies
    ❧ Assuming RIP(2s), a constant number of CG iterations suffices for fixed accuracy
❧ Constant number of matrix–vector multiplies per CoSaMP iteration!
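The cost accounting can be seen in a standalone sketch: conjugate gradient on the normal equations applies Φ_T† using exactly two matrix–vector multiplies per iteration, one with Φ_T and one with its transpose (pure Python; sizes are illustrative):

```python
import math, random

random.seed(4)
m, t = 30, 5                     # number of samples, merged support size |T|

PhiT = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(t)] for _ in range(m)]
u = [random.gauss(0, 1) for _ in range(m)]

def mv(M, v):
    """Plain matrix-vector multiply."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

PhiT_t = [list(col) for col in zip(*PhiT)]      # transpose of Phi_T

# CG on (Phi_T^T Phi_T) b = Phi_T^T u, i.e. applying the pseudoinverse
b = [0.0] * t
r = mv(PhiT_t, u)                # initial residual (b = 0)
p = r[:]
rs = sum(v * v for v in r)
for _ in range(2 * t):
    if rs < 1e-20:               # converged
        break
    Ap = mv(PhiT_t, mv(PhiT, p)) # two matrix-vector multiplies per iteration
    alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
    b = [bi + alpha * pi for bi, pi in zip(b, p)]
    r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
    rs_new = sum(v * v for v in r)
    p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
    rs = rs_new

# b solves min ||Phi_T b - u||_2: the normal-equations gradient vanishes
grad = mv(PhiT_t, [yi - ui for yi, ui in zip(mv(PhiT, b), u)])
print(max(abs(g) for g in grad))  # near 0
```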

SLIDE 15

Performance Guarantee

Theorem 1 [CoSaMP]. Suppose that
❧ the sampling matrix Φ has RIP(2s),
❧ the sample vector u = Φx + e,
❧ η is a precision parameter,
❧ L bounds the cost of a matrix–vector multiply with Φ or Φ*.
Then CoSaMP produces a 2s-sparse approximation a such that
      ‖x − a‖_2 ≤ C · max{ η, (1/√s) ‖x − x_s‖_1 + ‖e‖_2 }
with execution time O(L · log(‖x‖_2 / η)).

❧ Need m ≥ Cs log^α N samples for the restricted isometry hypothesis

SLIDE 16

Error Bound for Compressible Signals

Corollary 2 [Compressible signals]. Suppose that
❧ the sampling matrix Φ has RIP(2s),
❧ the signal x is p-compressible with magnitude R,
❧ the sample vector u = Φx + e,
❧ L bounds the cost of a matrix–vector multiply with Φ or Φ*.
Then CoSaMP produces a 2s-sparse approximation a such that
      ‖x − a‖_2 ≤ C [ R · p^(−1) · s^(1/2 − 1/p) + ‖e‖_2 ]
with execution time O(L · p^(−1) · log s).

SLIDE 17

To learn more...

E-mail:
❧ jtropp@acm.caltech.edu
❧ dneedell@math.ucdavis.edu
Web: http://www.acm.caltech.edu/~jtropp

Relevant papers:
❧ Needell and Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” ACHA 2009
❧ Tropp and the Rice DSP group, “Beyond Nyquist: Efficient sampling of sparse, bandlimited signals,” submitted
❧ Needell and Vershynin, “Stable signal recovery from incomplete and inaccurate samples,” submitted
❧ Tropp and Gilbert, “Signal recovery from random measurements via Orthogonal Matching Pursuit,” IEEE Trans. Inform. Theory, Dec. 2007
