SLIDE 1

Average-Case Analysis of OMP

Joel A. Tropp
jtropp@umich.edu
Department of Mathematics, The University of Michigan

With contributions from Anna C. Gilbert and Martin Strauss

SLIDE 2

Identification of Sparse Signals

❦ Suppose that we measure a signal s = Φc_opt + ν, where
  ❧ Φ is a known matrix
  ❧ c_opt is an unknown sparse coefficient vector
  ❧ ν is an unknown noise vector

Goal: Find a sparse coefficient vector c that approximates c_opt. In particular, c should correctly identify the support of c_opt.
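The measurement model above can be sketched in a few lines of NumPy; the sizes d, N, m and the noise level are hypothetical choices for illustration only.

```python
import numpy as np

# A minimal sketch of the measurement model s = Phi c_opt + nu.
# The sizes d, N, m and the noise level 0.05 are hypothetical.
rng = np.random.default_rng(0)
d, N, m = 64, 256, 4

Phi = rng.standard_normal((d, N)) / np.sqrt(d)   # known matrix
c_opt = np.zeros(N)
support = rng.choice(N, size=m, replace=False)   # unknown sparse support
c_opt[support] = rng.standard_normal(m)          # unknown sparse coefficients
nu = 0.05 * rng.standard_normal(d)               # unknown noise vector

s = Phi @ c_opt + nu                             # the observed signal
```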

Average-Case Analysis of OMP, SPIE Wavelets XI, 2 August 2005

SLIDE 3

Model I: Random Noise

❦ Suppose that we measure a signal s = Φc_opt + ν, where the noise vector ν is random
  ❧ Goal: Recover a sparse superposition contaminated with additive noise
  ❧ Intuition: It is easy to reject the noise, which is unlikely to look like any column of the matrix
  ❧ Related work: [Candès et al., Donoho, Elad, Fletcher et al., ...]

SLIDE 4

Model II: Random Matrix

❦ Suppose that we measure a signal s = Φc_opt + ν, where the matrix Φ is random
  ❧ Goal: Recover a sparse signal from imperfect random measurements
  ❧ Intuition: The information content of a sparse signal is preserved under random projections
  ❧ Related work: [Alon et al., Candès et al., Donoho, Gilbert et al., Johnson–Lindenstrauss, Nowak, ...]

SLIDE 5

Model III: Random Coefficients

❦ Suppose that we measure a signal s = Φc_opt + ν, where the coefficient vector c_opt is sparse and random
  ❧ Goal: Recover a random sparse superposition contaminated with noise
  ❧ Intuition: It is unlikely that a random superposition looks like another column of the matrix
  ❧ Related work: [Candès et al., Donoho, Elad–Zibulevsky, ...]

SLIDE 6

Orthogonal Matching Pursuit (OMP)

❦ Input: a matrix Φ, an input signal s, a stopping criterion

Initialize the residual r_0 = s and the counter t = 0. Until the stopping criterion holds, increment t and:

  A. Find the column index ω_t that solves ω_t = arg max_j |⟨r_{t−1}, φ_j⟩|
  B. Calculate the next residual r_t = s − P_t s, where P_t is the orthogonal projector onto span{φ_{ω_1}, ..., φ_{ω_t}}

❦ Output: a sparse estimate c whose nonzero entries lie in components ω_1, ..., ω_t. These entries appear in the expansion P_t s = Σ_{k=1}^{t} c_{ω_k} φ_{ω_k}
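The two-step loop above translates directly into NumPy. This is a minimal sketch of the algorithm on the slide, with the stopping criterion taken to be a maximum iteration count or a small residual; the function name and tolerance are illustrative choices.

```python
import numpy as np

def omp(Phi, s, max_iter, tol=1e-6):
    """Orthogonal Matching Pursuit as described on the slide: greedily
    select the column most correlated with the residual, then recompute
    the residual by orthogonal projection."""
    N = Phi.shape[1]
    r = s.astype(float).copy()              # r_0 = s
    support = []
    coef = np.zeros(0)
    for _ in range(max_iter):               # stopping criterion
        if np.linalg.norm(r) <= tol:
            break
        # A. column index maximizing |<r_{t-1}, phi_j>|
        j = int(np.argmax(np.abs(Phi.T @ r)))
        if j not in support:
            support.append(j)
        # B. next residual r_t = s - P_t s, where P_t projects onto
        # the span of the selected columns
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, s, rcond=None)
        r = s - sub @ coef
    c = np.zeros(N)
    c[support] = coef                       # nonzeros in components omega_1..omega_t
    return c, support
```

With an orthonormal Φ (say, the identity), OMP recovers an exactly sparse signal in m steps.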

SLIDE 7

Worst-Case Performance of OMP

Assumptions:

  ❧ We measure s = Φc_opt + ν
  ❧ The matrix Φ has coherence µ
  ❧ The sparsity m of c_opt satisfies µm ≤ 1/3
  ❧ c_min is the smallest nonzero component of c_opt
  ❧ The norm of the noise satisfies ‖ν‖_2 ≤ 0.25 c_min
  ❧ The OMP halting criterion is ‖Φ* r_t‖_∞ ≤ 0.5 c_min

Theorem 1 [TGS]. The support of c equals the support of c_opt.
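The coherence µ that the theorem depends on is the largest absolute inner product between distinct normalized columns of Φ. A short sketch of how one computes it:

```python
import numpy as np

def coherence(Phi):
    """Mutual coherence: the largest absolute inner product between
    distinct unit-normalized columns of Phi."""
    G = Phi / np.linalg.norm(Phi, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)      # exclude self inner products
    return gram.max()

# Theorem 1's hypothesis mu * m <= 1/3 then caps the sparsity at
# m <= 1 / (3 * mu) for a given dictionary.
```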

SLIDE 8

OMP with Random Noise

Assumptions:

  ❧ We measure s = Φc_opt + ν
  ❧ The matrix Φ has coherence µ and N columns
  ❧ The sparsity m of c_opt satisfies µm ≤ 1/3
  ❧ c_min is the smallest nonzero component of c_opt
  ❧ The noise ν is white Gaussian with variance σ²
  ❧ The OMP halting criterion is ‖Φ* r_t‖_∞ ≤ 0.472 c_min

Theorem 2 [JAT]. The support of c equals the support of c_opt with probability at least (1 − δ), provided that σ < 0.167 c_min / ln^{1/2}(N/δ).
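The ln^{1/2}(N/δ) factor reflects a standard Gaussian maximum: each inner product ⟨ν, φ_j⟩ is normal(0, σ²), so ‖Φ*ν‖_∞ over N unit-norm columns concentrates near σ√(2 ln N), well below the halting threshold when σ is small. A quick simulation (with hypothetical sizes) illustrates this:

```python
import numpy as np

# Why Gaussian noise is easy to reject (sizes here are hypothetical):
# each <nu, phi_j> is normal(0, sigma^2), so the largest of N such
# values concentrates near sigma * sqrt(2 ln N).
rng = np.random.default_rng(1)
d, N, sigma = 1024, 4096, 0.1

Phi = rng.standard_normal((d, N))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm columns

nu = sigma * rng.standard_normal(d)      # white Gaussian noise
peak = np.max(np.abs(Phi.T @ nu))        # ||Phi* nu||_inf
predicted = sigma * np.sqrt(2 * np.log(N))
```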

SLIDE 9

Worst-Case versus Average-Case Noise

❧ We measure s = Φc_opt + ν
❧ The matrix Φ is 2^10 × 2^12 with coherence µ = 2^−5
❧ The sparsity m = 10 and c_min = 1
❧ For the worst-case noise, success requires an SNR around 22.04 dB
❧ For Gaussian noise, the success probability is 99% when the SNR is 6.57 dB
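The 22.04 dB figure follows from simple arithmetic, assuming roughly unit-norm, nearly orthogonal columns so that the signal energy is about m:

```python
import math

# Worked check of the worst-case SNR: m = 10 unit coefficients give
# signal energy about m * c_min^2 (assuming nearly orthogonal,
# unit-norm columns), while Theorem 1 tolerates noise energy up to
# (0.25 * c_min)^2.
m, c_min = 10, 1.0
signal_energy = m * c_min**2
noise_energy = (0.25 * c_min)**2

snr_db = 10 * math.log10(signal_energy / noise_energy)
print(round(snr_db, 2))   # 22.04
```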

SLIDE 10

OMP with Random Matrix

Assumptions:

  ❧ The matrix Φ is d × N with iid normal(0, 1) entries
  ❧ The vector c_opt has sparsity m
  ❧ We measure s = Φc_opt

Theorem 3 [JAT–ACG]. Fix a positive number p. The probability that c = c_opt exceeds (1 − 2N^−p), provided that m ≤ d / (8(p + 1) ln N).

❧ To ensure recovery for general Φ, m = O(√d)
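Plugging hypothetical sizes into Theorem 3's bound shows how large a sparsity the random-matrix model tolerates:

```python
import math

def max_sparsity(d, N, p):
    """Largest m allowed by Theorem 3: m <= d / (8 (p + 1) ln N),
    giving recovery with probability at least 1 - 2 N^(-p)."""
    return math.floor(d / (8 * (p + 1) * math.log(N)))

# Hypothetical sizes: d = 1024 random measurements, N = 4096 columns,
# failure exponent p = 1.
m = max_sparsity(1024, 4096, 1)   # m = 7
```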

SLIDE 11

OMP with Random Coefficients

❧ Let Φ = [I F], a 128 × 256 dictionary (identity concatenated with the Fourier basis)
❧ The vector c_opt is m-sparse
❧ The locations of the nonzero entries of c_opt are random
❧ The nonzero entries of c_opt are iid normal(0, 1)
❧ We measure s = Φc_opt
❧ Theory requires m ≤ 6 to ensure exact recovery
❧ But...
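The m ≤ 6 figure is consistent with the classical exact-recovery bound m < (1 + 1/µ)/2, since the identity-plus-Fourier dictionary has coherence µ = 1/√128. A sketch verifying this (reading the slide's dictionary as the concatenation [I | F]):

```python
import numpy as np

# The 128 x 256 identity-plus-Fourier dictionary: coherence 1/sqrt(128),
# so the exact-recovery bound m < (1 + 1/mu)/2 allows m <= 6.
d = 128
F = np.fft.fft(np.eye(d)) / np.sqrt(d)   # unitary DFT matrix
Phi = np.hstack([np.eye(d), F])          # 128 x 256 dictionary

G = np.abs(Phi.conj().T @ Phi)
np.fill_diagonal(G, 0.0)                 # exclude self inner products
mu = G.max()                             # coherence = 1/sqrt(128)

bound = (1 + 1 / mu) / 2                 # about 6.16, so m <= 6
```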

SLIDE 12

[Figure: average percentage of the support recovered incorrectly over 1000 trials, plotted against m, the size of the support (number of atoms), for m from 50 to 130; the error fraction ranges from roughly 0.05 to 0.45.]

SLIDE 13

Related Papers and Contact Information

  ❧ "Signal recovery from partial information via Orthogonal Matching Pursuit," submitted April 2005
  ❧ "Algorithms for simultaneous sparse approximation. Parts I and II," accepted to EURASIP J. Signal Processing, April 2005
  ❧ "Greed is good: Algorithmic results for sparse approximation," IEEE Trans. Info. Theory, October 2004
  ❧ "Just relax: Convex programming methods for identifying sparse signals," submitted February 2004

All papers available from http://www.umich.edu/~jtropp
E-mail: jtropp@umich.edu
