SLIDE 1

The Sparsity Gap

Joel A. Tropp

Computing & Mathematical Sciences California Institute of Technology jtropp@acm.caltech.edu

Research supported in part by ONR

SLIDE 2

Introduction

The Sparsity Gap (Casazza Birthday Conference, College Park, 20 May 2010)

SLIDE 3

Systems of Linear Equations

We consider linear systems of the form

Φx = b

where Φ is an m × N matrix, x is the unknown N-vector, and b is the given m-vector.

Assume that

❧ Φ has dimensions m × N with N > m
❧ Φ has full row rank
❧ The columns of Φ have unit ℓ2 norm
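These standing assumptions are easy to check numerically. A minimal sketch (the Gaussian construction and the sizes are illustrative assumptions, not from the talk):

```python
import numpy as np

# Illustrative sketch: a random m x N matrix with N > m, full row rank,
# and unit-norm columns. The Gaussian draw and sizes are arbitrary choices.
rng = np.random.default_rng(0)
m, N = 8, 20
Phi = rng.standard_normal((m, N))
Phi = Phi / np.linalg.norm(Phi, axis=0)      # normalize each column (unit l2 norm)

assert N > m
assert np.linalg.matrix_rank(Phi) == m                # full row rank (a.s. for Gaussian)
assert np.allclose(np.linalg.norm(Phi, axis=0), 1.0)  # unit-norm columns
```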

SLIDE 4

The Trichotomy Theorem

Theorem 1. For a linear system Φx = b, exactly one of the following situations occurs.

  • 1. No solution exists.
  • 2. The equation has a unique solution.
  • 3. The solutions form an affine subspace of positive dimension.
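The trichotomy is the classical rank test for linear systems. A minimal numerical sketch (the helper name `classify` and the tolerance are assumptions of this writeup):

```python
import numpy as np

def classify(Phi, b, tol=1e-10):
    """Classify the system Phi x = b via the rank test: 'none', 'unique',
    or a solution set of positive dimension. tol is an illustrative cutoff."""
    r = np.linalg.matrix_rank(Phi, tol=tol)
    r_aug = np.linalg.matrix_rank(np.column_stack([Phi, b]), tol=tol)
    if r_aug > r:
        return "none"                      # b lies outside the range of Phi
    return "unique" if r == Phi.shape[1] else "positive-dimensional"

assert classify(np.eye(2), np.array([1.0, 2.0])) == "unique"
assert classify(np.array([[1.0, 1.0]]), np.array([1.0])) == "positive-dimensional"
assert classify(np.array([[1.0], [1.0]]), np.array([1.0, 2.0])) == "none"
```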

SLIDE 5

Regularization via Sparsity

A principled approach to underdetermined systems:

min ‖x‖₀ subject to Φx = b (P0)

where ‖x‖₀ = # supp(x) = #{j : xj ≠ 0}

❧ When ‖x‖₀ ≤ s, then x is called s-sparse
❧ If Φx = b and x is s-sparse, then x is an s-sparse representation of b
❧ Since Φ has full row rank, every b has an m-sparse representation
❧ Question: What can we say about sparser representations?
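On toy instances, (P0) can be solved by exhaustive search over supports, which makes the definition concrete. A hedged sketch (the helper `solve_p0` is mine; the search is exponential in N, so toy sizes only):

```python
import numpy as np
from itertools import combinations

def solve_p0(Phi, b, tol=1e-9):
    """Brute-force (P0): search supports in order of increasing size and
    return the sparsest x with Phi x = b. Exponential cost -- toy sizes only."""
    m, N = Phi.shape
    if np.linalg.norm(b) <= tol:
        return np.zeros(N)
    for s in range(1, N + 1):
        for S in combinations(range(N), s):
            A = Phi[:, list(S)]
            x_S, *_ = np.linalg.lstsq(A, b, rcond=None)
            if np.linalg.norm(A @ x_S - b) <= tol:
                x = np.zeros(N)
                x[list(S)] = x_S
                return x
    return None

# A vector built from one unit-norm column has a 1-sparse representation.
Phi = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Phi[:, 2] /= np.sqrt(2.0)
x = solve_p0(Phi, Phi[:, 2])
assert np.count_nonzero(x) == 1 and np.isclose(x[2], 1.0)
```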

SLIDE 6

Geometry

SLIDE 7

Key Insight

Sparse representations are well behaved when the matrix Φ is sufficiently nice

(Column submatrices should not be singular and individual columns should not look like sparse signals)

SLIDE 8

Quantifying Niceness I

❧ We call Φ a tight frame when ΦΦ∗ = (N/m) · I
❧ Equivalently, the rows of Φ form an orthonormal family (up to scaling)
❧ Observe that N/m is the redundancy of the frame
❧ Tight frames have minimal spectral norm among conformal matrices
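The tight-frame identity ΦΦ∗ = (N/m) · I can be verified numerically. One standard construction, used here as an illustrative assumption, takes m rows of the N-point discrete Fourier matrix:

```python
import numpy as np

# Illustrative check (sizes assumed): m distinct rows of the N-point DFT,
# scaled so columns have unit norm, form a unit-norm tight frame.
m, N = 4, 12
Phi = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(N)) / N) / np.sqrt(m)

assert np.allclose(np.linalg.norm(Phi, axis=0), 1.0)         # unit-norm columns
assert np.allclose(Phi @ Phi.conj().T, (N / m) * np.eye(m))  # Phi Phi* = (N/m) I
```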

SLIDE 9

Quantifying Niceness II

❧ The coherence of Φ is the quantity µ = max_{j≠k} |⟨ϕj, ϕk⟩|
❧ Measures the angle between columns
❧ When N ≥ 2m, the coherence satisfies µ ≳ 1/√m
❧ Incoherent matrices appear often in signal processing applications

References: [Welch 1974, Mallat–Zhang 1993, Donoho–Huo 2001, Gribonval–Nielsen 2003, Strohmer–Heath 2003]
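Coherence is cheap to compute from the Gram matrix. A minimal sketch (the helper `coherence` is mine, and it assumes unit-norm columns):

```python
import numpy as np

def coherence(Phi):
    """Coherence mu = max over j != k of |<phi_j, phi_k>| (unit-norm columns)."""
    G = np.abs(Phi.conj().T @ Phi)
    np.fill_diagonal(G, 0.0)          # exclude the diagonal terms j = k
    return G.max()

# Orthonormal columns: mu = 0. Two copies of the same column: mu = 1.
assert np.isclose(coherence(np.eye(3)), 0.0)
assert np.isclose(coherence(np.array([[1.0, 1.0], [0.0, 0.0]])), 1.0)
```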

SLIDE 10

Example: Identity + Fourier

[Figure: the Identity + Fourier matrix, impulses alongside complex exponentials]

A very incoherent tight frame
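A numeric check of this example (the size m = 16 is an arbitrary choice): the m × 2m matrix [ I | F ], with F the unitary m-point DFT, has coherence exactly 1/√m and satisfies the tight-frame identity with redundancy N/m = 2.

```python
import numpy as np

# Identity + Fourier: impulses next to complex exponentials.
m = 16
F = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)
Phi = np.hstack([np.eye(m), F])                          # m x 2m

G = np.abs(Phi.conj().T @ Phi)
np.fill_diagonal(G, 0.0)
assert np.isclose(G.max(), 1 / np.sqrt(m))               # coherence mu = 1/sqrt(m)
assert np.allclose(Phi @ Phi.conj().T, 2.0 * np.eye(m))  # tight: Phi Phi* = (N/m) I
```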

SLIDE 11

Uniqueness

SLIDE 12

Uncertainty implies Uniqueness

Theorem 2. Suppose that a vector b has two distinct representations: Φx = b = Φy. Then

‖x‖₀ + ‖y‖₀ > µ⁻¹.

Corollary 3. Suppose that b = Φx where ‖x‖₀ < (1/2) · µ⁻¹. Then x is the unique solution to (P0).

❧ Very strict requirement since µ⁻¹ ≲ √m

References: [Donoho–Stark 1989, Donoho–Huo 2001, Gribonval–Nielsen 2003, Donoho–Elad 2003]

SLIDE 13

The Square-Root Threshold

❧ Sparse representations are not necessarily unique past the √m threshold

Example: The Dirac Comb

❧ Consider the Identity + Fourier matrix with m = p²
❧ There is a vector b that can be written as either p spikes or p sines
❧ By the Poisson summation formula,

b(t) = ∑_{j=0}^{p−1} δ_{pj}(t) = (1/√m) ∑_{j=0}^{p−1} e^{−2πi pjt/m} for t = 0, 1, . . . , m − 1

References: [Donoho–Stark 1989]
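The Dirac-comb identity can be checked numerically (p = 5 is an arbitrary choice): a train of p spikes at spacing p equals a normalized sum of p complex exponentials when m = p².

```python
import numpy as np

# Numeric check of the Dirac comb: p spikes vs. p (complex) sinusoids, m = p^2.
p = 5
m = p * p
t = np.arange(m)

spikes = np.where(t % p == 0, 1.0, 0.0)                               # spikes at t = 0, p, 2p, ...
sines = sum(np.exp(-2j * np.pi * p * j * t / m) for j in range(p)) / np.sqrt(m)

assert np.allclose(spikes, sines)          # the same vector, two representations
```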

SLIDE 14

Enter Probability

Insight: The bad vectors are atypical

SLIDE 15

An Uncertainty Principle for Generic Signals

Theorem 4. [T 2010] Suppose that b = Φx where the nonzero components of x have a continuous distribution. With probability one, the vector b has no representation b = Φy with supp(x) ∩ supp(y) = ∅ unless

‖x‖₀ + ‖y‖₀ > µ⁻¹ ‖x‖₀^{1/2}.

Corollary 5. When µ ≤ m^{−1/2}, the condition becomes ‖y‖₀ > ‖x‖₀ (√(m/‖x‖₀) − 1).

❧ Even with refinements, this approach does not yield uniqueness!
❧ Problem: Some supports could be bad

References: [The Sparsity Gap]

SLIDE 16

Enter More Probability

Insight: The bad supports are atypical

SLIDE 17

A Simple Model for Random Sparse Vectors

Model (M0) for b = Φx:

❧ The matrix Φ is a unit-norm tight frame of size m × N with coherence µ ≤ c/ log N
❧ The support of x has cardinality s ≤ cm/ log N and is uniformly random
❧ The nonzero entries of x have a continuous distribution
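A sampler for Model (M0) can be sketched as follows; the particular frame (random rows of an N-point DFT, which is incoherent with high probability) and the constant c = 1 are assumptions of this sketch:

```python
import numpy as np

# Illustrative sampler for Model (M0).
rng = np.random.default_rng(1)
m, N = 64, 256
rows = rng.choice(N, size=m, replace=False)
# Any m distinct DFT rows give a unit-norm tight frame; random rows are
# also incoherent with high probability.
Phi = np.exp(-2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(m)

s = int(m / np.log(N))                      # sparsity s <= c m / log N, taking c = 1
S = rng.choice(N, size=s, replace=False)    # uniformly random support
x = np.zeros(N, dtype=complex)
x[S] = rng.standard_normal(s) + 1j * rng.standard_normal(s)  # continuous nonzeros
b = Phi @ x

assert np.allclose(np.linalg.norm(Phi, axis=0), 1.0)
assert np.allclose(Phi @ Phi.conj().T, (N / m) * np.eye(m))
assert np.count_nonzero(x) == s
```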

SLIDE 18

Random Submatrices & Sparse Representation

Theorem 6. [T 2006, 2008] Assume all parameters satisfy Model (M0). Draw a uniformly random set S of s columns from Φ, and define the random column submatrix A = ΦS. Then

Prob{ ‖A∗A − I‖ < 1/2 } ≥ 99.72%

and

Prob{ max_{n∉S} ‖A∗ϕn‖₂ < 1/2 } ≥ 99.72%.

References: [Random Subdictionaries, Random Submatrices]
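The first bound can be illustrated by Monte Carlo. The frame (Identity + Fourier), the sizes, and the trial count below are assumptions of this sketch, chosen conservatively so the event holds in every trial:

```python
import numpy as np

# Monte Carlo illustration: for a random column submatrix A of an incoherent
# tight frame, ||A*A - I|| < 1/2 with (very) high probability.
rng = np.random.default_rng(3)
m = 256
F = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)
Phi = np.hstack([np.eye(m), F])          # unit-norm tight frame, mu = 1/16
N = 2 * m
s = 10                                   # conservatively below m / log N

trials, hits = 100, 0
for _ in range(trials):
    S = rng.choice(N, size=s, replace=False)
    A = Phi[:, S]
    if np.linalg.norm(A.conj().T @ A - np.eye(s), 2) < 0.5:   # spectral norm
        hits += 1

assert hits / trials >= 0.99   # empirically, the event holds essentially always
```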

SLIDE 19

The Sparsity Gap

Theorem 7. [T 2008, 2010] Let b = Φx be a vector drawn from Model (M0). With probability at least 99.44%, the following statements hold.

  • 1. The vector x is the unique solution to (P0).
  • 2. Furthermore, there is no disjoint representation b = Φy with supp(x) ∩ supp(y) = ∅ unless ‖y‖₀ > ‖x‖₀ (1 + 2m/N).

References: [Candès–Romberg 2006, Random Subdictionaries, Random Submatrices, The Sparsity Gap]

SLIDE 20

To learn more...

Web: http://www.acm.caltech.edu/~jtropp E-mail: jtropp@acm.caltech.edu

Relevant papers:

❧ “Conditioning of random subdictionaries,” ACHA, 2008
❧ “Norms of random submatrices,” CRAS, 2008
❧ “Spikes and sines,” JFAA, 2008
❧ “The sparsity gap,” Proc. CISS, 2010
