
Sparsity-aware sampling theorems and applications - Rachel Ward (PowerPoint presentation transcript)



  1. Sparsity-aware sampling theorems and applications. Rachel Ward, University of Texas at Austin. November 2014.

  2-4. Sparsity-aware sampling: motivating example. N = 100,000 soldiers should be screened for syphilis. Problem: syphilis is rare (only about s = 10 cases expected out of 100,000), and a blood test is expensive. Do we really need to take N blood tests? Idea: pool blood together, and test each combined blood sample to check whether at least one soldier in the pool has syphilis. One then only needs to take s log N ≪ N blood tests to identify the infected soldiers ("compressed" measurements). Implemented by the U.S. government during WWII.
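A minimal simulation of this pooling scheme, assuming a random pooling design (each soldier joins each pool with probability 1/s) and the simple decoder that clears anyone appearing in a negative pool; the constant 4 in the number of tests is an illustrative choice, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

N, s = 100_000, 10                  # soldiers, expected number infected
m = int(4 * s * np.log(N))          # ~ s log N pooled tests (constant 4 is illustrative)

truth = np.zeros(N, dtype=bool)
truth[rng.choice(N, size=s, replace=False)] = True

# Random pooling design: each soldier joins each pool independently
# with probability 1/s, so a pool holds about N/s soldiers.
pools = rng.random((m, N)) < 1.0 / s

# A pooled test is positive iff the pool contains an infected sample.
positive = (pools & truth).any(axis=1)

# Decoding: anyone who appears in at least one negative pool is healthy;
# everyone else is flagged as infected.
cleared = (pools & ~positive[:, None]).any(axis=0)
flagged = ~cleared

print(f"{m} tests, {flagged.sum()} flagged, "
      f"{(flagged & truth).sum()} of {s} true infections recovered")
```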

  5. Compressive sensing. Main idea: many natural signals and images of interest are sparse in some sense. We say x is s-sparse if ‖x‖₀ = #{j : |x_j| > 0} ≤ s. Theory: from only m ≈ s log(N) incoherent linear measurements y = Φx, one can recover the sparse signal as, e.g., the vector of minimal ℓ₁-norm satisfying y = Φx.

  6. Examples of sparsity: natural images, smooth function interpolation, low-rank matrices. [Illustrative figures.]

  7-8. Incoherent sampling: y = Ax. Let (Φ, Ψ) be a pair of orthonormal bases of R^N.
  1. Φ = (φ_j) is used for sensing: A ∈ R^{m×N} consists of a subset of m rows of Φ.
  2. Ψ = (ψ_k) is used to sparsely represent x: x = Ψ*b, and b is assumed sparse.
  Definition. The coherence between Φ and Ψ is µ(Φ, Ψ) = √N · max_{1 ≤ j,k ≤ N} |⟨φ_j, ψ_k⟩|.
  If µ(Φ, Ψ) = C, a constant, then Φ and Ψ are called incoherent.

  9. Incoherent sampling. Example:
  ◮ Ψ = Identity: the signal is sparse in the canonical (Kronecker) basis.
  ◮ Φ is the discrete Fourier basis, φ_j = (1/√N) (e^{i2πjk/N})_{k=0}^{N−1}.
  ◮ The Kronecker and Fourier bases are incoherent: µ(Φ, Ψ) := √N max_{j,k} |⟨φ_j, ψ_k⟩| = 1.
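A quick numerical check of this example (a sketch in NumPy; the helper name `coherence` is ours):

```python
import numpy as np

def coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max_{j,k} |<phi_j, psi_k>|, where the rows
    of Phi and Psi are the basis vectors."""
    N = Phi.shape[0]
    return np.sqrt(N) * np.abs(Phi.conj() @ Psi.T).max()

N = 64
Psi = np.eye(N)                                  # canonical (Kronecker) basis
k = np.arange(N)
Phi = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # DFT basis, rows phi_j

print(coherence(Phi, Psi))   # 1.0: a maximally incoherent pair
```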

  10-16. Why does ℓ₁ minimization work? [A sequence of figure-only slides.]

  17. Reconstructing sparse signals: ℓ₁-minimization.
  x# = argmin_{z ∈ R^N} Σ_{j=1}^{N} |z_j|   such that   Az = Ax,
  or, if x is sparse with respect to a basis Ψ,
  x# = argmin_{z ∈ R^N} Σ_{j=1}^{N} |(Ψ*z)_j|   such that   Az = Ax.
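A minimal sketch of this program solved as a linear program with scipy.optimize.linprog, splitting z = u - v with u, v ≥ 0; a Gaussian matrix stands in for the subsampled incoherent measurements, and all sizes are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

N, s, m = 128, 5, 40                      # ambient dim, sparsity, measurements

x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # stand-in for subsampled rows of Phi
y = A @ x

# Basis pursuit  min ||z||_1  s.t.  Az = y  as an LP:
#   z = u - v,  u, v >= 0,  min 1'u + 1'v  s.t.  [A, -A][u; v] = y
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
z = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(z - x))   # ~ 0 when m >~ s log N
```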

  18. Theorem (Sparse recovery via incoherent sampling [1]). Let (Φ, Ψ) be a pair of incoherent orthonormal bases of R^N. Select m (possibly not distinct) rows of Φ i.i.d. uniformly to form A : R^N → R^m, where m ≥ C s log(N). With exceedingly high probability, the following holds: for all x ∈ R^N such that Ψ*x is s-sparse,
  x = argmin_{z ∈ R^N} Σ_{j=1}^{N} |(Ψ*z)_j|   such that   Az = Ax.
  Such reconstruction is also stable to sparsity defects and robust to noise.
  [1] Candès, Romberg, Tao '06; Rudelson, Vershynin '08; ...

  19. The theory is largely restricted to incoherent measurement/sparsity bases, finite-dimensional spaces, and sparsity in orthonormal representations; this is not sufficient for key examples. Current research directions:
  1. Importance sampling for compressive sensing applications
  2. Adaptive sampling strategies
  3. Extend the theory from sparsity in orthonormal bases to sparsity in redundant dictionaries
  4. Extend the theory from finite-dimensional spaces to infinite-dimensional spaces

  20. Compressive imaging. In MRI, one cannot observe the N = n × n pixel image directly; one can only take samples from its 2D (or 3D) discrete Fourier transform F. So we acquire a number m ≪ N of linear measurements of the form
  y_{k1,k2} = (Fx)_{k1,k2} = (1/n) Σ_{j1,j2} x_{j1,j2} e^{2πi(k1 j1 + k2 j2)/n},   −n/2 + 1 ≤ k1, k2 ≤ n/2.
  Smaller m means a faster MRI scan! How should we subsample in the frequency domain?
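In code, acquiring such measurements amounts to masking a 2D FFT of the image; a sketch (the unitary normalization matches the 1/n factor above, and the uniform 10% mask is only a placeholder, since the next slides show that the distribution of the mask is what matters):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 256
image = rng.random((n, n))               # stand-in for the unknown n x n image x

# Full 2D DFT; norm="ortho" gives the 1/n normalization used above.
Fx = np.fft.fft2(image, norm="ortho")

# Acquire only m << N = n*n frequencies via a boolean mask in frequency space.
mask = rng.random((n, n)) < 0.1          # ~10% of frequencies, uniform for now
y = Fx[mask]                             # the m measured Fourier coefficients

print(f"acquired {y.size} of {n * n} possible measurements")
```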

  21. In the MRI setting, random sampling fails. Reconstructions of an image from m = 0.1N frequency measurements using total variation minimization. [Figures: pixel space / frequency space; reconstruction from the lowest frequencies; reconstruction from uniformly subsampled frequencies.]

  22. In the MRI setting, random sampling fails. Natural images are sparsely represented in 2D wavelet bases Ψ, while the available sensing measurements are Fourier measurements Φ. Uniform random sampling fails because the wavelet and Fourier bases are not incoherent. [Example image.]

  23. Importance sampling. [Figures, image domain / Fourier domain: (a) full sampling, (b) uniform random, (c) radial-line sampling, (d) variable-density.] Used in MRI: radial-line sampling. New, "importance sampling": take random samples according to an inverse-square-distance variable density, i.e., draw a frequency (k1, k2) with probability ∝ 1/(k1² + k2²). With variable-density sampling, one can extend the compressed sensing results and prove that m ≳ s log(N) 2D DFT measurements suffice for recovering images with s-sparse wavelet expansions.
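A sketch of drawing such a variable-density mask in NumPy; capping the weight of the DC frequency (which would otherwise be infinite) at that of its neighbors is our illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 256, 6000                           # image side length, number of samples

# Centered frequency grid, -n/2 + 1 <= k1, k2 <= n/2, as above.
k = np.arange(-n // 2 + 1, n // 2 + 1)
K1, K2 = np.meshgrid(k, k, indexing="ij")

# Inverse-square-distance density p(k1, k2) ∝ 1 / (k1^2 + k2^2),
# with the weight at (0, 0) capped to avoid dividing by zero.
w = 1.0 / np.maximum(K1**2 + K2**2, 1.0)
p = (w / w.sum()).ravel()

idx = rng.choice(n * n, size=m, replace=False, p=p)
mask = np.zeros(n * n, dtype=bool)
mask[idx] = True
mask = mask.reshape(n, n)                  # True where a frequency is acquired
```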

  24. Examples of sparsity: natural images, smooth function interpolation, low-rank matrices. [Illustrative figures, revisited.]

  25-27. High-dimensional function interpolation. Given a function f : D → C on a d-dimensional domain D, reconstruct or interpolate f from sample values f(t_1), ..., f(t_m). [Figures: original function, least-squares solution, unweighted ℓ₁ minimizer.]
  Assume the form f(t) = Σ_{j ∈ Γ} x_j ψ_j(t), where x has assumed structure:
  1. Sparsity: ‖x‖₀ := #{ℓ : x_ℓ ≠ 0} ≤ s
  2. Smoothness: coefficient decay Σ_j j^r |x_j| < ∞
  The smoothness assumption alone is not strong enough to overcome the curse of dimensionality: one needs m ≈ (1/ε)^{d/r} sample values for accuracy ε.
  Our work: combine smoothness + sparsity in weighted ℓ₁-coefficient function spaces. Then m ≈ (1/ε) s log³(s) samples are sufficient to reconstruct such a function, independent of the dimension d.
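A one-dimensional toy version of this weighted ℓ₁ idea, reusing the linear-programming formulation from slide 17; the cosine (Chebyshev) dictionary, the weights w_j = 1 + j, and all sizes are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

M, m = 200, 40                            # dictionary size, number of samples
t = rng.uniform(-1, 1, size=m)            # random sample points in D = [-1, 1]

# Chebyshev dictionary psi_j(t) = cos(j arccos t), j = 0, ..., M-1.
j = np.arange(M)
Psi = np.cos(np.outer(np.arccos(t), j))   # shape (m, M)

# Ground truth: sparse and smooth (a few low-order coefficients).
x = np.zeros(M)
x[[0, 2, 5, 9]] = [1.0, -0.5, 0.3, 0.2]
y = Psi @ x                               # the samples f(t_1), ..., f(t_m)

# Weighted l1:  min sum_j w_j |z_j|  s.t.  Psi z = y, with weights growing
# in j to encode the smoothness prior (w_j = 1 + j is an illustrative choice).
w = 1.0 + j.astype(float)
res = linprog(c=np.concatenate([w, w]), A_eq=np.hstack([Psi, -Psi]), b_eq=y,
              bounds=(0, None), method="highs")
z = res.x[:M] - res.x[M:]
print("coefficient error:", np.linalg.norm(z - x))
```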

  28. Low-rank matrix completion / approximation. Previous results: a rank-r incoherent n × n matrix M may be completed (via convex optimization) from m ≈ nr log²(n) uniformly sampled entries. Our results: an arbitrary rank-r matrix M may be completed (via convex optimization) from m ≈ nr log(n) entries, sampled according to a specific non-uniform distribution adapted to the matrix leverage scores. Also: extensions to only approximately low-rank matrices, and two-stage adaptive sampling.
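A sketch of the leverage scores and of an entrywise sampling distribution adapted to them; the particular form p_{ij} ∝ μ_i + ν_j and the constant in m are illustrative choices and may differ from the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(5)

n, r = 500, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r matrix

# Leverage scores from the thin SVD:
# mu_i = (n/r) ||U[i, :]||^2 (rows), nu_j = (n/r) ||V[j, :]||^2 (columns).
U, _, Vt = np.linalg.svd(M, full_matrices=False)
mu = (n / r) * (U[:, :r] ** 2).sum(axis=1)
nu = (n / r) * (Vt[:r, :] ** 2).sum(axis=0)

# Sample entry (i, j) with probability proportional to mu_i + nu_j, so rows
# and columns carrying more of the matrix's "mass" are observed more often.
P = mu[:, None] + nu[None, :]
P /= P.sum()

m = int(2 * n * r * np.log(n))            # ~ n r log n observed entries
idx = rng.choice(n * n, size=m, replace=False, p=P.ravel())
print(f"observed {m} of {n * n} entries")
```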

  29. Summary. Compressed sensing and related optimization problems often assume incoherence between the sensing and sparsity bases in order to derive sparse recovery guarantees. Incoherence is restrictive and not achievable in many problems of practical interest. When the local coherence of one basis with respect to another is small, one may derive sampling strategies and sparse recovery results for a wide range of new sensing problems (imaging, matrix completion, ...). Also: weighted sparsity, measurement error, adaptive sampling, ...
