
Survey: Compressive Sensing in Signal Processing (Justin Romberg)



  1. Survey: Compressive Sensing in Signal Processing. Justin Romberg, Georgia Tech, School of ECE. Sublinear Algorithms, May 23, 2011, Bertinoro, Italy.

  2. Acquisition as linear algebra. [Figure: acquisition as a matrix equation y = Φ x: the data acquisition system Φ maps the unknown signal/image x (length N = resolution/bandwidth) to the samples y (length M = # samples).] A small number of samples gives an underdetermined system, which is impossible to solve in general. But if x is sparse and Φ is diverse, then these systems can be “inverted”.

  3. Signal processing trends. DSP: sample first, ask questions later. The explosion in sensor technology and ubiquity has caused two trends:
     - The physical capabilities of hardware are being stressed; increasing speed/resolution is becoming expensive (gigahertz+ analog-to-digital conversion, accelerated MRI, industrial imaging).
     - A deluge of data (camera arrays and networks, multi-view target databases, streaming video, ...).
     Compressive sensing: sample smarter, not faster.

  4. Sparsity/Compressibility. [Figures: an image (pixels) has few large wavelet coefficients; a wideband signal in time has few large Gabor (time-frequency) coefficients.]

  5. Wavelet approximation: a 1 megapixel image reconstructed from a 25k-term approximation has about 1% error while using ≈ 2.5% of the wavelet coefficients.
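
As a rough illustration of this kind of approximation (not code from the talk), the following Python sketch, assuming NumPy, the PyWavelets package, and a 2-D array img, keeps only the largest 2.5% of the wavelet coefficients and reports the relative error:

    import numpy as np
    import pywt  # PyWavelets (assumed available)

    def sparse_wavelet_approx(img, keep_frac=0.025, wavelet="db4", level=4):
        """Keep only the largest `keep_frac` fraction of wavelet coefficients."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)           # flatten the coefficient tree
        k = max(1, int(keep_frac * arr.size))
        thresh = np.partition(np.abs(arr).ravel(), -k)[-k]   # k-th largest magnitude
        arr_k = np.where(np.abs(arr) >= thresh, arr, 0.0)    # zero out the small coefficients
        coeffs_k = pywt.array_to_coeffs(arr_k, slices, output_format="wavedec2")
        img_k = pywt.waverec2(coeffs_k, wavelet)[:img.shape[0], :img.shape[1]]
        err = np.linalg.norm(img - img_k) / np.linalg.norm(img)
        return img_k, err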

  6. [Figure repeated from slide 2: y = Φ x, with # samples, resolution/bandwidth, data acquisition system, unknown signal/image.] If x is sparse and Φ is diverse, then these systems can be “inverted”.

  7. Classical: when can we stably “invert” a matrix? Suppose we have an M × N observation matrix A with M ≥ N (MORE observations than unknowns), through which we observe y = A x_0 + noise.

  8. Standard way to recover x_0: use the pseudo-inverse, x̂ = (A^T A)^{-1} A^T y, which is equivalent to solving min_x ‖y − A x‖_2^2.
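
A minimal numerical sketch of this classical setting (not from the slides, assuming NumPy): for a tall Gaussian A, the least-squares / pseudo-inverse solution recovers x_0 up to the noise level.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 500, 100                                 # more observations than unknowns
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    x0 = rng.standard_normal(N)
    y = A @ x0 + 0.01 * rng.standard_normal(M)      # y = A x_0 + noise

    # Least-squares solution, equal to the pseudo-inverse applied to y
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.linalg.norm(x_hat - x0))               # error on the order of the noise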

  9. Q: When is this recovery stable? That is, when is ‖x̂ − x_0‖_2^2 ∼ ‖noise‖_2^2?

  10. A: When the matrix A is an approximate isometry: ‖A x‖_2^2 ≈ ‖x‖_2^2 for all x ∈ R^N, i.e. A preserves lengths.

  11. Equivalently, ‖A (x_1 − x_2)‖_2^2 ≈ ‖x_1 − x_2‖_2^2 for all x_1, x_2 ∈ R^N, i.e. A preserves distances.

  12. Equivalently, (1 − δ) ≤ σ_min^2(A) ≤ σ_max^2(A) ≤ (1 + δ), i.e. A has clustered singular values.

  13. Equivalently, (1 − δ) ‖x‖_2^2 ≤ ‖A x‖_2^2 ≤ (1 + δ) ‖x‖_2^2 for some 0 < δ < 1.
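
A quick empirical check of this condition (a sketch assuming NumPy): the squared singular values of a suitably normalized tall Gaussian matrix cluster around 1.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 500, 100
    A = rng.standard_normal((M, N)) / np.sqrt(M)    # scaled so that E‖Ax‖² = ‖x‖²

    s = np.linalg.svd(A, compute_uv=False)
    # Both extreme squared singular values are close to 1,
    # so (1 − δ)‖x‖² ≤ ‖Ax‖² ≤ (1 + δ)‖x‖² holds with a small δ.
    print(s.min() ** 2, s.max() ** 2)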

  14. When can we stably recover an S-sparse vector? Now we have an underdetermined M × N system Φ (FEWER measurements than unknowns), and observe y = Φ x_0 + noise.

  15. We can recover x_0 when Φ keeps sparse signals separated: (1 − δ) ‖x_1 − x_2‖_2^2 ≤ ‖Φ (x_1 − x_2)‖_2^2 ≤ (1 + δ) ‖x_1 − x_2‖_2^2 for all S-sparse x_1, x_2.

  16. Equivalently, we can recover x_0 when Φ is a restricted isometry (RIP): (1 − δ) ‖x‖_2^2 ≤ ‖Φ x‖_2^2 ≤ (1 + δ) ‖x‖_2^2 for all 2S-sparse x.

  17. To recover x_0, we solve min_x ‖x‖_0 subject to Φ x ≈ y, where ‖x‖_0 = number of nonzero terms in x. This program is intractable.

  18. A relaxed (convex) program: min_x ‖x‖_1 subject to Φ x ≈ y, where ‖x‖_1 = Σ_k |x_k|. This program is very tractable (a linear program).
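
To make the “linear program” claim concrete, here is a minimal sketch (assuming NumPy and SciPy, and an exact constraint Φ x = y rather than Φ x ≈ y): writing x = u − v with u, v ≥ 0, the ℓ1 norm becomes Σ(u + v) and the problem is a standard LP.

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(Phi, y):
        """min ‖x‖_1 subject to Phi @ x = y, posed as an LP over (u, v) with x = u - v."""
        M, N = Phi.shape
        c = np.ones(2 * N)                      # objective: sum(u) + sum(v) = ‖x‖_1
        A_eq = np.hstack([Phi, -Phi])           # equality constraint: Phi @ (u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
        u, v = res.x[:N], res.x[N:]
        return u - v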

  19. Sparse recovery algorithms. Given y, look for a sparse signal which is consistent. One method: ℓ1 minimization (or Basis Pursuit): min_x ‖Ψ^T x‖_1 s.t. Φ x = y, where Ψ is the sparsifying transform and Φ is the measurement system (we need the RIP for ΦΨ). This is a convex (linear) program, can be relaxed for robustness to noise, and its performance has theoretical guarantees. Other recovery methods include greedy algorithms and iterative thresholding schemes.
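
Among the iterative thresholding schemes mentioned above, iterative hard thresholding is simple enough to sketch in a few lines (an illustrative sketch, assuming NumPy, Ψ = identity so the signal itself is sparse, and Φ normalized so that a unit step size is stable):

    import numpy as np

    def iht(Phi, y, S, iters=200):
        """Iterative hard thresholding: gradient step, then keep the S largest entries."""
        x = np.zeros(Phi.shape[1])
        for _ in range(iters):
            x = x + Phi.T @ (y - Phi @ x)        # gradient step on (1/2)‖y − Φx‖²
            keep = np.argsort(np.abs(x))[-S:]    # indices of the S largest magnitudes
            small = np.ones(x.size, dtype=bool)
            small[keep] = False
            x[small] = 0.0                       # hard-threshold everything else
        return x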

  20. Stable recovery. Despite its nonlinearity, sparse recovery is stable in the presence of modeling mismatch (approximate sparsity) and measurement error. If we observe y = Φ x_0 + e with ‖e‖_2 ≤ ε, the solution x̂ of min_x ‖Ψ^T x‖_1 s.t. ‖y − Φ x‖_2 ≤ ε will satisfy ‖x̂ − x_0‖_2 ≤ Const · ( ε + ‖x_0 − x_{0,S}‖_1 / √S ), where x_{0,S} is the best S-term approximation of x_0 and S is the largest value for which ΦΨ satisfies the RIP. Similar guarantees exist for other recovery algorithms: greedy (Needell and Tropp '08) and iterative thresholding (Blumensath and Davies '08).
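
A small numeric sketch (assuming NumPy) of the signal-dependent term in this bound: for a compressible x_0 whose sorted coefficient magnitudes decay like a power law, the term ‖x_0 − x_{0,S}‖_1 / √S shrinks quickly as S grows.

    import numpy as np

    N = 10_000
    x0 = np.arange(1, N + 1, dtype=float) ** -1.5      # compressible: power-law decay
    for S in (100, 500, 2000):
        tail = np.sort(np.abs(x0))[:-S]                # all but the S largest entries
        print(S, tail.sum() / np.sqrt(S))              # ‖x_0 − x_{0,S}‖_1 / √S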

  21. What kind of matrices are restricted isometries? They are very hard to design, but they exist everywhere! [Figure: an M × N matrix Φ with i.i.d. Gaussian random entries; M measurements of an N-dimensional signal.] For any fixed x ∈ R^N, each measurement is y_k ∼ Normal(0, ‖x‖_2^2 / M).

  22. [Same figure.] For any fixed x ∈ R^N, we have E[‖Φ x‖_2^2] = ‖x‖_2^2: the mean of the measurement energy is exactly ‖x‖_2^2.

  23. [Same figure.] For any fixed x ∈ R^N, we have P( | ‖Φ x‖_2^2 − ‖x‖_2^2 | < δ ‖x‖_2^2 ) ≥ 1 − e^{−M δ^2 / 4}.
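
A quick Monte Carlo sketch of this concentration (assuming NumPy): draw many independent Gaussian Φ and check how often ‖Φ x‖_2^2 stays within a relative δ of ‖x‖_2^2.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, delta, trials = 1000, 200, 0.2, 2000
    x = rng.standard_normal(N)
    energy = np.sum(x ** 2)
    hits = 0
    for _ in range(trials):
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # entries ~ Normal(0, 1/M)
        hits += abs(np.sum((Phi @ x) ** 2) - energy) < delta * energy
    print(hits / trials)    # empirical probability; close to 1 when M·δ² is large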

  24. [Same figure.] For all 2S-sparse x ∈ R^N at once, we have P( | ‖Φ x‖_2^2 − ‖x‖_2^2 | < δ ‖x‖_2^2 for every 2S-sparse x ) ≥ 1 − e^{c · S log(N/S)} e^{−M δ^2 / 4}. So we can make this probability close to 1 by taking M ≳ S log(N/S).

  25. What other types of matrices are restricted isometries? Four general frameworks:
     - random matrices (iid entries),
     - random subsampling (sketched below),
     - random convolution,
     - randomly modulated integration (we'll skip this today).
     Note the role of randomness in all of these approaches. Slogan: random projections keep sparse signals separated.
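
As a sketch of the random subsampling framework (not from the talk; assuming NumPy), Φ can be formed by keeping M randomly chosen rows of the orthonormal DFT matrix and rescaling so that, on average, lengths are preserved:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 1024, 128
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # orthonormal N×N DFT matrix
    rows = rng.choice(N, size=M, replace=False)   # random subset of rows
    Phi = np.sqrt(N / M) * F[rows, :]             # rescaled so E‖Φx‖² = ‖x‖²

    x = np.zeros(N)
    x[rng.choice(N, 10, replace=False)] = 1.0     # a 10-sparse test vector
    print(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)   # roughly 1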

  26. Random matrices (iid entries). [Figure: y = Φ x with a random ±1 matrix Φ: M compressed measurements, S active components, random ±1 entries, N = total resolution/bandwidth.] Random matrices are provably efficient: we can recover an S-sparse x from M ≳ S · log(N/S) measurements.
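
An end-to-end sketch of this claim (not from the talk; assuming NumPy and SciPy): measure an S-sparse signal with a random ±1 matrix using M on the order of S·log(N/S), then recover it by ℓ1 minimization posed as a linear program.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    N, S = 256, 8
    M = 4 * int(S * np.log(N / S))                  # M ≈ a small multiple of S·log(N/S)
    Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # random ±1 entries

    x0 = np.zeros(N)
    x0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)   # S-sparse signal
    y = Phi @ x0                                    # M compressive measurements

    # Basis pursuit (min ‖x‖_1 s.t. Φx = y) as an LP with x = u − v, u, v ≥ 0
    res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
    x_hat = res.x[:N] - res.x[N:]
    print(np.linalg.norm(x_hat - x0))               # essentially exact recovery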

  27. Rice single-pixel camera. [Figure: a random pattern is displayed on a DMD (digital micromirror device) array, a single photon detector measures the reflected light, and image reconstruction/processing recovers the image.] (Duarte, Davenport, Takhar, Laska, Sun, Kelly, Baraniuk '08)
