

  1. Single Letter Formulas for Quantized Compressed Sensing with Gaussian Codebooks. Alon Kipnis (Stanford), Galen Reeves (Duke), Yonina Eldar (Technion). ISIT, June 2018.

  2. Table of Contents: Introduction (Motivation, Problem Formulation, Background); Main Results: CE w.r.t. Gaussian Codebooks (Compress-and-Estimate, Linear Transformation Compress-and-Estimate); Summary

  3–7. Quantization in Linear Models / Compressed Sensing

  Model: $Y = HX + N$, where $X$ is the signal, $Y$ are the samples, $H$ is the sampling matrix, and $N$ is Gaussian noise.

  Applications:
  ◮ Signal processing
  ◮ Communication
  ◮ Statistics / machine learning

  This talk: limited bitrate to represent the samples:
  $Y \to$ encoder $\to 1011 \cdots 01 \to$ decoder $\to \hat{X}$

  Scenarios:
  ◮ Limited memory (A/D)
  ◮ Limited bandwidth
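  As a concrete illustration of the model $Y = HX + N$, here is a minimal simulation sketch in Python. All of it is hypothetical scaffolding: the dimensions, the Bernoulli-Gauss sparsity level, and the noise variance are arbitrary choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 100                  # signal dimension n, number of samples m (arbitrary)

# Bernoulli-Gauss signal: each entry nonzero with probability 0.1 (illustrative P_X)
support = rng.random(n) < 0.1
X = support * rng.standard_normal(n)

# i.i.d. Gaussian sampling matrix H (the setting of Exm. III below)
H = rng.standard_normal((m, n)) / np.sqrt(n)

# samples: Y = H X + N, with Gaussian noise of variance sigma^2
sigma2 = 0.01
Y = H @ X + np.sqrt(sigma2) * rng.standard_normal(m)
```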

  8–9. Related Works (on quantized compressed sensing)
  ◮ Gaussian signals [K., E., Goldsmith, Weissman '16]
  ◮ Scalar quantization [Goyal, Fletcher, Rangan '08], [Jacques, Hammond, Fadili '11]
  ◮ 1-bit quantization [Boufounos, Baraniuk '08], [Plan, Vershynin '13], [Xu, Kabashima, Zdeborová '14]
  ◮ AMP reconstruction [Kamilov, Goyal, Rangan '11]
  ◮ Separable setting [Leinonen, Codreanu, Juntti, Kramer '16]

  This talk: fundamental limit = minimal distortion over all finite-bit representations of the measurements.

  10–12. Problem Formulation

  Pipeline: $X \to$ Linear Transform $H \to$ AWGN $\to Y \to$ Enc $\to \{1, \ldots, 2^{nR}\} \to$ Dec $\to \hat{X}$

  ◮ Signal distribution: $X_i \overset{\text{i.i.d.}}{\sim} P_X$, $i = 1, \ldots, n$
  ◮ Coding rate: $R$ bits per signal dimension
  ◮ Sampling matrix:
    ◮ Right-rotationally invariant: $H \overset{\text{dist}}{=} HO$ for any orthogonal $O$
    ◮ Empirical spectral distribution of $H^T H$ converges to a compactly supported measure $\mu$

  Definition: $D(P_X, \mu, R) \triangleq$ the infimum over all $D$ for which there exists a rate-$R$ coding scheme such that
  $$\limsup_{n \to \infty} \frac{1}{n} \mathbb{E}\left[ \| X - \hat{X} \|^2 \right] \le D.$$
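  To make the operational definition of $D(P_X, \mu, R)$ concrete, the sketch below evaluates the distortion of one deliberately naive rate-$R$ scheme: a uniform scalar quantizer on $Y$ (the encoder/decoder pair) followed by ridge least squares (the estimator). Both components are stand-ins chosen for brevity, not the Gaussian-codebook scheme analyzed in the talk; $D(P_X, \mu, R)$ is the infimum of such distortions over all rate-$R$ schemes.

```python
import numpy as np

def naive_rate_R_distortion(X, H, Y, n, R):
    """Distortion (1/n)||X - X_hat||^2 of one naive rate-R scheme (illustrative)."""
    m = len(Y)
    bits_per_sample = max(1, int(n * R) // m)        # spread the nR bit budget over Y
    levels = 2 ** bits_per_sample
    lo, hi = Y.min(), Y.max()
    step = (hi - lo) / levels
    idx = np.clip(np.floor((Y - lo) / step), 0, levels - 1)
    Y_hat = lo + (idx + 0.5) * step                  # decoder's reproduction of Y
    # estimator: ridge-regularized least squares from the quantized samples
    X_hat = np.linalg.solve(H.T @ H + 0.1 * np.eye(H.shape[1]), H.T @ Y_hat)
    return np.mean((X - X_hat) ** 2)
```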

  13–16. Spectral Distribution of Sampling Matrix

  Exm. I: $H$ is orthogonal ($H^T H = \gamma I$) $\Rightarrow$ $\mu$ is the point-mass distribution $\delta_\gamma$.

  Exm. II: the rows of $H$ are randomly sampled from an orthogonal matrix $\Rightarrow$ $\mu = (1 - \rho)\delta_0 + \rho\,\delta_\gamma$, where $\rho \triangleq 1 - \mu(\{0\})$ is the sampling rate.

  Exm. III: $H$ is i.i.d. Gaussian $\Rightarrow$ $\mu$ is the Marchenko-Pastur law, with sampling rate $\rho \triangleq 1 - \mu(\{0\})$.
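  Exm. III is easy to verify numerically. The sketch below, with arbitrary matrix sizes, computes the empirical spectrum of $H^T H$ for an i.i.d. Gaussian $H$ with more columns than rows, and reads off the sampling rate $\rho = 1 - \mu(\{0\})$ as the fraction of eigenvalues away from zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 400, 200                        # m < n, so H^T H has n - m zero eigenvalues
H = rng.standard_normal((m, n)) / np.sqrt(n)

eig = np.linalg.eigvalsh(H.T @ H)      # empirical spectral distribution of H^T H
rho = np.mean(eig > 1e-10)             # mass away from zero: the sampling rate
print(f"rho ~ {rho:.2f}")              # ~ m/n = 0.5; nonzero part approaches Marchenko-Pastur
```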

  17. Special Cases / Lower Bounds

  [Plot: MSE vs. sampling rate $1 - \mu(\{0\})$, showing the two known lower-bound curves $D_{\text{Shannon}}(P_X, R)$ and $\mathrm{MMSE}(P_X, \mu)$, with the target quantity $D(P_X, \mu, R) = ?$ lying above both.]

  18–20. Special Case: MMSE

  No quantization / infinite bitrate:
  $$\lim_{R \to \infty} D(P_X, \mu, R) = \underbrace{\limsup_{n \to \infty} \frac{1}{n} \mathbb{E}\left[ \| X - \mathbb{E}[X \mid Y] \|^2 \right]}_{\text{MMSE}} \triangleq M(P_X, \mu)$$

  ◮ Under some conditions, the MMSE decouples to a point-mass spectrum: $M(P_X, \mu) = M(P_X, \delta_s)$ [Guo & Verdú '05], [Takeda et al. '06], [Wu & Verdú '12], [Tulino et al. '13], [Reeves & Pfister '16], [Barbier et al. '16, '17], [Rangan, Schniter, Fletcher '16], [Maillard, Barbier, Macris, Krzakala, Wed 10:20]

  Main result of this talk: $D(P_X, \mu, R) \sim M(P_X, T_R\,\mu)$, where $T_R$ is a spectrum scaling operator.
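  For the special case of a Gaussian signal, the point-mass MMSE in the decoupling statement above has a closed form that is easy to sanity-check: if $X \sim \mathcal{N}(0,1)$ is observed through a scalar channel at SNR $s$, then $M(P_X, \delta_s) = 1/(1+s)$. A Monte Carlo sketch (the SNR value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
s, trials = 4.0, 200_000

X = rng.standard_normal(trials)
Y = np.sqrt(s) * X + rng.standard_normal(trials)  # scalar Gaussian channel at SNR s

X_hat = np.sqrt(s) / (1 + s) * Y                  # E[X | Y] is linear for Gaussian X
print(np.mean((X - X_hat) ** 2), 1 / (1 + s))     # both ~ 0.2
```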

  21–22. Previous Results

  [Plot: MSE vs. sampling rate, adding the achievable distortion $D_{\text{EC}}$ [KREG '17] to the lower bounds $D_{\text{Shannon}}$ and MMSE.]

  23–26. Estimate-and-Compress vs Compress-and-Estimate

  Estimate-and-Compress [Kipnis, Reeves, Eldar, Goldsmith '17]:
  $X \to$ Linear Transform $H \to$ AWGN $\to Y \to$ Est $\to$ Enc' $\to nR$ bits $\to$ Dec $\to \hat{X}$
  • Encoding is hard • Decoding is easy

  Compress-and-Estimate (this talk):
  $X \to$ Linear Transform $H \to$ AWGN $\to Y \to$ Enc $\to nR$ bits $\to$ Dec' $\to \hat{Y} \to$ Est $\to \hat{X}$
  • Encoding is easy • Decoding is hard
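  The two pipelines differ only in whether estimation happens before or after the bit pipe. The sketch below contrasts them using hypothetical stand-ins: a crude uniform scalar quantizer in place of the rate-$R$ code and ridge least squares in place of the Bayes estimator.

```python
import numpy as np

def quantize(v, bits=3):
    """Uniform scalar quantizer (hypothetical stand-in for a rate-R code)."""
    lo, hi = v.min(), v.max()
    step = (hi - lo) / (2 ** bits)
    idx = np.clip(np.floor((v - lo) / step), 0, 2 ** bits - 1)
    return lo + (idx + 0.5) * step

def estimate(A, y, reg=0.1):
    """Ridge least squares (hypothetical stand-in for the Bayes estimator)."""
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)

def estimate_and_compress(H, Y):
    return quantize(estimate(H, Y))    # EC: Est, then Enc' (quantize the estimate)

def compress_and_estimate(H, Y):
    return estimate(H, quantize(Y))    # CE: Enc, then Est (estimate from quantized Y)
```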

  27. Table of Contents: Introduction (Motivation, Problem Formulation, Background); Main Results: CE w.r.t. Gaussian Codebooks (Compress-and-Estimate, Linear Transformation Compress-and-Estimate); Summary

  28–30. Result I

  Theorem (CE achievability): $D(P_X, \mu, R) \le M(P_X, T\mu)$, where $T$ is an SNR scaling operator applied to the spectral distribution of the sampling matrix:
  $$T(\lambda) = \frac{1 - 2^{-2R/\rho}}{1 + \frac{\gamma_X}{\rho\,\sigma^2}\, 2^{-2R/\rho}}\, \lambda$$

  [Figure: original spectrum $\mu$ vs. scaled spectrum $T\mu$; the point mass $1 - \rho$ at zero is unchanged, while the nonzero eigenvalues are attenuated.]

  Quantization is equivalent to spectrum attenuation.
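  Taking the theorem at face value, $T$ multiplies every eigenvalue by a single attenuation factor determined by $R$, $\rho$, $\gamma_X$, and $\sigma^2$. A minimal sketch (numeric values arbitrary; the symbols follow the theorem statement):

```python
import numpy as np

def T(lam, R, rho, gamma_X, sigma2):
    """SNR scaling operator of Result I: attenuates each eigenvalue lambda.
    The factor tends to 0 as R -> 0 (all information quantized away)
    and to 1 as R -> infinity (no attenuation, recovering the MMSE limit)."""
    q = 2.0 ** (-2.0 * R / rho)
    return (1.0 - q) / (1.0 + gamma_X / (rho * sigma2) * q) * lam

lam = np.array([0.0, 1.0, 1.0])       # toy spectrum with a point mass at zero
print(T(lam, R=1.0, rho=0.5, gamma_X=1.0, sigma2=1.0))  # zero stays zero; rest shrinks
```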

  31. Example I: Distortion vs Sampling Rate

  $P_X$: Bernoulli-Gauss; $\mu$: Marchenko-Pastur law.

  [Plot: MSE vs. sampling rate, comparing $D_{\text{CE}}$ (this talk) with $D_{\text{EC}}$ (previous), $D_{\text{Shannon}}$, and MMSE.]

  32. Example II: Distortion vs Sparsity

  $P_X$: Bernoulli-Gauss; $\mu$: Marchenko-Pastur law.

  [Plot: MSE vs. sparsity (sparse $\to$ dense), comparing $D_{\text{CE}}$ (this talk) with $D_{\text{EC}}$ (previous), $D_{\text{Shannon}}$, and MMSE.]

  33–35. Can we do better than $D_{\text{CE}}$?

  [Plot: the distortion-vs-sparsity curves of Example II, repeated.]

  Proposed refinement: apply a linear transform $L$ to the samples before encoding:
  $X \to$ Linear Transform $H \to$ AWGN $\to Y \to L \to \tilde{Y} \to$ Enc' $\to nR$ bits $\to$ Dec' $\to \hat{\tilde{Y}} \to$ Est $\to \hat{X}$
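  In code terms, the refinement inserts one matrix multiply ahead of the encoder. A sketch reusing the hypothetical quantize/estimate stand-ins from the EC-vs-CE sketch above; the choice of $L$ itself (the interesting design question) is left open here.

```python
def transform_compress_and_estimate(H, Y, L):
    """CE with a pre-quantization linear transform:
    Y -> L -> Y_tilde -> Enc' -> Dec' -> Est -> X_hat."""
    Y_tilde_hat = quantize(L @ Y)        # compress the transformed samples
    return estimate(L @ H, Y_tilde_hat)  # effective sensing matrix becomes L H
```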
