

SLIDE 1

Compressed Sensing under Optimal Quantization

Alon Kipnis (Stanford), Galen Reeves (Duke), Yonina Eldar (Technion), Andrea Goldsmith (Stanford)

ISIT, June 2017

SLIDE 2

Table of Contents

Introduction Remote Source Coding Compressed Sensing Results Summary

2 / 22

SLIDE 5

Remote source coding

[Dobrushin & Tsybakov ’62]

X → Channel → Y → Enc → {1, …, 2^{NR}} → Dec → X̂

D_{X|Y}(R) = min_{P(x̂|y)} E[ d(X, X̂) ]

◮ Estimation under communication constraints
◮ Learning from noisy data
◮ Close connection between inference and compression

3 / 22


SLIDE 7

Two coding schemes:

Estimate-and-compress:
X → Channel → Y → Est → X̂(Y) → Enc → {1, …, 2^{NR}} → Dec → X̂

Compress-and-estimate [Kipnis, Rini, Goldsmith ’16]:
X → Channel → Y → Enc → {1, …, 2^{NR}} → Dec → Ŷ → Est → X̂

4 / 22


SLIDE 9

Example: IID source, Gaussian noise, MSE distortion

[Plot: distortion D versus rate R, showing the curves DX(R) and DX|Y(R) above the floor mmse(X|Y).]

5 / 22
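For the IID Gaussian case in this example, the remote DRF has a closed form: the distortion splits into the estimation floor mmse(X|Y) plus the Gaussian DRF of the MMSE estimate E[X|Y], a standard result for Gaussian sources under MSE. A minimal sketch (the function and its parameter names are illustrative, not from the slides):

```python
import numpy as np

def remote_drf_gaussian(R, sigma_x2=1.0, sigma_n2=1.0):
    """Remote DRF D_{X|Y}(R) for X ~ N(0, sigma_x2) observed through
    AWGN, Y = X + N(0, sigma_n2), under MSE distortion.
    Splits as mmse(X|Y) plus the DRF of the estimator E[X|Y]."""
    mmse = sigma_x2 * sigma_n2 / (sigma_x2 + sigma_n2)  # estimation floor
    est_var = sigma_x2 - mmse                           # Var(E[X|Y])
    return mmse + est_var * 2.0 ** (-2.0 * R)
```

At R = 0 the distortion equals the source variance; as R grows it approaches the floor mmse(X|Y), matching the shape of the plotted curves.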


SLIDE 13

Compressed sensing with quantization

Y = √snr · H X + W,  H ∈ R^{M×N}

X → Linear Transform (H, an M × N matrix) → AWGN → Y → Enc → {1, …, 2^{NR}} → Dec → X̂

Goal is to understand fundamental tradeoffs between
◮ Bitrate R
◮ MSE distortion D
◮ Sampling rate δ = M/N

6 / 22
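The measurement model can be simulated directly. The 1/√N column scaling of H below is an assumption typical of large-system analyses, not something the slide specifies:

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta, snr = 1000, 0.5, 5.0
M = int(delta * N)                            # sampling rate delta = M/N

X = rng.standard_normal(N)                    # source vector (IID Gaussian here)
H = rng.standard_normal((M, N)) / np.sqrt(N)  # IID matrix, assumed normalization
W = rng.standard_normal(M)                    # AWGN

Y = np.sqrt(snr) * H @ X + W                  # Y = sqrt(snr) H X + W
```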

SLIDE 14

Related work on quantization

◮ Gaussian sources – Kipnis, Goldsmith, Eldar, Weissman ’16
◮ Scalar quantization – Goyal, Fletcher, Rangan ’08
◮ Lasso recovery – Sue, Goyal ’09
◮ Optimal high-rate asymptotics – Wu, Verdú ’12; Dai, Milenkovic ’11
◮ 1-bit quantization – Boufounos, Baraniuk ’08; Plan, Vershynin ’13
◮ Remote source coding with side information – Guler, MolavianJazi, Yener ’15
◮ Lower bound on optimal quantization – Leinonen, Codreanu, Juntti, Kramer ’16
◮ Sampling rate distortion – Boda, Narayan ’17
◮ Distributed coding of multispectral images – Goukhshtein, Boufounos, Koike-Akino, Draper ’17

7 / 22


SLIDE 17

Fundamental limits of compressed sensing

Y = √snr · H X + W,  H ∈ R^{M×N},  M, N → ∞

◮ Guo & Verdú 2005 analyze the large-system limit with IID matrices using the heuristic replica method from statistical physics.
◮ Rigorous results for special cases: Verdú & Shamai 1999, Tse & Hanly 1999, Montanari & Tse 2006, Korada & Macris 2010, Bayati & Montanari 2011, R. & Gastpar 2012, Wu & Verdú 2012, Krzakala et al. 2013, Donoho et al. 2013, Huleihel & Merhav 2016
◮ R. & Pfister 2016 provide a rigorous derivation of mutual information and MMSE limits for Gaussian matrices. The proof uses a conditional CLT (see tomorrow’s talk).

8 / 22


SLIDE 21

Characterization of limits via decoupling principle

Y = √snr · H X + W   (compressed sensing)   ↔   Ỹ = √s* · X + W̃   (signal plus noise)

◮ Conditional distribution of X given (Y, H) is complicated!
◮ Conditional distribution of small subsets of X given (Y, H) is characterized by the signal-plus-noise model, i.e. there exists a coupling on (Y, H, Ỹ) such that

P_{X_S | Y, H}(· | Y, H) ≈ ∏_{i ∈ S} P_{X_i | Ỹ_i}(· | Ỹ_i)

◮ Effective SNR given by

s* = arg min_s [ I(X; √s X + W) + (δ/2) ( log(δ·snr/s) + s/(δ·snr) − 1 ) ]

9 / 22
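As a sketch of how s* behaves, the minimization can be evaluated numerically. The snippet assumes a standard Gaussian prior, for which I(X; √s X + W) = ½ log(1 + s); other priors would need their own mutual-information curve, and the grid search is purely illustrative:

```python
import numpy as np

def effective_snr(snr, delta):
    """Grid-minimize F(s) = I(X; sqrt(s) X + W)
       + (delta/2) * (log(delta*snr/s) + s/(delta*snr) - 1)
    for a standard Gaussian prior (illustrative assumption)."""
    s = np.linspace(1e-6, delta * snr + 10.0, 200_000)
    mi = 0.5 * np.log1p(s)                     # Gaussian mutual information
    penalty = (delta / 2) * (np.log(delta * snr / s) + s / (delta * snr) - 1)
    return s[np.argmin(mi + penalty)]
```

Sampling fewer measurements per source symbol (smaller δ) lowers the effective SNR of the decoupled channel.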
SLIDE 22

Table of Contents

Introduction Remote Source Coding Compressed Sensing Results Summary

10 / 22


SLIDE 25

Estimate and compress + decoupling

X → Linear Transform (H) → AWGN → Y → Est → E[X | Y, H] → Enc → (rate R) → Dec → X̂

◮ Idea is to compress the conditional expectation using the marginal approximation given by the signal-plus-noise model.
◮ Encoding and decoding do not depend on the matrix.

11 / 22


SLIDE 27

Results

Theorem (Achievability via estimate-and-compress)
For every ε > 0, there exists N large enough and a rate-R quantization scheme such that

E[ (1/N) ‖X − X̂‖² ] ≤ D_{X | √s* X + W}(R) + ε,

where s* is defined by (P_X, δ, snr).

Theorem (Converse for bounded subsets)
For every ε > 0 and fixed subset S, there exists N₀ large enough such that for any N ≥ N₀ and any quantization scheme using |S|·R bits,

E[ (1/|S|) ‖X_S − X̂_S‖² ] ≥ D_{X | √s* X + W}(R) − ε,

where s* is defined by (P_X, δ, snr).

12 / 22

SLIDE 28

Bounds described by single-letter DRF

[Plots: distortion D versus rate R at high and low sampling rates, showing the curves mmse, DX, and DEC.]

13 / 22
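Under the same Gaussian-prior assumption, the estimate-and-compress bound D_{X|√s* X+W}(R) reduces to a single-letter Gaussian remote DRF with the effective SNR plugged in. A self-contained sketch (not the general formula for arbitrary P_X; the grid search is illustrative):

```python
import numpy as np

def d_ec_gaussian(R, snr, delta):
    """EC distortion for a standard Gaussian prior: find the effective
    SNR s* by minimizing the replica potential, then evaluate the
    Gaussian remote DRF of X given sqrt(s*) X + W at rate R."""
    s = np.linspace(1e-6, delta * snr + 10.0, 200_000)
    F = 0.5 * np.log1p(s) + (delta / 2) * (
        np.log(delta * snr / s) + s / (delta * snr) - 1)
    s_star = s[np.argmin(F)]
    mmse = 1.0 / (1.0 + s_star)               # mmse(X | sqrt(s*) X + W)
    return mmse + (1.0 - mmse) * 2.0 ** (-2.0 * R)
```

At R = 0 the bound is the source variance; as R grows it saturates at the decoupled MMSE floor, which is exactly the behavior sketched in the plots.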

SLIDE 29

Are we done?

14 / 22


SLIDE 32

Compress and estimate + decoupling

X → Linear Transform (H) → AWGN → Y → Enc → (rate R) → Dec → Ŷ → Est → X̂

◮ First compress measurements using Gaussian quantization.
◮ Then estimate the signal from the reconstructed measurements, treating quantization error as additional noise.
◮ Encoding and decoding do not depend on the matrix.

15 / 22

SLIDE 33

Result

Theorem (Achievability via compress-and-estimate)
For every ε > 0, there exists N large enough and a rate-R quantization scheme such that

E[ (1/N) ‖X − X̂‖² ] ≤ mmse(X | √s′ X + W) + ε,

where s′ is defined by (P_X, δ, snr′) with

snr′ = snr · (1 − 2^{−2R/δ}) / (1 + snr · 2^{−2R/δ})

16 / 22
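A matching sketch for compress-and-estimate under the same standard-Gaussian-prior assumption: quantizing at R/δ bits per measurement reduces the measurement SNR to snr′ as in the theorem, and the distortion is then the decoupled MMSE at the corresponding effective SNR (helper name and grid search are illustrative):

```python
import numpy as np

def d_ce_gaussian(R, snr, delta):
    """CE distortion for a standard Gaussian prior: quantization at rate
    R/delta bits per measurement reduces the SNR to snr', and the
    distortion is the decoupled mmse(X | sqrt(s') X + W)."""
    q = 2.0 ** (-2.0 * R / delta)             # quantization penalty
    snr_p = snr * (1 - q) / (1 + snr * q)     # reduced SNR from the theorem
    if snr_p <= 0:                            # R = 0: no information survives
        return 1.0
    s = np.linspace(1e-6, delta * snr_p + 10.0, 200_000)
    F = 0.5 * np.log1p(s) + (delta / 2) * (
        np.log(delta * snr_p / s) + s / (delta * snr_p) - 1)
    s_prime = s[np.argmin(F)]
    return 1.0 / (1.0 + s_prime)              # mmse(X | sqrt(s') X + W)
```

As R → ∞, snr′ → snr and the CE distortion approaches the unquantized decoupled MMSE, but at finite rates it sits strictly above it.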


SLIDE 35

Comparison of achievability results

[Plots: distortion D versus rate R at high and low sampling rates, comparing DEC and DCE against DX and the mmse floor.]

Neither scheme is optimal in general!

17 / 22

SLIDE 36

Two different quantization schemes

Estimate-and-compress (EC):
X → Linear Transform (H) → AWGN → Y → Est → E[X | Y, H] → Enc → (rate R) → Dec → X̂

Compress-and-estimate (CE):
X → Linear Transform (H) → AWGN → Y → Enc → (rate R) → Dec → Ŷ → Est → X̂

18 / 22

SLIDE 37

Example: Distortion vs sampling rate

[Plot: distortion D versus sampling rate δ, comparing DCE and DEC against DX and the mmse floor.]

19 / 22

SLIDE 38

Example: Distortion vs SNR

[Plot: distortion D versus snr, comparing DEC and DCE against DX and the mmse floor.]

20 / 22

SLIDE 39

Table of Contents

Introduction Remote Source Coding Compressed Sensing Results Summary

21 / 22

SLIDE 40

Summary

◮ Quantized compressed sensing – tradeoffs between sampling rate, SNR, bitrate, and distortion.
◮ Conditional distribution of small subsets is described by a signal-plus-noise model. Rigorous results due to R. & Pfister ’16.
◮ Converse for small subsets
◮ Achievability for estimate-and-compress
◮ Achievability for compress-and-estimate (see talk at 17:00)
◮ Interesting open questions remain...

22 / 22