
SLIDE 1: Wiener Process to Bits (and back)

Alon Kipnis (Stanford)
joint work with Yonina Eldar (Technion) and Andrea Goldsmith (Stanford)
November 2017

SLIDE 2: Overview

Wiener process W_t, t ≥ 0
→ uniform sampling at rate f_s [smp/sec]: W̄_n = W_{n/f_s}, n = 0, 1, . . .
→ quantize / encode / compress at a finite "bitrate" R [bit/sec]
→ reconstruct / decode: Ŵ_t, t ≥ 0

goal: minimize ‖W_t − Ŵ_t‖²

result (example): for f_s = 1 [smp/sec] and R = 2 [bit/sec],
inf ‖W_t − Ŵ_t‖² = (18 + √3)/96

SLIDE 5: Motivation: Information Stability of the Wiener Process

[figure: a sample path of W_t on [0, T]]

Wiener process: W_t, t ≥ 0, Gaussian, E W_t = 0, E W_t W_s = min{t, s}

zero information:
mmse(∅) = (1/T) ∫_0^T var(W_t) dt = T/2

given the endpoint W_T of the interval, the conditional mean is the Brownian-bridge interpolation
W̃_t = E[W_t | W_T] = (t/T) W_T,
and
mmse = (1/T) ∫_0^T E(W_t − W̃_t)² dt = T/6

SLIDE 12: Motivation: Information Stability of the Wiener Process (continued)

[figure: a sample path of W_t with samples on a grid]

given the values of W_t on a grid of spacing T_s = f_s⁻¹, the process is a Brownian bridge on each interval, so
mmse(f_s) = T_s/6 = 1/(6 f_s)
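The 1/(6 f_s) value is easy to check by simulation: between consecutive samples the conditional process is a Brownian bridge, whose time-averaged conditional variance is T_s/6. A minimal Monte-Carlo sketch (discretization and sample counts are assumptions chosen for speed; T = 1):

```python
import math
import random

def bridge_mmse_mc(n_steps=500, n_paths=2000, seed=1):
    """Empirical (1/T) int_0^T E(W_t - E[W_t|W_T])^2 dt with T = 1.

    E[W_t | W_T] = (t/T) W_T is the Brownian-bridge mean, so the result
    should be close to T/6; per sampling interval T_s = 1/f_s this is
    exactly the mmse(f_s) = 1/(6 f_s) term on the slide.
    """
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    total = 0.0
    for _ in range(n_paths):
        path = [0.0]
        for _ in range(n_steps):
            path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
        w_T = path[-1]
        # Riemann sum of (W_t - (t/T) W_T)^2 over the time grid
        total += sum((w - i * dt * w_T) ** 2
                     for i, w in enumerate(path)) * dt
    return total / n_paths

print(bridge_mmse_mc())  # close to 1/6
```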

SLIDE 17: This Talk

minimal MSE when the samples are quantized at a finite bitrate:

W_t → [sample at f_s] → W̄_n → [encode at bitrate R] → representation → [decode] → Ŵ_t

questions:
• is the MSE bounded in T?
• what is the minimal MSE as a function of f_s and the bitrate R?

SLIDE 20: Background: Unconstrained Coding

W_t a Wiener process; encode the path W^T = {W_t, t ∈ [0, T]} into one of 2^{RT} binary words (0…00, 0…01, …, 1…11) and decode Ŵ^T.

Coding theorem [Berger '70]:
D_{W^T}(R) = inf (1/T) E ∫_0^T (W_t − Ŵ_t)² dt,
where the infimum is over joint distributions P_{W^T, Ŵ^T} with I(W^T; Ŵ^T) ≤ RT, and
D_W(R) = lim_{T→∞} D_{W^T}(R) = 2/(π² ln 2) · R⁻¹
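Berger's formula makes the information instability concrete: the distortion decays only like R⁻¹, not exponentially in R as for memoryless Gaussian sources. A quick numeric illustration (the i.i.d. comparison D(R) = 2^(−2R) is the standard scalar Gaussian distortion-rate function, added here only for contrast):

```python
import math

def D_W(R):
    # Berger's distortion-rate function of the Wiener process
    return 2.0 / (math.pi ** 2 * math.log(2.0) * R)

# doubling the bitrate only halves the distortion...
for R in (1.0, 2.0, 4.0):
    print(R, D_W(R))
# ...whereas for a memoryless unit-variance Gaussian source,
# D(R) = 2**(-2*R) drops by a factor of 16 per doubling from R = 2 to R = 4.
```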

SLIDE 27: Background: Unconstrained Coding (achievability)

distortion-rate function [Berger 1970]: D_W(R) = 2/(π² ln 2) · R⁻¹

• find the coefficients in the Karhunen–Loève expansion of W_t: A_k = ∫_0^T f_k(t) dW_t, k = 1, 2, . . .
• encode the coefficients using a standard random coding principle [Shannon]
• requires integration with respect to the Brownian path; in practice, imprecise at any timescale 😖

this talk: incorporate sampling into the model
SLIDE 35: Combined Sampling and Coding

W^T → [sampler at f_s = T_s⁻¹] → W̄^T → [encoder] → M ∈ {0, 1}^{⌊TR⌋} → [decoder] → Ŵ^T

D(f_s, R) := lim_{T→∞} inf_{W̄^T → M → Ŵ^T} (1/T) ∫_0^T E(W_t − Ŵ_t)² dt = ❓

[figure: MSE vs. f_s at R = 1; D(f_s, R) lies between the asymptotes D_W(R) = 2/(π² ln 2) and mmse(f_s) = (6 f_s)⁻¹]

SLIDE 42: Main Result: Minimal Distortion under Sampling and Coding

Theorem [K., Goldsmith, Eldar '16]: D(f_s, R) is given parametrically by
D(f_s, R_θ) = 1/(6 f_s) + (1/f_s) ∫_0^1 min{S_W̃(φ), θ} dφ
R_θ = (f_s/2) ∫_0^1 log⁺[S_W̃(φ)/θ] dφ,
where
S_W̃(φ) = 1/(2 sin(πφ/2))² − 1/6
is the asymptotic density of the Karhunen–Loève eigenvalues of W̃_t = E[W_t | W̄].

[figure: S_W̃(φ) on (0, 1] with water level θ]
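The parametric pair (R_θ, D) is easy to evaluate numerically: fix θ, integrate, and bisect on θ to hit a target bitrate. A sketch (quadrature resolution and bisection bracket are assumed numerical choices); at f_s = 1, R = 2 it should reproduce the value (18 + √3)/96 from the overview example:

```python
import math

def S(phi):
    # asymptotic density of the KL eigenvalues of E[W_t | samples]
    return 1.0 / (2.0 * math.sin(math.pi * phi / 2.0)) ** 2 - 1.0 / 6.0

def _integral(f, n=10_000):
    # midpoint rule on (0, 1); avoids the integrable blow-up of S at 0
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

def rate_for_theta(theta, fs):
    # R_theta = (fs / 2) * int_0^1 log2+ [ S(phi) / theta ] dphi
    return 0.5 * fs * _integral(lambda p: max(0.0, math.log2(S(p) / theta)))

def D(fs, R, iters=60):
    # bisect on theta (R_theta is decreasing in theta), geometrically
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        theta = math.sqrt(lo * hi)
        if rate_for_theta(theta, fs) > R:
            lo = theta
        else:
            hi = theta
    theta = math.sqrt(lo * hi)
    return 1.0 / (6.0 * fs) + _integral(lambda p: min(S(p), theta)) / fs

print(D(1.0, 2.0))  # ≈ (18 + √3)/96 ≈ 0.2055
```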

SLIDE 45: Minimal Distortion under Sampling and Coding: Proof

steps in the proof:
I. show that
D(f_s, R) = lim_{T→∞} inf_{I(W̄^T; Ŵ^T) ≤ RT} (1/T) ∫_0^T E(W_t − Ŵ_t)² dt
II. compute the solution to this optimization problem

Step I: define the distortion measure d : ℝ^{⌊T f_s⌋} × L²[0, T] → [0, ∞),
d(w̄^T, ŵ^T) := E[ (1/T) ∫_0^T (W_t − ŵ_t)² dt | W̄^T = w̄^T ],
so that
E d(W̄^T, Ŵ^T) = (1/T) ∫_0^T E(W_t − Ŵ_t)² dt.
Use standard random coding [Shannon] with respect to the samples W̄_n under the distortion measure d (rather than squared error).

SLIDE 53: Minimal Distortion under Sampling and Coding: Proof (continued)

Step II:
D(f_s, R) = mmse(W^T | W̄^T) + lim_{T→∞} inf_{I(W̃^T; Ŵ^T) ≤ RT} (1/T) ∫_0^T E(W̃_t − Ŵ_t)² dt

use the Karhunen–Loève transform of W̃_t to evaluate the last minimization:
λ_k f_k(t) = ∫_0^T f_k(s) E[W̃_t W̃_s] ds,  k = 1, 2, . . .

the covariance kernel of W̃^T has rank ⌊T f_s⌋, so one can "guess" the eigenvalues λ_1, . . . , λ_{⌊T f_s⌋}; their asymptotic distribution is S_W̃(φ)

SLIDE 59: Analysis

[figure: MSE vs. f_s at R = 1: D(f_s, R) lies between mmse(f_s) and D_W(R) = 2/(π² ln 2)]
[figure: MSE vs. R at f_s = 1: D(f_s, R) lies between mmse(f_s) = 1/(6 f_s) and D_W(R)]

in the low sampling rate regime, R/f_s ≥ (1 + log₂(√3 + 2))/2 ≈ 1.45:
D(f_s, R) = (1/f_s) ( 1/6 + ((2 + √3)/6) · 2^{−2R/f_s} )

example: D(f_s = 1, R = 2) = (18 + √3)/96
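The closed form is a one-liner to check against the example value (the validity threshold R/f_s ≳ 1.45 is taken from the slide):

```python
import math

def D_low_rate(fs, R):
    # closed form, valid for R/fs >= (1 + log2(sqrt(3) + 2)) / 2 ≈ 1.45
    assert R / fs >= (1.0 + math.log2(math.sqrt(3.0) + 2.0)) / 2.0
    return (1.0 / fs) * (1.0 / 6.0
                         + (2.0 + math.sqrt(3.0)) / 6.0 * 2.0 ** (-2.0 * R / fs))

print(D_low_rate(1.0, 2.0), (18.0 + math.sqrt(3.0)) / 96.0)  # equal
```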

SLIDE 66: Excess Distortion due to Sampling

R̄ = R/f_s (bits per sample)

excess distortion ratio: ρ(R̄) = D(f_s, R) / D_W(R)

• the excess distortion due to sampling is only a function of bits per sample
• example: ρ(R̄ = 1) ≈ 1.12, i.e., with 1 bit/smp one attains 1.12 times the optimal distortion at the same bitrate
• must have R̄ → 0 to get ρ → 1

[figure: ρ(R̄) vs. R̄ [bit/smp], decreasing to 1 as R̄ → 0]
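That ρ depends on R̄ = R/f_s alone follows from the scaling D(f_s, R) = f_s⁻¹ g(R/f_s) combined with D_W(R) ∝ R⁻¹. A sketch in the regime where the closed form holds (the pairs (f_s, R) = (1, 2) and (3, 6) share R̄ = 2); the quoted ρ(1) ≈ 1.12 lies below that regime and needs the full waterfilling solution, so it is not recomputed here:

```python
import math

def D_low_rate(fs, R):
    # closed form for R/fs >= ~1.45 (see the analysis slide)
    return (1.0 / fs) * (1.0 / 6.0
                         + (2.0 + math.sqrt(3.0)) / 6.0 * 2.0 ** (-2.0 * R / fs))

def rho(fs, R):
    # excess distortion ratio D(fs, R) / D_W(R), with D_W(R) = 2/(pi^2 ln2 R)
    return D_low_rate(fs, R) * (math.pi ** 2 * math.log(2.0) * R) / 2.0

print(rho(1.0, 2.0), rho(3.0, 6.0))  # identical: rho depends on R/fs only
```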

SLIDE 71: Real Stationary Gaussian Processes

X_t → [sample at f_s] → X̄_n → [encode at bitrate R] → [decode] → X̂_t,  with PSD S_X(f) = F{E X_t X_0}(f)

Theorem [K., Goldsmith, Eldar '14]:
D_X(f_s, R) = mmse(f_s) + ∫_{−f_s/2}^{f_s/2} min{S_X(f), θ} df
R_θ = (1/2) ∫_{−f_s/2}^{f_s/2} log⁺[S_X(f)/θ] df

[figure: S_X(f) with water level θ over the band of width f_s]
[figure: distortion vs. f_s: D_X(f_s, R) decreases with f_s and reaches D_X(R) at a rate f_R below f_Nyq]
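The same water-filling recipe can be evaluated for a stationary PSD. A sketch with an assumed Lorentzian (Ornstein–Uhlenbeck-type) spectrum; it computes only the in-band coding term, since the additive mmse(f_s) term is not given in closed form here, and checks that it decreases with the bitrate:

```python
import math

def S_X(f):
    # assumed example PSD (Lorentzian / Ornstein-Uhlenbeck type)
    return 1.0 / (1.0 + (2.0 * math.pi * f) ** 2)

def _integral(f, a, b, n=5_000):
    # midpoint rule on (a, b)
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def coding_distortion(fs, R, iters=60):
    # waterfilling over the band (-fs/2, fs/2): find theta with
    # (1/2) int log2+ [S_X/theta] df = R, return int min{S_X, theta} df
    def rate(theta):
        return 0.5 * _integral(
            lambda f: max(0.0, math.log2(S_X(f) / theta)), -fs / 2, fs / 2)
    lo, hi = 1e-9, 1e3
    for _ in range(iters):
        theta = math.sqrt(lo * hi)
        if rate(theta) > R:
            lo = theta
        else:
            hi = theta
    theta = math.sqrt(lo * hi)
    return _integral(lambda f: min(S_X(f), theta), -fs / 2, fs / 2)

d_r1, d_r2 = coding_distortion(1.0, 1.0), coding_distortion(1.0, 2.0)
print(d_r1, d_r2)  # strictly decreasing in R
```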

SLIDE 77: Classification of Gaussian Processes

for zero distortion (D_W(R) → 0) we must have R → ∞

how should f_s(R) be set so that D(f_s, R)/D_W(R) → 1 as R → ∞?
• Wiener process: R/f_s → 0
• bandlimited Gaussian processes: R/f_s → ∞
• Gauss-Markov (Ornstein–Uhlenbeck) process: R/f_s → 1/ln 2

Class 1: processes with rapidly decreasing spectrum
lim_{f_s→∞} (1/f_s) ∫_{−f_s}^{f_s} log⁺[S_X(f)/S_X(f_s)] df = ∞
R/f_s → ∞; challenge in encoding: high-resolution quantization

Class 2: processes with slowly decreasing spectrum
lim_{f_s→∞} (1/f_s) ∫_{−f_s}^{f_s} log⁺[S_X(f)/S_X(f_s)] df < ∞
R/f_s < ∞; challenge in encoding: adapting to a high innovation rate
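The dichotomy can be illustrated numerically on two assumed example spectra: a Lorentzian PSD (slow, polynomial tail) keeps the normalized log-integral bounded as f_s grows, while a Gaussian-shaped PSD (rapidly decreasing) drives it to infinity. Working with log-PSDs avoids overflow:

```python
import math

def norm_log_integral(logS, fs, n=20_000):
    # (1/fs) * int_{-fs}^{fs} log2+ [ S(f) / S(fs) ] df,
    # with S given through its natural log (midpoint rule)
    h = 2.0 * fs / n
    log_s_fs = logS(fs)
    total = 0.0
    for k in range(n):
        f = -fs + (k + 0.5) * h
        total += max(0.0, (logS(f) - log_s_fs) / math.log(2.0))
    return h * total / fs

def log_lorentz(f):
    # slowly decreasing spectrum: S_X(f) = 1 / (1 + (2 pi f)^2)
    return -math.log(1.0 + (2.0 * math.pi * f) ** 2)

def log_gauss(f):
    # rapidly decreasing spectrum: S_X(f) = exp(-f^2)
    return -f * f

for fs in (10.0, 100.0):
    print(fs, norm_log_integral(log_lorentz, fs),
          norm_log_integral(log_gauss, fs))
# the Lorentzian column saturates (Class 2); the Gaussian column grows (Class 1)
```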

SLIDE 90: Summary

• encoding a realization of the Wiener process involves sampling and quantization (encoding)
• closed-form expression for the distortion at any sampling rate and bitrate
• 1 bit per sample attains 1.12 times the optimal distortion at the same bitrate
• the sampling rate must increase faster than the bitrate in order to get D(f_s, R)/D_W(R) → 1
• a new way to classify the spectra of continuous-time signals, by the sampling rate f_s(R) needed for D_X(f_s(R), R)/D_X(R) → 1:
  Class 1: R/f_s → ∞ (bandlimited, rapidly decreasing PSD)
  Class 2: R/f_s < ∞ (Wiener, Gauss-Markov, …)

SLIDE 95: The End!

• A. Kipnis, A. J. Goldsmith and Y. C. Eldar, "Rate-distortion function of sampled Wiener processes", on arXiv
• A. Kipnis, A. J. Goldsmith, Y. C. Eldar and T. Weissman, "Distortion-rate function of sub-Nyquist sampled Gaussian sources", IEEE Trans. Info. Theory

[figure: distortion vs. f_s with asymptotes D_W(R) = 2 R⁻¹/(π² ln 2) and mmse(f_s); eigenvalue density 1/(4 sin²(πφ/2)) − 1/6 with water level θ]

SLIDE 96: Classification of Gaussian Processes (backup)

for zero distortion (D_W(R) → 0) we must have R → ∞

how should f_s(R) be set so that D(f_s, R)/D_W(R) → 1?
• Wiener process: R/f_s → 0
• bandlimited Gaussian processes [K., Goldsmith, Eldar, Weissman '13]: R/f_s → ∞
• Gauss-Markov process [K., Goldsmith, Eldar '14]: R/f_s → 1/ln 2