Wiener Process to Bits (and back)

Alon Kipnis (Stanford)
Joint work with Yonina Eldar (Technion) and Andrea Goldsmith (Stanford)
November 2017
Overview

Wiener process $W_t$, $t \ge 0$
→ uniform sampling at rate $f_s$: $\bar W_n = W_{n/f_s}$, $n = 0, 1, \ldots$
→ quantize / encode / compress at finite "bitrate" $R$
→ reconstruct / decode: $\widehat W_t$, $t \ge 0$

goal: minimize $\| W_t - \widehat W_t \|^2$

results (example): for $f_s = 1$ [smp/sec] and $R = 2$ [bit/sec],

  $\inf \| W_t - \widehat W_t \|_2^2 = \frac{18 + \sqrt{3}}{96}$
Motivation: Information Stability of the Wiener Process

Wiener process $W_t$ ($t \ge 0$, $\mathbb{E} W_t = 0$, $\mathbb{E} W_t W_s = \min\{t, s\}$, Gaussian)

[Figure: a sample path of $W_t$ over the interval $[0, T]$, together with the estimate $\widetilde W_t$]

zero information MSE:

  $\mathrm{mmse}(\emptyset) = \frac{1}{T} \int_0^T \mathrm{var}(W_t)\, dt = T/2$

given the endpoint of the interval:

  $\widetilde W_t \triangleq \mathbb{E}\left[ W_t \mid W_T \right] = \frac{t}{T}\, W_T$

  $\mathrm{mmse} = \frac{1}{T} \int_0^T \mathbb{E}\left( W_t - \widetilde W_t \right)^2 dt = T/6$
Motivation: Information Stability of the Wiener Process (cont.)

[Figure: a sample path of $W_t$ with samples on a uniform grid of spacing $T_s = f_s^{-1}$]

given points on a grid of spacing $T_s = f_s^{-1}$:

  $\mathrm{mmse}(f_s) = T_s / 6 = 1 / (6 f_s)$

this talk: minimal MSE when the samples are quantized at a finite bitrate

  $W_t$ → sample → $\bar W_n$ → encode → bitrate-$R$ representation → decode → $\widehat W_t$

is the MSE bounded in $T$? what is the minimal MSE as a function of $f_s$ and bitrate $R$?
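A quick Monte Carlo sanity check of the grid result (an illustration, not from the talk): conditioned on equally spaced samples, the optimal estimate on each interval is the Brownian-bridge mean, and the time-averaged MSE is $1/(6 f_s)$. The sketch below takes $f_s = 1$ on $[0, 1]$, so the bridge estimate is $t \cdot W_1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Brownian paths on [0, 1] and sample them at rate fs = 1
# (i.e., keep only the endpoint W_1). The bridge estimate on [0, 1]
# is E[W_t | W_0 = 0, W_1] = t * W_1, and the time-averaged MSE
# should be close to 1/(6 fs) = 1/6.
n_paths, n_grid = 4000, 200
dt = 1.0 / n_grid
t = np.arange(1, n_grid + 1) * dt                  # time grid (0, 1]
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_grid))
W = np.cumsum(increments, axis=1)                  # W_t on the grid, W_0 = 0
W1 = W[:, -1:]                                     # the samples (endpoints)
W_tilde = t * W1                                   # bridge estimate t * W_1
mse = np.mean((W - W_tilde) ** 2)                  # average over paths and time
print(mse)                                         # close to 1/6 ≈ 0.1667
```

Averaging the bridge variance $t(1-t)$ over $[0, 1]$ gives exactly $1/6$, which the simulation reproduces up to Monte Carlo noise.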
Background: Unconstrained Coding

$W^T = \{W_t,\ t \in [0, T]\}$ → encode → $RT$-bit representation ($0\ldots00$, $0\ldots01$, $\ldots$, $1\ldots11$) → decode → $\widehat W_t$

Coding Theorem [Berger '70]:

  $D_{W^T}(R) = \inf_{P_{W^T, \widehat W^T}} \frac{1}{T} \int_0^T \mathbb{E}\left( W_t - \widehat W_t \right)^2 dt \quad \text{s.t.} \quad I\big(P_{W^T, \widehat W^T}\big) \le RT$

  $D_W(R) = \lim_{T \to \infty} D_{W^T}(R) = \frac{2}{\pi^2 \ln 2}\, R^{-1}$
Background: Unconstrained Coding (cont.)

distortion-rate function [Berger 1970]:

  $D_W(R) = \frac{2}{\pi^2 \ln 2}\, R^{-1}$

achievability:
- find the coefficients in the Karhunen–Loève expansion of $W_t$:

    $A_k = \int_0^T f_k(t)\, dW_t, \qquad k = 1, 2, \ldots$

- encode the coefficients using a standard random coding principle [Shannon]
- requires integration with respect to the Brownian path; in practice, imprecise at any timescale

this talk: incorporate sampling into the model
$W^T$ → sample at rate $f_s$ ($T_s = f_s^{-1}$) → $\bar W^T$ → encoder → $M \in \{0, 1\}^{\lfloor TR \rfloor}$ → decoder → $\widehat W^T$

  $\lim_{T \to \infty}\ \inf_{\bar W^T \to M \to \widehat W^T}\ \frac{1}{T} \int_0^T \mathbb{E}\left( W_t - \widehat W_t \right)^2 dt = D(f_s, R) = \,?$

[Figure: MSE versus $f_s$ at $R = 1$; the unknown curve $D(f_s, R)$ lies between the asymptote $D_W(R) = \frac{2}{\pi^2 \ln 2}$ and $\mathrm{mmse}(f_s) = (6 f_s)^{-1}$]
Main Result: Minimal Distortion under Sampling and Coding

Theorem [K., Goldsmith, Eldar '16]

  $S_{\widetilde W}(\varphi) = \frac{1}{\left( 2 \sin(\pi \varphi / 2) \right)^2} - \frac{1}{6}$

  $R_\theta = \frac{f_s}{2} \int_0^1 \log^+\!\left[ S_{\widetilde W}(\varphi) / \theta \right] d\varphi$

  $D(f_s, R_\theta) = \frac{1}{6 f_s} + \frac{1}{f_s} \int_0^1 \min\left\{ S_{\widetilde W}(\varphi),\, \theta \right\} d\varphi$

[Figure: water-filling at level $\theta$ over $S_{\widetilde W}(\varphi)$, $\varphi \in [0, 1]$]

here $S_{\widetilde W}$ is the asymptotic density of the Karhunen–Loève eigenvalues of $\widetilde W_t = \mathbb{E}[W_t \mid \bar W]$
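The parametric (water-filling) characterization is straightforward to evaluate numerically: solve $R_\theta = R$ for the water level $\theta$, then plug it into the distortion integral. A sketch (function names are mine), checked against the closed-form value $(18 + \sqrt{3})/96$ quoted for $f_s = 1$, $R = 2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def S(phi):
    # asymptotic KL eigenvalue density of the estimation error process
    return 1.0 / (2.0 * np.sin(np.pi * phi / 2.0)) ** 2 - 1.0 / 6.0

def rate(theta, fs):
    # R(theta) = (fs/2) * int_0^1 log2^+ [S(phi)/theta] dphi
    f = lambda phi: max(np.log2(S(phi) / theta), 0.0)
    return 0.5 * fs * quad(f, 1e-12, 1.0, limit=200)[0]

def distortion(fs, R):
    # R(theta) is strictly decreasing in theta: invert it by root finding,
    # then evaluate the distortion integral at that water level.
    theta = brentq(lambda th: rate(th, fs) - R, 1e-9, 1e3)
    d = quad(lambda phi: min(S(phi), theta), 1e-12, 1.0, limit=200)[0]
    return 1.0 / (6.0 * fs) + d / fs

D = distortion(fs=1.0, R=2.0)
print(D, (18.0 + np.sqrt(3.0)) / 96.0)   # both ≈ 0.20554
```

The integrable singularity of $S_{\widetilde W}$ at $\varphi = 0$ is handled by the adaptive quadrature; the lower limit is nudged off zero only to avoid a division by zero at the endpoint.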
Minimal Distortion under Sampling and Coding — Proof

steps in proof:

I. show that:

  $\lim_{T \to \infty}\ \inf_{I(\bar W^T;\, \widehat W^T) \le RT}\ \frac{1}{T} \int_0^T \mathbb{E}\left( W_t - \widehat W_t \right)^2 dt = D(f_s, R)$

Step I: define the distortion measure $d: \mathbb{R}^{\lfloor T f_s \rfloor} \times L_2[0, T] \to [0, \infty)$,

  $d(\bar w^T, \hat w^T) \triangleq \mathbb{E}\left[ \frac{1}{T} \int_0^T (W_t - \hat w_t)^2\, dt \,\middle|\, \bar W^T = \bar w^T \right]$

so that

  $\mathbb{E}\, d(\bar W^T, \widehat W^T) = \frac{1}{T} \int_0^T \mathbb{E}\left( W_t - \widehat W_t \right)^2 dt$

use standard random coding [Shannon] with respect to the samples $\bar W_n$ under the metric $d$ (rather than squared error)
Minimal Distortion under Sampling and Coding — Proof (cont.)

Step II:

  $D(f_s, R) = \mathrm{mmse}(W^T \mid \bar W^T) + \lim_{T \to \infty}\ \inf_{I(\widetilde W^T;\, \widehat W^T) \le RT}\ \frac{1}{T} \int_0^T \mathbb{E}\left( \widetilde W_t - \widehat W_t \right)^2 dt$

use the Karhunen–Loève transform of $\widetilde W_t$ to evaluate the last minimization:

  $\lambda_k f_k(t) = \int_0^T f_k(s)\, \mathbb{E}\left[ \widetilde W_t \widetilde W_s \right] ds, \qquad k = 1, 2, \ldots$

the covariance kernel of $\widetilde W^T$ has rank $\lfloor T f_s \rfloor$; can "guess" the eigenvalues $\lambda_1, \ldots, \lambda_{\lfloor T f_s \rfloor}$

the asymptotic KL eigenvalue distribution is

  $S_{\widetilde W}(\varphi) = \frac{1}{\left( 2 \sin(\pi \varphi / 2) \right)^2} - \frac{1}{6}$
Results

[Figure 1: MSE versus $f_s$ at $R = 1$: $D(f_s, R)$ decreases from $\mathrm{mmse}(f_s)$ toward $D_W(R) = \frac{2}{\pi^2 \ln 2}$]

[Figure 2: MSE versus $R$ at $f_s = 1$: $D(f_s, R)$ lies between $D_W(R)$ and $\mathrm{mmse}(f_s) = \frac{1}{6 f_s}$]

low sampling rate regime: for $\frac{R}{f_s} \ge \frac{1 + \log(\sqrt{3} + 2)}{2} \approx 1.45$,

  $D(f_s, R) = \frac{1}{f_s} \left( \frac{1}{6} + \frac{2 + \sqrt{3}}{6}\, 2^{-2R/f_s} \right)$

in particular, $D(f_s = 1, R = 2) = \frac{18 + \sqrt{3}}{96}$
Excess Distortion Ratio

excess distortion ratio, with $\bar R = R / f_s$ (bits per sample):

  $\rho(\bar R) = \frac{D(f_s, R)}{D_W(R)}$

the excess distortion due to sampling is only a function of bits/smp

[Figure: $\rho(\bar R)$ versus $\bar R$ [bit/smp]; $\rho(\bar R = 1) \approx 1.12$]

example: with 1 bit/smp one can attain 1.12 times the optimal distortion at the same bitrate

must have $\bar R \to 0$ to get $\rho \to 1$
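Why $\rho$ depends only on $\bar R$: in the parametric solution both $f_s \cdot D(f_s, R)$ and $R / f_s$ are functions of the water level $\theta$ alone, and $f_s \cdot D_W(R) = 2 / (\pi^2 \ln 2\, \bar R)$. A numerical sketch of $\rho(\bar R)$ (function names are mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def S(phi):
    return 1.0 / (2.0 * np.sin(np.pi * phi / 2.0)) ** 2 - 1.0 / 6.0

def rho(rbar):
    # bits per sample as a function of the water level theta
    rate = lambda th: 0.5 * quad(
        lambda p: max(np.log2(S(p) / th), 0.0), 1e-12, 1.0, limit=200)[0]
    theta = brentq(lambda th: rate(th) - rbar, 1e-9, 1e3)
    # fs * D(fs, R) at this water level
    d_fs = 1.0 / 6.0 + quad(lambda p: min(S(p), theta), 1e-12, 1.0, limit=200)[0]
    # fs * D_W(R) at R = rbar * fs
    dw_fs = 2.0 / (np.pi ** 2 * np.log(2.0) * rbar)
    return d_fs / dw_fs

print(rho(1.0))   # ≈ 1.12, the value quoted on this slide
```

The ratio grows with $\bar R$: spending more bits per sample (rather than taking more samples) moves further away from the unconstrained optimum.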
Extension: Stationary Gaussian Processes

$X_t$ → sample at rate $f_s$ → $\bar X_n$ → encode → bitrate-$R$ representation → decode → $\widehat X_t$

power spectral density: $S_X(f) = \mathcal{F}\{\mathbb{E} X_t X_0\}(f)$

Theorem [K., Goldsmith, Eldar '14]

  $D_X(f_s, R_\theta) = \mathrm{mmse}(f_s) + \int_{-f_s/2}^{f_s/2} \min\left\{ S_X(f),\, \theta \right\} df$

  $R_\theta = \frac{1}{2} \int_{-f_s/2}^{f_s/2} \log^+\!\left[ S_X(f) / \theta \right] df$

[Figure 1: water-filling at level $\theta$ over $S_X(f)$ within the band of width $f_s$]

[Figure 2: distortion versus $f_s$: $D_X(f_s, R)$ decreases from $\mathrm{mmse}(f_s)$ and reaches $D_X(R)$ at a rate $f_R$ below $f_{\mathrm{Nyq}}$]
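The simplest instance of this water-filling (an illustration under my own assumptions, not an example from the talk): a flat bandlimited PSD $S_X(f) = 1$ on $[-W, W]$ sampled above Nyquist, so $\mathrm{mmse}(f_s) = 0$ and the theorem reduces to the classic distortion-rate function of a bandlimited Gaussian source:

```python
import math

# Flat PSD S_X(f) = 1 on [-W, W], fs >= 2W. For theta < 1 the theorem gives
#   R(theta) = (1/2) * 2W * log2(1/theta)  and  D = 2W * theta,
# i.e. the classic D_X(R) = 2W * 2**(-R/W).
def D_flat(W, R):
    theta = 2.0 ** (-R / W)   # water level solving R(theta) = R
    return 2.0 * W * theta    # = int_{-W}^{W} min{1, theta} df

print(D_flat(0.5, 1.0))   # 2**(-2) = 0.25
```

Each extra bit per hertz of (two-sided) bandwidth quarters the distortion, the familiar 6 dB/bit behaviour.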
for zero distortion must have $R \to \infty$ ($D_W(R) \to 0$)

how to set $f_s(R)$ so that $\frac{D(f_s, R)}{D_W(R)} \to 1$ as $R \to \infty$?

- Wiener process: $R / f_s \to 0$
- bandlimited Gaussian processes: $R / f_s \to \infty$
- Gauss-Markov (Ornstein–Uhlenbeck) process: $R / f_s \to 1 / \ln 2$
how to set $f_s(R)$ so that $\frac{D(f_s, R)}{D_W(R)} \to 1$ as $R \to \infty$?

Class 1: processes with rapidly decreasing spectrum

  $\lim_{f_s \to \infty} \frac{1}{f_s} \int_{-f_s}^{f_s} \log^+ \frac{S_X(f)}{S_X(f_s)}\, df = \infty$

  requires $R / f_s \to \infty$; challenge in encoding: high-resolution quantization

Class 2: processes with slowly decreasing spectrum

  $\lim_{f_s \to \infty} \frac{1}{f_s} \int_{-f_s}^{f_s} \log^+ \frac{S_X(f)}{S_X(f_s)}\, df < \infty$

  requires only $R / f_s < \infty$; challenge in encoding: adapting to high innovation rate
Summary

- encoding a realization of the Wiener process involves sampling and quantization (encoding)
- closed-form expression for the distortion at any sampling rate and bitrate
- 1 bit per sample attains 1.12 times the optimal distortion at the same bitrate
- the sampling rate must increase faster than the bitrate in order to get $D(f_s, R) / D_W(R) \to 1$
- a new way to classify the spectrum of continuous-time signals, by the $R / f_s$ needed for $D_X(f_s(R), R) / D_X(R) \to 1$:
  Class 1 (bandlimited, rapidly decreasing PSD): $R / f_s \to \infty$
  Class 2 (Wiener, Gauss-Markov, ...): $R / f_s < \infty$
References

- "…function of sampled Wiener processes", on arXiv
- "Distortion-rate function of sub-Nyquist sampled Gaussian sources", IEEE Trans. Info. Th.
Backup

for zero distortion must have $R \to \infty$ ($D_W(R) \to 0$)

how to set $f_s(R)$ so that $\frac{D(f_s, R)}{D_W(R)} \to 1$?

- Wiener process: $R / f_s \to 0$
- bandlimited Gaussian processes [K., Goldsmith, Eldar, Weissman '13]: $R / f_s \to \infty$
- Gauss-Markov process [K., Goldsmith, Eldar '14]: $R / f_s \to 1 / \ln 2$