SLIDE 1

Fixed Delay Joint Source Channel Coding for Finite Memory Systems

Aditya Mahajan and Demosthenis Teneketzis

Dept. of EECS, University of Michigan, Ann Arbor, MI 48109

ISIT 2006, July 13, 2006

SLIDE 2

Fixed Delay & Fixed Complexity

SLIDE 3

Motivation

ISIT 2006

Mahajan Teneketzis: Fixed Delay Joint Source Channel Coding

  • Classical information theory does not take delay and complexity into account.

  • Why consider delay and complexity?

  • Delay:

    − QoS (end-to-end delay) in communication networks.
    − Control over communication channels.
    − Decentralized detection in sensor networks.

  • Complexity (size of lookup table):

    − Cost.
    − Power consumption.

SLIDE 4

Finite Delay Communication


  • Separation Theorem: distortion d is feasible iff

        Rate Distortion of Source < Channel Capacity,  i.e.,  R(d) < C.

  • For finite delay systems the Separation Theorem does not hold.
  • What is the equivalent of rate distortion and channel capacity?
  • Find a metric to check whether a distortion level d is feasible or not.
  • The metric will depend on the source and the channel.
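The separation criterion above can be made concrete on a textbook instance. The sketch below (illustrative numbers, not from the talk) checks R(d) < C for a binary symmetric source with Hamming distortion, where R(d) = 1 − h(d), against a BSC(p), where C = 1 − h(p); this is the infinite-delay benchmark that the finite-delay setting departs from.

```python
from math import log2

# Separation-theorem feasibility check (infinite-delay benchmark; numbers
# are illustrative, not from the talk): for a binary symmetric source with
# Hamming distortion and a BSC(p), distortion d is achievable iff
# R(d) = 1 - h(d) < C = 1 - h(p), i.e. iff d > p.
def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

R = lambda d: max(0.0, 1.0 - h(d))   # rate-distortion function, 0 <= d <= 1/2
C = 1.0 - h(0.1)                     # BSC with crossover p = 0.1

for d in (0.05, 0.1, 0.2):
    print(d, R(d) < C)               # only d = 0.2 is feasible here
```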
SLIDE 5

Problem Formulation

Objective: Evaluate the optimal performance R⁻¹(C) for the simplest non-trivial system:

  − Markov source
  − memoryless noisy channel
  − additive distortion

Constraints:

  − Use stationary encoding and decoding schemes.
  − Fixed memory available at the encoder and the decoder.

SLIDE 6

Model


  • Markov Source:

    − Source output { X1, X2, . . . }, Xn ∈ X.
    − Transition probability matrix P.

  • Finite State Encoder:

    − Input Xn, state Sn, output Zn:

        Zn = f(Xn, Sn−1), Zn ∈ Z
        Sn = h(Xn, Sn−1), Sn ∈ S

  • Memoryless Channel:

        Pr( Yn | Z1, . . . , Zn, Y1, . . . , Yn−1 ) = Pr( Yn | Zn ) = Q(Yn, Zn)

SLIDE 7

Model


  • Finite State Decoder:

    − Input Yn, state Mn, output X̂n:

        X̂n = g(Yn, Mn−1), X̂n ∈ X̂
        Mn = l(Yn, Mn−1), Mn ∈ M

  • Distortion Metric:

        ρ : X × X̂ → [0, K], K < ∞

  • D-step delay:

        ρ(Xn−D, X̂n)
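The model is easy to simulate forward. A minimal sketch, with all parameters illustrative and not from the talk: a hand-picked transition matrix P, a BSC(0.1), one-bit memories, delay D = 1, and a naive "relay, then lag one step" design (f, h, g, l).

```python
import random

# Forward simulation of the model on a toy instance (all parameters
# illustrative, not from the talk): binary Markov source, one-bit Tx/Rx
# memories, BSC(0.1), delay D = 1.
random.seed(0)
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # source transition matrix

def bsc(z, p=0.1):
    """Memoryless binary symmetric channel Q with crossover p."""
    return z ^ (random.random() < p)

f = lambda x, s: x    # encoder: transmit the current source symbol
h = lambda x, s: x    # Tx memory update: remember the last symbol (unused here)
g = lambda y, m: m    # decoder: output the previous channel observation
l = lambda y, m: y    # Rx memory update: remember the last observation

D, N = 1, 100_000
x = s = m = 0
total, xs = 0.0, []
for n in range(N):
    x = 0 if random.random() < P[x][0] else 1   # Markov source step
    z = f(x, s); s = h(x, s)                    # encode, update Tx memory
    y = bsc(z)                                  # channel use
    xhat = g(y, m); m = l(y, m)                 # decode, update Rx memory
    xs.append(x)
    if n >= D:
        total += float(xs[n - D] != xhat)       # delayed distortion rho(X_{n-D}, Xhat_n)

avg = total / (N - D)
print(round(avg, 3))   # close to the channel crossover 0.1 for this design
```

For this particular design the decoder simply echoes the previous channel output, so the average delayed distortion concentrates near the crossover probability 0.1.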

SLIDE 8

Problem Formulation


Markov Source (P) → Finite State Encoder (f, h; memory Sn−1) → Memoryless Channel (Q) → Finite State Decoder (g, l; memory Mn−1):  Xn → Zn → Yn → X̂n

Problem (P1) Given source (X, P), channel (Z, Y, Q), memory (S, M) and distortion (ρ, D), determine encoder (f, h) and decoder (g, l) so as to minimize

    J(f, h, g, l) ≜ lim sup_{N→∞} (1/N̄) E[ Σ_{n=D+1}^{N} ρ(Xn−D, X̂n) | f, h, g, l ],

where N̄ = N − D + 1.

SLIDE 9

Literature Overview


  • Transmitting a Markov source through finite-state machines acting as encoders and decoders.

  • The problem was considered by Gaarder and Slepian in the mid-1970s.

  • N. T. Gaarder and D. Slepian,
    “On optimal finite-state digital communication systems,”
    ISIT, Grignano, Italy, 1979;
    IEEE Trans. Inform. Theory, vol. 28, no. 2, pp. 167–186, 1982.

SLIDE 10

Our Approach


  • Start with a simpler (to analyze) problem:

    − finite horizon
    − zero delay
    − time-varying design

  • This is a dynamic team problem, solved using stochastic optimization techniques.

  • Extend to the finite delay problem.
  • Extend to the infinite horizon problem.
  • Find conditions under which time-invariant (stationary) designs are optimal.
  • Low complexity algorithms to obtain the optimal performance and an optimal design.

SLIDE 11

Finite Horizon Problem

SLIDE 12

Finite Horizon Case — Model


  • Encoder and Tx Memory Update:

        Zn = fn(Xn, Sn−1)        f ≜ (f1, . . . , fN)
        Sn = hn(Xn, Sn−1)        h ≜ (h1, . . . , hN)

  • Decoder and Rx Memory Update:

        X̂n = gn(Yn, Mn−1)        g ≜ (g1, . . . , gN)
        Mn = ln(Yn, Mn−1)        l ≜ (l1, . . . , lN)

  • Delay D = 0.
SLIDE 13

Finite Horizon Problem Formulation


Markov Source (P) → Finite State Encoder (f, h; memory Sn−1) → Memoryless Channel (Q) → Finite State Decoder (g, l; memory Mn−1):  Xn → Zn → Yn → X̂n

Problem (P2) Given source (X, P), channel (Z, Y, Q), memory (S, M), distortion (ρ, D = 0) and horizon N, determine encoder (f, h) and decoder (g, l) so as to minimize

    JN(f, h, g, l) ≜ E[ Σ_{n=1}^{N} ρ(Xn, X̂n) | f, h, g, l ],

where f ≜ (f1, . . . , fN), and similarly for h, g, l.
SLIDE 14

Solution Concept in Sequential Stochastic Optimization


  • One Step Optimization:

        min_{f1,...,fN, h1,...,hN, g1,...,gN, l1,...,lN}  E[ Σ_{n=1}^{N} ρ(Xn, X̂n) | f, h, g, l ]

  • 4N Step Optimization (Sequential Decomposition):

        min_{f1} min_{g1} min_{l1} min_{h1} · · · min_{fN} min_{gN} min_{lN} min_{hN}  E[ Σ_{n=1}^{N} ρ(Xn, X̂n) | f, h, g, l ]
SLIDE 15

Dynamic Team Problems


  • Team decision theory: distributed agents with a common objective

    − Marschak and Radner
    − Witsenhausen

  • Decentralization of information: the encoder and decoder have different views of the world.

  • Non-classical information pattern.
  • Non-convex functional optimization problem.
  • The most important step is identifying an information state.
SLIDE 16

Information State

If ϕn−1 is the information state at time n−1 (and γn ≜ (fn, hn, gn, ln)):

  • State in the sense of

        · · · → ϕn−1 --Tn−1(γn)--> ϕn --Tn(γn+1)--> ϕn+1 → · · ·

  • Absorbs the effect of past decision rules on future performance:

        E[ Σ_{i=n}^{N} ρ(Xi, X̂i) | γ_1^N ] = E[ Σ_{i=n}^{N} ρ(Xi, X̂i) | π0_{n−1}, γ_n^N ]

SLIDE 17

Find an information state for Problem (P2)

SLIDE 18

Find an information state for Problem (P2)

Guess & Verify

SLIDE 19

Information State for (P2)


  • Definition

        π1_n ≜ Pr(Xn, Yn, Sn−1, Mn−1)
        π2_n ≜ Pr(Xn, Sn−1, Mn)
        π0_n ≜ Pr(Xn, Sn, Mn)

SLIDE 20

Information State for (P2)


  • Lemma. For all n = 1, . . . , N, there exist linear transforms T0, T1, T2 such that

        π0_{n−1} --T0_{n−1}(fn)--> π1_n --T1_n(ln)--> π2_n --T2_n(hn)--> π0_n
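The linearity asserted by the lemma can be checked concretely: for a fixed encoder rule fn, the update π0 → π1 is multiplication by an explicit matrix. A pure-Python sketch on a toy instance (illustrative P and Q, not from the talk; singleton S and M, so π0 is just a distribution over X and π1 one over (X′, Y)):

```python
# The information-state update pi0 -> pi1 for a fixed encoder rule f is a
# linear map: pi1(x', y) = sum_x pi0(x) * P[x][x'] * Q[f[x']][y].
# Toy instance with illustrative P and Q (not from the talk).
P = [[0.9, 0.1], [0.2, 0.8]]   # P[x][x']
Q = [[0.8, 0.2], [0.2, 0.8]]   # Q[z][y]
f = [0, 1]                     # encoder rule z = f[x']

# Rows of A are indexed by (x', y), columns by x:
# A[(x', y), x] = P[x][x'] * Q[f[x']][y], so pi1 = A pi0.
A = [[P[x][x2] * Q[f[x2]][y] for x in range(2)]
     for x2 in range(2) for y in range(2)]

pi0 = [0.3, 0.7]
pi1 = [sum(row[x] * pi0[x] for x in range(2)) for row in A]
assert abs(sum(pi1) - 1.0) < 1e-12      # the linear map preserves total mass
print([round(p, 3) for p in pi1])       # -> [0.328, 0.082, 0.118, 0.472]
```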

SLIDE 21

Information State for (P2)


  • Lemma (cont.) For all n = 1, . . . , N, the expected instantaneous cost can be written as

        E[ ρ(Xn, X̂n) | fn, hn, gn, ln ] = ρ̂(π1_n, gn)

SLIDE 22

Solution of (P2)

Dynamic Program

  • For n = 1, . . . , N:

        V0_{n−1}(π0_{n−1}) = min_{fn} V1_n( T0(fn) π0_{n−1} ),

        V1_n(π1_n) = Vn(π1_n) + min_{ln} V2_n( T1(ln) π1_n ),

        Vn(π1_n) = min_{gn} ρ̂(π1_n, gn),

        V2_n(π2_n) = min_{hn} V0_n( T2(hn) π2_n ),

    and V0_N(π0_N) ≜ 0.
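The dynamic program can be exercised exactly on a toy instance, since with a fixed initial π0_0 and deterministic stage rules only finitely many information states are reachable. The sketch below (all numbers illustrative, not from the talk) uses binary alphabets and singleton memories |S| = |M| = 1, so the minimizations over ln and hn are trivial and each stage reduces to a min over fn followed by a min over gn; V0 is computed by memoized recursion over reachable information states.

```python
from itertools import product
from functools import lru_cache

# Toy instance (all numbers illustrative): binary source/channel alphabets,
# singleton encoder/decoder memories, delay D = 0, horizon N = 3.
X = Z = Y = Xhat = (0, 1)
S = M = (0,)                                      # singleton memories
N = 3
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # source transitions P[x][x']
Q = {z: {z: 0.8, 1 - z: 0.2} for z in Z}          # BSC(0.2): Q[z][y]
rho = lambda x, xh: float(x != xh)                # Hamming distortion

# All deterministic stage rules f_n: X x S -> Z and g_n: Y x M -> Xhat.
encoders = [dict(zip(list(product(X, S)), zs))
            for zs in product(Z, repeat=len(X) * len(S))]
decoders = [dict(zip(list(product(Y, M)), xs))
            for xs in product(Xhat, repeat=len(Y) * len(M))]

def T0(f, pi0):
    """pi0 = Pr(X_{n-1}, S_{n-1}, M_{n-1}) -> pi1 = Pr(X_n, Y_n, S_{n-1}, M_{n-1})."""
    pi1 = {}
    for (x, s, m), p in pi0.items():
        for x2 in X:
            for y in Y:
                k = (x2, y, s, m)
                pi1[k] = pi1.get(k, 0.0) + p * P[x][x2] * Q[f[(x2, s)]][y]
    return pi1

def rho_hat(pi1, g):
    """Expected instantaneous cost rho_hat(pi1_n, g_n) from the lemma."""
    return sum(p * rho(x, g[(y, m)]) for (x, y, s, m), p in pi1.items())

def T12(pi1):
    """T1(l) then T2(h) with singleton memories: marginalize to Pr(X_n, S_n, M_n)."""
    pi0 = {}
    for (x, y, s, m), p in pi1.items():
        pi0[(x, 0, 0)] = pi0.get((x, 0, 0), 0.0) + p
    return pi0

def freeze(pi):
    """Hashable, rounded key so information states can index the value function."""
    return tuple(sorted((k, round(v, 12)) for k, v in pi.items()))

@lru_cache(maxsize=None)
def V0(pi0_key, n):
    """V0_n at an information state; V0_N = 0 is the terminal condition."""
    if n == N:
        return 0.0
    pi0 = dict(pi0_key)
    best = float("inf")
    for f in encoders:                                    # min over f_{n+1}
        pi1 = T0(f, pi0)
        stage = min(rho_hat(pi1, g) for g in decoders)    # min over g_{n+1}
        best = min(best, stage + V0(freeze(T12(pi1)), n + 1))
    return best

pi0_init = {(x, 0, 0): 0.5 for x in X}    # uniform prior on the initial symbol
print(round(V0(freeze(pi0_init), 0), 4))
```

For this instance relaying the source symbol through the channel and decoding the received bit is optimal at every stage, giving per-stage distortion 0.2 and total J* = 0.6.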

SLIDE 23

Solution of (P2)


  • The arg min at each step determines the corresponding optimal design rule.

  • The optimal performance is given by

        J*_N = V0_0(π0_0)

  • Computations: numerical methods from Markov decision theory can be used.

SLIDE 24

Next steps . . .

SLIDE 25

Finite Delay Problem


  • Delay D > 0.
  • Sliding window transformation of the source:

        X̃n = (Xn−D, . . . , Xn),    ρ̃(X̃n, X̂n) = ρ(Xn−D, X̂n)

  • Reduces to problem (P2).
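The sliding-window lift can be written out directly: the window process is again Markov, and the D-step delayed distortion becomes an ordinary per-letter distortion on the lifted symbol. A short sketch with an illustrative binary P (not from the talk) and D = 2:

```python
from itertools import product

# Sliding-window lift for delay D = 2 on a binary Markov source: the window
# (X_{n-D}, ..., X_n) is itself a Markov chain, so the D-delay problem
# reduces to a zero-delay problem (P2) over the bigger alphabet.
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # illustrative transitions
D = 2
windows = list(product((0, 1), repeat=D + 1))

def lifted_P(w, w2):
    """Window transition: shift left by one and append the next source symbol."""
    if w2[:-1] != w[1:]:           # successive windows must overlap in D symbols
        return 0.0
    return P[w[-1]][w2[-1]]

# Each row of the lifted chain is a valid probability distribution.
for w in windows:
    assert abs(sum(lifted_P(w, w2) for w2 in windows) - 1.0) < 1e-12

rho = lambda x, xh: float(x != xh)
rho_lifted = lambda w, xh: rho(w[0], xh)   # reads off the oldest window entry
print(len(windows))   # -> 8 lifted states for D = 2
```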
SLIDE 26

Infinite Horizon Problem


  • First consider delay D = 0.
  • Two related ways of making the horizon N → ∞:
  • Expected Discounted Cost Problem:

        Jβ(f, h, g, l) ≜ E[ Σ_{n=1}^{∞} β^{n−1} ρ(Xn, X̂n) | f, h, g, l ]

  • Average Cost Per Unit Time Problem:

        J(f, h, g, l) = lim sup_{N→∞} (1/N) E[ Σ_{n=1}^{N} ρ(Xn, X̂n) | f, h, g, l ]
SLIDE 27

Expected Discounted Cost Problem


  • Jβ(f, h, g, l) ≜ E[ Σ_{n=1}^{∞} β^{n−1} ρ(Xn, X̂n) | f, h, g, l ]

  • Find a fixed point (V0, V1, V, V2) of

        V0(π0) = min_f V1( T0(f) π0 ),

        V1(π1) = β V(π1) + min_l V2( T1(l) π1 ),

        V(π1) = min_g ρ̂(π1, g),

        V2(π2) = min_h V0( T2(h) π2 ).

  • The fixed point exists and is unique provided the distortion ρ is uniformly bounded.

SLIDE 28

Average Cost per Unit Time


  • Average Cost:

        J(f, h, g, l) = lim sup_{N→∞} (1/N) E[ Σ_{n=1}^{N} ρ(Xn, X̂n) | f, h, g, l ]

  • Define

        γ ≜ (f, h, g, l),    T(γ) ≜ T0(f) ∘ T1(l) ∘ T2(h),

        ρ̂(π0_{n−1}, γn) ≜ ρ̂( T0(fn) π0_{n−1}, gn )

SLIDE 29

Average Cost per Unit Time


  • Assumption (A1): for some ε > 0 there exist bounded measurable functions v(·) and r(·) and a design γ0 such that for all π0:

        v(π0) = min_γ v( T(γ) π0 ) = v( T(γ0) π0 ),

        min_γ [ ρ̂(π0, γ) + r( T(γ) π0 ) ] ≤ v(π0) + r(π0) ≤ ρ̂(π0, γ0) + r( T(γ0) π0 ) + ε

SLIDE 30

Average Cost per Unit Time


  • If (A1) holds then γ∞ ≜ (γ0, γ0, . . .) is ε-optimal; that is, for any other design γ′,

        J(γ∞) = v(π0_0) ≤ J(γ′) + ε,

    where

        J(γ′) ≜ lim inf_{N→∞} (1/N) Σ_{n=1}^{N} ρ̂(π0_{n−1}, γ′n).

  • Conditions sufficient to ensure (A1) are known.
  • It is not easy to translate them into conditions on the problem.
SLIDE 31

Some Comments


  • For the expected discounted cost problem, time-invariant designs are optimal.

  • For the average cost per unit time problem, time-invariant designs are optimal under certain conditions.

  • The two problems are related via a Tauberian theorem:

        lim inf_{n→∞} (1/n) Σ_{i=1}^{n} ai  ≤  lim inf_{β→1−} (1 − β) Σ_{i=1}^{∞} β^{i−1} ai
                                            ≤  lim sup_{β→1−} (1 − β) Σ_{i=1}^{∞} β^{i−1} ai
                                            ≤  lim sup_{n→∞} (1/n) Σ_{i=1}^{n} ai
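The Tauberian chain is easy to probe numerically: for a Cesàro-summable sequence the Abel (discounted) means approach the same limit as β → 1−. A quick sketch with the alternating sequence a_i = 1, 0, 1, 0, . . .:

```python
# Numerical probe of the Tauberian chain: for a_i = 1, 0, 1, 0, ... both the
# Cesaro (average-cost) mean (1/n) sum a_i and the Abel (discounted) means
# (1 - beta) * sum_i beta^(i-1) a_i tend to the same limit 1/2, so all four
# quantities in the chain coincide for this sequence.
a = [i % 2 for i in range(1, 200_001)]   # a_1 = 1, a_2 = 0, ...

cesaro = sum(a) / len(a)                 # (1/n) sum_{i=1}^n a_i

def abel(beta, terms=200_000):
    return (1 - beta) * sum(beta**i * a[i] for i in range(terms))

for beta in (0.9, 0.99, 0.999):
    print(round(abel(beta), 4))          # tends to 0.5 as beta -> 1-
print(cesaro)                            # -> 0.5
```

For this sequence the Abel mean is exactly 1/(1 + β), which converges to the Cesàro mean 1/2 as β → 1−.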

SLIDE 32

Solution Framework . . .

SLIDE 33

General Methodology


  • Given source (X, P), channel (Z, Y, Q), memory (S, M) and distortion (ρ, D):

  • Convert to a zero delay problem.
  • Find an ε-optimal design and its performance for the discounted cost problem for β close to 1.
  • This can be done using polynomial complexity algorithms.
  • The resultant design is ε-optimal for the average cost per unit time problem (if an ε-optimal design for the average cost per unit time problem exists).

SLIDE 34

Some Interesting Cases


  • Fixed Delay Source Coding Problem
  • The technique presented here can be extended to non-stochastic min-max problems.
  • It can be used to study fixed delay encoding/decoding of individual sequences.
  • It would be interesting to compare the results with “standard” fixed delay source coding techniques.

SLIDE 35

Some Interesting Cases


  • Fixed Delay Channel Coding Problem
  • Fixed delay decoding of convolutional codes.
  • Most researchers focus on computationally efficient algorithms to determine the MAP bit decoding rule.
  • The problem of efficiently storing the observations has not been considered.
  • If the receiver memory is |M| = |Y|^k, should one store the previous k channel observations?
  • Can all the past observations be compressed into |Y|^k memory states to get better performance?
  • How can such “compression” functions be found?
  • This problem fits naturally in the framework presented here.
SLIDE 36

Conclusion.

SLIDE 37

Summary


  • Considered a fixed delay, fixed complexity communication system.
  • Markov source and noisy memoryless channel.
  • Objective: minimize total (or discounted, or average) distortion.
  • Provided a systematic methodology for determining optimal encoding-decoding strategies and the optimal performance.
  • There exist low complexity algorithms to find such solutions.
  • Interesting special cases of the framework.
SLIDE 38

Thank You

SLIDE 39

Gaarder and Slepian’s Approach


  • Fix a design (f, h, g, l).
  • { (X^n_{n−D}, Sn, Zn, Yn, Mn, X̂n) } forms a Markov chain.
  • Find its steady-state distribution.
  • Find the steady-state distortion

        lim_{n→∞} E[ ρ(Xn−D, X̂n) ].

  • Cesàro Mean: for any sequence of real numbers (an),

        if lim_{n→∞} an = a, then lim_{n→∞} (1/n) Σ_{i=1}^{n} ai = a.

  • Repeat for all designs (f, h, g, l).
SLIDE 40

Gaarder and Slepian’s Approach


  • Difficulty: evaluating the asymptotic (steady-state) performance is hard:

        min_{f,h,g,l} lim_{n→∞} E[ ρ(Xn−D, X̂n) ]

  • “A sore point here is the very complicated way in which the stationary distribution of a Markov chain depends on the elements of its transition matrix.”

  • The matrix elements change discontinuously with a change in design (f, h, g, l).

  • The resultant Markov chain can have several recurrence classes, be periodic, have several transient states, etc., depending on the nature of the design (f, h, g, l).