DISCUS: Distributed Compression for Sensor Networks


SLIDE 1

DISCUS: Distributed Compression for Sensor Networks

BASiCS Group, University of California at Berkeley

  • K. Ramchandran, K. Pister, M. Jordan, J. Malik, V. Anantharam,
  • L. Doherty, J. Hill, J. Kusuma, W. Leven,
  • A. Ng, S. Pradhan, R. Szewczyk, J. Yeh

http://basics.eecs.berkeley.edu/sensorwebs

SLIDE 2

Motivations

  • SmartDust and MEMS: a quantum leap in device and computing technology
  • An excellent platform for efficient, smart devices and for collaborative, distributed theory and algorithms
  • Bring a unified theoretical approach to bear on specific scenarios
  • Sensor networks -> correlated observations: we want to compress them in an efficient, distributed manner

SLIDE 3

Mission Statement

  • An experimental testbed to leverage advances in SmartDust MEMS technology and demonstrate the developed theories and algorithms -- the BCSN (Berkeley Campus Sensor Network), exploiting correlated information.

SLIDE 4

Scenario: correlated observations

  • Varying temperature field
  • Information theory: correlation between observations can yield a performance boost
  • DISCUS: a constructive way of harnessing correlation for blind joint compression, close to optimum

[Figure: motes sensing the temperature field report to a Central Decoder]

SLIDE 5

At The End of The Day

  • Want to compress correlated observations
  • Don't want communication between motes

[Figure: Source 1 -> Encoder 1 (rate Rx) and Source 2 -> Encoder 2 (rate Ry), correlated, feeding a Joint Decoder; the achievable rate region has corner points (H(x), H(y|x)) and (H(y), H(x|y))]

SLIDE 6

Distributed Compression

  • Slepian-Wolf, Wyner-Ziv (ca. 1970s): information-theoretically, correlated observations can be compressed jointly even with no communication between the motes.
  • DISCUS: a constructive framework, using well-studied error-correcting codes from coding theory.

[Figure: the same two-encoder / joint-decoder diagram and Slepian-Wolf rate region as Slide 5]

SLIDE 7

Source Coding with Side Information -- discrete alphabets

  • X and Y are length-3 binary words (all values equally likely).
  • Correlation: the Hamming distance between X and Y is at most 1.
  • Example: when X = [0 1 0], Y is one of [0 1 0], [0 1 1], [0 0 0], [1 1 0], so X+Y is one of 000, 001, 010, 100: four values, so 2 bits suffice to index the difference.

System 1: Y is available at both encoder and decoder. Any rate R ≥ H(X|Y) is achievable, and the decoder recovers X exactly (X̂ = X).

[Figure: Encoder -> Decoder, with Y at both encoder and decoder]
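A quick brute-force check of this toy model (a sketch in Python; the enumeration itself is not in the original deck):

```python
from itertools import product

# Toy model from the slide: X, Y are length-3 binary words with
# Hamming distance at most 1 between them.
pairs = [(x, y)
         for x in product([0, 1], repeat=3)
         for y in product([0, 1], repeat=3)
         if sum(a != b for a, b in zip(x, y)) <= 1]

# The difference X+Y (bitwise XOR) takes only four values:
diffs = {tuple(a ^ b for a, b in zip(x, y)) for (x, y) in pairs}
print(sorted(diffs))  # [(0,0,0), (0,0,1), (0,1,0), (1,0,0)] -> 2 bits suffice
```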

SLIDE 8

What is the best that one can do?

System 2: Y is available only at the decoder. The rate R ≥ H(X|Y) is still achievable, and the decoder still recovers X exactly (X̂ = X).

The answer is still 2 bits!

How? Group the 8 possible words 000, 001, 010, 100, 111, 110, 101, 011 into 4 cosets of 2 words each, e.g. Coset-1 = {000, 111}, and send only the coset index; Y resolves the remaining ambiguity.

SLIDE 9

Note:

  • Coset-1 is the (3,1) repetition code; the other cosets are its shifts.
  • Each coset has a unique "syndrome": hence DISCUS = DIstributed Source Coding Using Syndromes.

  • Coset-1: 000, 111
  • Coset-2: 001, 110
  • Coset-3: 010, 101
  • Coset-4: 100, 011

Encoder -> index of the coset containing X. Decoder -> X in the given coset (the word closest to Y).
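As a concrete illustration, a minimal Python sketch of this syndrome scheme (the coset table and the distance-1 correlation model come from the slides; the function names are mine):

```python
from itertools import product

# Cosets of the (3,1) repetition code {000, 111}; each coset has a
# unique 2-bit index (its syndrome).
COSETS = [
    [(0, 0, 0), (1, 1, 1)],  # Coset-1: the repetition code itself
    [(0, 0, 1), (1, 1, 0)],  # Coset-2
    [(0, 1, 0), (1, 0, 1)],  # Coset-3
    [(1, 0, 0), (0, 1, 1)],  # Coset-4
]

def encode(x):
    # Encoder: send only the index of the coset containing X (2 bits, not 3).
    return next(i for i, c in enumerate(COSETS) if x in c)

def decode(index, y):
    # Decoder: pick the word in the indexed coset closest to the side info Y.
    return min(COSETS[index], key=lambda w: sum(a != b for a, b in zip(w, y)))

# Exhaustive check: recovery is exact whenever d(X, Y) <= 1.
for x in product([0, 1], repeat=3):
    for y in product([0, 1], repeat=3):
        if sum(a != b for a, b in zip(x, y)) <= 1:
            assert decode(encode(x), y) == x
```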

SLIDE 10

Discrete case -- a handwaving argument

  • Consider an abstraction using lattices: the correlation structure translates to a distance between X and Y.
  • Suppose X and Y have distance at most 1.
  • How to compress X if we know that Y is close to X?
  • Use cosets! Partition the lattice into cosets and send only the coset index; see the sketch below.
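A minimal sketch of such a partition, assuming a 6x6 integer grid with mod-3 cosets (my choice of geometry; the deck does not fix one). This is also the setting behind the accounting on the next slide:

```python
# Sketch: 36 lattice points, partitioned into 9 cosets of 4 points each
# by taking coordinates mod 3. Points within one coset are >= 3 apart,
# so side information within distance 1 of X resolves the ambiguity.
points = [(i, j) for i in range(6) for j in range(6)]

def coset_index(p):
    return (p[0] % 3, p[1] % 3)  # 9 possible indices -> log2(9) bits

def decode(index, y):
    # Pick the point in the indexed coset closest to the side information y.
    coset = [p for p in points if coset_index(p) == index]
    return min(coset, key=lambda p: abs(p[0] - y[0]) + abs(p[1] - y[1]))

# Example: X = (4, 2); a decoder holding Y = (4, 3) recovers X exactly.
assert decode(coset_index((4, 2)), (4, 3)) == (4, 2)
```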

SLIDE 11

A little accounting

How much have we compressed?

  • Without compression: log2(36) bits
  • With compression: log2(9) bits

[Figure: lattice points labeled X and Y]
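Spelled out, using the slide's own numbers:

log2(36) − log2(9) = log2(36/9) = log2(4) = 2 bits saved per sample (≈ 5.17 bits down to ≈ 3.17 bits).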

SLIDE 12

And now the math cleanup!

  • Can use well-studied channel codes to build codes on Euclidean space.
  • Basic idea: select the points within each coset to be as far apart as possible -- done using channel codes.
  • Send ambiguous information; the side information will disambiguate it.
  • Can use Hamming, TCM, RS, etc. codes.
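For instance, a sketch of the same syndrome trick with the (7,4) Hamming code (a standard construction; the specific matrix and helper names below are mine, not from the deck), compressing a 7-bit reading to its 3-bit syndrome when the side information differs in at most one position:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# expansion of i+1, so every single-bit error has a distinct, nonzero syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(v):
    return H @ v % 2

def encode(x):
    # Encoder: transmit the 3-bit syndrome instead of the 7-bit reading.
    return syndrome(x)

def decode(s, y):
    # Decoder: find the unique word with syndrome s within Hamming
    # distance 1 of the side information y.
    diff = (s + syndrome(y)) % 2
    if not diff.any():
        return y.copy()
    # The syndrome of a single-bit error equals the matching column of H.
    pos = int(np.where((H.T == diff).all(axis=1))[0][0])
    x_hat = y.copy()
    x_hat[pos] ^= 1
    return x_hat

# Spot check: x differs from y in one bit; the decoder recovers x
# from the syndrome plus y.
y = np.array([1, 0, 1, 1, 0, 0, 1])
x = y.copy(); x[4] ^= 1
assert np.array_equal(decode(encode(x), y), x)
```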

SLIDE 13

Generalized coset codes: symmetric/asymmetric applications

  • How to compress to arbitrary (but information-theoretically sound) rate pairs?
  • Use the lattice idea again: both encoders send ambiguous information, but there is only one pair of answers that satisfies the correlation constraint.

[Figure: Encoder-1 and Encoder-2 partitioning a hexagonal lattice]

SLIDE 14

Continuous case -- quantization and estimation

Modular building blocks:

Quantizer -> Discrete Encoder -> Discrete Decoder -> Estimator

SLIDE 15

Gaussian case example

Estimation with side information:

  • First quantize the sensor reading
  • Use the discrete DISCUS encoders / decoders
  • Estimate given the decoder output and the side information:

X̂ = E[ X | Y, X ∈ Γᵢ ]
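A toy end-to-end sketch of this pipeline in Python (the noise model, step size, and coset count are illustrative choices of mine, and the last step uses the nearest bin center as a simple stand-in for the conditional estimate above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: side information Y, sensor reading X = Y + small noise.
sigma_n, step, n_cosets = 0.05, 0.1, 4
y = rng.normal(size=1000)
x = y + sigma_n * rng.normal(size=1000)

# Encoder: quantize X, then transmit only the bin index mod n_cosets
# (2 bits per sample instead of the full index).
bins = np.round(x / step).astype(int)
coset = bins % n_cosets

# Decoder: among nearby bins whose index matches the received coset,
# pick the one closest to Y; use its center as the estimate of X.
y_bins = np.round(y / step).astype(int)
offsets = np.arange(-n_cosets, n_cosets + 1)       # spans two coset periods
cands = y_bins[:, None] + offsets[None, :]
dist = np.abs(cands * step - y[:, None])
dist[cands % n_cosets != coset[:, None]] = np.inf  # wrong coset: exclude
best = cands[np.arange(len(x)), np.argmin(dist, axis=1)]
x_hat = best * step

print("RMS error:", np.sqrt(np.mean((x - x_hat) ** 2)))
```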

SLIDE 16

No Such Thing as a Free Lunch

Big issue: finer quantization means larger error distances relative to the quantization step -- so a stronger channel code is needed to enable finer quantization!

SLIDE 17

Deployment Scenario

  • Observe readings: learn the correlation structure
  • Use a clustering algorithm (see the sketch below)
  • Assign codebooks to the different motes
  • Can rotate the "centroid" mote within each group
  • The centroid mote reports to the central decoder
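One way this could look in code -- a small k-means sketch over mote readings (the data shapes, k, and the tie between clusters and codebooks are illustrative assumptions, not from the deck):

```python
import numpy as np

rng = np.random.default_rng(1)
readings = rng.normal(size=(50, 20))  # 50 motes, 20 samples each (made up)

def kmeans(data, k, iters=20):
    # Plain k-means: motes with similar readings end up in the same cluster.
    centers = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(readings, k=4)

# Codebook assignment: every mote in a cluster gets that cluster's DISCUS
# codebook; the mote nearest the cluster mean acts as the "centroid" mote
# reporting to the central decoder (a role that can be rotated).
centroid_motes = []
for j, c in enumerate(centers):
    members = np.where(labels == j)[0]
    if len(members):
        d = ((readings[members] - c) ** 2).sum(axis=-1)
        centroid_motes.append(int(members[np.argmin(d)]))
```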

SLIDE 18

Dynamic Tracking Scenario

  • Wish to dynamically update the clustering.
  • Good news: the child nodes need not be aware! Codebook assignment is not an issue; only the central decoder needs to know the clustering.
  • Can also rotate the centroid node within each cluster: this lets the network detect correlation changes.

SLIDE 19

Conclusions

  • Efficient error-correcting codes enable distributed compression.
  • Power / bandwidth / quality can be traded off through quantization and codebook selection.
  • Very little work for the encoders, yet tremendous gains in a sensor network!