Additional Coding Opportunities in Cache-Aided Networks (PowerPoint presentation transcript)



SLIDE 1

Additional Coding Opportunities in Cache-Aided Networks

Michèle Wigger, Telecom ParisTech. Joint work with S. Saeedi-Bidokhti, S. Kamel, M. Sarkiss, S. Shamai, R. Timo, and A. Yener

WiOpt 2017 May 17, 2017

SLIDE 2

Caching in Networks

[Diagram: a database serving a network of users, each equipped with a cache]

SLIDE 3

Caching in a One-To-Many Broadcast Network

[Diagram: database broadcasting to cache-equipped users]

[Maddah-Ali, Niesen ’14]

SLIDE 4

Caching in a One-To-Many Broadcast Network

[Diagram: database broadcasting over a wireless link to cache-equipped users]

SLIDE 5

Model

[Diagram: database → encoder → broadcast network → users, each with a cache]

SLIDE 6

Model

[Diagram: database → encoder → broadcast network → users, each with a cache]

  • Caching Phase: demands not yet known

SLIDE 7

Model

[Diagram: database → encoder → broadcast network → users with caches; demands announced]

  • Caching Phase: demands not yet known
  • Delivery Phase: demands announced

SLIDE 8

Model

[Diagram: database → encoder → broadcast channel p(y1, . . . , yK|x) with input X and outputs Y1, Y2, . . . , YK → users with caches; demands announced]

  • Caching Phase: demands not yet known
  • Delivery Phase: demands announced

SLIDE 9

Model

[Diagram: database → encoder → broadcast network → users, each with a cache]

  • Caching Phase: demands not yet known
  • Delivery Phase: demands announced
  • Under all possible demands, files need to be sent reliably

SLIDE 10

Model

[Diagram: database with files of rate R → encoder → broadcast network → users, each with a cache]

  • Caching Phase: demands not yet known
  • Delivery Phase: demands announced
  • Under all possible demands, files need to be sent reliably
  • Largest data rate R as a function of the cache sizes M1, . . . , MK?

SLIDE 11

Easy Bounds

  • Local caching gain:

R(M1, . . . , MK) ≥ R(M′1, . . . , M′K) + max_{k∈{1,...,K}} (Mk − M′k)/D

(Gain only from local cache memory)

  • Perfect global caching gain:

R(M1, . . . , MK) ≤ R(M′1, . . . , M′K) + ( Σ_{k=1}^K Mk − Σ_{k=1}^K M′k )/D

(Gain as if each receiver had access to all caches)
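As a quick sanity check, the two easy bounds can be evaluated numerically. The sketch below is our own (the function name `easy_bounds` is not from the talk): given a hypothetical baseline rate R(M′), it brackets R(M) between the local-gain lower bound and the perfect-global-gain upper bound.

```python
# Minimal numeric sketch of the two "easy bounds" on R(M1, ..., MK),
# assuming a known baseline rate R(M'1, ..., M'K) and D files.

def easy_bounds(r_base, m_new, m_base, d):
    """Return (lower, upper) bounds on R(M) given R(M') = r_base."""
    assert len(m_new) == len(m_base)
    # local caching gain: only the largest single-cache increment counts
    local = r_base + max(mk - mk0 for mk, mk0 in zip(m_new, m_base)) / d
    # perfect global caching gain: as if all cache increments were pooled
    global_ = r_base + (sum(m_new) - sum(m_base)) / d
    return local, global_

lo, hi = easy_bounds(r_base=1.0, m_new=[4, 2, 2], m_base=[0, 0, 0], d=10)
print(lo, hi)  # lower bound 1.4, upper bound 1.8
```

Since a single increment never exceeds the sum of all increments, the lower bound can never cross the upper one.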

SLIDE 12

Coded Caching

[Diagram: database → encoder → noise-free bit-pipe → users, each with a cache]

  • Noise-free bit-pipe
  • Equal cache sizes M1 = M2 = . . . = MK = M

Related results in [Maddah-Ali, Niesen ’14], [Yu, Maddah-Ali, Avestimehr ’16], [Ji, Tulino, Llorca, Caire ’15], [Pedarsani, Maddah-Ali, Niesen ’16], [Wang, Lim, Gastpar ’16], [Amiri, Yang, Gunduz ’16], ...
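The coded-caching idea behind these results can be illustrated with a toy two-user, two-file example (our own construction, following the standard XOR delivery of [Maddah-Ali, Niesen ’14]): each user caches half of every file, and a single coded transmission completes both downloads.

```python
# Toy XOR delivery: 2 users, 2 files A and B, cache size = half a file each.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

A, B = b"AAAA", b"BBBB"
A1, A2 = A[:2], A[2:]
B1, B2 = B[:2], B[2:]

cache1 = {"A1": A1, "B1": B1}   # user 1 caches the first half of every file
cache2 = {"A2": A2, "B2": B2}   # user 2 caches the second half of every file

# User 1 demands A (missing A2); user 2 demands B (missing B1).
broadcast = xor(A2, B1)         # one coded transmission serves both users

recovered_A = A1 + xor(broadcast, cache1["B1"])  # user 1 cancels B1
recovered_B = xor(broadcast, cache2["A2"]) + B2  # user 2 cancels A2
print(recovered_A == A, recovered_B == B)        # True True
```

An uncoded ("traditional caching") delivery would have needed two half-file transmissions; the XOR halves the delivery load.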

SLIDE 13

Coded Caching

[Diagram: encoder and database; two users with caches; each user still needs part of its requested file] [Maddah-Ali, Niesen ’14]

SLIDE 14

Coded Caching

cache

user user still need

cache

encoder data data data database: [Maddah-Ali, Niesen ’14]

SLIDE 15

Coded Caching

cache

user user

cache

encoder data data data database: [Maddah-Ali, Niesen ’14]

SLIDE 16

Coded Caching

cache

encoder user user

cache

data data data database: [Maddah-Ali, Niesen ’14]

1 2 3 1 2 3 4

Cache size M/D Data rate R

Coded caching Traditional caching

SLIDE 17

Noisy Broadcast Networks

[Diagram: database → encoder → noisy broadcast network → users, each with a cache]

Cache memories are even more useful in heterogeneous networks

SLIDE 18

Noisy Broadcast Networks

[Diagram: database → encoder → degraded broadcast network → users, each with a cache]

Cache memories are even more useful in heterogeneous networks

SLIDE 19

Challenges

  • Noisy channel outputs
  • Heterogeneous users: different channel qualities & cache sizes
  • Rate bottleneck caused by weaker receivers

Solutions

  • Cache assignment based on channel qualities
  • Joint cache-channel coding

SLIDE 20

Erasure Broadcast Networks

[Diagram: database → encoder with input X → erasure broadcast channel → outputs Y1, Y2, . . . , YK at cache-equipped users 1, 2, . . . , K]

  • Binary input X
  • Output Yk = X with probability 1 − ǫk, and Yk = ? (erasure) with probability ǫk
  • 1 ≥ ǫ1 ≥ ǫ2 ≥ ǫ3 ≥ . . . ≥ ǫK ≥ 0
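This channel is easy to simulate. The following sketch is our own (function name `erasure_bc` is an assumption): each user independently erases each input symbol with its own probability, so a weaker user (larger ǫk) keeps fewer symbols per channel use.

```python
# Simulate the memoryless erasure broadcast channel: user k sees the input
# with probability 1 - eps_k and an erasure "?" with probability eps_k.
import random

def erasure_bc(x_bits, eps):
    """x_bits: list of 0/1 inputs; eps: one erasure probability per user."""
    return [[x if random.random() > e else "?" for x in x_bits] for e in eps]

random.seed(0)
outputs = erasure_bc([0, 1, 1, 0, 1] * 2000, eps=[0.8, 0.2])
rates = [1 - out.count("?") / len(out) for out in outputs]
print(rates)  # roughly [0.2, 0.8]: user 1 keeps ~20% of symbols, user 2 ~80%
```

The fraction of unerased symbols, 1 − ǫk, is exactly the per-user capacity factor appearing in the rate constraints on the later slides.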

SLIDE 21

Single Weak Receiver Degrades Performance

[Plot: data rate R vs. cache size M/D; coded caching with ǫ1 = ǫ2 = . . . = ǫK = 0.2]

SLIDE 22

Single Weak Receiver Degrades Performance

[Plot: data rate R vs. cache size M/D; coded caching with ǫ1 = ǫ2 = . . . = ǫK = 0.2 vs. coded caching with ǫ1 = 0.8, ǫ2 = . . . = ǫK = 0.2]

SLIDE 23

Single Weak Receiver Degrades Performance

[Plot: data rate R vs. cache size M/D; coded caching with ǫ1 = ǫ2 = . . . = ǫK = 0.2, coded caching with ǫ1 = 0.8, ǫ2 = . . . = ǫK = 0.2, and the upper bound with equal caches (ǫ1 = 0.8, ǫ2 = . . . = ǫK = 0.2)]

SLIDE 24

Is Degradation Inherent? Can it be Circumvented?

  • Is performance degradation really fundamental?
  • Not if the cache sizes are properly assigned/designed!

Assign more cache memory to weak receiver!

[Diagram: encoder with library → decoders 1, 2, . . . , K, each with a cache]

SLIDE 25

Performance when Cache Memory can be freely Assigned

[Plot: data rate R vs. average cache size MTotal/(D·K); curves: coded caching with equal erasures (ǫ1 = . . . = ǫK = 0.2), coded caching with one weak receiver (ǫ1 = 0.8, ǫ2 = . . . = ǫK = 0.2), the equal-cache upper bound, and our new coding scheme (ǫ1 = 0.8, ǫ2 = . . . = ǫK = 0.2)]

  • Careful cache assignment + new coding mitigates the loss!

SLIDE 26

Coding Schemes for Unequal Cache Assignment

[Diagram: database with D files → encoder → two users; user 1 has a cache of size M]

  • State of the art (coded caching + erasure BC code):

(R − M/D)/(1 − ǫ1) + R/(1 − ǫ2) ≤ 1    (local caching gain)

  • Piggyback coding (joint cache-channel coding)

SLIDE 27

Piggyback coding

[Diagram: one codeword Xn(·, ·) jointly carries the message user 1 needs and the message user 2 needs; decoding exploits the cached side information]

SLIDE 28

Piggyback coding

[Diagram: one codeword Xn(·, ·) jointly carries the message user 1 needs and the message user 2 needs; decoding exploits the cached side information]

SLIDE 29

Piggyback coding

[Diagram: one codeword Xn(·, ·) jointly carries the message user 1 needs and the message user 2 needs; decoding exploits the cached side information]

max{ R/(1 − ǫ2), (R − M/D)/(1 − ǫ1) } + (R − M/D)/(1 − ǫ2) ≤ 1    (global caching gain)
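The gap between the two constraints can be made concrete numerically. The sketch below is our own: it solves each rate constraint for the largest feasible R (closed form for the separate scheme, bisection for the piggyback constraint) for one cached user with erasure probabilities ǫ1, ǫ2 and cache fraction M/D.

```python
# Compare the largest rate R under the two constraints from these slides:
# separate coded caching + BC code vs. piggyback (joint cache-channel) coding.

def max_rate_separate(eps1, eps2, m_over_d):
    # (R - M/D)/(1 - eps1) + R/(1 - eps2) <= 1, solved for R in closed form
    a = 1 / (1 - eps1) + 1 / (1 - eps2)
    return (1 + m_over_d / (1 - eps1)) / a

def max_rate_piggyback(eps1, eps2, m_over_d):
    # max{R/(1-eps2), (R - M/D)/(1-eps1)} + (R - M/D)/(1-eps2) <= 1
    def feasible(r):
        t = r - m_over_d
        return max(r / (1 - eps2), t / (1 - eps1)) + t / (1 - eps2) <= 1
    lo, hi = 0.0, 10.0          # feasibility is monotone in r, so bisect
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

print(max_rate_separate(0.8, 0.2, 0.1))   # 0.24
print(max_rate_piggyback(0.8, 0.2, 0.1))  # 0.26: piggybacking is strictly better
```

Because max{a, b} ≤ a + b and the second term involves R − M/D instead of R, the piggyback constraint is always looser, which is the "global caching gain" on the slide.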

SLIDE 30

Network with Weak Receivers with Caches

[Diagram: encoder with library; weak receivers equipped with caches, strong receivers without]

  • Weak and strong receivers with ǫw ≥ ǫs
  • Equal cache size M at all weak receivers

SLIDE 31

New Gains of Caching

[Plot: data rate R vs. cache size M/D; curves: nested piggyback coding, coded caching with BC code, Amiri&Gunduz-2017, upper bound]

4 weak and 16 strong users, ǫw = 0.8, ǫs = 0.2

R ≈ R_no-cache + γlocal · γMN · γnew · (M/D), where γnew scales with the number of strong users

[Saeedi-Bidokhti, Wigger, Timo-2016, Amiri&Gunduz-2017]

SLIDE 32

Cache Assignment

[Diagram: database → encoder → broadcast network → users 1, . . . , K with caches of sizes M1, M2, . . . , MK]

M1 + . . . + MK ≤ MTotal. Largest rate R(MTotal)?
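As a toy version of this budget question (entirely our own construction, not the talk's scheme), one can grid-search the split of a total budget using the single-cached-user piggyback constraint from the earlier slide as the achievable rate. Only the weak user's cache enters that constraint, so the search simply confirms the message of the later slides: for a small budget, all of the memory should go to the weak receiver.

```python
# Toy search over cache assignments under a total budget, using the
# single-cached-user piggyback rate (weak user 1, strong user 2) as the
# objective. Function names are ours, for illustration only.

def piggyback_rate(eps1, eps2, m_over_d):
    def feasible(r):
        t = r - m_over_d
        return max(r / (1 - eps2), t / (1 - eps1)) + t / (1 - eps2) <= 1
    lo, hi = 0.0, 10.0          # bisection on the monotone feasibility test
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

def best_split(eps1, eps2, m_total_over_d, steps=100):
    # try giving each fraction of the budget to the weak user (user 1)
    candidates = [(piggyback_rate(eps1, eps2, m_total_over_d * i / steps),
                   m_total_over_d * i / steps) for i in range(steps + 1)]
    return max(candidates)

rate, m1 = best_split(0.8, 0.2, 0.2)
print(m1)  # the entire budget lands on the weak receiver
```

A full answer would of course need a rate expression in all of M1, . . . , MK; this sketch only illustrates the optimization shell around one known constraint.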

SLIDE 33

Bounds on the Rate-Memory Tradeoff

[Plot: data rate R vs. total cache budget MTotal/D; upper bound on uniform cache assignment. Erasure broadcast network with ǫ1 = 0.9, ǫ2 = 0.6, ǫ3 = 0.1, ǫ4 = 0.05]

[Saeedi-Bidokhti, Wigger, Yener-2017]

SLIDE 34

Bounds on the Rate-Memory Tradeoff

[Plot: data rate R vs. total cache budget MTotal/D; curves: cache assignment and new coding, upper bound on uniform cache assignment. Erasure broadcast network with ǫ1 = 0.9, ǫ2 = 0.6, ǫ3 = 0.1, ǫ4 = 0.05]

[Saeedi-Bidokhti, Wigger, Yener-2017]

SLIDE 35

Bounds on the Rate-Memory Tradeoff

[Plot: data rate R vs. total cache budget MTotal/D; curves: cache assignment and new coding, upper bound, upper bound on uniform cache assignment. Erasure broadcast network with ǫ1 = 0.9, ǫ2 = 0.6, ǫ3 = 0.1, ǫ4 = 0.05]

[Saeedi-Bidokhti, Wigger, Yener-2017]

SLIDE 36

Bounds on the Rate-Memory Tradeoff

[Plot: same curves, annotated with slope = 1 at small MTotal and slope = 1/K at large MTotal. Erasure broadcast network with ǫ1 = 0.9, ǫ2 = 0.6, ǫ3 = 0.1, ǫ4 = 0.05]

[Saeedi-Bidokhti, Wigger, Yener-2017]

SLIDE 37

Bounds on the Rate-Memory Tradeoff

[Plot: data rate R vs. total cache budget MTotal/D; curves: cache assignment and generalized piggyback coding, upper bound, upper bound on uniform cache assignment. Gaussian broadcast network with noise variances N1 = 4, N2 = 2, N3 = 1, N4 = 0.5]

[Saeedi-Bidokhti, Wigger, Yener-2017]

SLIDE 38

Caching Gain: From Perfect Global to Local

  • Small total cache budget:
    • Assign all the cache memory to the weakest receiver
    • Use superposition piggyback coding
    • Slope close to 1 → perfect global caching gain
  • Large total cache budget:
    • Assign larger cache memories to weaker receivers so as “to balance the channels”
    • Use generalized coded caching
    • Slope close to 1/K → local caching gain

SLIDE 39

Conclusions

  • Joint cache-channel coding schemes (piggyback coding and generalizations) improve performance when weaker receivers have larger cache sizes
  • In some sense, coding can substitute for some of the cache memory
  • Small total cache memory: assign all available cache memory to the weakest receiver → perfect global caching gain
  • Large total cache memory: assign the more cache memory the weaker the user, so as to balance the channel qualities
  • Schemes are close to optimal

SLIDE 40

Some of the Related Works

  • Decentralized coded caching (Maddah-Ali & Niesen-2014, etc.)
  • Additional libraries with higher-resolution information (Cacciapuoti, Caleffi, Ji, Llorca, Tulino-2016)
  • Fading broadcast channels (Zhang & Elia-2016)
  • Broadcast channels with feedback (Ghorbel, Kobayashi, Yang-2016; Zhang & Elia-2016)
  • Massive MIMO broadcast channels (Yang, Ngo, Kobayashi-2016)

SLIDE 41

Caching in Broadcast Networks with an Eavesdropper

[Diagram: database broadcasting wirelessly to cache-equipped users; an eavesdropper overhears the transmission]

  • Eavesdropper only listens to delivery phase
  • Eavesdropper must not be able to learn anything about the entire database:

(1/n) · I(data1, . . . , dataD; Z^n) → 0 as n → ∞

SLIDE 42

Secure Coded Caching over Noise-Free Bit-Pipe

[Diagram: database → noise-free bit-pipe → cache-equipped users; eavesdropper listening]

  • Equal cache sizes M1 = M2 = . . . = MK = M

Sengupta-Tandon-Clancy-2013:

  • Secure coded caching with common keys stored in caches
  • Performance close to non-secure coded caching

SLIDE 43

Secure Caching over Heterogeneous Broadcast Networks

[Diagram: database broadcasting wirelessly to cache-equipped users; eavesdropper listening]

  • More interesting coding techniques possible
  • Wiretap coding
  • Piggyback coding
  • ...

SLIDE 44

One-Sided Cache Assignment under Secrecy Constraint

[Diagram: database with D files → encoder → two users; user 1 has a cache of size M; eavesdropper observes z]

SLIDE 45

A Numerical Result

  • Erasure BC with: ǫ1 = 0.7, ǫ2 = 0.2, ǫz = 0.8, D = 5

[Plot: secrecy rate vs. cache size M/D; curves: lower and upper bound with one-sided cache, lower and upper bound with symmetric caches]

  • Cache assignment improves performance also under secrecy constraints!

SLIDE 46

Usages of One-Sided Caches

[Plot: secrecy rate vs. cache size M/D, starting at R0; regimes: cache key for Rx1, cache keys for Rx1 and Rx2, cache key and data, capacity saturation, cache all data]

  • Caching only keys yields a steep slope ≈ D
  • Coding allows reaching the saturation level before caching all the data!

SLIDE 47

Securing Rx2’s Communication with Rx1’s Key

  • Placement: Store 2 keys K1 and K2 in Rx 1’s cache memory
  • Delivery of W(1)d1, W(2)d1, and Wd2:
    • W(1)d1: wiretap code with secret key K1
    • W(2)d1 ⊕ K2: cloud center of a superposition code
    • Wd2: satellite codewords
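The XOR-with-a-cached-key step works like a one-time pad; the toy sketch below is our own illustration (variable names hypothetical). The broadcast alone reveals nothing about the file part, while the receiver holding the key removes it with one XOR.

```python
# One-time-pad delivery of a file part W(2)_d1 using the key K2 that was
# placed in Rx 1's cache during the caching phase.
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

W2_d1 = b"secret file part"
K2 = secrets.token_bytes(len(W2_d1))   # uniform key, cached at Rx 1 beforehand

broadcast = xor(W2_d1, K2)             # what the eavesdropper overhears
print(xor(broadcast, K2) == W2_d1)     # Rx 1 removes the key: True
```

With a uniform key of the same length as the message, the broadcast is statistically independent of W(2)_d1, which is the standard one-time-pad secrecy argument behind these key-based schemes.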

SLIDE 48

Securing Rx2’s Communication with Rx1’s Key

  • Placement: Store 2 keys K1 and K2 in Rx 1’s cache memory
  • Delivery of W(1)d1, W(2)d1, and Wd2:
    • W(1)d1: wiretap code with secret key K1
    • W(2)d1 ⊕ K2: cloud center of a superposition code
    • Wd2: satellite codewords

  • Scheme useful for other scenarios with asymmetric key distribution!

SLIDE 49

Achieving Saturation Level without Caching all Data

  • Placement: Store key K1 and data W(1)1, . . . , W(1)D in the cache
  • Delivery:
    • (W(2)d1 ⊕ K1, W(1)d2): secure piggyback code
    • W(2)d2: wiretap code

SLIDE 50

Flexible Cache Assignment under Secrecy Constraint

  • Total cache memory constraint MTotal ≥ M1 + M2

[Diagram: database with D files → encoder → two users, each with a cache (sizes M1 and M2); eavesdropper observes z]

SLIDE 51

Usage of Cache Memories under Flexible Cache Assignment

[Plot: secrecy rate vs. cache size M/D, starting at R0; annotations show how the cache memories of Rx1 and Rx2 are used in each regime]

  • First cache only keys; then cache data and keys

SLIDE 52

Many Strong and Weak Receivers under Flexible Cache Assignment

  • Erasure BC with 10 weak and 5 strong receivers, ǫw = 0.7, ǫs = 0.3, ǫz = 0.8; D = 30 files

[Plot: data rate R vs. total cache budget MTotal/D; non-secret rate with cache assignment vs. secrecy rate with cache assignment]

SLIDE 53

Conclusions for Secrecy Part

  • Cache assignment significantly increases performance also under a secrecy constraint
  • New secure coding opportunities arise in noisy heterogeneous networks
  • Caching gains are much more prominent with secrecy → slope D instead of 1
  • Generally close to the performance without the secrecy constraint

SLIDE 54

Related Works

  • No cache: Ekrem & Ulukus-2013
  • Individual secrecy: eavesdropper not allowed to learn anything about an individual message (Kamel, Sarkiss, Wigger-2017)
  • Users themselves act as eavesdroppers (Ravindrakumar, Panda, Karamchandani, Prabhakaran-2016)

SLIDE 55

Caching in Interference Networks

[Diagram: server and database; cache–user pairs; the caches hold data 1 & 2, data 2 & 3, and data 3 & 4; demands announced]

  • Caches allow for load balancing, beamforming, and interference alignment

Maddah-Ali & Niesen-2015; Pujol Roig, Gunduz, Tosato-2017; Pooya-Abolfazl-Hossein-2015; Naderializadeh, Maddah-Ali, Avestimehr-2016; Hachem, Niesen, Diggavi-2016; etc.

SLIDE 56

3-Phase Multi-Cell Caching Model

[Diagram: server and database; cache–user pairs; the caches hold data 1 & 2, data 2 & 3, and data 3 & 4; demands announced]

  • 1. Caching Phase: server fills the caches without knowing the demands
  • 2. Download from Server: BSs download the data of nearby users
  • 3. Delivery to Users: BSs communicate the data to the users

SLIDE 57

Degrees of Freedom S

  • High-SNR regime: SNR ≫ 1
  • Rate per user: R ≈ S log(SNR)
  • Cache memory: M ≈ µ log(SNR)

[Plot: per-user degrees of freedom S⋆(µ) vs. µ/D, with slope 3 for small µ/D and slope 3/2 afterwards; S⋆(µ) = 1 + µ/D (as for parallel links)]

[Wigger, Timo, Shamai-2016]

SLIDE 58

Sketch of Scheme achieving Star-point

Tx2 Tx3 Tx1 Xn

2

  • W B

d2 ⊕W A d3

  • Rx2

Rx3 Rx1

  • W B

d

  • ˆ

W A

d1

ˆ W B

d2

ˆ W A

d3

  • W A

d

  • Xn

1

  • W A

d1

  • Phase 1 Subnet:

silenced silenced

TxK

  • Rx 1: Decodes W A

d1

  • Rx 3: Decodes W B

d2

W A

d3;

gets W A

d3;

  • Rx 2: Forms Y n

2 − X n 1 (W A d1);

decodes W B

d2

W A

d3;

gets W B

d2 39

SLIDE 59

Summary

  • For noisy heterogeneous networks: many more coding gains beyond coded caching
  • Coding gains facilitated by smart cache assignment
  • New coding opportunities and global caching gains, with and without secrecy
  • Proposed coding schemes are near-optimal
