Coded Caching for Content Distribution — Urs Niesen, MobiHoc 2018 (slide transcript)

SLIDE 1

Coded Caching for Content Distribution

Urs Niesen MobiHoc 2018

slide-2
SLIDE 2

Importance of Content Distribution

Video on demand is driving network traffic growth

Netflix streaming service, Amazon Prime Video, Hulu, Verizon / Comcast on Demand, . . .

IP video traffic is predicted to make up 82% of all IP traffic by 2021¹

1Cisco, “The Zettabyte era: Trends and analysis,” Tech. Rep., Jun. 2017.

slide-3
SLIDE 3

Importance of Content Distribution

Video on demand is driving network traffic growth

Netflix streaming service, Amazon Prime Video, Hulu, Verizon / Comcast on Demand, . . .

IP video traffic is predicted to make up 82% of all IP traffic by 2021¹
Places significant stress on service providers' networks

1Cisco, “The Zettabyte era: Trends and analysis,” Tech. Rep., Jun. 2017.

slide-4
SLIDE 4

Importance of Content Distribution

Video on demand is driving network traffic growth

Netflix streaming service, Amazon Prime Video, Hulu, Verizon / Comcast on Demand, . . .

IP video traffic is predicted to make up 82% of all IP traffic by 2021¹
Places significant stress on service providers' networks
Caching (prefetching) can be used to mitigate this stress

1Cisco, “The Zettabyte era: Trends and analysis,” Tech. Rep., Jun. 2017.

slide-5
SLIDE 5

Caching (Prefetching)

[Figure: normalized demand (%) vs. time of day (hours)]

slide-6
SLIDE 6

Caching (Prefetching)

[Figure: normalized demand (%) vs. time of day (hours)]

High temporal traffic variability

slide-7
SLIDE 7

Caching (Prefetching)

[Figure: normalized demand (%) vs. time of day (hours)]

High temporal traffic variability
Caching can help smooth traffic

slide-8
SLIDE 8

Caching (Prefetching)

slide-9
SLIDE 9

Caching (Prefetching)

Placement phase (5am): Populate caches

slide-10
SLIDE 10

Caching (Prefetching)

Placement phase (5am): Populate caches
Delivery phase (8pm): Request and deliver movies

slide-11
SLIDE 11

The Role of Caching

Conventional beliefs about caching:

slide-12
SLIDE 12

The Role of Caching

Conventional beliefs about caching: Caches useful to deliver content locally

slide-13
SLIDE 13

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters

slide-14
SLIDE 14

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters
Statistically identical users ⇒ identical cache content

slide-15
SLIDE 15

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters
Statistically identical users ⇒ identical cache content
This talk will argue:

slide-16
SLIDE 16

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters
Statistically identical users ⇒ identical cache content
This talk will argue:
The main gain in caching is global

slide-17
SLIDE 17

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters
Statistically identical users ⇒ identical cache content
This talk will argue:
The main gain in caching is global
Global cache size matters

slide-18
SLIDE 18

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters
Statistically identical users ⇒ identical cache content
This talk will argue:
The main gain in caching is global
Global cache size matters
Statistically identical users ⇒ different cache content

slide-19
SLIDE 19

The Role of Caching

Conventional beliefs about caching:
Caches useful to deliver content locally
Local cache size matters
Statistically identical users ⇒ identical cache content
This talk will argue:
The main gain in caching is global
Global cache size matters
Statistically identical users ⇒ different cache content
Coded multicasting as key enabler

slide-20
SLIDE 20

Problem Setting

K users caches server shared (broadcast) link

slide-21
SLIDE 21

Problem Setting

K users caches server N files, N ≥ K for simplicity shared (broadcast) link

slide-22
SLIDE 22

Problem Setting

K users caches size M server N files shared (broadcast) link

slide-23
SLIDE 23

Problem Setting

K users with caches of size M; server with N files; shared (broadcast) link
Placement: cache arbitrary function of files (linear, nonlinear, . . . )

slide-24
SLIDE 24

Problem Setting

K users caches size M server N files shared (broadcast) link Delivery:

slide-25
SLIDE 25

Problem Setting

K users with caches of size M; server with N files; shared (broadcast) link
Delivery:
- requests are revealed to server

slide-26
SLIDE 26

Problem Setting

K users with caches of size M; server with N files; shared (broadcast) link
Delivery:
- requests are revealed to server

  • server sends arbitrary function of files
slide-27
SLIDE 27

Problem Setting

K users with caches of size M; server with N files; shared (broadcast) link
Delivery:
- requests are revealed to server

  • server sends arbitrary function of files
slide-28
SLIDE 28

Problem Setting

K users with caches of size M; server with N files; shared (broadcast) link
Question: smallest worst-case rate R(M) needed in the delivery phase?

slide-29
SLIDE 29

Uncoded Caching Scheme

N files, K users, cache size M


slide-30
SLIDE 30

Uncoded Caching Scheme

N files, K users, cache size M


slide-31
SLIDE 31

Uncoded Caching Scheme

N files, K users, cache size M


slide-32
SLIDE 32

Uncoded Caching Scheme

N files, K users, cache size M


slide-33
SLIDE 33

Uncoded Caching Scheme

N files, K users, cache size M


slide-34
SLIDE 34

Uncoded Caching Scheme

N files, K users, cache size M


Performance of uncoded scheme: R(M) = K · (1 − M/N)

slide-35
SLIDE 35

Uncoded Caching Scheme

N files, K users, cache size M


Performance of uncoded scheme: R(M) = K · (1 − M/N)
Caches provide content locally ⇒ local cache size matters
Identical cache content at users
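The uncoded rate above is easy to sanity-check numerically. A minimal sketch (Python is used here purely for illustration; the function name is chosen for this note, not part of the talk):

```python
def uncoded_rate(N, K, M):
    # Each user caches an M/N fraction of every file, so the remaining
    # 1 - M/N fraction of each requested file is sent separately to
    # each of the K users.
    return K * (1 - M / N)

print(uncoded_rate(N=2, K=2, M=1))  # 1.0
```

With N = K = 30 and M = 10 this gives the value 20 used in the example later in the talk.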

slide-36
SLIDE 36

Uncoded Caching Scheme

N = 2 files, K = 2 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-37
SLIDE 37

Uncoded Caching Scheme

N = 4 files, K = 4 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-38
SLIDE 38

Uncoded Caching Scheme

N = 8 files, K = 8 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-39
SLIDE 39

Uncoded Caching Scheme

N = 16 files, K = 16 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-40
SLIDE 40

Uncoded Caching Scheme

N = 32 files, K = 32 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-41
SLIDE 41

Uncoded Caching Scheme

N = 64 files, K = 64 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-42
SLIDE 42

Uncoded Caching Scheme

N = 128 files, K = 128 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-43
SLIDE 43

Uncoded Caching Scheme

N = 256 files, K = 256 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-44
SLIDE 44

Uncoded Caching Scheme

N = 512 files, K = 512 users

[Plot: rate R vs. cache size M, uncoded scheme]

slide-45
SLIDE 45

Proposed Coded Caching Scheme

N files, K users, cache size M

Design guidelines advocated in this talk:
The main gain in caching is global
Global cache size matters
Different cache content at users
Coded multicasting

²M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
slide-46
SLIDE 46

Proposed Coded Caching Scheme

N files, K users, cache size M

Design guidelines advocated in this talk:
The main gain in caching is global
Global cache size matters
Different cache content at users
Coded multicasting

Performance of coded scheme:² R(M) = K · (1 − M/N) · 1/(1 + KM/N)

²M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
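The stated rate can be sketched in a couple of lines (illustrative Python; the function name is chosen for this note):

```python
def coded_rate(N, K, M):
    # Local factor K(1 - M/N), further reduced by the global
    # multicasting factor 1/(1 + KM/N).
    return K * (1 - M / N) / (1 + K * M / N)

print(coded_rate(N=2, K=2, M=1))  # 0.5
```

At M = 0 it coincides with the uncoded rate K, and it decreases to 0 at M = N.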
slide-47
SLIDE 47

Proposed Coded Caching Scheme

N = 2 files, K = 2 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-48
SLIDE 48

Proposed Coded Caching Scheme

N = 4 files, K = 4 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-49
SLIDE 49

Proposed Coded Caching Scheme

N = 8 files, K = 8 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-50
SLIDE 50

Proposed Coded Caching Scheme

N = 16 files, K = 16 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-51
SLIDE 51

Proposed Coded Caching Scheme

N = 32 files, K = 32 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-52
SLIDE 52

Proposed Coded Caching Scheme

N = 64 files, K = 64 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-53
SLIDE 53

Proposed Coded Caching Scheme

N = 128 files, K = 128 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-54
SLIDE 54

Proposed Coded Caching Scheme

N = 256 files, K = 256 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-55
SLIDE 55

Proposed Coded Caching Scheme

N = 512 files, K = 512 users

[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-56
SLIDE 56

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

slide-57
SLIDE 57

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

B A

slide-58
SLIDE 58

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

B1, B2 A1, A2

slide-59
SLIDE 59

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A1, B1 B1, B2 A1, A2

slide-60
SLIDE 60

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A1, B1 B1, B2 A1, A2

slide-61
SLIDE 61

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A1, B1 B1, B2 A1, A2 A2

slide-62
SLIDE 62

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A A1, B1 A A1, B1 B1, B2 A1, A2 A2

slide-63
SLIDE 63

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

[Figure: both users request A; each cache holds A1, B1; server sends A2]
⇒ Identical cache content at users
⇒ Gain from delivering content locally

slide-64
SLIDE 64

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A1, B1 B1, B2 A1, A2

slide-65
SLIDE 65

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A1, B1 B1, B2 A1, A2 A2, B2

slide-66
SLIDE 66

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

A A1, B1 B A1, B1 B1, B2 A1, A2 A2, B2

slide-67
SLIDE 67

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1

[Figure: user 1 requests A, user 2 requests B; each cache holds A1, B1; server sends A2, B2]
⇒ Multicast only possible for users with same demand

slide-68
SLIDE 68

Recall: Uncoded Scheme

N = 2 files, K = 2 users, cache size M = 1
[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-69
SLIDE 69

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

slide-70
SLIDE 70

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

B A

slide-71
SLIDE 71

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

B1, B2 A1, A2

slide-72
SLIDE 72

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A2, B2 B1, B2 A1, A2

slide-73
SLIDE 73

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A2, B2 B1, B2 A1, A2

slide-74
SLIDE 74

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A2, B2 B1, B2 A1, A2 A2 B1

slide-75
SLIDE 75

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A2, B2 B1, B2 A1, A2 A2 B1

slide-76
SLIDE 76

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

A1, B1 A2, B2 B1, B2 A1, A2 A2 ⊕ B1

slide-77
SLIDE 77

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

A A1, B1 B A2, B2 B1, B2 A1, A2 A2 ⊕ B1

slide-78
SLIDE 78

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

[Figure: user 1 requests A (cache A1, B1), user 2 requests B (cache A2, B2); server sends A2 ⊕ B1]
⇒ Different cache content at users
⇒ Coded multicast to 2 users with different demands

slide-79
SLIDE 79

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1

[Figure: all four request pairs — (A, A): send A2 ⊕ A1; (A, B): send A2 ⊕ B1; (B, A): send B2 ⊕ A1; (B, B): send B2 ⊕ B1]

⇒ Works for all possible user requests
⇒ Simultaneous coded multicasting gain
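The XOR cancellation behind this slide can be checked directly. A toy sketch with 4-byte file halves (the names A1, B1, etc. mirror the slides; the byte contents are made up for illustration):

```python
def xor(x, y):
    # Bitwise XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(x, y))

# File halves; user 1 caches A1, B1 and user 2 caches A2, B2.
A1, A2 = b"AA_1", b"AA_2"
B1, B2 = b"BB_1", b"BB_2"

# User 1 wants A, user 2 wants B: the server multicasts ONE message.
msg = xor(A2, B1)

assert xor(msg, B1) == A2   # user 1 cancels its cached B1, recovering A2
assert xor(msg, A2) == B1   # user 2 cancels its cached A2, recovering B1
```

One coded transmission thus serves two users with different demands, which is exactly the simultaneous multicasting gain claimed above.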

slide-80
SLIDE 80

Proposed Coded Scheme

N = 2 files, K = 2 users, cache size M = 1
[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-81
SLIDE 81

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

slide-82
SLIDE 82

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

C B A

slide-83
SLIDE 83

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

C1, C2, C3 B1, B2, B3 A1, A2, A3

slide-84
SLIDE 84

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3

slide-85
SLIDE 85

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3

slide-86
SLIDE 86

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3

slide-87
SLIDE 87

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3 A2 ⊕ B1

slide-88
SLIDE 88

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3 A2 ⊕ B1

slide-89
SLIDE 89

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3 A2 ⊕ B1, A3 ⊕ C1

slide-90
SLIDE 90

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3 A2 ⊕ B1, A3 ⊕ C1

slide-91
SLIDE 91

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A1, B1, C1 A2, B2, C2 A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3 A2 ⊕ B1, A3 ⊕ C1, B3 ⊕ C2

slide-92
SLIDE 92

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

A A1, B1, C1 B A2, B2, C2 C A3, B3, C3 C1, C2, C3 B1, B2, B3 A1, A2, A3 A2 ⊕ B1, A3 ⊕ C1, B3 ⊕ C2

slide-93
SLIDE 93

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1

[Figure: user k caches Ak, Bk, Ck; requests are A, B, C; server sends A2 ⊕ B1, A3 ⊕ C1, B3 ⊕ C2]
⇒ Coded multicast to 2 users with different demands

slide-94
SLIDE 94

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 1
[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-95
SLIDE 95

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

slide-96
SLIDE 96

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

C B A

slide-97
SLIDE 97

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

C12, C13, C23 B12, B13, B23 A12, A13, A23

slide-98
SLIDE 98

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

A12, B12, C12 A13, B13, C13 A12, B12, C12 A23, B23, C23 A13, B13, C13 A23, B23, C23 C12, C13, C23 B12, B13, B23 A12, A13, A23

slide-99
SLIDE 99

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

A12, B12, C12 A13, B13, C13 A12, B12, C12 A23, B23, C23 A13, B13, C13 A23, B23, C23 C12, C13, C23 B12, B13, B23 A12, A13, A23

slide-100
SLIDE 100

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

A12, B12, C12 A13, B13, C13 A12, B12, C12 A23, B23, C23 A13, B13, C13 A23, B23, C23 C12, C13, C23 B12, B13, B23 A12, A13, A23

slide-101
SLIDE 101

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

A12, B12, C12 A13, B13, C13 A12, B12, C12 A23, B23, C23 A13, B13, C13 A23, B23, C23 C12, C13, C23 B12, B13, B23 A12, A13, A23 A23 ⊕ B13 ⊕ C12

slide-102
SLIDE 102

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

A A12, B12, C12 A13, B13, C13 B A12, B12, C12 A23, B23, C23 C A13, B13, C13 A23, B23, C23 C12, C13, C23 B12, B13, B23 A12, A13, A23 A23 ⊕ B13 ⊕ C12

slide-103
SLIDE 103

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2

[Figure: M = 2 placement; requests are A, B, C; server sends A23 ⊕ B13 ⊕ C12]
⇒ Coded multicast to 3 users with different demands

slide-104
SLIDE 104

Proposed Coded Scheme

N = 3 files, K = 3 users, cache size M = 2
[Plot: rate R vs. cache size M, uncoded vs. coded scheme]

slide-105
SLIDE 105

Proposed Coded Scheme

N = K files and users, cache size M

Goal: coded multicast to M + 1 users with different demands

slide-106
SLIDE 106

Proposed Coded Scheme

N = K files and users, cache size M

Goal: coded multicast to M + 1 users with different demands
Need to place content such that in the delivery phase:

1. for every possible set of user demands . . .
2. and for every possible subset S of M + 1 users . . .
3. and for every possible subset T ⊂ S of M users . . .
4. users in T share content that is required at the user in S \ T

[Figure: a subset S of M + 1 users, with a subset T ⊂ S of M users]

slide-107
SLIDE 107

Proposed Coded Scheme

N = K files and users, cache size M

Goal: coded multicast to M + 1 users with different demands
Need to place content such that in the delivery phase:

1. for every possible set of user demands . . .
2. and for every possible subset S of M + 1 users . . .
3. and for every possible subset T ⊂ S of M users . . .
4. users in T share content that is required at the user in S \ T

Example: N = K = 3, M = 2
Every two users share a piece of content that the remaining user needs

slide-108
SLIDE 108

Proposed Coded Scheme

N = K files and users, cache size M

Placement phase:

slide-109
SLIDE 109

Proposed Coded Scheme

N = K files and users, cache size M

Placement phase: N files: W1, . . . , WN

slide-110
SLIDE 110

Proposed Coded Scheme

N = K files and users, cache size M

Placement phase:
N files: W1, . . . , WN
Split each file into (K choose M) parts ⇒ Wn = { Wn,T : T ⊂ [K], |T| = M }
slide-111
SLIDE 111

Proposed Coded Scheme

N = K files and users, cache size M

Placement phase:
N files: W1, . . . , WN
Split each file into (K choose M) parts ⇒ Wn = { Wn,T : T ⊂ [K], |T| = M }
Cache k: { Wn,T : n ∈ [N], T ⊂ [K], |T| = M, k ∈ T }
slide-112
SLIDE 112

Proposed Coded Scheme

N = K files and users, cache size M

Placement phase:
N files: W1, . . . , WN
Split each file into (K choose M) parts ⇒ Wn = { Wn,T : T ⊂ [K], |T| = M }
Cache k: { Wn,T : n ∈ [N], T ⊂ [K], |T| = M, k ∈ T }
Example: N = K = 3, M = 2
Consider files A, B, C ⇒ cache 2: A12, A23, B12, B23, C12, C23
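The placement rule above can be sketched as follows (illustrative Python; `placement` is a name chosen for this note):

```python
from itertools import combinations

def placement(N, K, M):
    # User k caches part W[n][T] of every file n, for every
    # M-subset T of [K] = {1, ..., K} that contains k.
    caches = {k: set() for k in range(1, K + 1)}
    for T in combinations(range(1, K + 1), M):
        for n in range(1, N + 1):
            for k in T:
                caches[k].add((n, T))
    return caches

caches = placement(N=3, K=3, M=2)
# Cache 2 holds the parts of every file indexed by the subsets {1,2}
# and {2,3} -- matching A12, A23, B12, B23, C12, C23 from the example.
print(sorted(caches[2]))
```

Since each user stores the parts of all N files whose index set contains it, the stored fraction per file is M/K of the file's parts, consistent with a cache of size M when N = K.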
slide-113
SLIDE 113

Proposed Coded Scheme

N = K files and users, cache size M

Delivery phase:

slide-114
SLIDE 114

Proposed Coded Scheme

N = K files and users, cache size M

Delivery phase:
Assume user k requests Wdk

slide-115
SLIDE 115

Proposed Coded Scheme

N = K files and users, cache size M

Delivery phase:
Assume user k requests Wdk
Send ⊕k∈S Wdk,S\{k} for all S ⊂ [K] such that |S| = M + 1

slide-116
SLIDE 116

Proposed Coded Scheme

N = K files and users, cache size M

Delivery phase:
Assume user k requests Wdk
Send ⊕k∈S Wdk,S\{k} for all S ⊂ [K] such that |S| = M + 1
Coded multicast to M + 1 users with different demands

slide-117
SLIDE 117

Proposed Coded Scheme

N = K files and users, cache size M

Delivery phase:
Assume user k requests Wdk
Send ⊕k∈S Wdk,S\{k} for all S ⊂ [K] such that |S| = M + 1
Coded multicast to M + 1 users with different demands
Example: N = K = 3, M = 1
Consider files A, B, C and user requests d1 = A, d2 = B, d3 = C
⇒ Server sends A2 ⊕ B1, A3 ⊕ C1, B3 ⊕ C2
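The delivery rule can be sketched by enumerating which parts get XORed together (illustrative Python; `delivery_messages` is a name chosen for this note):

```python
from itertools import combinations

def delivery_messages(K, M, d):
    # For each (M+1)-subset S of users, the server XORs, over k in S,
    # the part W_{d[k], S \ {k}} -- which every other user in S has cached.
    msgs = []
    for S in combinations(range(1, K + 1), M + 1):
        msgs.append([(d[k], tuple(t for t in S if t != k)) for k in S])
    return msgs

# N = K = 3, M = 1, requests d1 = A, d2 = B, d3 = C:
d = {1: "A", 2: "B", 3: "C"}
for m in delivery_messages(3, 1, d):
    print(" xor ".join(f"{f}{''.join(map(str, T))}" for f, T in m))
# A2 xor B1
# A3 xor C1
# B3 xor C2
```

The printed messages match the slide's example transmissions exactly.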

slide-118
SLIDE 118

Comparison of the Two Schemes

N files, K users, cache size M

Uncoded scheme: R(M) = K · (1 − M/N)
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N)

slide-119
SLIDE 119

Comparison of the Two Schemes

N files, K users, cache size M

Uncoded scheme: R(M) = K · (1 − M/N)
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N)

Rate without caching: K

slide-120
SLIDE 120

Comparison of the Two Schemes

N files, K users, cache size M

Uncoded scheme: R(M) = K · (1 − M/N)
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N)

Rate without caching: K
Local caching gain: 1 − M/N
Significant when local cache size M is of order N

slide-121
SLIDE 121

Comparison of the Two Schemes

N files, K users, cache size M

Uncoded scheme: R(M) = K · (1 − M/N)
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N)

Rate without caching: K
Local caching gain: 1 − M/N
Significant when local cache size M is of order N

Global caching gain: 1/(1 + KM/N)
Significant when global cache size KM is of order N

slide-122
SLIDE 122

Comparison of the Two Schemes

N files, K users, cache size M

Uncoded scheme: R(M) = K · (1 − M/N)
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N)

Rate without caching: K
Local caching gain: 1 − M/N
Significant when local cache size M is of order N

Global caching gain: 1/(1 + KM/N)
Significant when global cache size KM is of order N

⇒ Global gain can be Θ(K) smaller than local gain

slide-123
SLIDE 123

Example

N = 30 files, K = 30 users, cache size M = 10


slide-124
SLIDE 124

Example

N = 30 files, K = 30 users, cache size M = 10


Uncoded scheme: R(M) = K · (1 − M/N) ≈ 30 · 0.67 ≈ 20

slide-125
SLIDE 125

Example

N = 30 files, K = 30 users, cache size M = 10


Uncoded scheme: R(M) = K · (1 − M/N) ≈ 30 · 0.67 ≈ 20
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N) ≈ 30 · 0.67 · 0.09 ≈ 1.8

slide-126
SLIDE 126

Example

N = 30 files, K = 30 users, cache size M = 10


Uncoded scheme: R(M) = K · (1 − M/N) ≈ 30 · 0.67 ≈ 20
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N) ≈ 30 · 0.67 · 0.09 ≈ 1.8
⇒ Factor 11 reduction in rate!

slide-127
SLIDE 127

Example

N = 30 files, K = 30 users, cache size M = 10


Uncoded scheme: R(M) = K · (1 − M/N) ≈ 30 · 0.67 ≈ 20
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N) ≈ 30 · 0.67 · 0.09 ≈ 1.8
⇒ Factor 11 reduction in rate!
⇒ Local gain is 0.67

slide-128
SLIDE 128

Example

N = 30 files, K = 30 users, cache size M = 10


Uncoded scheme: R(M) = K · (1 − M/N) ≈ 30 · 0.67 ≈ 20
Coded scheme: R(M) = K · (1 − M/N) · 1/(1 + KM/N) ≈ 30 · 0.67 · 0.09 ≈ 1.8
⇒ Factor 11 reduction in rate!
⇒ Local gain is 0.67
⇒ Global gain is 0.09 (coded multicast to M + 1 = 11 users with different demands)
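The factor-11 claim follows because the ratio of the two rates is exactly 1 + KM/N. A quick numerical check (illustrative Python):

```python
N, K, M = 30, 30, 10

uncoded = K * (1 - M / N)                    # ~20
coded = K * (1 - M / N) / (1 + K * M / N)    # ~1.8

# The local factor K(1 - M/N) cancels, leaving the ratio 1 + KM/N = 11.
print(round(uncoded / coded, 6))  # 11.0
```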

slide-129
SLIDE 129

Can We Do Better?

Theorem: The coded scheme is optimal to within a constant factor in rate.³

³M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
slide-130
SLIDE 130

Can We Do Better?

Theorem: The coded scheme is optimal to within a constant factor in rate.³
⇒ Information-theoretic bound

³M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
slide-131
SLIDE 131

Can We Do Better?

Theorem: The coded scheme is optimal to within a constant factor in rate.³
⇒ Information-theoretic bound
⇒ Constant is independent of problem parameters N, K, M

³M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
slide-132
SLIDE 132

Can We Do Better?

Theorem: The coded scheme is optimal to within a constant factor in rate.³
⇒ Information-theoretic bound
⇒ Constant is independent of problem parameters N, K, M
⇒ No other significant gain besides local and global

³M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
slide-133
SLIDE 133

Approach Can be Adapted to Handle. . .

Asynchronous user requests⁴
Nonuniform file popularities⁵
Users joining and leaving the network⁶
Several users sharing a cache⁷
Online cache updates⁸
More complicated network topologies⁹

⁴Niesen and Maddah-Ali 2015.
⁵Niesen and Maddah-Ali 2017; Ji, Tulino, Llorca, and Caire 2017; Zhang, Lin, and Wang 2018.
⁶Maddah-Ali and Niesen 2015.
⁷Hachem, Karamchandani, and Diggavi 2017.
⁸Pedarsani, Maddah-Ali, and Niesen 2016.
⁹Karamchandani, Niesen, Maddah-Ali, and Diggavi 2016; Ji, Caire, and Molisch 2016.

slide-134
SLIDE 134

Video Streaming Demo¹⁰

¹⁰U. Niesen and M. A. Maddah-Ali, “Coded caching for delay-sensitive content,” in Proc. IEEE ICC, Jun. 2015, pp. 5559–5564.

slide-135
SLIDE 135

Open Questions

File size scaling:

slide-136
SLIDE 136

Open Questions

File size scaling:

Described approach requires file size scaling like (K choose KM/N)
⇒ Scales poorly with number of users K
slide-137
SLIDE 137

Open Questions

File size scaling:

Described approach requires file size scaling like (K choose KM/N)
⇒ Scales poorly with number of users K

Promising recent results with an interesting connection to graph theory¹¹, but scope for more work

¹¹K. Shanmugam, A. M. Tulino, and A. G. Dimakis, “Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs,” in Proc. IEEE ISIT, Jun. 2017, pp. 2157–8117.
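The (K choose KM/N) subpacketization grows roughly exponentially in K, which is why it scales poorly. A quick illustration (Python; M/N = 1/2 is an assumption made here for concreteness):

```python
from math import comb

# Number of parts each file must be split into, C(K, K*M/N), for M/N = 1/2.
for K in (8, 16, 32, 64):
    print(K, comb(K, K // 2))
```

Already at K = 64 users, each file would need to be split into more than 10^18 parts, so each part of a realistic video file would be far smaller than a packet.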

slide-138
SLIDE 138

Open Questions

File size scaling:

Described approach requires file size scaling like (K choose KM/N)
⇒ Scales poorly with number of users K

Promising recent results with an interesting connection to graph theory¹¹, but scope for more work

State:

¹¹K. Shanmugam, A. M. Tulino, and A. G. Dimakis, “Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs,” in Proc. IEEE ISIT, Jun. 2017, pp. 2157–8117.

slide-139
SLIDE 139

Open Questions

File size scaling:

Described approach requires file size scaling like (K choose KM/N)
⇒ Scales poorly with number of users K

Promising recent results with an interesting connection to graph theory¹¹, but scope for more work

State:

Standard caching schemes only require knowledge of local state. In contrast, coded caching requires the server to have knowledge of global state.

¹¹K. Shanmugam, A. M. Tulino, and A. G. Dimakis, “Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs,” in Proc. IEEE ISIT, Jun. 2017, pp. 2157–8117.

slide-140
SLIDE 140

Open Questions

File size scaling:

Described approach requires file size scaling like (K choose KM/N)
⇒ Scales poorly with number of users K

Promising recent results with an interesting connection to graph theory¹¹, but scope for more work

State:

Standard caching schemes only require knowledge of local state. In contrast, coded caching requires the server to have knowledge of global state.

Large scale implementation:

¹¹K. Shanmugam, A. M. Tulino, and A. G. Dimakis, “Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs,” in Proc. IEEE ISIT, Jun. 2017, pp. 2157–8117.

slide-141
SLIDE 141

Open Questions

File size scaling:

Described approach requires file size scaling like (K choose KM/N)
⇒ Scales poorly with number of users K

Promising recent results with an interesting connection to graph theory¹¹, but scope for more work

State:

Standard caching schemes only require knowledge of local state. In contrast, coded caching requires the server to have knowledge of global state.

Large scale implementation:

So far only a demo-sized implementation exists. Experimentation with large-scale systems is needed.

¹¹K. Shanmugam, A. M. Tulino, and A. G. Dimakis, “Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs,” in Proc. IEEE ISIT, Jun. 2017, pp. 2157–8117.

slide-142
SLIDE 142

Conclusions

A New Approach to Caching

slide-143
SLIDE 143

Conclusions

A New Approach to Caching

Main gain in caching is global

⇒ Coded multicast to users with different demands

slide-144
SLIDE 144

Conclusions

A New Approach to Caching

Main gain in caching is global

⇒ Coded multicast to users with different demands

Global cache size matters

slide-145
SLIDE 145

Conclusions

A New Approach to Caching

Main gain in caching is global

⇒ Coded multicast to users with different demands

Global cache size matters
Statistically identical users ⇒ different cache content

slide-146
SLIDE 146

Conclusions

A New Approach to Caching

Main gain in caching is global

⇒ Coded multicast to users with different demands

Global cache size matters
Statistically identical users ⇒ different cache content
Significant improvement over uncoded caching schemes

⇒ Reduction in rate up to order of number of users

slide-147
SLIDE 147

Conclusions

A New Approach to Caching

Main gain in caching is global

⇒ Coded multicast to users with different demands

Global cache size matters
Statistically identical users ⇒ different cache content
Significant improvement over uncoded caching schemes

⇒ Reduction in rate up to order of number of users

Key open questions: block length, state, large-scale implementation

slide-148
SLIDE 148

References I

Cisco, “The Zettabyte era: Trends and analysis,” Tech. Rep., Jun. 2017.

M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.

U. Niesen and M. A. Maddah-Ali, “Coded caching for delay-sensitive content,” in Proc. IEEE ICC, Jun. 2015, pp. 5559–5564.

——, “Coded caching with nonuniform demands,” IEEE Trans. Inf. Theory, vol. 63, pp. 1146–1158, Feb. 2017.

M. Ji, A. M. Tulino, J. Llorca, and G. Caire, “Order-optimal rate of caching and coded multicasting with random demands,” IEEE Trans. Inf. Theory, vol. 63, pp. 3923–3949, Apr. 2017.
slide-149
SLIDE 149

References II

J. Zhang, X. Lin, and X. Wang, “Coded caching under arbitrary popularity distributions,” IEEE Trans. Inf. Theory, vol. 64, pp. 98–107, Jan. 2018.

M. A. Maddah-Ali and U. Niesen, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Trans. Netw., vol. 23, pp. 1029–1040, Aug. 2015.

J. Hachem, N. Karamchandani, and S. Diggavi, “Coded caching for multi-level popularity and access,” IEEE Trans. Inf. Theory, vol. 63, pp. 3108–3141, May 2017.

R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, “Online coded caching,” IEEE/ACM Trans. Netw., vol. 24, pp. 836–845, Apr. 2016.

N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. Diggavi, “Hierarchical coded caching,” IEEE Trans. Inf. Theory, vol. 62, pp. 3212–3229, Jun. 2016.
slide-150
SLIDE 150

References III

M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of caching in wireless D2D networks,” IEEE Trans. Inf. Theory, vol. 62, no. 2, pp. 849–869, Feb. 2016.

K. Shanmugam, A. M. Tulino, and A. G. Dimakis, “Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs,” in Proc. IEEE ISIT, Jun. 2017, pp. 2157–8117.

slide-151
SLIDE 151

Can We Do Better?

cache 1 user 1 cache 2 user 2 cache 3 user 3 cache 4 user 4

slide-152
SLIDE 152

Can We Do Better?

cache 1 user 1 cache 2 user 2 cache 3 user 3 cache 4 user 4

slide-153
SLIDE 153

Can We Do Better?

cache 1 user 1 cache 2 user 2 cache 3 user 3 cache 4 user 4

slide-154
SLIDE 154

Can We Do Better?

cache 1 user 1 cache 2 user 2 cache 3 user 3 cache 4 user 4

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M

slide-155
SLIDE 155

Can We Do Better?

cache 1 user 1 user 1 cache 4 user 4 user 4

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M

slide-156
SLIDE 156

Can We Do Better?

cache 1 user 1 user 1 cache 4 user 4 user 4

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M

slide-157
SLIDE 157

Can We Do Better?

cache 1 user 1 user 1 cache 4 user 4 user 4

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M

slide-158
SLIDE 158

Can We Do Better?

cache 1 user 1 user 1 cache 4 user 4 user 4

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M 2R + 2M ≥ 4 ⇒ R ≥ 2 − M

slide-159
SLIDE 159

Can We Do Better?

cache 1 user 1 user 1 user 1 user 1

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M 2R + 2M ≥ 4 ⇒ R ≥ 2 − M

slide-160
SLIDE 160

Can We Do Better?

cache 1 user 1 user 1 user 1 user 1

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M 2R + 2M ≥ 4 ⇒ R ≥ 2 − M

slide-161
SLIDE 161

Can We Do Better?

cache 1 user 1 user 1 user 1 user 1

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M 2R + 2M ≥ 4 ⇒ R ≥ 2 − M

slide-162
SLIDE 162

Can We Do Better?

cache 1 user 1 user 1 user 1 user 1

R + 4M ≥ 4 ⇒ R ≥ 4 − 4M 2R + 2M ≥ 4 ⇒ R ≥ 2 − M 4R + M ≥ 4 ⇒ R ≥ 1 − M/4

slide-163
SLIDE 163

Can We Do Better?

This can be rewritten as R ≥ max{4 − 4M, 2 − M, 1 − M/4}
slide-164
SLIDE 164

Can We Do Better?

This can be rewritten as R ≥ max{4 − 4M, 2 − M, 1 − M/4}

For general N and K:

R ≥ max over s ∈ {1, . . . , min(N, K)} of ( s − (s / ⌊N/s⌋) · M )

Comparing with the achievable rate yields the theorem
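The general cut-set bound can be evaluated numerically. A sketch under the reading R ≥ max_s ( s − (s/⌊N/s⌋)·M ) (illustrative Python; the function name is chosen for this note):

```python
from math import floor

def cutset_lower_bound(N, K, M):
    # Maximize s - (s / floor(N/s)) * M over s = 1, ..., min(N, K).
    return max(s - s / floor(N / s) * M
               for s in range(1, min(N, K) + 1))

# For N = K = 4 the maximization includes the three corner bounds
# 4 - 4M (s = 4), 2 - M (s = 2), and 1 - M/4 (s = 1):
print(cutset_lower_bound(4, 4, 0.5))   # 2.0   (from s = 4: 4 - 4*0.5)
print(cutset_lower_bound(4, 4, 1.5))   # 0.625 (from s = 1: 1 - 1.5/4)
```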