Fundamental Limits of Caching · Urs Niesen, jointly with Mohammad Maddah-Ali · Bell Labs, Alcatel-Lucent (presentation transcript)



SLIDE 1

Fundamental Limits of Caching

Urs Niesen Jointly with Mohammad Maddah-Ali

Bell Labs, Alcatel-Lucent

SLIDES 2–4

Video on Demand

Video on demand is getting increasingly popular: Netflix streaming service, Amazon Instant Video, Hulu, Verizon / Comcast On Demand, . . .

⇒ Places significant stress on service providers' networks
⇒ Caching (prefetching) can be used to mitigate this stress

SLIDES 5–7

Caching (Prefetching)

[Plot: normalized demand (20–100) versus time of day (6, 12, 18, 24)]

High temporal traffic variability
Caching can help smooth traffic

SLIDES 8–15

The Role of Caching

Conventional beliefs about caching:
- Caches are useful to deliver content locally
- Local cache size matters
- Statistically identical users ⇒ identical cache content

Insights from this work:
- The main gain in caching is global
- Global cache size matters
- Statistically identical users ⇒ different cache content

SLIDES 16–24

Problem Setting

A server storing N files (N ≥ K, for simplicity) is connected through a shared link to K users, each equipped with a cache of size M.

Placement phase: each cache stores an arbitrary function of the files (linear, nonlinear, . . . )
Delivery phase:
- requests are revealed to the server
- the server sends an arbitrary function of the files

Question: what is the smallest worst-case rate R(M) needed in the delivery phase?

SLIDES 25–31

Conventional Caching Scheme

N files, K users, cache size M

[Figure: every user caches the same fraction M/N of each file]

Performance of the conventional scheme: R(M) = K · (1 − M/N)
⇒ Caches provide content locally ⇒ local cache size matters
⇒ Identical cache content at all users
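The conventional rate above is easy to check numerically; a minimal sketch (the helper name is ours, not from the talk):

```python
def conventional_rate(N: int, K: int, M: float) -> float:
    """Worst-case delivery rate of the conventional (uncoded) scheme:
    each user caches a fraction M/N of every file, so the remaining
    fraction 1 - M/N of each requested file is sent once per user."""
    return K * (1 - M / N)

# N = K = 2, M = 1: each user still needs half its file, so R = 1.
r = conventional_rate(2, 2, 1)  # 1.0
```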

SLIDES 32–40

Conventional Caching Scheme

[Plots: rate R versus cache size M under the conventional scheme, for N = K = 2, 4, 8, 16, 32, 64, 128, 256, 512]

SLIDES 41–42

Proposed Caching Scheme

N files, K users, cache size M

Design guidelines advocated in this work:
- The main gain in caching is global
- Global cache size matters
- Different cache content at users

Performance of the proposed scheme: R(M) = K · (1 − M/N) · 1 / (1 + KM/N)
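The proposed rate keeps the uncoded factor but divides it by the coded multicasting term; a small numeric sketch (helper name is ours):

```python
def proposed_rate(N: int, K: int, M: float) -> float:
    """Worst-case delivery rate of the proposed coded scheme:
    the uncoded rate K * (1 - M/N) is reduced by the coded
    multicast factor 1 / (1 + K*M/N)."""
    return K * (1 - M / N) / (1 + K * M / N)

# N = K = 2, M = 1: R = 2 * (1/2) * 1/(1 + 1) = 0.5,
# versus 1.0 for the conventional scheme at the same cache size.
r = proposed_rate(2, 2, 1)  # 0.5
```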

SLIDES 43–51

Proposed Caching Scheme

[Plots: rate R versus cache size M, conventional versus proposed scheme, for N = K = 2, 4, 8, 16, 32, 64, 128, 256, 512]

SLIDES 52–64

Recall: Conventional Scheme

N = 2 files (A and B, each split into halves A1, A2 and B1, B2), K = 2 users, cache size M = 1

Placement: both users cache the same halves A1, B1; the server holds all of A1, A2, B1, B2.

Delivery when both users request A: the server sends the single missing half A2.
⇒ Identical cache content at users
⇒ Gain from delivering content locally

Delivery when user 1 requests A and user 2 requests B: the server must send both A2 and B2.
⇒ Multicast only possible for users with the same demand

[Plot: rate R versus cache size M for N = K = 2, conventional versus proposed scheme]

SLIDES 65–76

Proposed Scheme

N = 2 files (A and B, each split into halves A1, A2 and B1, B2), K = 2 users, cache size M = 1

Placement: user 1 caches A1, B1 and user 2 caches A2, B2; the server holds all of A1, A2, B1, B2.

Delivery when user 1 requests A and user 2 requests B: user 1 is missing A2 and user 2 is missing B1, so the server multicasts the single coded half A2 ⊕ B1. User 1 recovers A2 by XORing with its cached B1; user 2 recovers B1 by XORing with its cached A2.
⇒ Different cache content at users
⇒ Multicast to 2 users with different demands

The same idea covers every request pair with one coded transmission: (A, A) → A2 ⊕ A1, (A, B) → A2 ⊕ B1, (B, A) → B2 ⊕ A1, (B, B) → B2 ⊕ B1.
⇒ Works for all possible user requests
⇒ Simultaneous multicasting gain

[Plot: rate R versus cache size M for N = K = 2, conventional versus proposed scheme]
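The coded delivery step above can be verified with a toy byte-level sketch, modeling each file half as a single integer and using bitwise XOR for ⊕:

```python
# Toy check of coded delivery for N = K = 2, M = 1.
# File halves are modeled as bytes; ^ (bitwise XOR) plays the role of ⊕.
A1, A2, B1, B2 = 0x11, 0x22, 0x33, 0x44

# Placement: user 1 caches (A1, B1), user 2 caches (A2, B2).
# Delivery for requests (A, B): the server multicasts one coded half.
coded = A2 ^ B1

# User 1 (wants A, has B1 cached) recovers its missing half A2:
assert coded ^ B1 == A2
# User 2 (wants B, has A2 cached) recovers its missing half B1:
assert coded ^ A2 == B1
```

One transmission thus serves two users with different demands, which is exactly where the factor 1/(1 + KM/N) comes from.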

SLIDES 77–78

Proposed Scheme

N files, K users, cache size M

The scheme generalizes to an arbitrary number of files N, number of users K, and cache size M.

It enables multicast to KM/N + 1 users with different demands.
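Writing t = KM/N for the number of users caching each file part, every coded transmission serves t + 1 users, so the rate can equivalently be written (K − t)/(t + 1). A quick consistency check against the formula R(M) = K · (1 − M/N) / (1 + KM/N):

```python
def rate_via_group_size(N: int, K: int, M: float) -> float:
    """Rate written in terms of the multicast group size:
    t = K*M/N users cache each part, and each coded packet
    serves a group of t + 1 users."""
    t = K * M / N
    return (K - t) / (t + 1)

# Algebraically identical to K * (1 - M/N) / (1 + K*M/N),
# since K * (1 - M/N) = K - t.
for (N, K, M) in [(2, 2, 1), (8, 4, 2), (100, 100, 20)]:
    direct = K * (1 - M / N) / (1 + K * M / N)
    assert abs(rate_via_group_size(N, K, M) - direct) < 1e-9
```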

SLIDES 79–83

Comparison of the Two Schemes

N files, K users, cache size M

Conventional scheme: R(M) = K · (1 − M/N)
Proposed scheme: R(M) = K · (1 − M/N) · 1 / (1 + KM/N)

The proposed rate is the product of three factors:
- Rate without caching: K
- Local caching gain: 1 − M/N, significant when the local cache size M is of order N
- Global caching gain: 1 / (1 + KM/N), significant when the global cache size KM is of order N

⇒ The global gain can reduce the rate by a factor of Θ(K) beyond the local gain
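The three factors can be separated to see where the savings come from; a small sketch (the helper name `rate_factors` is ours):

```python
def rate_factors(N: int, K: int, M: float):
    """Split the proposed rate K * (1 - M/N) / (1 + K*M/N)
    into its three factors, named as on the slide."""
    no_cache = K                       # rate without caching
    local_gain = 1 - M / N             # local caching gain
    global_gain = 1 / (1 + K * M / N)  # global caching gain
    return no_cache, local_gain, global_gain

# K = 100 users, N = 100 files, M = 20: the local gain (0.8) barely
# helps, but the global gain (1/21) cuts the rate by more than 20x.
no_cache, local_g, global_g = rate_factors(100, 100, 20)
conventional = no_cache * local_g        # 80.0
proposed = conventional * global_g       # about 3.8
```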

SLIDES 84–87

Can We Do Better?

Theorem. The proposed scheme is optimal to within a constant factor in rate.
⇒ Information-theoretic bound
⇒ The constant is independent of the problem parameters N, K, M
⇒ No other significant gain besides the local and global ones

SLIDES 88–93

Conclusions

A New Approach to Caching

- The main gain in caching is global ⇒ multicast to users with different demands
- Global cache size matters
- Statistically identical users ⇒ different cache content
- Significant improvement over conventional caching schemes ⇒ reduction in rate up to the order of the number of users

Papers available on arXiv:
⇒ Maddah-Ali, Niesen, "Fundamental Limits of Caching"
⇒ Maddah-Ali, Niesen, "Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff"
⇒ Niesen, Maddah-Ali, "Coded Caching with Nonuniform Demands"