The Versatility of TTL Caches: Service Differentiation and Pricing (PowerPoint presentation transcript)



SLIDE 1

The Versatility of TTL Caches: Service Differentiation and Pricing

Don Towsley, CICS, UMass Amherst

Collaborators:

  • W. Chu, M. Dehghan, R. Ma,
  • L. Massoulie, D. Menasche,
  • Y. C. Tay
SLIDE 2

Internet today

• primary use of Internet – content delivery
• point-to-point communication – users know where content is located

Does not scale!

SLIDE 3

New paradigm: content centric networks

• users request what they want
• content stored at edge of network, in network
• diversity of users, content driving CCN designs

(figure: network of caches)

SLIDE 4

Caching for content delivery

Decreases
• delays
• bandwidth consumption
• server loads

SLIDE 5

New Challenges

(figure: users, content distribution networks, content providers, caches)

SLIDE 6

Service differentiation

• not all content equally important to providers/users
• content providers have different service demands
• economic incentives for CDNs
• current cache policies (mostly) oblivious to service requirements

SLIDE 7

This has given rise to an alphabet soup of policies:

LRU, LFU, FIFO, RAND, CLOCK, ARC, CAR, 2Q, LIRS, LARC, k-LRU, LFRU, GDSF, SIZE, LRV, DSCA, SAIU, PSC, TOW, RPAC, RMARK, LACS, PRR, DWCS, ASA, PBSTA


SLIDE 9

LRU (least recently used)

• classic cache management policy
• contents ordered by recency of usage
• miss – remove least recently used content, insert requested content at front
• hit – move content to front

Example (cache ordered least- to most-recently used, initially 1 2 3 4 5):
• request 6 (miss): evict 1, insert 6 → 2 3 4 5 6
• request 3 (hit): move 3 to front → 1 2 4 5 3
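The eviction logic above can be sketched in a few lines of Python (an illustrative sketch, not from the talk; the function name and interface are my own):

```python
from collections import OrderedDict

def lru_simulate(capacity, requests):
    """Simulate an LRU cache; return a (request, hit) trace and the
    final cache contents ordered least- to most-recently used."""
    cache = OrderedDict()  # keys ordered LRU (front) to MRU (back)
    trace = []
    for item in requests:
        hit = item in cache
        if hit:
            cache.move_to_end(item)        # hit: move content to MRU end
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # miss: evict least recently used
            cache[item] = None             # insert new content at MRU end
        trace.append((item, hit))
    return trace, list(cache)
```

Running it on the slide's example reproduces both transitions: `lru_simulate(5, [1, 2, 3, 4, 5, 6])` ends with order `[2, 3, 4, 5, 6]`, and `lru_simulate(5, [1, 2, 3, 4, 5, 3])` ends with `[1, 2, 4, 5, 3]`.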

SLIDE 10

Challenges

• how to provide differentiated services
  • to users
  • to content providers
• how to make sense of universe of caches
• how to design cache policies

Answer: Time-to-Live (TTL) caches

SLIDE 11

Outline

• introduction
• TTL caches
• differentiated services: utility driven caching
  • per content
  • per provider
• incentivizing caches
• conclusions, future directions

SLIDE 12

Time-to-live (TTL) caches

• associate timer with every content
  • set on miss
  • remove content when timer expires
• versatile tool for modeling caches
• versatile mechanism for cache design/configuration
• two types of TTL caches: reset, non-reset
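The two timer disciplines can be sketched as follows (an illustrative sketch, not from the talk; the class, its interface, and the injectable clock are my own, and expiry is checked lazily at request time rather than by a background eviction thread):

```python
import time

class TTLCache:
    """Sketch of a TTL cache. Each content gets a timer of `ttl` seconds,
    set on a miss. With reset=True (reset TTL) every request restarts the
    timer; with reset=False (non-reset TTL) the timer runs only from the
    miss that inserted the content."""

    def __init__(self, ttl, reset=True, clock=time.monotonic):
        self.ttl = ttl
        self.reset = reset
        self.clock = clock
        self.expiry = {}  # content id -> expiration time

    def request(self, item):
        now = self.clock()
        hit = self.expiry.get(item, 0) > now
        if hit and self.reset:
            self.expiry[item] = now + self.ttl  # reset timer on every request
        elif not hit:
            self.expiry[item] = now + self.ttl  # miss: insert with fresh timer
        return hit
```

With `ttl=2` and non-reset timers, a content requested at times 0, 1, 2.5 sees miss, hit, miss (the timer set at 0 expires at 2); with reset timers the request at 1 pushes expiry to 3, so the same trace sees miss, hit, hit.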

SLIDE 13

Non-reset TTL cache

• timer T_i set on cache miss; content evicted T_i later regardless of further requests
• hit probability of content i, for Poisson requests with rate λ_i:
  h_i = λ_i T_i / (1 + λ_i T_i)
SLIDE 14

Reset TTL cache

• timer reset at every request; content evicted T_i after the last request
• hit probability of content i, for Poisson requests with rate λ_i:
  h_i = 1 − e^(−λ_i T_i)
SLIDE 15

Characteristic time approximation (Fagin, 77)

• cache size B; request rates λ_i
• LRU – model as reset TTL cache: h_i = 1 − e^(−λ_i T)
• FIFO – model as non-reset TTL cache: h_i = λ_i T / (1 + λ_i T)
• T – cache characteristic time, chosen so the expected occupancy fills the cache: Σ_i h_i(T) = B
• asymptotically exact as B → ∞; accurate for moderate B
• extends to many cache policies
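Under this approximation the characteristic time is the root of Σ_i h_i(T) = B, which is increasing in T and so easy to find by bisection. A small sketch (illustrative, not from the talk; assumes B is smaller than the number of contents):

```python
import math

def characteristic_time(rates, B, reset=True, tol=1e-9):
    """Solve sum_i h_i(T) = B for the characteristic time T.
    reset=True models LRU (h_i = 1 - exp(-lam*T)); reset=False models
    FIFO (h_i = lam*T / (1 + lam*T)). Assumes B < len(rates)."""
    def occupancy(T):
        if reset:
            return sum(1 - math.exp(-lam * T) for lam in rates)
        return sum(lam * T / (1 + lam * T) for lam in rates)

    lo, hi = 0.0, 1.0
    while occupancy(hi) < B:   # grow the bracket until it contains the root
        hi *= 2
    while hi - lo > tol * hi:  # bisect on the increasing occupancy function
        mid = (lo + hi) / 2
        if occupancy(mid) < B:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For 100 contents of equal rate 1 and B = 50, the reset (LRU) model gives T = ln 2 and the non-reset (FIFO) model gives T = 1, which is easy to check by hand.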

SLIDE 16

Providing differentiated services

SLIDE 17

Model

• single cache of size B
• N contents, with request rates λ_1, …, λ_N
• h_i: hit probability of content i
• each content has a utility U_i(h_i)
  • function of hit probability
  • concave, increasing
• user requests for content go to the cache; a miss is forwarded to the content's provider

SLIDE 18

Cache utility maximization

maximize Σ_i U_i(h_i)
subject to Σ_i h_i ≤ B, 0 ≤ h_i ≤ 1

(Σ_i h_i is the expected cache occupancy, so the first constraint is the buffer constraint.)

SLIDE 19

Utility-based caching

• cost/value tradeoff
• fairness implications
  • e.g. proportionally fair w.r.t. hit probability
• cache markets
  • contract design
SLIDE 20

Cache utility maximization

Can we use this framework to model existing policies?

SLIDE 21

Reverse engineering

Can we obtain the same statistical behavior as LRU, FIFO using timers? What utilities?

• LRU – utility involves the logarithmic integral
• FIFO

SLIDE 22

Dual problem

• Lagrangian function: L(h, μ) = Σ_i U_i(h_i) − μ(Σ_i h_i − B)
• optimality condition: U_i′(h_i) = μ
• inverse: h_i = (U_i′)^(−1)(μ)
• μ – dual variable (price of cache space)
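The dual viewpoint can be checked numerically. Below is an illustrative sketch (not the talk's code) with assumed proportional-fair utilities U_i(h) = w_i log h, for which U_i′(h) = w_i/h and so (U_i′)^(−1)(μ) = w_i/μ:

```python
def solve_dual(weights, B, step=0.01, iters=20000):
    """Gradient ascent on the dual variable mu for
    max sum_i w_i*log(h_i) s.t. sum_i h_i = B, 0 <= h_i <= 1.
    Primal recovery: h_i = min(1, w_i/mu); mu moves toward the value
    where the occupancy constraint is tight."""
    mu = 1.0
    for _ in range(iters):
        h = [min(1.0, w / mu) for w in weights]    # primal update, clipped
        mu = max(1e-12, mu + step * (sum(h) - B))  # dual step on constraint
    return [min(1.0, w / mu) for w in weights], mu
```

When no hit probability is clipped at 1, the fixed point is h_i = B·w_i / Σ_j w_j, i.e. cache space is shared in proportion to the weights; e.g. weights (1, 2, 3) with B = 1.5 give μ = 4 and h = (0.25, 0.5, 0.75).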

SLIDE 23

LRU utility function

• optimality condition: U_i′(h_i) = μ
• TTL approximation with a common timer T: h_i = 1 − e^(−λ_i T)
• choose U_i′ so that the hit probability decreases in μ and increases in λ_i
• integrating such a U_i′ yields a utility expressible via the logarithmic integral

SLIDE 24

Fairness properties

• weighted proportional fairness: U_i(h_i) = w_i log h_i
  • yields hit probabilities proportional to the weights (capped at 1)
• max-min fairness – limit of α-fair utilities U_i(h_i) = w_i h_i^(1−α)/(1−α) as α → ∞
  • yields hit probabilities as equal as the constraints allow
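These regimes can be explored numerically with the α-fair family: from U_i′(h) = w_i h^(−α) and the optimality condition U_i′(h_i) = μ one gets h_i = min(1, (w_i/μ)^(1/α)). An illustrative sketch (not the talk's code; assumes B is smaller than the number of contents):

```python
def alpha_fair(weights, B, alpha):
    """Maximize sum_i w_i*h_i^(1-alpha)/(1-alpha) s.t. sum_i h_i = B,
    0 < h_i <= 1, via bisection on the dual variable mu.
    alpha=1 gives proportional fairness; alpha -> infinity tends to max-min."""
    def total(mu):
        return sum(min(1.0, (w / mu) ** (1.0 / alpha)) for w in weights)

    lo, hi = 1e-12, 1.0
    while total(hi) > B:      # grow upper bound until occupancy drops below B
        hi *= 2
    for _ in range(200):      # bisect on the decreasing function total(mu)
        mid = (lo + hi) / 2
        if total(mid) > B:
            lo = mid
        else:
            hi = mid
    mu = (lo + hi) / 2
    return [min(1.0, (w / mu) ** (1.0 / alpha)) for w in weights]
```

With weights (1, 8) and B = 1, α = 1 gives hit probabilities (1/9, 8/9), while α = 10 already pushes both toward 1/2, illustrating the drift toward max-min as α grows.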

SLIDE 25

Evaluation

• 10,000 contents
• cache size 1000
• Zipf popularity, parameter 0.8
• 10 requests

SLIDE 26

Cache utility maximization

Q: How do we control hit probabilities? A: TTL cache; control hit probabilities through timers

SLIDE 27

Cache utility maximization

SLIDE 28

On-line algorithms

• dual algorithm
• primal algorithm
• primal-dual algorithm

SLIDE 29

Setting timer in dual

• TTL-reset cache: h_i = 1 − e^(−λ_i T_i)
• optimality condition: U_i′(h_i) = μ
• find μ via gradient descent; update at each request
• estimate λ_i using a sliding window
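The steps above can be sketched as an online controller. This is an assumed illustration, not the talk's algorithm: utilities U_i(h) = w_i log h are an example choice, and the rate estimator, step size, and interface are my own:

```python
import collections
import math

class OnlineTTLControl:
    """Online dual update for per-content TTL timers in a reset TTL cache.
    Target h_i = min(1-eps, w_i/mu) (assumed log utilities), inverted
    through h = 1 - exp(-lam*T) to get a timer; mu adapts so that the
    observed occupancy tracks the budget B."""

    def __init__(self, B, weights, step=0.001, window=100):
        self.B = B        # target expected occupancy
        self.mu = 1.0     # dual variable (price of cache space)
        self.w = weights  # content id -> utility weight
        self.step = step
        self.arrivals = {i: collections.deque(maxlen=window) for i in weights}

    def rate(self, i):
        """Sliding-window estimate of lambda_i from recent arrival times."""
        a = self.arrivals[i]
        if len(a) < 2 or a[-1] == a[0]:
            return 1.0  # fallback before enough samples arrive
        return (len(a) - 1) / (a[-1] - a[0])

    def ttl(self, i):
        """Invert h = 1 - exp(-lam*T) at the target hit probability."""
        h = min(0.999, self.w[i] / self.mu)
        return -math.log(1.0 - h) / self.rate(i)

    def on_request(self, i, t, occupancy):
        """Record an arrival, nudge mu toward sum_i h_i = B, return new TTL."""
        self.arrivals[i].append(t)
        self.mu = max(1e-9, self.mu + self.step * (occupancy - self.B))
        return self.ttl(i)
```

Note the controller needs no a priori knowledge of the number of contents or their popularities, matching the claim on the summary slide: rates are estimated online and μ reacts only to the observed occupancy.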

SLIDE 30

Convergence: dual algorithm

• 10,000 contents
• cache size 1000
• Zipf popularity, parameter 0.8
• 10 requests

SLIDE 31

Primal algorithm

• primal problem replaces the hard buffer constraint with a soft "cost" penalty, a convex cost function of occupancy
• similar style of on-line algorithm

SLIDE 32

Summary

• utility-based caching enables differentiated services
• TTL cache provides flexible mechanism for deploying differentiated services
• simple online algorithms require no a priori information about:
  • number of contents
  • popularity
• framework captures existing policies
  • e.g. LRU and FIFO
SLIDE 33

Other issues

• provider-based service differentiation
• monetizing caching

SLIDE 34

Differentiated monetization of content

SLIDE 35

So far, focused on
• user/content differentiation
• CP differentiation

How can SPs make money?
• contract structure?
• effect of popularity?

SLIDE 36

Per request cost and benefit

• benefit per request hit
• cost per request miss

Key: how should SP manage cache?

(figure: Users send requests to the Service Provider's (SP) cache server; hits are served from the cache, misses are fetched from the Content Provider's (CP) original content server)

SLIDE 37

Formulating as utility optimization

• payment to cache provider

Q: how should SP manage cache? pricing schemes?

SLIDE 38

Service contracts

Contracts specify pricing per content
• nonrenewable contracts
• renewable contracts
  • occupancy-based
  • usage-based
SLIDE 39

Non-renewable contracts

• on-demand contract upon cache miss
  • no content pre-fetching
  • contract for a fixed time at a linear price
  • proportional to TTL, per-unit time charge
• potential inefficiency
  • content evicted upon TTL timer expiration ⟹ miss for subsequent request
• how long to cache content?

SLIDE 40

Non-renewable contracts

• value accrual rate to content provider
• payment rate to cache provider

Rule: cache the content if its value accrual rate exceeds the payment rate, in which case the optimal TTL is T* = ∞; otherwise do not cache
SLIDE 41

Occupancy-based renewable contracts

• on-demand contract on every cache request
  • pre-fetching
  • at each request, pay for the time elapsed since the last request (full TTL if that request was a miss)
• CP pays for time content in cache

Rule: cache if the value accrual rate exceeds the payment rate; otherwise not – same as non-renewable contract

SLIDE 42

Observations

• both contracts occupancy based; pay for time in cache
• renewable contract more flexible; allows contract renegotiation
• results generalize to renewal request processes

SLIDE 43

Usage-based renewable contracts

• on-demand contract on every cache request
  • no pre-fetching
  • at each request, always pay the per-request price

Rule: cache if the benefit per request exceeds the per-request price; otherwise not
SLIDE 44

Observations

Usage-based pricing
• provides better cache utilization than occupancy-based pricing
• T* decreasing function of , ; increasing function of ,
• better incentivizes cache provider

SLIDE 45

Summary

• TTL cache versatile construct for
  • modeling/analysis
  • design/configuration
  • adaptive control
  • pricing
• TTL combined with utility-based optimization
  • provides differentiated cache services
  • shares caches between content providers
  • provides incentives for cache providers

SLIDE 46

Future directions

• differentiated services in a multi-cache setting
  • presence of router caches
  • multiple edge caches
• relaxation of assumptions
  • Poisson, renewal → stationary
  • arbitrary size content
• pricing
  • non-linear pricing
  • market competition among cache providers
• unified cache, bandwidth, processor allocation framework

SLIDE 47

Thank you