Don Towsley, CICS, UMass Amherst
The Versatility of TTL Caches: Service Differentiation and Pricing
Collaborators:
- W. Chu, M. Dehghan, R. Ma,
- L. Massoulie, D. Menasche,
- Y. C. Tay
- primary use of Internet – content delivery
- point-to-point communication – users know where content is located
- users request what they want
- content stored at edge of network, in network
- diversity of users and content driving CCN designs
[Figure: Content Providers → Content Distribution Networks (caches) → Users]

Caching decreases:
- delays
- bandwidth consumption
- server loads
- not all content equally important to providers/users
- content providers have different service demands
- economic incentives for CDNs
- current cache policies (mostly) oblivious to service requirements
A universe of cache policies: DSCA, k-LRU, SAIU, PSC, TOW, RPAC, 2Q, LIRS, LARC, RMARK, CLOCK, LACS, GDSF, PRR, DWCS, ASA, PBSTA, LRV, SIZE, …
LRU: classic cache management policy
- contents ordered by recency of usage
- miss – remove least recently used content
- hit – move content to front

[Figure: LRU list ordered from least recently used to most recently used; each request moves its content to the most recently used end]
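The recency rule above can be sketched in a few lines (a minimal toy model, not code from the talk):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: the least recently used content sits at the front."""
    def __init__(self, size):
        self.size = size
        self.items = OrderedDict()

    def request(self, content):
        """Return True on a hit, False on a miss."""
        if content in self.items:
            self.items.move_to_end(content)   # hit: move to most-recently-used end
            return True
        if len(self.items) >= self.size:
            self.items.popitem(last=False)    # miss with full cache: evict LRU item
        self.items[content] = True
        return False

cache = LRUCache(size=2)
cache.request("a"); cache.request("b"); cache.request("a")  # "a" is now MRU
cache.request("c")   # cache full: evicts "b", the least recently used
```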
- how to provide differentiated services?
- how to make sense of the universe of caches?
- how to design cache policies?
Outline:
- introduction
- TTL caches
- differentiated services: utility-driven caching
- incentivizing caches
- conclusions, future directions
TTL cache
- associate a timer with every content; content is evicted when its timer expires
- versatile tool for modeling caches
- versatile mechanism for cache design/configuration
- two types of TTL caches
TTL non-reset – timer set on cache miss only; hit probability of content i (Poisson requests at rate λ_i, timer T_i):
  h_i = λ_i T_i / (1 + λ_i T_i)
TTL reset – timer reset at every request; hit probability of content i:
  h_i = 1 − e^(−λ_i T_i)
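For Poisson requests at rate λ, the non-reset hit probability is λT/(1 + λT) and the reset one is 1 − e^(−λT); a quick Monte Carlo check of both (my own sketch, with arbitrary λ and T):

```python
import random, math

def simulate_ttl(lam, T, reset, n=200_000, seed=1):
    """Simulate Poisson requests to one content in a TTL cache; return hit ratio."""
    rng = random.Random(seed)
    t, expiry, hits = 0.0, -1.0, 0
    for _ in range(n):
        t += rng.expovariate(lam)       # next request time
        if t < expiry:
            hits += 1
            if reset:
                expiry = t + T          # reset TTL: timer restarts on every hit
        else:
            expiry = t + T              # miss: (re)insert content, start timer
    return hits / n

lam, T = 1.0, 2.0
h_nr = simulate_ttl(lam, T, reset=False)   # ≈ λT/(1+λT) = 2/3
h_r = simulate_ttl(lam, T, reset=True)     # ≈ 1 − e^(−λT) ≈ 0.865
```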
(Fagin, 77)
- cache size B; request rate λ_i for content i
- LRU – model as reset TTL cache with a common timer T_i = T_C
- T_C – cache characteristic time, solving Σ_i (1 − e^(−λ_i T_C)) = B
- asymptotically exact as cache size grows; accurate for small caches as well
- extends to many cache policies
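The characteristic time T_C solving Σ_i (1 − e^(−λ_i T_C)) = B can be computed by bisection; a sketch using the slides' Zipf setting as an assumed example:

```python
import math

def characteristic_time(rates, B):
    """Solve sum_i (1 - exp(-lam_i * T)) = B for T by bisection
    (Che-style characteristic-time approximation); assumes B < len(rates)."""
    occ = lambda T: sum(1 - math.exp(-l * T) for l in rates)
    lo, hi = 0.0, 1.0
    while occ(hi) < B:                  # grow bracket until occupancy exceeds B
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        if occ(mid) < B:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: Zipf(0.8) popularity over 10,000 contents, cache size 1,000
N, alpha, B = 10_000, 0.8, 1_000
rates = [i ** -alpha for i in range(1, N + 1)]
T_C = characteristic_time(rates, B)
hit = [1 - math.exp(-l * T_C) for l in rates]  # approximate LRU hit probabilities
```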
- single cache, size B
- N contents, request rates λ_1, …, λ_N
- h_i: hit probability of content i
- each content i has a utility U_i(h_i), an increasing concave function of its hit probability
- goal: maximize Σ_i U_i(h_i) subject to buffer constraint Σ_i h_i ≤ B
[Figure: users send requests to a cache of size B; misses are forwarded upstream]
- cost/value tradeoff
- fairness implications
- cache markets
Can we use this framework to model existing policies (LRU, FIFO)?
Can we obtain the same statistical behavior as LRU, FIFO using timers? What utilities?
Lagrangian function:
  L(h, β) = Σ_i U_i(h_i) + β (B − Σ_i h_i)
β – dual variable (price of unit cache space)
optimality condition: U_i′(h_i) = β, i.e., h_i = U_i′⁻¹(β)
let U_i(h_i) = w_i log h_i ⟹ h_i ∝ w_i – weighted proportional fairness
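With U_i(h_i) = w_i log h_i, the optimality condition h_i = w_i/β plus the buffer budget gives a closed-form, water-filled allocation; a sketch with made-up weights:

```python
def prop_fair_alloc(weights, B, eps=1e-12):
    """Maximize sum_i w_i*log(h_i)  s.t.  sum_i h_i = B, 0 < h_i <= 1.
    Uncapped optimum is h_i = B*w_i/sum(w); contents that would exceed 1
    are pinned at 1 and the leftover budget is re-split (water-filling).
    Assumes B < len(weights)."""
    h = [None] * len(weights)
    active = set(range(len(weights)))
    budget = float(B)
    while True:
        wsum = sum(weights[i] for i in active)
        alloc = {i: budget * weights[i] / wsum for i in active}
        capped = {i for i in active if alloc[i] > 1.0 + eps}
        if not capped:                 # no hit probability exceeds 1: done
            for i in active:
                h[i] = alloc[i]
            return h
        for i in capped:               # pin the over-allocated contents at 1
            h[i] = 1.0
        budget -= len(capped)
        active -= capped

h = prop_fair_alloc([4.0, 2.0, 1.0, 1.0], B=2.0)  # → [1.0, 0.5, 0.25, 0.25]
```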
- 10,000 contents
- cache size 1000
- Zipf popularity, parameter 0.8
- 10 requests
Q: How do we control hit probabilities? A: TTL cache; control hit probabilities through timers
- dual algorithm
- primal algorithm
- primal-dual algorithm
Dual algorithm, TTL-reset cache:
- set timer from the target hit probability: T_i = −log(1 − h_i)/λ_i, with h_i = U_i′⁻¹(β)
- update dual variable β via gradient descent; update at each request
- estimate request rate λ_i using a sliding window
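A minimal version of such a dual update for log utilities (the step size, clipping, and toy weights are my own assumptions, not the exact algorithm from the talk):

```python
import math

def dual_step(beta, weights, B, gamma=0.01):
    """One dual update for utilities U_i(h) = w_i * log(h):
    pick h_i = U_i'^{-1}(beta) = w_i / beta (clipped to (0, 1]),
    then move beta to shrink the occupancy mismatch sum_i h_i - B."""
    h = [min(w / beta, 1.0) for w in weights]
    beta += gamma * (sum(h) - B)        # gradient step on the dual variable
    return max(beta, 1e-9), h

def timer(h, lam):
    """Reset-TTL timer realizing hit probability h: T = -log(1 - h) / lam."""
    return -math.log(1.0 - h) / lam

beta, weights, B = 1.0, [4.0, 2.0, 1.0, 1.0], 2.0
for _ in range(5_000):
    beta, h = dual_step(beta, weights, B)
# h approaches the weighted proportionally fair point [1.0, 0.5, 0.25, 0.25]
T1 = timer(h[1], lam=1.0)   # timer realizing h[1] for a unit-rate content
```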
- 10,000 contents
- cache size 1000
- Zipf popularity, parameter 0.8
- 10 requests
primal problem: replaces the hard buffer constraint with a soft "cost" penalty – a convex cost function of the cache occupancy; admits a similar-style online algorithm
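The primal variant can be sketched the same way; here the quadratic soft cost, step size, and toy weights are my own choices:

```python
def primal_step(h, weights, B, gamma=0.001, kappa=50.0):
    """One gradient-ascent step on  sum_i w_i*log(h_i) - C(sum_i h_i - B),
    where C(x) = (kappa/2) * max(x, 0)^2 is a soft occupancy cost."""
    excess = max(sum(h) - B, 0.0)
    price = kappa * excess               # C'(excess): marginal occupancy cost
    return [min(max(hi + gamma * (w / hi - price), 1e-6), 1.0)
            for hi, w in zip(h, weights)]

weights, B = [4.0, 2.0, 1.0, 1.0], 2.0
h = [0.5, 0.5, 0.5, 0.5]
for _ in range(20_000):
    h = primal_step(h, weights, B)
# h settles near the proportionally fair split, with a small occupancy
# overshoot (sum(h) slightly above B) because the constraint is soft
```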
- utility-based caching enables differentiated services
- TTL cache provides a flexible mechanism for deploying differentiated services
- simple online algorithms require no a priori information about content popularities or request rates
- framework captures existing policies
- provider-based service differentiation
- monetizing caching
so far: focused on service differentiation
how can SPs (service providers) make money?
- benefit per request hit
- cost per request miss
Key: how should the SP manage its cache?
[Figure: Users → Service Provider (SP) cache server → Content Provider (CP) original content server; a request either hits in the cache or misses and is fetched from the original server]
Contracts specify pricing per content:
- nonrenewable contracts
- renewable contracts
Nonrenewable: on-demand contract, set up upon a cache miss
- potential inefficiency: content evicted once the contract expires ⟹ miss for subsequent request
- how long to cache content?
- value accrual rate to content provider vs. payment rate to cache provider
- Rule: cache if the value accrual rate exceeds the payment rate; then optimal caching duration T* = ∞
Renewable: on-demand contract, renewed on every cache request
- CP pays for the time content spends in cache
- Rule: cache if the value accrual rate exceeds the payment rate; optimal T* same as non-renewable contract
- both contracts occupancy based: pay for time in cache
- renewable contract more flexible – allows contract renegotiation
- results generalize to renewal request processes
Usage-based pricing: on-demand contract, renewed on every cache request
- price charged per request
- Rule: cache if the value per request exceeds the per-request price
Usage-based pricing provides better cache utilization than occupancy-based pricing and better incentivizes the cache provider.
- TTL cache: versatile construct for modeling caches and for cache design/configuration
- TTL combined with utility-based optimization enables differentiated services
Future directions:
- differentiated services in a multi-cache setting
- relaxation of assumptions
- pricing
- unified cache, bandwidth, processor allocation framework