3rd ACM Conference on Information-Centric Networking (ICN 2016)

On Allocating Cache Resources to Content Providers

Weibo Chu, Mostafa Dehghan, Don Towsley, Zhi-Li Zhang
wbchu@nwpu.edu.cn
Northwestern Polytechnical University
Why Resource Allocation in ICN?
Resource allocation is important for networks:
- QoS: delay, throughput, jitter, etc.
- Differentiated services (DiffServ): video, email, instant messaging, etc.
- Fairness among users and applications
- Market and economic considerations

These goals are challenging to realize in ICNs because of in-network caching:
- Content can come from anywhere in the network; there is no end-to-end connection
- Traditional caching policies (e.g., LRU, RND, FIFO) treat content from different providers in a tightly coupled manner
Problem Introduction
We consider allocating the resources of an edge cache among contending content providers (CPs):
- A cache is shared by the users of K content providers
- Users access the files of the CPs through the cache
- Question: how should the cache provider (SP) allocate its resources to realize its management objectives?

Figure: Network model.
Problem Formulation
We propose a cache partitioning approach: the cache provider partitions its cache into slices, with each slice allocated to one CP.

Advantages:
1) Restricts the contention of each CP to its dedicated slice, providing a natural means to tune the performance of each CP
2) Potentially improves overall system performance
3) Easy to implement

Model assumptions:
- A cache of size C operating under the LRU policy
- Each CP k serves N_k distinct files F_k = {f_1^k, f_2^k, ..., f_{N_k}^k} of equal size
- Requests for file f_i^k arrive as a Poisson process with rate λ_i^k
Utility-based Cache Resource Allocation
U_k(h_k): concave and increasing in the hit rate h_k of CP k. Partition the cache into K slices, allocating a slice of size C_k to CP k. Our goal is to maximize the (weighted) sum of utilities over all CPs:

maximize    Σ_{k=1}^{K} w_k U_k(h_k(C_k))
subject to  Σ_{k=1}^{K} C_k ≤ C
            C_k ≥ 0,  k = 1, 2, ..., K        (1)
Cache Characteristic Time (H. Che, 2001)

For each CP k accessing an LRU cache of size C_k, the hit rate h_k can be approximated as

h_k = Σ_{i=1}^{N_k} λ_i^k (1 − e^{−λ_i^k T_k}),

where T_k is a constant, the cache characteristic time, that satisfies

Σ_{i=1}^{N_k} (1 − e^{−λ_i^k T_k}) = C_k.
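The fixed-point equation for T_k has a monotone left-hand side, so it can be solved numerically, e.g. by bisection. A minimal Python sketch (function names are illustrative, not from the paper):

```python
import math

def characteristic_time(rates, cache_size, tol=1e-10):
    """Solve sum_i (1 - exp(-lam_i * T)) = C_k for the characteristic
    time T_k by bisection.  `rates` are one CP's per-file Poisson request
    rates; requires 0 < cache_size < len(rates), since the expected
    occupancy approaches N_k as T grows.
    """
    occupancy = lambda T: sum(1.0 - math.exp(-lam * T) for lam in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:   # bracket the root
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occupancy(mid) < cache_size else (lo, mid)
    return 0.5 * (lo + hi)

def lru_hit_rate(rates, cache_size):
    """Che approximation of the aggregate LRU hit rate h_k."""
    T = characteristic_time(rates, cache_size)
    return sum(lam * (1.0 - math.exp(-lam * T)) for lam in rates)
```

A handy sanity check: for uniform rates λ_i = λ, the fixed-point equation gives occupancy C_k = N_k(1 − e^{−λT_k}), so h_k = λ·C_k exactly.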
Important Properties
Theorem 1: Partitioning the cache yields a performance gain compared to sharing it.

Proof idea: for the shared cache, the characteristic time T satisfies

C = Σ_{k=1}^{K} Σ_{i∈F_k} (1 − e^{−λ_i^k T}).

Creating partitions C_k = Σ_{i∈F_k} (1 − e^{−λ_i^k T}) provides the same performance as sharing, so the optimal partition can only do better.

Theorem 2: h_k is concave in C_k, and problem (1) is convex.
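Since the objective in (1) is separable and concave in the C_k (Theorem 2), a greedy marginal allocation gives a reasonable numerical sketch of the optimal partition. Illustrative Python under the Che approximation for the LRU hit rate; none of these function names come from the paper, and the step granularity trades accuracy for speed:

```python
import math

def characteristic_time(rates, c, tol=1e-10):
    """Solve sum_i (1 - exp(-lam_i * T)) = c for the characteristic time T."""
    occupancy = lambda T: sum(1.0 - math.exp(-lam * T) for lam in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < c:
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occupancy(mid) < c else (lo, mid)
    return 0.5 * (lo + hi)

def hit_rate(rates, c):
    """Che approximation of the aggregate LRU hit rate for slice size c."""
    if c <= 0.0:
        return 0.0
    T = characteristic_time(rates, c)
    return sum(lam * (1.0 - math.exp(-lam * T)) for lam in rates)

def greedy_partition(rate_lists, utils, C, step=1.0):
    """Hand out the cache in `step`-sized units; each unit goes to the CP
    whose (weighted) utility increases the most.  For a separable concave
    objective such as problem (1), this is near-optimal up to the step size.
    """
    K = len(rate_lists)
    alloc = [0.0] * K
    val = [utils[k](hit_rate(rate_lists[k], 0.0)) for k in range(K)]
    for _ in range(int(round(C / step))):
        gains = [utils[k](hit_rate(rate_lists[k], alloc[k] + step)) - val[k]
                 for k in range(K)]
        best = max(range(K), key=gains.__getitem__)
        alloc[best] += step
        val[best] += gains[best]
    return alloc
```

With identity utilities and uniform per-file rates, the CP with the higher rate wins every unit, as expected from h_k = λ·C_k.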
Some Numerical Results
Basic setting:
- 2 CPs competing for 1 cache
- File populations: N_1 = 2 × 10^5, N_2 = 1 × 10^6
- Zipf popularity distributions: α_1 = 1.2 and α_2 = 0.8
- Request rates: λ_1 = 1500, λ_2 = 1000
- Utility functions: U_1(h) = U_2(h) = h
- Weights: w_1 = w_2 = 1
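As a sketch, the per-file request rates of this setting can be generated from a Zipf law (illustrative Python; the helper name is ours, not the paper's):

```python
def zipf_rates(n_files, alpha, total_rate):
    """Per-file Poisson rates lambda_i proportional to 1/i**alpha,
    scaled so that they sum to the CP's total request rate."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_files + 1)]
    norm = sum(weights)
    return [total_rate * w / norm for w in weights]

# The slide's setting: N1 = 2e5 files at alpha = 1.2, N2 = 1e6 at alpha = 0.8.
rates1 = zipf_rates(200_000, 1.2, 1500.0)
rates2 = zipf_rates(1_000_000, 0.8, 1000.0)
```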
Efficacy of Cache Partitioning
Figure: Efficacy of cache partitioning. Aggregate utility obtained by partitioning versus sharing, as a function of cache size (10^2 to 10^4). (a) U_1(h_1) = h_1; (b) U_1(h_1) = log h_1. In both cases, U_2(h_2) = h_2.
Efficacy of Cache Partitioning
Figure: Cache size C_1 allocated to CP1 when partitioning the cache, compared to the average cache storage consumed by CP1's files when sharing the cache, as a function of cache size (10^2 to 10^4). (a) U_1(h_1) = h_1; (b) U_1(h_1) = log h_1. In both cases, U_2(h_2) = h_2.
Parameter Impact of Request Rate
Parameter Impact of Weight
Figure: CP1 hit rate, CP2 hit rate, and the cache fraction allocated to CP1, as the weight varies.
Parameter Impact of Skewness
Figure: CP1 hit rate, CP2 hit rate, and the cache fraction allocated to CP1, as the Zipf skewness varies.
Parameter Impact of File Population
Figure: CP1 hit rate, CP2 hit rate, and the cache fraction allocated to CP1, as the file population varies.
Implications
- Use different utility functions to achieve different notions of fairness: U_k(h_k) = h_k^{1−β} / (1 − β); β → 1 gives proportional fairness, β → ∞ gives max-min fairness
- Online (decentralized) allocation: Kelly's framework
- Adaptive control mechanisms: respond to changing network traffic
- Alternative formulation: delay optimization
- Apply to policies other than LRU
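The fairness family mentioned above is the standard β-fair (α-fair) utility; a minimal Python sketch, not code from the paper:

```python
import math

def beta_fair_utility(h, beta):
    """beta-fair utility: U(h) = h**(1 - beta) / (1 - beta) for beta != 1,
    and U(h) = log(h) in the limit beta -> 1 (proportional fairness).
    beta = 0 recovers the plain hit rate; large beta approaches max-min.
    """
    if h <= 0.0:
        raise ValueError("hit rate must be positive")
    if beta == 1.0:
        return math.log(h)
    return h ** (1.0 - beta) / (1.0 - beta)
```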