SLIDE 1

3rd ACM Conference on Information-Centric Networking (ICN 2016)

On Allocating Cache Resources to Content Providers

Weibo Chu, Mostafa Dehghan, Don Towsley, Zhi-Li Zhang
wbchu@nwpu.edu.cn
Northwestern Polytechnical University

SLIDE 2

Why Resource Allocation in ICN?

Resource allocation is important for networks:

• QoS: delay, throughput, jitter, etc.
• DiffServ: video, email, instant messaging, etc.
• Fairness among users and applications
• Market and economics

These goals are challenging to realize in ICNs due to in-network caching:

• Content can be served from anywhere in the network; there is no end-to-end connection
• Traditional caching policies (e.g., LRU, RND, FIFO) treat content of different providers in a tightly coupled manner

SLIDE 3

Problem Introduction

We consider how to allocate the resources of an edge cache among contending content providers (CPs).

• A single cache is shared by the users of K content providers; users access CP files through the cache.
• Question: how should the cache provider (SP) allocate its resources to realize its management goals?

Figure: Network Model.

SLIDE 4

Problem Formulation

We propose a cache partitioning approach: the cache provider partitions its cache into slices, with each slice allocated to one CP.

Advantages:

• Restricts the contention of each CP to its dedicated slice, and hence provides a natural means to tune per-CP performance
• Potentially improves overall system performance
• Easy to implement

Model assumptions:

• A cache of size C managed by the LRU policy
• Each CP $k$ serves $N_k$ files $\mathcal{F}_k = \{f_1^k, f_2^k, \ldots, f_{N_k}^k\}$ of equal size
• Requests for file $f_i^k$ arrive as a Poisson process with rate $\lambda_i^k$

SLIDE 5

Utility-based Cache Resource Allocation

• $U_k(h_k)$: concave, increasing in the hit rate $h_k$ of CP $k$
• Partition the cache into $K$ slices, and allocate one of size $C_k$ to CP $k$
• Our goal: maximize the (weighted) sum of utilities over all CPs:

\[
\begin{aligned}
\text{maximize} \quad & \sum_{k=1}^{K} w_k\, U_k\big(h_k(C_k)\big) \\
\text{subject to} \quad & \sum_{k=1}^{K} C_k \le C, \quad C_k \ge 0, \; k = 1, 2, \ldots, K
\end{aligned}
\tag{1}
\]

SLIDE 6

Cache Characteristic Time (H. Che, 2001)

For each CP $k$ accessing an LRU cache of size $C_k$, the hit rate $h_k$ can be approximated as

\[
h_k = \sum_{i=1}^{N_k} \lambda_i^k \big(1 - e^{-\lambda_i^k T_k}\big),
\]

where $T_k$ is a constant, the cache characteristic time, satisfying

\[
\sum_{i=1}^{N_k} \big(1 - e^{-\lambda_i^k T_k}\big) = C_k.
\]
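These two equations translate directly into code. Below is a minimal numerical sketch, assuming NumPy and SciPy; the function names and the doubling bracket for the root search are our own illustrative choices, not code from the paper:

```python
import numpy as np
from scipy.optimize import brentq

def characteristic_time(lambdas, cache_size):
    """Solve sum_i (1 - exp(-lambda_i * T)) = cache_size for T."""
    g = lambda T: np.sum(1.0 - np.exp(-lambdas * T)) - cache_size
    # g is increasing in T with g(0) = -cache_size < 0, so grow an
    # upper bracket until g turns positive, then find the root.
    upper = 1.0
    while g(upper) < 0.0:
        upper *= 2.0
    return brentq(g, 0.0, upper)

def hit_rate(lambdas, cache_size):
    """Aggregate LRU hit rate of one CP under the characteristic-time approximation."""
    if cache_size <= 0:
        return 0.0
    if cache_size >= len(lambdas):  # the whole catalog fits in the slice
        return float(np.sum(lambdas))
    T = characteristic_time(lambdas, cache_size)
    return float(np.sum(lambdas * (1.0 - np.exp(-lambdas * T))))
```

Here `lambdas` is the vector of per-file request rates $\lambda_i^k$ of one CP, and the returned value is a rate of hits per unit time, matching the formula above.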

SLIDE 7

Important Properties

Theorem 1: Partitioning the cache provides a performance gain compared to sharing it.

Proof idea: for the shared cache, the characteristic time $T$ satisfies

\[
C = \sum_{k=1}^{K} \sum_{i \in \mathcal{F}_k} \big(1 - e^{-\lambda_i^k T}\big).
\]

It can be seen that creating partitions of sizes

\[
C_k = \sum_{i \in \mathcal{F}_k} \big(1 - e^{-\lambda_i^k T}\big)
\]

provides the same performance as sharing; optimizing over all feasible partitions can therefore only do at least as well.

Theorem 2: $h_k$ is concave in $C_k$, and problem (1) is convex.
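Because Theorem 2 makes problem (1) convex, a generic constrained solver is enough to compute an optimal partition. A self-contained sketch under the same assumptions as the previous one (NumPy/SciPy; the choice of SLSQP is ours, not the paper's):

```python
import numpy as np
from scipy.optimize import brentq, minimize

def hit_rate(lambdas, c):
    """LRU hit rate of one CP with slice size c (characteristic-time approximation)."""
    if c <= 0:
        return 0.0
    if c >= len(lambdas):
        return float(np.sum(lambdas))
    g = lambda T: np.sum(1.0 - np.exp(-lambdas * T)) - c
    upper = 1.0
    while g(upper) < 0.0:
        upper *= 2.0
    T = brentq(g, 0.0, upper)
    return float(np.sum(lambdas * (1.0 - np.exp(-lambdas * T))))

def allocate(cp_lambdas, C, utilities, weights):
    """Problem (1): maximize sum_k w_k U_k(h_k(C_k)) s.t. sum_k C_k <= C, C_k >= 0."""
    K = len(cp_lambdas)
    def neg_objective(x):
        return -sum(w * U(hit_rate(lam, c))
                    for w, U, lam, c in zip(weights, utilities, cp_lambdas, x))
    res = minimize(neg_objective, x0=np.full(K, C / K),
                   bounds=[(0.0, C)] * K,
                   constraints=[{"type": "ineq", "fun": lambda x: C - np.sum(x)}],
                   method="SLSQP")
    return res.x  # slice sizes C_1, ..., C_K
```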

SLIDE 8

Some Numerical Results

Basic setting:

• 2 CPs competing for one cache
• File populations: N1 = 2 × 10^5, N2 = 1 × 10^6
• Zipf popularity: α1 = 1.2, α2 = 0.8
• Request rates: λ1 = 1500, λ2 = 1000
• Utility functions: U1(h) = U2(h) = h
• Weights: w1 = w2 = 1
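The slides do not show code for this setting, but as a rough sketch it might be instantiated for the `allocate` function above as follows (the Zipf construction and the example cache size are our assumptions):

```python
import numpy as np

def zipf_rates(N, alpha, total_rate):
    """Split an aggregate Poisson rate across N files by Zipf(alpha) popularity."""
    weights = 1.0 / np.arange(1, N + 1) ** alpha
    return total_rate * weights / weights.sum()

lam1 = zipf_rates(200_000, 1.2, 1500.0)    # CP 1
lam2 = zipf_rates(1_000_000, 0.8, 1000.0)  # CP 2

# With U1 = U2 = identity and w1 = w2 = 1, the earlier `allocate` sketch
# could be called, e.g., with a cache of 10,000 files:
#   C1, C2 = allocate([lam1, lam2], C=10_000,
#                     utilities=[lambda h: h] * 2, weights=[1.0, 1.0])
```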

SLIDE 9

Efficacy of Cache Partitioning

Figure: Efficacy of cache partitioning; aggregate utility obtained by partitioning versus sharing, as a function of cache size (10^2 to 10^4). (a) U1(h1) = h1; (b) U1(h1) = log h1. In both cases, U2(h2) = h2.
SLIDE 10

Efficacy of Cache Partitioning

Figure: Cache size allocated to CP1 under partitioning, compared with the average cache storage consumed by CP1's files under sharing. (a) U1(h1) = h1; (b) U1(h1) = log h1. In both cases, U2(h2) = h2.

SLIDE 11

Parameter Impact: Request Rate

SLIDE 12

Parameter Impact: Weight

Figure: CP1 hit rate, CP2 hit rate, and the cache fraction allocated to CP1 as the weight varies.

SLIDE 13

Parameter Impact: Skewness

Figure: CP1 hit rate, CP2 hit rate, and the cache fraction allocated to CP1 as the Zipf skewness varies.

SLIDE 14

Parameter Impact: File Population

Figure: CP1 hit rate, CP2 hit rate, and the cache fraction allocated to CP1 as the file population varies.

SLIDE 15

Implications

Use different utility functions to achieve different notions of fairness:

\[
U_k(h_k) = \frac{h_k^{1-\beta}}{1-\beta}
\]

$\beta \to 1$ yields proportional fairness; $\beta \to \infty$ yields max-min fairness.

Further directions:

• Online (decentralized) allocation via Kelly's framework
• Adaptive control mechanisms that respond to changing network traffic
• Alternative formulation: delay optimization
• Apply to caching policies other than LRU
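As a small sketch of the first point, the β-fair family can be written as a utility factory and plugged into the earlier `allocate` sketch; treating β = 1 as the log limit is the standard convention, assumed here rather than taken from the slides:

```python
import numpy as np

def beta_fair(beta):
    """beta-fair utility: U(h) = h**(1 - beta) / (1 - beta), with log h at beta = 1."""
    if beta == 1.0:
        return np.log
    return lambda h: h ** (1.0 - beta) / (1.0 - beta)

# beta = 0 recovers hit-rate maximization U(h) = h; larger beta trades total
# hit rate for more equal hit rates across CPs, approaching max-min fairness.
```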

SLIDE 16

Thank you! Q & A