SLIDE 1

Scaled VIP Algorithms for Joint Dynamic Forwarding and Caching in Named Data Networks

Ying Cui Shanghai Jiao Tong University, Shanghai, China

Joint work with Fan Lai, Feng Qiu, Wenjie Bian, Edmund Yeh

16/9/28 1

SLIDE 2

Background


  • Promising future Internet architecture: NDN/CCN
– replace connection-based model with content-centric model
– reflect how the Internet is primarily used today
– improve efficiency of content dissemination

  • Content delivery in NDN
– two packet types: Interest Packet (IP) and Data Packet (DP)
– three data structures in nodes: Forwarding Information Base (FIB), Pending Interest Table (PIT), Content Store

  • Goal of NDN
– optimally utilize bandwidth and storage resources
– jointly design forwarding and caching
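The PIT is what makes interest suppression possible: a node forwards only the first Interest for a name and aggregates later requests for the same object. A minimal sketch (class and method names are illustrative, not from any NDN implementation):

```python
# Sketch of PIT-based interest suppression at one NDN node.
# All names here are illustrative, not from the NDN codebase.

class Node:
    def __init__(self):
        self.pit = {}            # object name -> set of requesting faces
        self.content_store = {}  # object name -> cached data

    def on_interest(self, name, in_face):
        """Return True iff the Interest must be forwarded upstream."""
        if name in self.content_store:
            return False          # served from cache, no forwarding
        if name in self.pit:
            self.pit[name].add(in_face)  # duplicate Interest: suppressed
            return False
        self.pit[name] = {in_face}       # first request: forward it
        return True

node = Node()
node.on_interest("/video/a", 1)  # first Interest: forwarded
node.on_interest("/video/a", 2)  # second Interest: suppressed by the PIT
```

This suppression is exactly why the measured Interest traffic understates demand, which motivates the VIP construction on the following slides.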

SLIDE 3

Previous Work

Separate forwarding and caching
– [Chai'12]: caching based on concept of betweenness centrality
– [Xie'12]: single-path routing and caching
– [Ming'12]: cooperative caching schemes (without joint forwarding design)
– [Amble'11]: single-hop throughput-optimal routing and caching
– [Yi'12]: adaptive multipath forwarding scheme (without joint caching design)

Joint forwarding and caching
– [Yeh'14]: VIP framework, multi-hop joint dynamic forwarding and caching

SLIDE 4
VIP Framework

  • Virtual interest packets (VIPs)
– capture (measured) demand for respective data objects
– this demand is unavailable in the actual network due to interest suppression

  • Virtual control plane
– operates on VIPs at the data object level
– develop throughput-optimal control operating on VIPs

  • Actual plane
– handles Interest and Data Packets at the data chunk level
– specify efficient actual control using VIP flow rates and queue lengths

SLIDE 5
Motivation and Challenge

  • Problem in the previous VIP framework
– virtual control plane: captures demand via VIPs, but does not reflect interest suppression
– actual network: has interest suppression, but no demand information

  • Aim: further improve the performance of the previous VIP algorithms by improving the virtual control plane
– provide sufficient demand information
– reflect more accurately the actual interest traffic with suppression

  • Challenge
– how to capture both demand and the interest suppression effect?
– how to maintain throughput optimality when reflecting interest suppression?

  • Proposed approach: VIP scaling

SLIDE 6

Network Model

  • General multi-hop network (bi-directed graph)
– node set: N nodes, index n, cache size Ln
– link set: L directed links, index (a,b), capacity Cab
– data set: K data objects, index k, size D

  • Discrete-time system
– time slot index t, slot duration 1

  • Exogenous data request arrivals
– per-slot arrivals
– long-term arrival rate λ

  • Content sources
– content source for data object k is src(k)
– sources are fixed; caching points vary in time

SLIDE 7

Virtual Control Plane

  • Virtual interest packet (VIP)
– one corresponding VIP is generated per exogenous request arrival

  • VIP counts
– represent VIP queue size (one VIP queue is maintained per data object)

  • VIP transmission rates
– do not involve actual packets being sent; DPs travel on the reverse path

  • Caching state (1: cached, 0: not cached)
– assume a node can gain access to any data object when there is a VIP

  • I/O rate of storage disk
– maximum rate of producing copies of cached objects

SLIDE 8

VIP Scaling

  • Scaling parameter: θ (reduces to the previous results if θ = 1)
– reflects the average per-slot interest suppression effect (on the VIP arrival rate)
– approximate by a predetermined constant
– or estimate using e.g. an Exponential Moving Average (EMA)
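The EMA option amounts to one update per slot. A minimal sketch, assuming θ is tracked as a smoothed per-slot suppression ratio; the smoothing factor and the observed ratios are illustrative, not values from the talk:

```python
def ema_update(theta, ratio, alpha=0.1):
    """One EMA step: blend the newly observed per-slot suppression
    ratio into the running estimate of theta. alpha is an illustrative
    smoothing factor, not a value from the talk."""
    return (1 - alpha) * theta + alpha * ratio

# Track theta across a few slots of (hypothetical) observed ratios.
theta = 1.0
for observed in [1.4, 1.6, 1.5]:
    theta = ema_update(theta, observed)
```

A fixed point of the update is the long-run average ratio, which is what the scaling parameter is meant to approximate.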

  • Dynamics of scaled VIPs
– outgoing VIPs, scaled-down incoming VIPs, and VIPs sinking due to caching
– provide a downward gradient from the entry points of data requests to content sources and caching nodes
– capture to some extent demand information; reflect to some extent interest suppression
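The three effects above (outgoing, scaled-down incoming, caching sink) can be sketched as one slot of a single scaled VIP queue. This is a hedged reconstruction following the Yeh'14-style queue update, with the scaling applied to incoming VIPs; the paper's exact expression may differ:

```python
def scaled_vip_update(V, A, mu_in, mu_out, theta, r, cached):
    """One-slot update of one scaled VIP queue at one node.
    Illustrative reconstruction, not the paper's exact formula.
      V      current VIP count
      A      exogenous VIP arrivals this slot
      mu_in  VIPs received from neighbors (scaled down by theta)
      mu_out VIPs transmitted to neighbors
      theta  scaling parameter (>= 1 models interest suppression)
      r      cache read rate (VIPs drained per slot if cached)
      cached 1 if the object is cached at this node, else 0
    """
    drained = max(V - mu_out, 0)   # outgoing VIPs leave first
    arrived = A + mu_in / theta    # incoming VIPs are scaled down
    return max(drained + arrived - r * cached, 0)  # caching sinks VIPs

V_next = scaled_vip_update(V=10, A=2, mu_in=3, mu_out=4,
                           theta=1.5, r=5, cached=1)
```

With θ = 1 the incoming term is unscaled and the update reduces to the previous (unscaled) VIP dynamics, consistent with the scaling-parameter slide.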

SLIDE 9

Stability Region with VIP Scaling


  • Interpretation
– the region grows as θnk increases
– superset of the previous VIP stability region

For any λ in int(Λ), there exists some policy that guarantees the scaled VIP queues are stable, without knowing the exact value of λ. (Λ is characterized in terms of the long-term VIP transmission rates and long-term data caching probabilities.)

SLIDE 10

Scaled VIP Algorithm

Forwarding: For each data object k and each link (a,b) ∈ L, choose the object with the maximum backpressure weight.

  • Interpretation
– at time t, for object k and link (a,b), allocate the entire normalized reverse link capacity Cba/D to transmit VIPs for the object with the maximum backpressure (BP) weight
– balance out VIP counts (spread VIPs from high potential to low potential)
– capture interest suppression at the receiving node

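The per-link decision can be sketched as below, assuming the scaled BP weight takes the form V_a[k] − θ·V_b[k] (an illustrative reading of "capture interest suppression at the receiving node"; the exact expression is in the paper):

```python
def choose_object(V_a, V_b, theta):
    """For link (a,b), pick the data object with the maximum scaled
    backpressure weight. The weight form V_a[k] - theta * V_b[k] is an
    assumption for illustration, not the paper's exact formula.
    Returns (index of best object, its weight)."""
    weights = [V_a[k] - theta * V_b[k] for k in range(len(V_a))]
    k_star = max(range(len(weights)), key=lambda k: weights[k])
    return k_star, weights[k_star]

# The entire normalized reverse capacity C_ba / D is allocated to
# k_star when its weight is positive; otherwise nothing is forwarded.
k_star, w = choose_object(V_a=[9, 4, 7], V_b=[2, 1, 6], theta=1.5)
```

Note the work per link is O(K), so scanning all links gives the O(N²K) complexity quoted on the features slide.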

SLIDE 11

Scaled VIP Algorithm

Caching: At each node n, choose which data objects to cache.

  • Interpretation
– knapsack problem: at time t, node n allocates its cache space to the Ln/D data objects with the highest VIP counts
– balance out VIP counts (sink VIPs with high potential)
– identical to the caching decision of the previous algorithm, without VIP scaling
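With equal-size objects, the knapsack reduces to a top-k selection over VIP counts; a minimal sketch:

```python
def cache_decision(vip_counts, slots):
    """Fill the cache with the `slots` (= Ln/D) objects that currently
    have the highest VIP counts: the equal-size-object case, where the
    knapsack degenerates to a top-k selection.
    Returns the 0/1 caching state vector."""
    order = sorted(range(len(vip_counts)), key=lambda k: -vip_counts[k])
    keep = set(order[:slots])
    return [1 if k in keep else 0 for k in range(len(vip_counts))]

# Node with room for 2 objects: cache the two highest VIP counts.
state = cache_decision(vip_counts=[5, 12, 3, 9], slots=2)
```

Because the decision only reads local VIP counts, it needs no message exchange, matching the "distributed design" feature on a later slide.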

SLIDE 12

Features of Scaled VIP Algorithm

  • Joint design
– forwarding and caching both based on VIP counts

  • Dynamic design
– adaptive to varying VIP counts (instantaneous congestion)

  • Distributed design
– forwarding: exchange VIP counts with neighbors
– caching: use local VIP counts only

  • Complexity
– same order as the previous VIP algorithm: O(N²K)

  • Extend Lyapunov drift techniques

– incorporate scaling
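The drift argument presumably follows the standard quadratic Lyapunov route, extended to the scaled queues; a generic sketch of the drift condition (the constants B, ε and the exact scaled form are in the paper):

```latex
% Quadratic Lyapunov function over the scaled VIP queues, and the
% standard negative-drift condition implying queue stability.
L(\mathbf{V}(t)) = \tfrac{1}{2} \sum_{n,k} \bigl(V_n^k(t)\bigr)^2,
\qquad
\Delta(\mathbf{V}(t)) \triangleq
\mathbb{E}\bigl[L(\mathbf{V}(t+1)) - L(\mathbf{V}(t)) \,\big|\, \mathbf{V}(t)\bigr]
\le B - \epsilon \sum_{n,k} V_n^k(t)
```

A bound of this shape yields the stated upper bound on the average total number of scaled VIPs on the throughput-optimality slide.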

  • Develop algorithms for the actual plane from the scaled VIP algorithm, using the mapping in the previous VIP framework


SLIDE 13

Throughput Optimality


  • Interpretation
– adaptively stabilizes all VIP queues in the virtual plane for any λ in int(Λ)
– exploits both bandwidth and storage resources to maximally balance out the VIP load and prevent congestion buildup
– the upper bound on the average total number of scaled VIPs is smaller than the previous upper bound for VIPs without scaling

SLIDE 14

Experimental Evaluation


  • Baselines

– previous VIP algorithm
– six other baselines
– forwarding: shortest path, potential-based routing
– caching decision: LCE, LCD, Age-based, LFU
– caching replacement: LRU, FIFO, BIAS, UNIF, Age-based, LFU

  • Performance metric: delay between the creation time of an Interest Packet and the arrival time of the requested Data Packet

  • Arrival models and parameters

– request arrivals follow a Poisson process with the same rate λ (requests/node/slot)
– content popularity follows a Zipf distribution (exponent 0.75); data object k is requested with probability pk
– Interest Packet size 125 B, chunk size 50 KB, object size 5 MB
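The Zipf(0.75) popularity over K objects can be generated directly; a short sketch:

```python
def zipf_popularity(K, s=0.75):
    """Zipf popularity over K objects with exponent s: the probability
    of requesting the k-th most popular object is proportional to
    1 / k^s, normalized to sum to 1."""
    weights = [1.0 / (k ** s) for k in range(1, K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# The experiments use K = 3000 objects and exponent 0.75.
p = zipf_popularity(K=3000)
```

Each exogenous request then samples an object k with probability pk, and arrivals per node per slot follow the Poisson process with rate λ described above.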


SLIDE 15

DTelekom Network

3000 Objects, cache size 2GB (400 objects), link capacity 500 Mb/s, all nodes generate requests and can be sources

Improvement over previous VIP algorithm:
– constant θ = 1.5: 36% (λ = 40)
– EMA θ: 57% (λ = 60)

The previous VIP algorithm and the scaled VIP algorithms with different scaling parameters achieve better delay than the six baseline schemes. A large θ overemphasizes suppression and underestimates demand, leading to performance degradation.


Scaled VIP algorithms with θ = 1.5 and EMA θnk(t) greatly improve the performance of the VIP algorithm.

SLIDE 16

GEANT Network

3000 Objects, cache size 2GB (400 objects), link capacity 500 Mb/s, all nodes generate requests and can be sources

Improvement over previous VIP algorithm:
– constant θ = 1.5: 47% (λ = 40)
– EMA θ: 58% (λ = 40)


SLIDE 17

Abilene Network

3000 objects, cache size 5 GB (1000 objects), link capacity 500 Mb/s, all nodes generate requests and can be sources

Improvement over previous VIP algorithm:
– constant θ = 1.5: 31% (λ = 60)
– EMA θ: 40% (λ = 60)


SLIDE 18

Service Network


3000 objects, cache size 5 GB (1000 objects), link capacity 500 Mb/s; consumer nodes generate requests, and Node 1 is the source node.

Improvement over previous VIP algorithm:
– constant θ = 1.5: 31% (λ = 40)
– EMA θ: 58% (λ = 30)


SLIDE 19

Summary


  • Propose a modified VIP framework based on scaled VIPs

– capture both demand and interest suppression

  • Characterize stability region with VIP scaling
  • Develop scaled forwarding and caching algorithm
  • Prove throughput optimality of scaled VIP algorithm
  • Demonstrate superior delay performance through experimental results
  • Can be extended to incorporate congestion control
SLIDE 20

Reference

[Yeh'14] E. Yeh, T. Ho, Y. Cui, M. Burd, R. Liu, and D. Leong. VIP: A framework for joint dynamic forwarding and caching in named data networks. In Proceedings of the 1st ACM Conference on Information-Centric Networking (ICN '14), pages 117–126, New York, NY, USA, 2014.

[Amble'11] M. M. Amble, P. Parag, S. Shakkottai, and L. Ying. Content-aware caching and traffic management in content distribution networks. In Proceedings of IEEE INFOCOM 2011, pages 2858–2866, Shanghai, China, Apr. 2011.

[Xie'12] H. Xie, G. Shi, and P. Wang. TECC: Towards collaborative in-network caching guided by traffic engineering. In Proceedings of IEEE INFOCOM 2012 Mini-Conference, pages 2546–2550, Orlando, FL, USA, Mar. 2012.

[Chai'12] W. Chai, D. He, I. Psaras, and G. Pavlou. Cache "less for more" in information-centric networks. In Proceedings of IFIP Networking 2012, pages 27–40, Berlin, Heidelberg, 2012.

[Ming'12] Z. Ming, M. Xu, and D. Wang. Age-based cooperative caching in information-centric networks. In IEEE INFOCOM Workshops, pages 268–273, Mar. 2012.

[Yi'12] C. Yi, A. Afanasyev, L. Wang, B. Zhang, and L. Zhang. Adaptive forwarding in named data networking. SIGCOMM Computer Communication Review, 42(3):62–67, June 2012.

