Scaled VIP Algorithms for Joint Dynamic Forwarding and Caching in Named Data Networks
Ying Cui Shanghai Jiao Tong University, Shanghai, China
Joint work with Fan Lai, Feng Qiu, Wenjie Bian, Edmund Yeh
2016/9/28
– replace the connection-based model with a content-centric model
– reflect how the Internet is primarily used today
– improve the efficiency of content dissemination
– two packet types: Interest Packet (IP) and Data Packet (DP)
– three data structures per node: Forwarding Information Base (FIB), Pending Interest Table (PIT), and Content Store (CS)
– optimally utilize bandwidth and storage resources
– jointly design forwarding and caching
Separate Forwarding and Caching
[Chai'12]: caching based on the concept of betweenness centrality
[Xie'12]: single-path routing and caching
[Ming'12]: cooperative caching schemes (without joint forwarding design)
[Amble'11]: single-hop throughput
[Yi'12]: adaptive multipath forwarding scheme (without joint caching design)
Joint Forwarding and Caching
[Yeh'14]: VIP framework, multi-hop joint dynamic forwarding and caching
– capture the (measured) demand for the respective data objects
– this demand is unavailable in the actual network, because interest suppression (PIT aggregation of duplicate Interests) hides it
– operate on VIPs at the data-object level
– develop a throughput-optimal control operating on VIPs
– handle Interest and Data Packets at the data-chunk level
– specify efficient actual control using VIP flow rates and queue lengths
– provide sufficient demand information
– reflect actual Interest traffic (with suppression) more accurately
– how to capture both demand and the interest suppression effect?
– how to maintain throughput optimality while reflecting interest suppression?
– virtual control plane: captures demand via VIPs, but does not reflect interest suppression
– actual network: has interest suppression, but lacks demand information
VIP scaling
– node set N: N nodes, index n, cache size L_n
– link set L: L directed links, index (a,b), capacity C_ab
– data set K: K data objects, index k, each of size D
– time slot index t, slot duration 1
– per-slot exogenous request arrivals A_n^k(t) at node n for object k
– long-term arrival rate λ_n^k = lim_{t→∞} (1/t) Σ_{τ=1}^{t} A_n^k(τ)
– content source for data object k is src(k)
– fixed sources; caching points vary over time
– one VIP is generated for each exogenous request arrival
– V_n^k(t): VIP queue size at node n for object k (one VIP queue per object)
– VIPs do not involve actual packets being sent; DPs travel on the reverse path
– VIP queue evolution (as in [Yeh'14]): V_n^k(t+1) ≤ (max[V_n^k(t) − Σ_b μ_nb^k(t), 0] + A_n^k(t) + Σ_a μ_an^k(t) − r_n s_n^k(t))^+
– assume a node can gain access to any data object for which it holds a VIP
– r_n: maximum rate (VIPs/slot) at which node n can produce copies of cached objects
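As a rough sketch, the per-slot VIP queue update at one node for one object could look like this (variable names and the exact update form are illustrative, following the VIP dynamics of [Yeh'14]):

```python
def vip_queue_update(V, A, mu_out, mu_in, r, s):
    """One-slot VIP queue update at a single node for a single object.

    V: current VIP count, A: exogenous VIP arrivals this slot,
    mu_out / mu_in: VIPs transmitted to / received from neighbors,
    r: max rate of producing copies of a cached object,
    s: 1 if the object is cached at this node, else 0.
    """
    # Drain by outgoing transmissions, add exogenous and endogenous
    # arrivals, then sink VIPs at rate r while the object is cached.
    V_next = max(V - mu_out, 0) + A + mu_in - r * s
    return max(V_next, 0)
```

The outer clipping at zero reflects that a cached object can sink at most the VIPs actually present in the queue.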
– θ_n^k: reflects the average per-slot interest suppression effect on the VIP arrival rate
– approximate by a predetermined constant, or estimate online using e.g. an Exponential Moving Average (EMA)
– scaled VIP counts provide a downward gradient from the entry points of data requests toward the content source and caching nodes
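A minimal sketch of the EMA option, assuming the per-slot suppression ratio is measured as exogenous request arrivals over Interests actually forwarded (the smoothing weight `alpha` is illustrative; the talk does not fix one):

```python
def update_theta_ema(theta, measured_suppression, alpha=0.125):
    """EMA estimate of the scaling parameter theta_n^k(t).

    theta: previous estimate,
    measured_suppression: current-slot suppression ratio (assumed
    measured locally), alpha: smoothing weight (hypothetical value).
    """
    # Standard exponential moving average update.
    return (1 - alpha) * theta + alpha * measured_suppression
```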
– scaled-down incoming VIPs and sinking of VIPs due to caching
– capture demand information to some extent
– reflect interest suppression to some extent
– stability region grows as θ_n^k increases
– superset of the previous VIP stability region
For any λ in int(Λ), there exists a policy that keeps the scaled VIP queues stable without knowing the exact value of λ, by appropriate choice of the long-term VIP transmission rates and long-term data caching probabilities.
Forwarding: for each data object k and each link (a,b) ∈ L, choose the VIP transmission rates μ_ab^k(t) by maximizing the backpressure weight over the scaled VIP counts.
– at each t, for each k and (a,b), allocate the entire normalized reverse link capacity C_ba/D to transmit VIPs for the object with the maximum backpressure (BP) weight
– balance out VIP counts (spread VIPs from high-potential nodes to low-potential ones)
– capture interest suppression at the receiving node
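The per-link choice can be sketched as follows, assuming the BP weight is the difference of (scaled) VIP counts across the link, as in the VIP framework; the dictionary layout and names are illustrative:

```python
def bp_forwarding_choice(V, a, b, C_ba, D):
    """Backpressure forwarding for link (a,b).

    V[node][k]: (scaled) VIP count at a node for object k.
    Returns mu[k]: VIP transmission allocation; the entire normalized
    reverse capacity C_ba / D goes to the max-weight object.
    """
    best_k, best_w = None, 0.0
    for k in V[a]:
        # BP weight: VIP count difference between sender and receiver.
        w = V[a][k] - V[b][k]
        if w > best_w:
            best_k, best_w = k, w
    # If no object has positive backpressure, send nothing on this link.
    return {k: (C_ba / D if k == best_k else 0.0) for k in V[a]}
```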
Caching: at each node n ∈ N, choose the caching variables s_n^k(t) to maximize the total (scaled) VIP count of cached objects, subject to the cache capacity L_n.
– a knapsack problem: at each t, node n allocates its cache space to the L_n/D data objects with the largest (scaled) VIP counts
– balance out VIP counts (sink VIPs with high potential)
– same rule as the previous caching algorithm without VIP scaling
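With equal object sizes, the knapsack reduces to caching the top L_n/D objects by VIP count, which could be sketched as (names illustrative):

```python
def caching_decision(V_n, cache_slots):
    """Max-weight caching at node n.

    V_n[k]: (scaled) VIP count for object k at node n,
    cache_slots: L_n / D, the number of objects the cache holds.
    Returns s[k] in {0, 1}: 1 if object k is cached this slot.
    """
    # Rank objects by VIP count and cache the top cache_slots of them,
    # sinking the highest-potential VIPs.
    ranked = sorted(V_n, key=V_n.get, reverse=True)
    cached = set(ranked[:cache_slots])
    return {k: (1 if k in cached else 0) for k in V_n}
```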
– forwarding and caching are both based on VIP counts
– adaptive to varying VIP counts (instantaneous congestion)
– forwarding: exchange VIP counts with neighbors
– caching: use only local VIP counts
– same complexity order as the previous VIP algorithm: O(N²K)
– additionally incorporates scaling
– adaptively stabilize all VIP queues for any λ in int(Λ) in the virtual plane
– exploit both bandwidth and storage resources to maximally balance out VIP counts
– the upper bound on the average total number of scaled VIPs is smaller than the previous upper bound for VIPs without scaling
– the previous VIP algorithm and six other baseline schemes
– forwarding: shortest path, potential-based routing
– caching decision: LCE, LCD, Age-based, LFU
– caching replacement: LRU, FIFO, BIAS, UNIF, Age-based, LFU
– request arrivals follow a Poisson process with the same rate λ (requests/node/slot)
– content popularity follows Zipf(0.75): object k is requested w.p. p_k
– Interest Packet size 125 B, chunk size 50 KB, object size 5 MB
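The request model above can be sketched in a few lines; the helper names are illustrative, and the Poisson draw uses Knuth's multiplicative method for self-containment:

```python
import math
import random

def zipf_pmf(K, s=0.75):
    """Zipf popularity over K objects with exponent s:
    p_k proportional to 1 / k^s (the talk uses K = 3000, s = 0.75)."""
    w = [1.0 / (k ** s) for k in range(1, K + 1)]
    total = sum(w)
    return [x / total for x in w]

def poisson_draw(lam, rng):
    """Knuth's multiplicative method (adequate for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def draw_requests(K, lam, s=0.75, seed=None):
    """One slot of Poisson(lam) request arrivals at a node,
    each request for object k with probability p_k."""
    rng = random.Random(seed)
    p = zipf_pmf(K, s)
    n = poisson_draw(lam, rng)
    return [rng.choices(range(1, K + 1), weights=p)[0] for _ in range(n)]
```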
3000 objects, cache size 2 GB (400 objects), link capacity 500 Mb/s; all nodes generate requests and can be sources
Delay improvement over the previous VIP algorithm:
– constant θ = 1.5: 36% at λ = 40
– EMA θ: 57% at λ = 60
The previous VIP and scaled VIP algorithms, with different scaling parameters, achieve better delay than the six baseline schemes. A large θ overemphasizes suppression and underestimates demand, leading to performance degradation.
Scaled VIP algorithms with θ = 1.5 and EMA θ_n^k(t) greatly improve the performance of the VIP algorithm.
3000 objects, cache size 2 GB (400 objects), link capacity 500 Mb/s; all nodes generate requests and can be sources
Improvement over the previous VIP algorithm:
– constant θ = 1.5: 47% at λ = 40
– EMA θ: 58% at λ = 40
3000 objects, cache size 5 GB (1000 objects), link capacity 500 Mb/s; all nodes generate requests and can be sources
Improvement over the previous VIP algorithm:
– constant θ = 1.5: 31% at λ = 60
– EMA θ: 40% at λ = 60
3000 objects, cache size 5 GB (1000 objects), link capacity 500 Mb/s; consumer nodes generate requests, Node 1 is the source node
Improvement over the previous VIP algorithm:
– constant θ = 1.5: 31% at λ = 40
– EMA θ: 58% at λ = 30
– capture both demand and the interest suppression effect
[Yeh'14] E. Yeh, T. Ho, Y. Cui, M. Burd, R. Liu, and D. Leong. VIP: A framework for joint dynamic forwarding and caching in named data networks. In Proceedings of the 1st ACM Conference on Information-Centric Networking (ICN '14), pages 117–126, New York, NY, USA, 2014.
[Amble'11] M. M. Amble, P. Parag, S. Shakkottai, and L. Ying. Content-aware caching and traffic management in content distribution networks. In Proceedings of IEEE INFOCOM 2011, pages 2858–2866, Shanghai, China, Apr. 2011.
[Xie'12] H. Xie, G. Shi, and P. Wang. TECC: Towards collaborative in-network caching guided by traffic engineering. In Proceedings of IEEE INFOCOM 2012 Mini-Conference, pages 2546–2550, Orlando, FL, USA, Mar. 2012.
[Chai'12] W. Chai, D. He, I. Psaras, and G. Pavlou. Cache "less for more" in information-centric networks.
[Ming'12] Z. Ming, M. Xu, and D. Wang. Age-based cooperative caching in information-centric networks.
[Yi'12] C. Yi, A. Afanasyev, L. Wang, B. Zhang, and L. Zhang. Adaptive forwarding in named data networking.