
Intra-AS Cooperative Caching for Content-Centric Networks

Min(Jason) WANG jasonwangm@cse.ust.hk

Department of Computer Science and Engineering The Hong Kong University of Science and Technology

August 12, 2013


Outline

  1. Introduction
  2. Cooperative Redundancy Elimination: problem formulation, modelling issues, greedy heuristic
  3. Intra-AS Cache Cooperation Scheme
  4. Performance Evaluation
  5. Conclusion

Request-Response Scenario in CCN

[Figure: R1, a request for data k from Client-1, follows its request path across AS-1, AS-2, and AS-3, and data k travels back along the return path; R2, a later request for data k from Client-2, experiences a cache hit along the way.]


Motivation for Redundancy Elimination

  • Network traffic exhibits high redundancy:
    • due to content popularity, the same content is often accessed by many users.
  • CCN enables individual nodes to reduce redundancy by managing a local cache:
    • the central abstraction is the named data.
  • However, redundancy can freely appear across different nodes, owing to:
    • the default ubiquitous LRU caching scheme;
    • the support of multi-path routing.
  • Controlling the redundancy level is critical to improving the system-wide caching performance of CCN.

Motivation for Redundancy Elimination (II)

  • Fact 1: a CCN node's caching performance is highly related to its cache size, yet the available caching resource is rather limited:
    • the buffer memory of an IP router → the content store.
  • Fact 2: the surge of dominant video traffic on both wired and wireless networks:
    • poses a burden on link bandwidth;
    • increases cross-traffic → increased transit cost;
    • yet CCN is expected to greatly offload cross-AS traffic.
  • These two facts dictate a frugal usage of the limited caching resources within the AS:
    • storing valuable content only;
    • avoiding storage waste by reducing redundancy.


Dimensions of Redundancy Elimination

  • Spatial dimension:
    • vertically: control the number of copies along the return path;
    • horizontally: control the number of duplicates across neighbour nodes within the AS.
  • Temporal dimension:
    • actively: in real time, before the data copy is brought into the cache;
    • passively: on demand, offline, after the data has been cached.
  • Previous works proposing new caching schemes for CCN fall into the vertical-active redundancy elimination (RE) category; our work adopts the horizontal-passive approach.



Notations

  • G = (N, E): a network managed by a single administrative authority
  • Si: the cache size of node i (in units of chunks)
  • Ki: the set of cached items at node i
  • Ni: the set of neighbouring nodes of i

  • Cooperation scope:
    1. node i owns the view of the cached items of nodes in Ni and can utilize them to serve its locally unsatisfied requests;
    2. from node i's perspective, for each k ∈ Ki, if one copy of k exists in Ni, it can purge k to release one caching slot.
  • xik: whether node i should keep item k (1: yes; 0: no)
  • Si − ∑k∈Ki xik: the number of slots released by the cooperative RE


Problem Formulation

  • Cooperative Redundancy Elimination (CRE):

    max   ∑i∈N wi Ui(Si − ∑k∈Ki xik)
    s.t.  Si − ∑k∈Ki xik ≥ 0,           ∀i ∈ N
          xik + ∑j∈Ni: k∈Kj xjk ≥ 1,    ∀i ∈ N, k ∈ Ki
          xik ∈ {0, 1},                 ∀i ∈ N, k ∈ Ki.

  • Ui(·): quantifies the benefit achieved from RE
  • Ui(v) = ln(1 + v) to achieve proportional fairness
  • wi: the weight of node i's utility
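For concreteness, the objective and constraints above can be checked on a small toy instance; everything below (node ids, item names, weights, and the helper functions) is illustrative and not taken from the paper:

```python
import math

# Toy instance (hypothetical): cache sizes S_i, cached items K_i,
# neighbour sets N_i, weights w_i, and a keep/purge decision x[i][k]
# (1 = keep, 0 = purge).
S = {1: 3, 2: 3}
K = {1: {"a", "b"}, 2: {"b", "c"}}
nbr = {1: {2}, 2: {1}}
w = {1: 2.0, 2: 1.0}            # the access node (1) is weighted more
x = {1: {"a": 1, "b": 0},       # node 1 purges "b": a copy survives at node 2
     2: {"b": 1, "c": 1}}

def U(v):
    # U_i(v) = ln(1 + v): diminishing returns, giving proportional fairness
    return math.log(1 + v)

def released(i):
    # slots released at node i by the cooperative RE: S_i - sum_k x_ik
    return S[i] - sum(x[i][k] for k in K[i])

def feasible():
    for i in K:
        if released(i) < 0:                      # capacity constraint
            return False
        for k in K[i]:
            # coverage constraint: k must be kept at i or at a neighbour
            copies = x[i][k] + sum(x[j][k] for j in nbr[i] if k in K[j])
            if copies < 1:
                return False
    return True

objective = sum(w[i] * U(released(i)) for i in K)
```

Here node 1 frees two slots and node 2 one, so the weighted objective is 2·ln 3 + 1·ln 2.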


Modelling Issues (I)

  • Objective function: sum of weighted utilities.
  • Two types of nodes in the AS: access nodes and intermediate nodes.
  • Due to the "cache filtering effect", the caching performance of the access nodes plays the major role in the system-wide caching performance:
    • value the gain of access nodes more: larger wi;
    • value the gain of intermediate nodes less: smaller wi.

[Figure: a Zipf-like request stream arrives at the access nodes; the cache-miss request stream reaching the intermediate node looks close to random.]


Modelling Issues (II)

  • Limited cooperation scope.
  • The accepted redundancy level reflects the trade-off between two goals:
    • performance goal: minimizing the access latency;
    • cost goal: maximizing the diversity of cached items to reduce cross-traffic.
  • Lessons learned from previous studies:
    • acquiring the optimal degree of duplication is challenging per se;
    • it often couples with the network topology and needs to be handled together with request routing, rendering solutions non-scalable.
  • One-hop cooperation scope:
    • eliminating duplicates in small groups controls the overall redundancy level;
    • severs the dependence on request routing;
    • reduces the amount of signalling traffic needed for cooperation;
    • yet only slightly prolongs the access latency for evicted items.


Greedy Heuristic

  • CRE is a non-linear integer program and is NP-hard.
  • With no fairness consideration and in the absence of the storage capacity constraints, CRE degenerates to solving a sequence of minimum dominating set (MDS) problems:
    • each cached item k leads to an induced graph Gk = (N, Ek);
    • calculate the MDS for each Gk.
  • The greedy heuristic for MDS tends to pick nodes with high degrees; greedy yields a ln ∆-approximate solution.
  • Two observations:
    1. nodes with high degrees tend to be intermediate nodes, which are of less importance to the system-wide caching performance;
    2. a node with a high degree is located in a better position for eliminating redundancy.
  • The greedy heuristic for MDS can be implemented in a distributed way [F. Kuhn PODC'03].
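The per-item greedy step can be sketched as follows. This is the textbook ln ∆ greedy for dominating set applied to the induced graph Gk defined above; the paper's exact tie-breaking and the distributed implementation are not shown, and the toy instance at the bottom is hypothetical:

```python
def greedy_dominating_set(nodes, adj):
    """Classic greedy MDS: repeatedly keep the node whose closed
    neighbourhood covers the most still-undominated nodes; yields a
    ln(max degree)-approximate dominating set."""
    undominated = set(nodes)
    keepers = set()
    while undominated:
        best = max(nodes - keepers,
                   key=lambda v: len(({v} | adj[v]) & undominated))
        keepers.add(best)
        undominated -= {best} | adj[best]
    return keepers

def induced_graph(item, caches, edges):
    """Build G_k: the subgraph over the nodes currently caching `item`."""
    nk = {i for i, cached in caches.items() if item in cached}
    adj = {i: set() for i in nk}
    for u, v in edges:
        if u in nk and v in nk:
            adj[u].add(v)
            adj[v].add(u)
    return nk, adj

# Toy instance (hypothetical): a path 1-2-3 where every node caches "k".
# Greedy keeps only the middle node, so nodes 1 and 3 may purge "k".
caches = {1: {"k"}, 2: {"k"}, 3: {"k"}}
edges = [(1, 2), (2, 3)]
nk, adj = induced_graph("k", caches, edges)
keepers = greedy_dominating_set(nk, adj)
```

Note how the high-degree middle node is picked first, matching observation 2 above.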


Intra-AS Cache Cooperation Framework (I)

  • Each CCN node is managed by any specified cache management scheme, e.g., the default ubiquitous LRU.
  • Ideally, the cache space of an AS should abound in the popular items requested by its served clients.
  • Periodically, each CCN node exchanges its cache "summary" with its one-hop neighbours:
    • the "summary" encodes the information of currently cached items;
    • caveat: avoid exposing short-lived items.
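The slides do not specify how the summary is encoded; Bloom filters are a common choice for compact cache digests, so the sketch below assumes that representation (the class name and the parameters m and hashes are illustrative, not from the paper):

```python
import hashlib

class BloomSummary:
    """Compact cache summary. Lookups can return false positives,
    which is one reason probing is confined to one-hop neighbours."""

    def __init__(self, m=1024, hashes=3):
        self.m = m                  # number of bits in the filter
        self.hashes = hashes        # number of hash functions
        self.bits = [0] * m

    def _positions(self, name):
        # derive `hashes` bit positions from seeded SHA-256 digests
        for seed in range(self.hashes):
            digest = hashlib.sha256(f"{seed}:{name}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, name):
        for p in self._positions(name):
            self.bits[p] = 1

    def might_have(self, name):
        # True means "possibly cached"; False is always definitive
        return all(self.bits[p] for p in self._positions(name))

summary = BloomSummary()
summary.add("/video/chunk-42")      # hypothetical content name
```

A node would periodically serialize `bits` and send it to its one-hop neighbours; omitting short-lived items from `add` addresses the caveat above.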


Intra-AS Cache Cooperation Framework (II)

  • A fresh round of cooperative redundancy elimination is launched when the "redundancy monitoring metric", computed over the collected summaries, indicates a potential benefit in doing so.
  • If a request cannot be served locally, probe the neighbours to see whether an early cache hit can happen, in light of the exchanged summaries:
    • in view of possible false-positive probes (due to stale summaries),
    • the probing action is confined to one-hop neighbours,
    • and performed only at the first-hop node.
  • Cache Cooperation + Cooperative Redundancy Elimination
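The request-handling order described above (local content store, then one-hop probes at the first-hop node only, then normal forwarding) can be sketched roughly as follows; function and variable names are hypothetical:

```python
def serve(name, local_cache, neighbour_summaries, first_hop):
    """Decide how one node handles a chunk request: local content
    store first, then one-hop probes, then normal CCN forwarding."""
    if name in local_cache:
        return "local-hit"
    if first_hop:
        # probe only neighbours whose (possibly stale) summary claims
        # the item; a false positive merely wastes one probe
        for nbr, summary in neighbour_summaries.items():
            if name in summary:
                return ("probe", nbr)
    # fall through to forwarding toward the content source
    return "forward"

# Toy state (hypothetical names): this node caches "a";
# neighbour n1's summary claims "k", n2's summary is empty.
local = {"a"}
summaries = {"n1": {"k"}, "n2": set()}
```

Non-first-hop nodes skip the probing branch entirely, which keeps intra-AS signalling bounded.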


Experimental Setup (I)

  • Network topologies:

    Topology   Nodes   Edges   Max degree
    AS 1755    172     381     15
    AS 3967    201     434     15
    Brite1     200     404     16
    Brite2     200     404     11

  • Workloads: generated using ProWGen.

              Zipf-slope     Object size   One-timers   Unique objects
    Trace-1   [0.90, 0.96]   [5, 20]       [60, 70]%    [35, 40]%
    Trace-2   [0.70, 0.76]   [5, 20]       [60, 70]%    [35, 40]%
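ProWGen itself is not reproduced here; a minimal Zipf-like request generator in the same spirit (all parameters illustrative) might look like:

```python
import random

def zipf_trace(num_objects, num_requests, slope, seed=0):
    """Sample a request stream where P(object of rank r) is
    proportional to r**(-slope), i.e. a Zipf-like popularity profile
    (a simple stand-in for ProWGen; one-timers etc. are not modelled)."""
    rng = random.Random(seed)
    ranks = list(range(1, num_objects + 1))
    weights = [r ** (-slope) for r in ranks]   # need not be normalized
    return rng.choices(ranks, weights=weights, k=num_requests)

# Trace-1-like slope (the slides use [0.90, 0.96]); sizes are illustrative.
trace = zipf_trace(num_objects=1000, num_requests=6000, slope=0.9)
```

With slope 0.9 the top-ranked object dominates the tail, which is what makes small caches effective in the evaluation.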

Experimental Setup (II)

  • CCN simulator:
    • implemented on top of OMNeT++;
    • incorporates the three basic components: CS, PIT, and FIB;
    • supports per-chunk requests, chunk-based caching, and request aggregation;
    • assumes shortest-path routing.
  • The arrival process of the object requests (6,000) is Poisson.
    • A request for an object translates into a sequence of chunk requests;
    • the sending window is 1.

Efficiency of the Greedy Heuristic

  • We compare against an "upper bound" of CRE, obtained by relaxing the integer constraint on xik.
  • A decentralized algorithm for the relaxed version is derived using the non-linear Gauss-Seidel procedure and dual decomposition.

[Figure: CDF (%) of utility values on AS 1755: upper bound vs. greedy, unweighted and weighted.]
[Figure: CDF (%) of utility values on AS 3967: upper bound vs. greedy, unweighted and weighted.]
[Figure: CDF (%) of utility values on Brite1: upper bound vs. greedy, unweighted and weighted.]
[Figure: CDF (%) of utility values on Brite2: upper bound vs. greedy, unweighted and weighted.]


Core Benefits of Cooperation Scheme (I)

  • Cache hit rate:
    • a cache hit: a request either served locally or successfully fetched from neighbours.
  • We report results on Trace-1, since similar results are observed for Trace-2.
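This cooperative hit-rate definition can be captured in a few lines; the data below is a toy example with hypothetical names:

```python
def hit_rate(requests, local_cache, neighbour_caches):
    """Fraction of requests counted as a cache hit under the
    cooperative definition: served locally OR fetched from a
    one-hop neighbour's cache."""
    hits = sum(1 for item in requests
               if item in local_cache
               or any(item in cache for cache in neighbour_caches))
    return hits / len(requests)

# "a" is cached locally, "b" only at a neighbour, "c" nowhere.
rate = hit_rate(["a", "b", "c", "a"], {"a"}, [{"b"}, set()])
```

Under plain ubiquitous LRU only the local test applies, which is why cooperation can only raise this metric.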

[Figure: average hit rate vs. cache size (1%-6%) on AS 1755, ubiquitous LRU vs. intra-AS coop.]
[Figure: average hit rate vs. cache size (1%-6%) on AS 3967, ubiquitous LRU vs. intra-AS coop.]
[Figure: average hit rate vs. cache size (1%-6%) on Brite1, ubiquitous LRU vs. intra-AS coop.]
[Figure: average hit rate vs. cache size (1%-6%) on Brite2, ubiquitous LRU vs. intra-AS coop.]


Core Benefits of Cooperation Scheme (II)

  • Bandwidth saving:
    • requests unsatisfied by the caches within the AS have to fetch data from the original content server, which yields cross-traffic.
  • The amount of cross-traffic (in GB; one chunk is 10 KB), by cache size:

    AS       Method    1%     2%     3%     4%     5%     6%
    1755     ubi-LRU   4.06   2.70   1.57   0.69   0.34   0.19
             AS-coop   3.91   2.47   1.25   0.47   0.21   0.12
    3967     ubi-LRU   5.33   3.62   1.97   0.83   0.37   0.20
             AS-coop   5.13   3.15   1.40   0.42   0.18   0.10
    Brite1   ubi-LRU   4.73   3.38   2.06   1.16   0.70   0.45
             AS-coop   4.61   3.19   1.77   0.93   0.53   0.31
    Brite2   ubi-LRU   4.86   3.08   1.44   0.54   0.24   0.12
             AS-coop   4.67   2.70   1.00   0.33   0.15   0.11


Core Benefits of Cooperation Scheme (III)

  • Average hop count of requests:
    • intuitively, cache cooperation tends to generate more internal traffic;
    • we measure the distribution of the hop count with which each chunk request is served.
  • Below we report the CDF of hop counts when the cache size is 3%.

[Figure: CDF (%) of hop counts on AS 1755, ubiquitous LRU vs. intra-AS coop.]
[Figure: CDF (%) of hop counts on AS 3967, ubiquitous LRU vs. intra-AS coop.]
[Figure: CDF (%) of hop counts on Brite1, ubiquitous LRU vs. intra-AS coop.]
[Figure: CDF (%) of hop counts on Brite2, ubiquitous LRU vs. intra-AS coop.]



Conclusion

  • We studied the caching problem in CCN following a different approach, advocating cooperative redundancy elimination after data have been cached.
  • Two benefits are reaped from the intra-AS cache cooperation scheme:
    • caching slots released by RE can be used to cache other popular items;
    • a broader view of the items cached at neighbours helps serve locally unsatisfied requests.
  • Initial results:
    • the greedy heuristic is efficient in eliminating redundancy;
    • intra-AS cache cooperation improves the caching performance of access routers;
    • and it reduces the AS cross-traffic without overloading internal links.


Thank you very much!
