Scaling FIBs with Virtual Aggregation:
How Much Stretch? How Much FIB Savings?
An Evaluation
By Dan Jen (jenster@cs.ucla.edu)
1
2
Disclaimer
– Much of the work in this presentation did not make it into the draft.
– Recently approved work
3
4
– VA distributes the DFZ FIB entries over many routers
– “If DFZ doesn't fit amongst 4 routers, store it
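The distribution idea above can be sketched in a few lines. This is an illustrative sketch only, not the draft's algorithm: the router names and the round-robin assignment of /8 virtual prefixes to directory routers are assumptions made for the example.

```python
import ipaddress

def assign_to_directories(routes, directories, vp_len=8):
    """Map each DFZ route to the directory router responsible for the
    virtual prefix (a /8 here) that covers it, so no single router
    needs to hold the whole DFZ FIB."""
    fibs = {d: [] for d in directories}
    for route in routes:
        vp = ipaddress.ip_network(route).supernet(new_prefix=vp_len)
        vp_index = int(vp.network_address) >> 24   # which /8 (0..255)
        fibs[directories[vp_index % len(directories)]].append(route)
    return fibs

routes = ["10.1.0.0/16", "10.2.0.0/16", "128.1.0.0/16", "192.0.2.0/24"]
fibs = assign_to_directories(routes, ["d1", "d2", "d3", "d4"])
# Every route lives on exactly one directory router:
assert sum(len(v) for v in fibs.values()) == len(routes)
```

Any deterministic assignment of virtual prefixes to routers would do; the point is only that the union of the per-router FIBs covers the full DFZ.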
5
– Independently deployable by ISPs
– ISPs immediately get full scalability benefits
– Seamless Interworking with current Internet.
6
– suboptimal paths (“stretch”)
– load on networks
7
8-18
[Figure sequence: example topology with routers 1 and 2, paths a, b, c, and destination networks A, B, C. A packet destined to 0.1.2.3 matches the virtual prefix 0.0.0.0/8 and is handed to the router that has more specific information, which maps the egress to tunnel label L1 and tunnels the packet toward the egress.]
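The forwarding step the figures walk through can be sketched as follows. The FIB layout and the router name `directory-A` are illustrative assumptions; the slide only establishes the destination 0.1.2.3, the virtual prefix 0.0.0.0/8, and the label L1.

```python
import ipaddress

# A non-directory router has no specific route for 0.1.2.3, so the packet
# matches the virtual prefix 0.0.0.0/8 and is tunneled (label L1) toward
# the directory router that has more specific information.
FIB = {
    # prefix -> (tunnel label, next hop)
    ipaddress.ip_network("0.0.0.0/8"): ("L1", "directory-A"),
}

def forward(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest-prefix match
    label, nexthop = FIB[best]
    return f"tunnel {label} -> {nexthop}"

print(forward("0.1.2.3"))  # tunnel L1 -> directory-A
```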
19
– i.e. # of directory routers in a directory set.
20
21
22
– No optimizations whatsoever
23
24
– 8 (all major POPs have enough routers for this)
– 1 per major POP (less than 15% of all POPs)
– Same as location of major POPs
25
26
D routers:
– 1/8th of DFZ
– Virtual Prefixes
– Egress → Label mappings
ND routers:
– Virtual Prefixes
– Egress → Label mappings
27
– Tracerouted to each major POP. – Determined the one-way time to nearest major
– Calculated the worst-case stretch the small
28
– Worst case: Destination ---- Source <-----> Directory (the Directory lies in the opposite direction from the Destination, so the detour adds twice the one-way time to the Directory).
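One way to read the stretch measurement: if a packet must detour from its ingress POP to the nearest directory router and back before heading toward the destination, the added latency is twice the one-way time to that directory. The POP delays below are hypothetical, chosen only to be consistent with the deck's 16 ms worst case and 8 ms average.

```python
def worst_case_stretch(one_way_ms):
    """Added latency when a packet detours to the nearest directory and
    back: twice the one-way time (an assumption of this sketch)."""
    return 2 * one_way_ms

# Hypothetical one-way times (ms) from a few POPs to their nearest directory:
pop_delays = {"pop-1": 8, "pop-2": 4, "pop-3": 0}  # 0: POP hosts a directory
stretches = {pop: worst_case_stretch(d) for pop, d in pop_delays.items()}

assert max(stretches.values()) == 16                  # worst case
assert sum(stretches.values()) / len(stretches) == 8  # average
```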
29
30
– 1/8th of DFZ (~35k, 37k worst case)
– Virtual Prefixes (256 /8s)
– Egress → Label mappings (~20k)
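The slide's FIB composition can be sanity-checked with simple arithmetic. The full DFZ size of ~280k routes is an assumption, back-derived from 1/8th being ~35k entries.

```python
DFZ_ROUTES = 280_000      # assumption: full DFZ FIB (1/8th ~= 35k on the slide)
VIRTUAL_PREFIXES = 256    # one virtual prefix per /8
EGRESS_MAPPINGS = 20_000  # Egress -> Label mappings (~20k on the slide)

# Directory (D) router: its share of specifics plus VPs and mappings.
d_router_fib = DFZ_ROUTES // 8 + VIRTUAL_PREFIXES + EGRESS_MAPPINGS
# Non-directory (ND) router: only VPs and mappings.
nd_router_fib = VIRTUAL_PREFIXES + EGRESS_MAPPINGS

print(d_router_fib)   # 55256
print(nd_router_fib)  # 20256
```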
31
– Virtual Prefixes (256 /8s)
– Egress → Label mappings (~20k)
32
Worst-Case
33
– Which is why the worst-case stretch is 16ms
– Some are major POPs
– Some default to major POPs
34
– Optimizations can change results.
– ISP has at least a few large POPs containing
– Smaller POPs can reach a nearby large POP
35
36
– No RIB relief
– No Churn Insulation
– No Separation of Locators and Identifiers
37
– General consensus that FIB is the most immediate scaling problem
– http://www.cs.ucla.edu/~lixia/draft-zhang-evolution-01.txt
38
39
40
41
– Multihoming: pooling reliability.
– BitTorrent: pooling upstream capacity.
– ISPs have many routers, and each stores 1 or more full copies of the DFZ FIB
– VA says: “Why not pool the storage of your routers?”
42
– But how much traffic?
– But how much stretch?
43-44
[Figure: the same topology (routers 1 and 2, paths a, b, c, networks A, B, C), with the address space divided into four virtual prefixes (0.0.0.0/2, 64.0.0.0/2, 128.0.0.0/2, 192.0.0.0/2) and a packet destined to 128.1.2.3.]
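The figure's setup, finding which of the four /2 virtual prefixes covers a destination, can be sketched as:

```python
import ipaddress

# The IPv4 space split into four /2 virtual prefixes, as in the figure.
VIRTUAL_PREFIXES = [ipaddress.ip_network(p) for p in
                    ("0.0.0.0/2", "64.0.0.0/2", "128.0.0.0/2", "192.0.0.0/2")]

def covering_vp(dst):
    """Return the virtual prefix that covers a destination address."""
    addr = ipaddress.ip_address(dst)
    return next(vp for vp in VIRTUAL_PREFIXES if addr in vp)

print(covering_vp("128.1.2.3"))  # 128.0.0.0/2
```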
45
– “That's too much trouble!”
– “Could you really? A Directory Set in each POP?”
46
– Constantly doing an exhaustive search of the
– Constantly monitoring traffic load to directory
– All D and ND routers should be existing
47
– Many POPs have 2 or fewer routers storing
– Putting a full directory set in those POPs would
48
– http://tools.ietf.org/html/draft-iab-raws-report-02#section-4.5
49
50
51
– Study to be published at NSDI next month.
52
53
– ISPs could optimize topology to reduce the
– FIB can be divided amongst line cards on the
– If we do want to go this route, we should look
54
– D routers: Over 80% FIB reduction
– ND routers: Over 90% FIB reduction
– Worst-case stretch: 16ms
– Avg-case stretch: 8ms
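The reduction figures are consistent with the FIB composition on slide 30 (1/8th of DFZ ≈ 35k entries, 256 virtual prefixes, ~20k egress mappings). The full DFZ size of ~280k routes is an assumption back-derived from those numbers.

```python
DFZ_ROUTES = 280_000                           # assumption: full DFZ FIB size

d_router_fib = DFZ_ROUTES // 8 + 256 + 20_000  # specifics + VPs + mappings
nd_router_fib = 256 + 20_000                   # VPs + mappings only

d_reduction = 1 - d_router_fib / DFZ_ROUTES    # ~0.80
nd_reduction = 1 - nd_router_fib / DFZ_ROUTES  # ~0.93

assert d_reduction > 0.80   # "Over 80% FIB reduction" for D routers
assert nd_reduction > 0.90  # "Over 90% FIB reduction" for ND routers
```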
55
– http://www.cs.umd.edu/~nspring/talks/sigcomm-rocketfuel.pdf