SLIDE 1

Simulating Routing Schemes on Large-Scale Topologies

Luc Hogie (INRIA, Fr), Dimitri Papadimitriou (Alcatel-Lucent Bell Labs, Be), Issam Tahiri (INRIA, Fr), Frederic Majorczyk (Univ. Bordeaux/LaBRI, Fr)
IEEE PADS Workshop, Atlanta (Georgia Tech), May 19, 2010

SLIDE 2

Motivation (1)

Internet routing system

  • Autonomous Systems (AS)

– 30k with growth rate of about 10%

  • Number of routing table (RT) entries per router

– 300k entries with a growth rate of about 1.2-1.3× per year
– Worst-case projections: on the order of 1M entries within 5 years

  • BGP (Border Gateway Protocol) dynamics

– Exhibits instability (routing policies)
– Delayed convergence upon routing updates (dynamic adaptation to topology/reachability changes)

SLIDE 3

Motivation (2)

→ The Internet routing system faces performance limitations in terms of

– Scalability (growth rate of RT entries)
– Dynamics (resulting from the convergence and stability properties of BGP)

  • New routing schemes recently proposed

– Greedy routing schemes
– Compact routing schemes: reduce routing table size by omitting some network topology details, such that the resulting path-length increase stays relatively small

SLIDE 4

Bottom-line

  • Why DRMSim (Dynamic Routing Model Simulator)?

– To evaluate the performance and determine the dynamic properties of routing schemes on large-scale topologies (>10k nodes)
– To compare them with the current Internet inter-domain routing scheme (BGP)

  • Dimensions

– General-purpose vs. Specialized simulators (e.g. SimBGP)

  • Optimized (e.g. in terms of data structures and procedures) to execute BGP at the microscopic level on topologies of on the order of 1k nodes
  • Cannot be easily extended to other routing protocol models

– Macroscopic simulation (larger scale) vs. Microscopic simulation (smaller scale)

SLIDE 5

Routing Schemes: Definition and Properties

  • Definition: an algorithm that, for any destination node's identifier (e.g. IP address), computes and selects a routing path to derive the next hop towards which traffic units (e.g. IP datagrams) will be forwarded to reach that destination

  • Properties

– Centralized vs. distributed computation of RT entries
– Static vs. dynamic (adaptive, e.g., to topological or policy changes)
– Name-dependent vs. name-independent (w.r.t. topology)
– Universal vs. specialized (specific graph classes)

SLIDE 6

Routing Schemes: Performance metrics

  • Performance

– Stretch

|Route(x,y)| = multiplicative stretch × Distance(x,y), with x, y ∈ G
Additive stretch = |Route(x,y)| − Distance(x,y), with x, y ∈ G

– Memory space consumption = number of bits required to locally store RT entries (RT size)
– Communication cost = number of routing update messages that need to be exchanged between routers to converge upon a topology change
– Computational complexity (time and resources) to compute RT entries

  • Note: fundamental trade-offs
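As an illustration (not part of the original deck), both stretch definitions can be checked on a toy graph, comparing a route produced by some scheme against the BFS shortest-path distance:

```python
from collections import deque

def bfs_distance(adj, src, dst):
    """Shortest-path (hop) distance between src and dst via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    raise ValueError("dst unreachable")

def stretches(adj, route):
    """Multiplicative and additive stretch of a route (list of nodes)."""
    d = bfs_distance(adj, route[0], route[-1])
    hops = len(route) - 1
    return hops / d, hops - d

# 4-cycle with a chord: edges 0-1, 1-2, 2-3, 3-0, 0-2
adj = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
# A scheme that routes 0 -> 1 -> 2 instead of using the direct edge 0-2
mult, add = stretches(adj, [0, 1, 2])
print(mult, add)  # -> 2.0 1
```

The direct distance is 1 hop, the route takes 2, hence multiplicative stretch 2.0 and additive stretch 1.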
SLIDE 7

Implemented Routing Schemes (1)

  • Routing model classes example

– Distance-vector, like RIP
– Distance-vector + loop avoidance without a network-wide metric: path-vector, like BGP
– Computation criterion: D_ji = min_{k≠j} { d_jk + D_ki }
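The distance-vector criterion above (node j's estimate to i is the minimum over its neighbors k of the link cost d_jk plus k's estimate D_ki) can be sketched as a synchronous iteration; this is a minimal illustration, not DRMSim's implementation:

```python
INF = float("inf")

def distance_vector(nodes, edges, rounds=None):
    """Synchronous distance-vector iteration implementing
    D_ji = min over neighbors k of { d_jk + D_ki }."""
    cost = {j: {} for j in nodes}          # d_jk: direct link costs
    for a, b, w in edges:
        cost[a][b] = w
        cost[b][a] = w
    # D[j][i]: j's current distance estimate towards i
    D = {j: {i: (0 if i == j else INF) for i in nodes} for j in nodes}
    for _ in range(rounds or len(nodes)):
        nxt = {j: dict(D[j]) for j in nodes}
        for j in nodes:
            for i in nodes:
                if i == j:
                    continue
                best = min((cost[j][k] + D[k][i] for k in cost[j]), default=INF)
                nxt[j][i] = min(D[j][i], best)
        D = nxt                            # all vectors advance together
    return D

# Line topology A - B - C with unit link costs
D = distance_vector(["A", "B", "C"], [("A", "B", 1), ("B", "C", 1)])
print(D["A"]["C"])  # -> 2
```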

SLIDE 8

Implemented Routing Schemes (2)

  • NSR [Nisse09]: compact routing scheme

Properties

– Name-dependent: node identifiers embed topological information (topology change → renaming)
– Distributed label assignment
– Fault-tolerant
– Specialized for k-chordal graphs: any cycle of length ≥ k contains a chord

  • Applicable to chordal graphs: 3-chordal graphs (a.k.a. triangulated graphs)

SLIDE 9

Implemented Routing Schemes (3)

  • G = (V,E) a graph and T a rooted spanning tree of G

– Labeling of nodes: integers assigned in a synchronous, distributed way (using a Breadth-First Search (BFS) tree)
– Routing table: each node knows the name-range interval of its neighbors
– Source node x and destination node y

  • Routing decision process

– If node x ≡ node y, then stop
– If there is a node w ∈ N_G(x) that is an ancestor of y in T, then choose w minimizing d_T(w,y)
– Otherwise, choose the parent node of x in T
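A simplified, tree-only sketch of this decision process (hypothetical names; it ignores the non-tree chords that the full NSR scheme also exploits): each node's interval covers the labels of its subtree, so a packet goes down when some child's interval contains y, and up otherwise:

```python
# Hypothetical tree-only sketch, not the full NSR scheme.
def next_hop(x, y, interval, children, parent):
    """interval[v] = (lo, hi) covering v's subtree labels;
    children[v] and parent[v] describe the rooted tree T."""
    if x == y:
        return None                 # arrived at the destination
    for c in children[x]:
        lo, hi = interval[c]
        if lo <= y <= hi:           # c is an ancestor of (or equals) y in T
            return c
    return parent[x]                # y lies outside x's subtree: go up

# Tiny tree: root 1 covers [1,4]; child 2 covers [2,3]; leaves 3 and 4
interval = {1: (1, 4), 2: (2, 3), 3: (3, 3), 4: (4, 4)}
children = {1: [2, 4], 2: [3], 3: [], 4: []}
parent = {1: None, 2: 1, 3: 2, 4: 1}
print(next_hop(4, 3, interval, children, parent))  # -> 1 (up to the root)
print(next_hop(1, 3, interval, children, parent))  # -> 2 (down towards 3)
```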

SLIDE 10

Implemented Routing Schemes (4)

  • Execution example

(Figure: execution example on a BFS tree. Nodes carry integer labels and name-range intervals, e.g. 10: [1,10], 5: [1,5], 4: [6,9]; the route goes from source x via neighbor w towards destination y.)

SLIDE 11

Implemented Routing Schemes (5)

  • NSR performance

Network     Stretch   RT size       Comp. complexity (time)
k-chordal   k − 1     O(δ log n)    O(∆)
chordal     1         O(n)          O(n)

where
− δ is the maximum node degree
− ∆ is the maximum level in the BFS tree

SLIDE 12

Objectives

  • Custom routing model simulator for

– Lightweight versions of routing schemes (trade-off: mesoscopic level)
– Defining and implementing specific topology generators
– Adjusting granularity in order to achieve large-scale simulation of network processes

  • No currently available simulator could meet these needs more simply than developing our own solution
SLIDE 13

Features

  • DRMSim features

– Large-scale simulation with well-optimized memory and CPU usage
– Internet-like and other classic topology generators
– Performance measures
– Routing algorithm debugging and monitoring tools

  • DRMSim modularity

– New topology generators, routing protocols, and metrics must be easy for users to add
– DRMSim must be user-friendly: for instance, setting up and starting a simulation must be easy

SLIDE 14

Challenges

  • Stateful routing

– When using BGP, every node maintains a routing table consisting of paths towards every other node in the network
– The amount of data stored is then in O(k × n²), with the factor k being the diameter of the network

  • Updateful (distributed) routing

– Operations on routing tables (RT) are spread quite evenly: very few nodes perform more RT accesses than others
– Routing convergence process: almost every event consists of an access (read or write, sometimes both) to a local RT

SLIDE 15

DRMSim Architecture

(Architecture diagram: a Discrete Event Simulator (DES) engine, which manages the event list and the event-processing loop and advances simulation time, drives the execution scheme, the routing model, the forwarding model, and the topology model (graph generation). The configuration specifies topology parameters, execution parameters, the scenario (1:1, 1:N, M:1, M:N), routing and forwarding model initialization, and measurement metrics.)
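The core of such a DES engine (an event list ordered by timestamp, plus a loop that pops events and advances simulation time) can be sketched minimally; this is an illustrative toy, not DRMSim's actual engine:

```python
import heapq

class DES:
    """Minimal discrete-event engine: a time-ordered event list and a
    processing loop that advances simulation time monotonically."""
    def __init__(self):
        self.now = 0.0
        self._events = []
        self._seq = 0   # tie-breaker for events scheduled at the same time

    def schedule(self, delay, handler):
        heapq.heappush(self._events, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self):
        while self._events:
            t, _, handler = heapq.heappop(self._events)
            self.now = t        # time only moves forward
            handler(self)       # a handler may schedule further events

log = []
sim = DES()
sim.schedule(2.0, lambda s: log.append(("b", s.now)))
sim.schedule(1.0, lambda s: (log.append(("a", s.now)),
                             s.schedule(5.0, lambda s2: log.append(("c", s2.now)))))
sim.run()
print(log)  # -> [('a', 1.0), ('b', 2.0), ('c', 6.0)]
```

Events execute in timestamp order regardless of scheduling order, and handlers scheduling new events is exactly how routing update messages would propagate in the simulator.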

SLIDE 16

Routing and Forwarding Model

  • A common source of confusion lies in the difference between routing-model simulation and forwarding-model simulation

  • These functions are separated in the simulator

– Routing model: exchange of routing information messages for the computation and/or selection of routes
– Forwarding model: processing of incoming data packets; an outgoing interface is selected based on the packet's destination address
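To illustrate the separation (a toy sketch with hypothetical tables, not DRMSim's API): forwarding only reads per-node tables that the routing model has previously filled, hop by hop, without ever recomputing routes:

```python
def deliver(fibs, src, dst):
    """Hop-by-hop forwarding: at each node, look up the next hop for
    dst in that node's forwarding table until the packet arrives."""
    path = [src]
    while path[-1] != dst:
        path.append(fibs[path[-1]][dst])   # pure table lookup per packet
    return path

# Tables as a routing model would have computed them for A - B - C
fibs = {"A": {"B": "B", "C": "B"},
        "B": {"A": "A", "C": "C"},
        "C": {"A": "B", "B": "B"}}
print(deliver(fibs, "A", "C"))  # -> ['A', 'B', 'C']
```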

SLIDE 17

Routing Model

  • The routing model specifies the information exchanged between routers (routing update messages) and their communication mode

– The routing information communication model is instantiated for each router at topology initialization (depending on the routing scheme)
– When a routing update message is received by a router, the routing algorithm is applied

  • Classes of protocols implemented

– Distance-vector class: RIP and BGP
– Compact class: NSR [Nisse09] and AGMNT [Abraham08] (trials)

SLIDE 18

Execution Scheme

  • A scenario defines what will be done during the simulation
  • It can be general (for every routing scheme)

– 1:1: send a packet from one source to one destination
– 1:N: one router sends packets to N other routers (N may be "any")
– M:N: M routers send packets to N other routers (M, N may be "any")
– etc.

  • But also routing scheme specific

– For BGP: run until convergence, i.e. wait until all routers contain the shortest path to all other routers
– For NSR: tree computation
– etc.

  • And dynamic network:

– Define the probability of link failure
– Define the probability of router failure
– etc.

SLIDE 19

Set of Executions

  • A simulation batch executes a set of simulations

– Sequential execution: takes the set of individual simulation jobs and processes them sequentially
– Parallel execution: instantiates a predefined number of threads and uses them to execute jobs in parallel
– Distributed execution (still under test): uses Remote Method Invocation (RMI) technology to distribute individual simulation jobs across a set of cooperating computers
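The sequential vs. parallel batch modes can be sketched with Python's standard thread pool; the job parameters and result fields here are hypothetical stand-ins, not DRMSim's interfaces:

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(params):
    """Stand-in for one simulation job (hypothetical parameters)."""
    n_nodes, seed = params
    # A real job would build a topology, run the routing model, measure...
    return {"n_nodes": n_nodes, "seed": seed, "avg_stretch": 1.0 + seed / 100}

jobs = [(1000, s) for s in range(4)]

# Sequential execution: process the job set one by one
seq_results = [run_job(j) for j in jobs]

# Parallel execution: a fixed pool of threads works through the same set
with ThreadPoolExecutor(max_workers=2) as pool:
    par_results = list(pool.map(run_job, jobs))

print(seq_results == par_results)  # -> True (same jobs, order preserved)
```

`map` preserves job order, so both modes yield identical result lists for deterministic jobs; only wall-clock time differs.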

SLIDE 20

Measurement

(Diagram: the same architecture as on the previous slide, extended with a Measures component; as events are handled, the routing and forwarding models store the resulting metric values.)

SLIDE 21

Measurement Model

  • Approach

– When an event modifies the state of the system, compute (and store) the new value of the corresponding metric
– The code becomes targeted to a specific measurement

  • Objective: performance vs. flexibility trade-off
  • For each metric m: a set of pairs (t_i, v_i)
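A minimal sketch of such an event-driven metric store (names are illustrative, not DRMSim's API): each state-changing event appends one (t_i, v_i) pair, so the metric's full time series is available afterwards:

```python
class Metric:
    """Event-driven measurement: each state-changing event appends a
    (time, value) pair, yielding the metric's time series."""
    def __init__(self, name):
        self.name = name
        self.samples = []   # list of (t_i, v_i) pairs

    def record(self, t, v):
        self.samples.append((t, v))

    def last(self):
        return self.samples[-1][1]

rt_size = Metric("avg_rt_size")
for t, v in [(0.0, 0), (1.5, 120), (3.2, 480)]:
    rt_size.record(t, v)   # would be called from the event handlers
print(rt_size.last())  # -> 480
```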
SLIDE 22

Simulation Results (NSR stretch)

  • NSR simulation on topology generated by GLP [Bu02]

Additive stretch (average and standard deviation) generated by NSR as a function of the number of nodes, when applied to a power-law topology obtained using generalized linear preferential attachment (GLP)

SLIDE 23

Simulation Results (NSR)

  • GLP topology

– numberOfInitialNodes={6} – numberOfEdgesPerStep={1.15} – beta={0.6753} – stepProbability={0.4669}

Number of executions: 32
Memory: 1 to 6 GB (up to 8k nodes) and 8 GB (up to 10k nodes)

SLIDE 24

Simulation Results (NSR)

(Figures: average additive stretch; average routing table size.)

SLIDE 25

Simulation Results (MRAI impact on BGP)

  • Determine the probability that the BGP convergence phase involves k MRAI instances

– Consider a network of N BGP routers where a BGP session exists between any of the peers with probability p

  • Some execution parameters needed to be modified to obtain the number of MRAI instances a simulation takes before convergence

– Transmission delays → 0
– Delays of timers (other than the MRAI) → 0

  • Therefore, the duration of a BGP simulation until convergence is almost equal to the sum of the MRAI instances

SLIDE 26

Simulation Results (MRAI impact on BGP)

  • Simulation results obtained for P[MRAI=1]
  • Comparison to previous work obtained by numerical analysis and SSFNET simulation

  • Redundant MRAI instances occur with probability close to 1 in most cases

SLIDE 27

Conclusion and Future Work

  • Extend routing policy model
  • Distributed DES (larger topologies)
  • Advanced execution monitoring tools
SLIDE 28

(Some) References

  • [Abraham08] I. Abraham, C. Gavoille, D. Malkhi, N. Nisan, and M. Thorup, Compact name-independent routing with minimum stretch, ACM Transactions on Algorithms, Vol. 4(3), June 2008.

  • [Bu02] T. Bu and D. Towsley, On distinguishing between Internet power law topology generators, In Proceedings of the 21st Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM'02), New York, USA, June 2002.

  • [Nisse09] N. Nisse, K. Suchan, and I. Rapaport, Distributed computing of efficient routing schemes in generalized chordal graphs, In Proceedings of the 16th Colloquium on Structural Information and Communication Complexity (SIROCCO'09), Piran, Slovenia, May 2009.

  • More references in the paper.