

  1. Simulating Routing Schemes on Large-Scale Topologies Luc Hogie (INRIA, Fr), Dimitri Papadimitriou (A-L Bell Labs, Be), Issam Tahiri (INRIA, Fr), Frederic Majorczyk (Univ. Bordeaux/LaBRI, Fr) IEEE PADS Workshop, Atlanta – Georgia Tech, May 19, 2010

  2. Motivation (1) Internet routing system • Autonomous Systems (AS) – about 30k, with a growth rate of about 10% • Number of routing table (RT) entries per router – about 300k entries, with a growth rate of about ×1.2–1.3 per year – Worst-case projections: on the order of 1M entries within 5 years • BGP (Border Gateway Protocol) dynamics – Exhibits instability (routing policies) – Delayed convergence upon routing updates (dynamic adaptation to topology / reachability changes)

  3. Motivation (2) → The Internet routing system faces performance limitations in terms of – Scalability (growth rate of RT entries) – Dynamics (resulting from the convergence and stability properties of BGP) • New routing schemes recently proposed – Greedy routing schemes – Compact routing schemes: reduce the size of routing tables by omitting some network topology details, such that the resulting path-length increase stays relatively small

  4. Bottom line • DRMSim (Dynamic Routing Model Simulator) – To evaluate the performance and determine the dynamic properties of routing schemes on large-scale topologies (>10k nodes) – To compare them with the current Internet inter-domain routing scheme (BGP) • Dimensions – General-purpose vs. specialized simulators (e.g. SimBGP) • Optimized (e.g. in terms of data structures and procedures) to execute BGP at the microscopic level on topologies comprising on the order of 1k nodes • Cannot be easily extended to other routing protocol models – Macroscopic simulation (larger scale) vs. microscopic simulation (smaller scale)

  5. Routing Schemes: Definition and Properties • Definition: an algorithm that, for any destination node identifier (e.g. IP address), computes and selects a routing path in order to derive the next hop toward which traffic units (e.g. IP datagrams) are forwarded to reach this destination • Properties – Centralized vs. distributed computation of RT entries – Static vs. dynamic (adaptive to e.g. topological or policy changes) – Name-dependent vs. name-independent (with respect to the topology) – Universal vs. specialized (specific graph classes)

  6. Routing Schemes: Performance Metrics • Performance – Stretch: multiplicative stretch is defined by |Route(x,y)| = multiplicative stretch × Distance(x,y), with x, y ∈ G; additive stretch = |Route(x,y)| − Distance(x,y), with x, y ∈ G – Memory space consumption = number of bits required to locally store the RT entries (RT size) – Communication cost = number of routing update messages that need to be exchanged between routers to converge upon a topology change – Computational complexity (time and resources) to compute the RT entries • Note: these metrics are subject to fundamental trade-offs
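The two stretch definitions above can be sketched as follows (a minimal illustration, not DRMSim code; the function names are ours):

```python
# Sketch: the two stretch metrics defined above, for a route
# between nodes x and y of known length and shortest-path distance.

def multiplicative_stretch(route_length, distance):
    """From |Route(x,y)| = multiplicative stretch * Distance(x,y)."""
    return route_length / distance

def additive_stretch(route_length, distance):
    """Additive stretch = |Route(x,y)| - Distance(x,y)."""
    return route_length - distance

# Example: the scheme routes over 6 hops where the shortest path is 4.
print(multiplicative_stretch(6, 4))  # 1.5
print(additive_stretch(6, 4))        # 2
```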

  7. Implemented Routing Schemes (1) • Example routing model classes – Distance vector, like RIP – Distance vector + loop avoidance without a network-wide metric: path vector, like BGP – Computation criterion: D_ji = min_{k ≠ j} { d_jk + D_ki }, i.e. node j's distance estimate to destination i is the minimum, over its neighbors k, of the link cost d_jk plus k's own estimate D_ki
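A minimal sketch of the computation criterion above, iterated synchronously to a fixed point (a centralized Bellman-Ford-style stand-in for what RIP-like protocols compute in a distributed way; all names are ours, not DRMSim's API):

```python
# Sketch: iterate D_ji = min_{k != j} { d_jk + D_ki } over all nodes
# until no estimate improves. links maps (j, k) -> link cost d_jk.

INF = float("inf")

def distance_vector(links, nodes):
    # D[(j, i)] is node j's current distance estimate to node i
    D = {(j, i): (0 if j == i else INF) for j in nodes for i in nodes}
    changed = True
    while changed:
        changed = False
        for j in nodes:
            neighbors = [k for k in nodes if (j, k) in links]
            for i in nodes:
                if i == j:
                    continue
                best = min((links[(j, k)] + D[(k, i)] for k in neighbors),
                           default=INF)
                if best < D[(j, i)]:
                    D[(j, i)] = best
                    changed = True
    return D

# 4-node ring a-b-c-d-a with unit link costs
nodes = ["a", "b", "c", "d"]
links = {}
for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]:
    links[(u, v)] = links[(v, u)] = 1
D = distance_vector(links, nodes)
print(D[("a", "c")])  # 2
```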

  8. Implemented Routing Schemes (2) • NSR [Nisse09]: compact routing scheme • Properties – Name-dependent: node identifiers embed topological information (a topology change implies renaming) – Distributed label assignment – Fault-tolerant – Specialized for k-chordal graphs: any cycle of length > k contains a chord • Applicable to chordal graphs: a chordal (a.k.a. triangulated) graph is a 3-chordal graph

  9. Implemented Routing Schemes (3) • G = (V,E) a graph and T a rooted spanning tree of G – Labeling of nodes: integers assigned in a synchronous, distributed way (using a Breadth-First Search (BFS) tree) – Routing table: each node knows the name range (interval) of its neighbors – Source node x and destination node y • Routing decision process – If node x = node y, then stop – If there is a node w ∈ N_G(x) that is an ancestor of y in T, then choose the w minimizing d_T(w,y) – Otherwise, choose the parent node of x in T
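The routing decision above can be sketched on a toy tree (our illustration, not the NSR implementation): interval labels make the ancestor test O(1), and among the neighbors that are ancestors of y, the deepest one minimizes d_T(w,y).

```python
# Sketch: each tree node stores a label and the interval [low, high]
# of labels assigned to its subtree, so "w is an ancestor of y in T"
# is the test low_w <= label_y <= high_w. All data is illustrative.

tree = {  # node -> label, subtree interval, parent in T, depth in T
    "r": dict(label=1, iv=(1, 5), parent=None, depth=0),
    "a": dict(label=2, iv=(2, 4), parent="r", depth=1),
    "b": dict(label=3, iv=(3, 3), parent="a", depth=2),
    "c": dict(label=4, iv=(4, 4), parent="a", depth=2),
    "d": dict(label=5, iv=(5, 5), parent="r", depth=1),
}

def is_ancestor(w, y):
    lo, hi = tree[w]["iv"]
    return lo <= tree[y]["label"] <= hi

def next_hop(x, y, graph_neighbors):
    """One routing step from x toward y; None means delivered."""
    if x == y:
        return None
    # neighbors of x in G that are ancestors of y in T; for such a w,
    # d_T(w, y) = depth(y) - depth(w), so the deepest w minimizes it
    anc = [w for w in graph_neighbors[x] if is_ancestor(w, y)]
    if anc:
        return max(anc, key=lambda w: tree[w]["depth"])
    return tree[x]["parent"]  # otherwise climb toward the root

G = {"r": ["a", "d"], "a": ["r", "b", "c"],
     "b": ["a"], "c": ["a"], "d": ["r"]}
print(next_hop("d", "b", G))  # "r" (climb, then descend toward b)
```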

  10. Implemented Routing Schemes (4) • Execution example (figure): a tree rooted at node 10, where each node carries its label and the interval of labels covering its subtree (e.g. root 10: [1,10]; its children 5: [1,5] and 4: [6,9]); the route from source x to destination y is found by matching y's label against the neighbors' intervals

  11. Implemented Routing Schemes (5) • NSR performance

      Network     Stretch   RT size       Comp. complexity (time)
      chordal     1         O(n)          O(n)
      k-chordal   k − 1     O(δ log n)    O(∆)

  where δ is the maximum node degree and ∆ is the maximum level in the BFS tree

  12. Objectives • A custom routing model simulator for – Lightweight versions of routing schemes (trade-off: mesoscopic level) – Defining and implementing specific topology generators – Adjusting granularity in order to achieve large-scale simulation of network processes • No currently available simulator could meet these needs more simply than developing our own solution

  13. Features • DRMSim features – Large-scale simulation with optimized memory and CPU usage – Internet-like and other classic topology generators – Performance measures – Routing algorithm debugging and monitoring tools • DRMSim modularity – New topology generators, routing protocols and metrics must be easy for users to add – DRMSim must be user-friendly: for instance, setting up and starting a simulation must be easy

  14. Challenges • Stateful routing – With BGP, every node maintains a routing table consisting of paths toward every other node in the network – The amount of stored data is then in O(k · n²), where the factor k is the diameter of the network • Updateful (distributed) routing – Operations on routing tables (RT) are evenly distributed: very few nodes perform more RT accesses than others – During the routing convergence process, almost every event consists of an access (read, write, or both) to a local RT

  15. DRMSim Architecture (figure): a Config block supplies topology parameters to the Topology model, which generates the Graph, and execution parameters to the Execution scheme (scenarios 1:1, 1:N, M:1, M:N), which initializes the Routing model (with its RT and the communication models CM and ICM) and the Forwarding model; all components schedule and handle events through the Discrete Event Simulator (DES) engine, which manages the event list, runs the event-processing loop, advances the simulation time, and feeds the measurement metrics
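The DES engine at the bottom of the architecture can be sketched in a few lines (a generic event-loop illustration under our own names, not DRMSim's actual engine):

```python
# Sketch: a DES engine keeps a priority queue of timestamped events;
# the loop pops the earliest event, advances simulated time, and runs
# its handler, which may schedule further events.

import heapq
import itertools

class DES:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = itertools.count()  # tie-breaker for equal times

    def schedule(self, delay, handler, *args):
        heapq.heappush(self._queue,
                       (self.now + delay, next(self._seq), handler, args))

    def run(self):
        while self._queue:
            t, _, handler, args = heapq.heappop(self._queue)
            self.now = t      # advance simulation time
            handler(*args)    # may call schedule() again

# Toy usage: a "routing update" event that re-schedules itself twice.
log = []
sim = DES()
def update(remaining):
    log.append(sim.now)
    if remaining:
        sim.schedule(1.0, update, remaining - 1)
sim.schedule(0.5, update, 2)
sim.run()
print(log)  # [0.5, 1.5, 2.5]
```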

  16. Routing and Forwarding Model • A common source of confusion lies in the difference between routing model simulation and forwarding model simulation • These functions are separated in the simulator – Routing model: exchange of routing information messages for the computation and/or selection of routes – Forwarding model: processing of incoming data packets, i.e. selecting an outgoing interface using the packet's destination address

  17. Routing Model • The routing model specifies the information exchanged between routers (routing update messages) and their communication mode – The routing information communication model is instantiated for each router at topology initialization (depending on the routing scheme) – When a routing update message is received by a router, the routing algorithm is applied • Classes of protocols implemented – Distance-vector class: RIP and BGP – Compact class: NSR [Nisse09] and AGMNT [Abraham08] (trials)

  18. Execution Scheme • A scenario defines what will be done during the simulation • It can be general (for every routing scheme) – 1:1: send a packet from one source to one destination – 1:N: one router sends packets to N other routers (N may be "any") – M:N: M routers send packets to N other routers (M, N may be "any") – etc. • But also routing-scheme specific – For BGP: run until convergence, i.e. wait until every router holds the shortest path to all other routers – For NSR: tree computation – etc. • And network dynamics: – Define the probability of link failure – Define the probability of router failure – etc.

  19. Set of Executions • A simulation batch executes a set of simulations – Sequential execution: takes the set of individual simulation jobs and processes them sequentially – Parallel execution: instantiates a predefined number of threads and uses them to execute jobs in parallel – Distributed execution (still under test): uses Remote Method Invocation (RMI) to distribute individual simulation jobs across a set of cooperating computers
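The parallel execution mode above can be sketched with a fixed-size thread pool (our illustration; the slides do not show DRMSim's actual job API, so the job function is a placeholder):

```python
# Sketch: a batch of independent simulation jobs processed by a
# predefined number of worker threads, as in the "parallel
# execution" mode described above.

from concurrent.futures import ThreadPoolExecutor

def run_simulation(job_id):
    # placeholder for one full simulation run
    return (job_id, job_id * job_id)

jobs = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_simulation, jobs))
print(results[3])  # 9
```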

  20. Measurement (figure): the same architecture as in slide 15, extended with a Measures component in which both the Topology model and the Execution scheme store values as the simulation proceeds

  21. Measurement Model • Approach – When an event modifies the state of the system, compute (and store) the new value of the corresponding metric – The code becomes targeted to a specific measurement objective: a performance vs. flexibility trade-off • For each metric m: a set of pairs (t_i, v_i)
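The measurement approach above can be sketched as a recorder that appends a (t_i, v_i) pair to a metric's series whenever an event changes the measured state (names are ours, not DRMSim's):

```python
# Sketch: each metric m accumulates a time series of (t_i, v_i)
# pairs, appended when an event modifies the system state.

from collections import defaultdict

class Measures:
    def __init__(self):
        self.series = defaultdict(list)  # metric name -> [(t, v), ...]

    def record(self, metric, t, value):
        self.series[metric].append((t, value))

m = Measures()
m.record("rt_size", 0.0, 10)   # e.g. RT size after initialization
m.record("rt_size", 1.5, 12)   # RT size after a routing update
print(m.series["rt_size"])  # [(0.0, 10), (1.5, 12)]
```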

  22. Simulation Results (NSR stretch) • NSR simulation on topologies generated by GLP [Bu02] – Figure: additive stretch (average and standard deviation) produced by NSR, as a function of the number of nodes, on power-law topologies obtained using generalized linear preferential attachment (GLP)
