
Automatic discovery of the characteristics and capacities of a distributed computational platform
Martin Quinson, École Normale Supérieure de Lyon, Laboratoire de l'Informatique du Parallélisme
December 11th 2003

Introduction to the Grid


The Network Weather Service: presentation

Goal: measurement and forecasting of (Grid) system availabilities.
Led by Prof. Wolski (UCSB); used by AppLeS, Globus, NetSolve, Ninf, DIET, ...

Architecture: a distributed system
• Sensor: conducts the measurements
• Memory: stores the results
• Forecaster: statistically forecasts the trends
• Name server: directory service (like LDAP)

Steady state: the sensors run regular tests, possibly complemented by an external source.
Handling of a request: the client locates the data through the name server, the forecaster reads the stored measurements and returns the answer.

Measurements and Forecasting

• Provided metrics: availableCpu (for an incoming process), currentCpu (for existing processes), bandwidthTcp, latencyTcp (default: 64Kb in 16Kb messages; buffer = 32Kb), connectTimeTcp, freeDisk, freeMemory, ...
• Forecasting using statistics. The data form a series D_1, D_2, ..., D_{n-1}, D_n, and we want D_{n+1}. Each method is applied to D_1, ..., D_{n-1} and predicts D_n; the method that best predicts D_n is selected to predict D_{n+1} (see the sketch below).
• Statistical methods used:
  - mean: running, (adaptive) sliding window
  - median: idem
  - gradient: GRAD(t, g) = (1 - g) × GRAD(t - 1, g) + g × value(t)
  - last value
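This selection scheme ("postcasting") lends itself to a compact implementation. Below is a minimal C sketch of the loop, assuming a fixed gain for the gradient method; names and structure are illustrative, not the actual NWS code.

```c
#include <math.h>
#include <stddef.h>

/* One forecasting method: maps a history d[0..n-1] to a prediction. */
typedef double (*method_t)(const double *d, size_t n);

static double last_value(const double *d, size_t n) { return d[n - 1]; }

static double running_mean(const double *d, size_t n) {
    double s = 0;
    for (size_t i = 0; i < n; i++) s += d[i];
    return s / n;
}

/* Exponential smoothing, matching GRAD(t,g) = (1-g)*GRAD(t-1,g) + g*value(t). */
static double gradient(const double *d, size_t n) {
    const double g = 0.1;               /* gain: illustrative value */
    double grad = d[0];
    for (size_t i = 1; i < n; i++) grad = (1 - g) * grad + g * d[i];
    return grad;
}

/* Run every method on d[0..n-2], score it against the known d[n-1],
 * then use the winner on the full series to predict d[n] (needs n >= 2). */
static double forecast(const double *d, size_t n) {
    method_t methods[] = { last_value, running_mean, gradient };
    size_t best = 0;
    double best_err = INFINITY;
    for (size_t m = 0; m < 3; m++) {
        double err = fabs(methods[m](d, n - 1) - d[n - 1]);
        if (err < best_err) { best_err = err; best = m; }
    }
    return methods[best](d, n);
}
```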

Conclusion about NWS

Pros: complete environment; designed for scheduling; statistical forecasting; widely used.
Cons: uneasy to extend; sometimes difficult to deploy; TCP only (what about Myrinet-based networks?).

Related work:
• NetPerf: HP project to sort network components, no interactivity
• GloPerf: Globus moves to NWS
• PingER: regular pings between 600 hosts in 72 countries
• Iperf: finds the bandwidth by saturating the link for 30 seconds
• RPS: forecasting limited to the CPU load
• Performance Co-Pilot (SGI): same kind of architecture, but low-level data (/proc) ⇒ not easily usable by a scheduler; no forecasting

Overview
• Introduction
• NWS: Network Weather Service
• FAST: Fast Agent's System Timer
• ALNeM: Application-Level Network Mapper
• Conclusion

Fast Agent's System Timer: presentation

Goals:
• gather a routine's performance on a given host at a given time
• interactivity, ease of use

Architecture:
• At installation time, a benchmarker measures the routines and stores the needs models in LDAP.
• At runtime, the client application links against the FAST library, which combines the needs models (from LDAP) with the system availabilities (from NWS).

Routine needs modeling

Related work:
• Elementary operation count: the myth of the constant Mflop/s
• Analytical models, micro-benchmarking: complex, hence hardly interactive; how to describe the tasks?
• Probability, Markov chains: how to instantiate them at a given time?

FAST's approach:
• Simple (sequential) routines like the BLAS: macro-benchmarking, i.e. benchmark the {task; host} pair as a whole at installation time (see the timing sketch below).
  - Getting the time: utime + stime, to avoid background load
  - Getting the space: step-by-step execution (as in gdb) to track changes and find the peak ⇒ rather long, but done only once
• Complex routines (ScaLAPACK): structural decomposition by source analysis; Freddy [CDQF03], integration underway
• Irregular routines (sparse algebra): no forecasting ⇒ selection of the fastest host; decomposition to extract simple parts; estimators provided as input by the application
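Reading utime + stime rather than wall-clock time is what shields the benchmark from background load. A minimal sketch of such a timer using getrusage follows; this is an illustration of the idea, not FAST's actual benchmarker.

```c
#include <sys/resource.h>

/* CPU time (user + system) consumed by the calling process, in seconds.
 * Unlike wall-clock time, this is mostly insensitive to background load. */
static double cpu_time(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
         + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
}

/* Macro-benchmark: time the {task; host} pair as a whole. */
static double benchmark(void (*task)(void *), void *arg) {
    double t0 = cpu_time();
    task(arg);
    return cpu_time() - t0;
}
```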

Quality of the modeling

Time modeling:

                 dgeadd                  dgemm                    dtrsm
                 icluster    paraski     icluster     paraski     icluster    paraski
Maximal error    0.02s/6%    0.02s/35%   0.21s/0.3%   5.8s/4%     0.13s/10%   0.31s/16%
Average error    0.006s/4%   0.007s/6.5% 0.025s/0.1%  0.03s/0.1%  0.02s/5%    0.08s/7%

dgeadd: matrix addition; dgemm: matrix multiplication; dtrsm: triangular solve.
icluster: bi-Pentium II, 256Mb, Linux, IMAG (Grenoble); paraski: Pentium III, 256Mb, Linux, IRISA (Rennes).
Network: intra-site LAN at 100Mb/s; inter-site VTHD network at 2.5Gb/s.

Space modeling: almost perfect. Maximal error < 1%; average error ≈ 0.1%.
Space = code size (constant) + matrix size (polynomial).

Forecasting with background load

dgemm with a CPU-intensive process running in the background.
[Figure: measured vs. forecasted execution time on paraski and icluster, matrix sizes 128 to 1024.]
Maximal error: 22%; average error < 10%.

Forecasting of a sequence with background load

Complex matrix product, client/servers over a LAN:
C_r = A_r × B_r − A_i × B_i
C_i = A_r × B_i + A_i × B_r
[Figure: measured vs. forecasted time, matrix sizes 128 to 1024.]
Maximal error: 25%; average error: 13%.
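If FAST composes the forecast of a sequence from per-routine forecasts, which the slide suggests but does not spell out, the complex product above would be estimated as

$$T_{\text{seq}}(n) \;\approx\; 4\,T_{\text{dgemm}}(n) \;+\; 2\,T_{\text{dgeadd}}(n) \;+\; T_{\text{comm}},$$

that is, four real products and two additions/subtractions, plus the client/server transfers over the LAN.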

Comparison with NetSolve's forecaster

[Figures: NetSolve forecast, FAST forecast, and measured time, matrix sizes 128 to 1152. Left: computation time of dgemm. Right: communication time of dgemm.]

Latency reduction

FAST (cache miss): 100,685 µs
NWS: 99,569 µs
FAST (cache hit): 24 µs

Responsiveness improvement

Scheduler/NWS collaboration.
[Figure: CPU availability (%) over 3.5 minutes around a task run, forecast vs. theoretical value, with idle time before and after the task.]
NWS: out of the box.
FAST: sensors restart + forecaster reset when the task starts or ends.

Virtual booking: how does it work?

FAST asks NWS to update its sensor when a task is scheduled.
Timeline: scheduling decision → NWS updated (availability corrected down by the booked task) → task started → task ended → NWS updated again.
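A sketch of the correction this timeline implies: the scheduler adjusts the NWS forecast for tasks it has placed but that NWS has not measured yet. The fair-share model and all names below are assumptions, not FAST's actual code.

```c
/* Tasks scheduled by us and still running on a given host. */
typedef struct { int booked; } booking_t;

static void on_task_start(booking_t *b) { b->booked++; /* + sensor restart, forecaster reset */ }
static void on_task_end(booking_t *b)   { b->booked--; /* + sensor restart, forecaster reset */ }

/* Assume fair CPU sharing: each extra CPU-bound task divides the share
 * an incoming process would get. */
static double corrected_cpu(const booking_t *b, double nws_forecast)
{
    return nws_forecast / (b->booked + 1);
}
```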

Benefits of virtual booking

[Figures: CPU availability (%) over 3.5 minutes, results of 4 different runs; measurements vs. forecast vs. theoretical value, with idle time on both sides of the task run.]
NWS: ADAPT_CPU.
FAST: ADAPT_CPU + virtual booking + sensors restart + forecaster reset.

Contributions of FAST

[Figures: forecasting with load (measured vs. forecasted time, matrix sizes 128 to 1024) and responsiveness (CPU availability around a task run).]

Summary:
• Generic benchmarking solution
• Simple interface to quantitative data
• Parallel routines handling currently integrated
• Integration: DIET, NetSolve, Grid-TLSE, cichlid
• 15,000 lines of C code; Linux, Solaris, Tru64
• 2 journals and 3 conferences/workshops

Overview
• Introduction
• NWS: Network Weather Service
• FAST: Fast Agent's System Timer
• ALNeM: Application-Level Network Mapper
• Conclusion

Application-Level Network Mapper (ALNeM)

Goal: mapping the network topology
Authors: Arnaud Legrand, Martin Quinson
Motivation: server hosting, simulation, forecasting of collective communications
Target application: NWS hosting. Network experiments must not collide, hence NWS's clique concept.

Simplest organization: one big clique of all hosts.
Better: hierarchical cliques following the topology.
[Figures: clients and servers grouped in one flat clique, then in hierarchical cliques.]

Focus: discover interferences (shared limiting links), not the actual packet paths.

Related work:

Method                  Restricted   Focus          Routers   Notes
SNMP                    authorized   path           all       passive, dumb routers, LAN
traceroute              ICMP         path           all       level 3 of OSI
pathchar                root         path           all       link bandwidth, slow
tomography [Rabbat03]   no           path           tree      bipartite otherwise (d_in ≠ d_out)
ENV                     no           interference   some      tree only

ALNeM: notations

Def (non-interference): $(ab) \parallel_{rl} (cd) \iff \dfrac{bw_{cd}(ab)}{bw(ab)} \approx 1$

Def (interference): $(ab) \nparallel_{rl} (cd) \iff \dfrac{bw_{cd}(ab)}{bw(ab)} \approx 0.5$

where $bw(ab)$ is the bandwidth from $a$ to $b$ alone and $bw_{cd}(ab)$ is the same bandwidth while a transfer from $c$ to $d$ is running.

Def (interference matrix): $I(V, \parallel)(a, b, c, d) = 1$ if $(ab) \nparallel (cd)$, and $0$ otherwise.

INTERFERENCE GRAPH problem: given $H$ and $I(H, \parallel_{rl})$, find a graph $G(V, E)$ and the associated routing satisfying:
• $H \subseteq V$
• $I(H, \parallel_G) = I(H, \parallel_{rl})$
• $|V|$ is minimal.
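In code, one matrix entry boils down to comparing two bandwidth measurements, per the definitions above. A minimal C sketch; the 0.75 threshold separating the "≈ 1" and "≈ 0.5" regimes is an assumption.

```c
/* Entry of the interference matrix from two bandwidth measurements:
 * bw_alone  = bw(ab), measured with (a->b) running alone;
 * bw_shared = bw_cd(ab), measured while (c->d) also runs.
 * The ratio is ~1 without interference and ~0.5 with it. */
static int interferes(double bw_alone, double bw_shared)
{
    return (bw_shared / bw_alone) < 0.75;
}
```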

Mathematical tools

Def (total interference): $a \perp b \iff \forall (u, v) \in H,\ (au) \nparallel_{rl} (bv)$

Lemma (separator): $\forall a, b \in H,\ a \perp b \iff \exists \rho \in V,\ \forall z \in H: \rho \in (a \to z) \cap (b \to z)$
($a \perp b$ iff there exists a separator $\rho$ through which all routes from $a$ and from $b$ pass)

Theorem: $\perp$ is an equivalence relation (under some assumptions).

Theorem (representativity): for $C$ an equivalence class under $\perp$ (under some assumptions),
$\forall \rho, \sigma \in C,\ \forall b, u, v \in H:\ (\rho u) \parallel_{rl} (bv) \iff (\sigma u) \parallel_{rl} (bv)$
(any member of the class can be interchanged with any other in the matrix).

Algorithm for cliques of trees

Equivalence classes ⇒ a greedy algorithm that eats the leaves: each pass groups the current hosts into $\perp$-classes, hangs each class under one representative, and recurses on the representatives.
[Figure: example run on hosts A to I, the tree being folded leaf layer by leaf layer.]

Theorem: when $|C_{inf}| = 1$, the graph built is a solution.
Theorem: if a tree solution exists, then $|C_{inf}| = 1$.
Remark: the graph built is optimal (w.r.t. $|V|$, since $V = H$).
Theorem: when $I$ contains no interference, the clique of $C_i$ is a valid solution.
Remark: it is also optimal.

Extension for cycles

Let $a, b$ be the elements of $C_i$ with the most interferences.
Lemma: there is no solution in which some $z \in H$ satisfies $z \in (a \to b)$.
⇒ Cut between $a$ and $b$!

Finding out how to cut. Partition $C_i$ according to the routes through $a$ and $b$:
$I_1 = \{u \in C_i : a \in (b \to u)$ and $b \notin (a \to u)\}$
$I_2 = \{u \in C_i : a \notin (b \to u)$ and $b \notin (a \to u)\}$
$I_3 = \{u \in C_i : a \notin (b \to u)$ and $b \in (a \to u)\}$
$I_4 = \{u \in C_i : a \in (b \to u)$ and $b \in (a \to u)\} = \{a; b\}$ (anything else would contradict the lemma)

A topological sort on the graph associated with the corresponding interference-matrix slice yields $I_1$, $I_2$, $I_3$.

How to connect the parts afterward:
The first step on $I_1$ finds 2 classes, $I_1^a$ and $I_1^\alpha$, with $a \in I_1^a$.
The first step on $I_3$ finds 2 classes, $I_3^b$ and $I_3^\beta$, with $b \in I_3^b$.
Reconnect $I_1^a$ and $I_3^b$; reconnect $I_1^\alpha$ and $I_3^\beta$.
This reconnection step is not proved yet.

Data collection

Interference measurement between each pair of hosts.
• Naïve algorithm: $N^4$ tests at 30 s per step ⇒ about 50 days for 20 hosts (see the estimate below).
• Possible speedups, deserving more investigation:
  - traceroute or other tomography information
  - independent tests run in parallel
  - validation of information sets
  - refinement of an existing graph?
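A quick check of the 50-day order of magnitude for the naïve algorithm:

$$N^4 \times 30\,\mathrm{s} \;=\; 20^4 \times 30\,\mathrm{s} \;=\; 160\,000 \times 30\,\mathrm{s} \;=\; 4.8 \times 10^{6}\,\mathrm{s} \;\approx\; 55\ \text{days}.$$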

Contributions of ALNeM

• Retrieves the interference-based topology from direct measurements
• Strong mathematical foundations (optimal for cliques of trees)
• More generic than ENV (algorithm for cycles)
• 2,000 lines of C code; one research report
• Based on GRAS [Quinson03]:
  - development on a simulator (SimGrid [CLM03]) and immediate deployment
  - target: distributed event-based applications, C language
  - 10,000 lines of C code; Linux, Solaris
  - submitted to one workshop

[Figure: topology recovered by ALNeM on a platform of about 180 numbered hosts.]

Overview
• Introduction
• NWS: Network Weather Service
• FAST: Fast Agent's System Timer
• ALNeM: Application-Level Network Mapper
• Conclusion

Conclusion

• Major issue on the Grid: collecting data (before scheduling).

• Gathering quantitative data: NWS + FAST.
  NWS (system availability). Contributions: lower latency; better responsiveness. Future work: automatic deployment; process management.
  FAST (routine needs). Contributions: generic benchmarking framework; unified interface to quantitative data; virtual booking; integration into DIET, NetSolve, Grid-TLSE; 2 journals, 3 conferences/workshops. Future work: integration of Freddy; irregular routines (sparse algebra); new metrics (like I/O)?; yet better integration within NWS.

• Gathering qualitative data: ALNeM (network topology, to know about interferences).
  Contributions: strong mathematical foundations; optimal in size for cliques of trees; partial cycle handling; GRAS, an application development tool; submitted to one workshop. Future work: proof of NP-hardness, or an exact algorithm; experimentation on a real platform; optimization of the measurements; iterative algorithm (modification detection); integration within NWS; hosting of DIET.

Selected publications

Book chapter: 1 national
• E. Caron, F. Desprez, E. Fleury, F. Lombard, J.-M. Nicod, M. Quinson, and F. Suter. Une approche hiérarchique des serveurs de calculs. In Calcul réparti à grande échelle. Hermès Science Paris, 2002. ISBN 2-7462-0472-X.

Journals: 2 international (+ 1 submitted), 1 national
• E. Caron, F. Desprez, M. Quinson, and F. Suter. Performance Evaluation of Linear Algebra Routines for Network Enabled Servers. Parallel Computing, special issue on Clusters and Computational Grids for scientific computing, 2003.
• F. Desprez and M. Quinson. Dynamic Performance Forecasting for Network-Enabled Servers in a Grid Environment. Submitted to IEEE Transactions on Parallel and Distributed Systems.

Conferences/workshops: 4 international (+ 2 submitted), 2 national
• Ph. Combes, F. Lombard, M. Quinson, and F. Suter. A Scalable Approach to Network Enabled Servers. Proceedings of the 7th Asian Computing Science Conference, LNCS 2550:110-124, Springer-Verlag, Jan 2002.
• M. Quinson. Dynamic Performance Forecasting for Network-Enabled Servers in a Metacomputing Environment. International Workshop on Performance Modeling, Evaluation, and Optimization of Parallel and Distributed Systems (PMEO-PDS'02), April 15-19, 2002.
• A. Legrand and M. Quinson. Automatic Deployment of the Network Weather Service Using the Effective Network View. Submitted to the Workshop on Grid Benchmarking, associated with IPDPS'04.
• O. Aumage, A. Legrand, and M. Quinson. Reconciling the Grid Reality and Simulation. Submitted to Parallel and Distributed Systems: Testing and Debugging, associated with IPDPS'04.

Appendix

Sensor in the middle?

With sensors at A, B and C, and NWS tests running A-B and B-C, the end-to-end values follow:
bp(AC) = min(bp(AB), bp(BC))
lat(AC) = lat(AB) + lat(BC)
Reassembling measurements this way is a must for hierarchical monitoring.
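The two composition rules translate directly to code; a trivial C rendering (function names are illustrative):

```c
/* Bandwidth of a two-hop path is limited by the weakest hop. */
static double bp_compose(double bp_ab, double bp_bc)
{
    return bp_ab < bp_bc ? bp_ab : bp_bc;
}

/* Latencies of the hops add up. */
static double lat_compose(double lat_ab, double lat_bc)
{
    return lat_ab + lat_bc;
}
```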

RPC and grid computing: GridRPC

A simple idea: implement the RPC model over the Grid.
• Remote Procedure Call: run a computation remotely
• A good and simple paradigm to implement the Grid
• Some of the functionalities needed: computation scheduling, data migration, security, fault tolerance, interoperability, ...

The 5 fundamental components:
• Client: one of several user interfaces which submit requests to the servers
• Server: runs software modules to solve clients' requests
• Agent: gets clients' requests and schedules them onto the servers
• Monitor: monitors the current state of the resources
• Database: contains static and dynamic knowledge about the resources

Knowing the platform is crucial for the agent.
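From the client's side the whole machinery hides behind a single call. A C sketch follows; the function names follow the GridRPC API proposal of that period (GGF), but the exact header and signatures are assumptions, and "dgemm" is an illustrative service name.

```c
/* Sketch of a GridRPC client call; not a definitive API reference. */
#include <grpc.h>

int remote_dgemm(int n, double *A, double *B, double *C)
{
    grpc_function_handle_t handle;

    if (grpc_initialize("client.cfg") != GRPC_NO_ERROR)
        return -1;
    /* The agent chooses a server here: this is where the monitor and
     * database (e.g. NWS + FAST data) drive the scheduling decision. */
    grpc_function_handle_default(&handle, "dgemm");
    grpc_call(&handle, n, A, B, C);        /* blocking remote invocation */
    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```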

Freddy

$$T_{pdgemm}(M, N, K) \;=\; \left\lceil \frac{K}{R} \right\rceil \times T_{dgemm} \;+\; \left\lceil \frac{K}{R} \right\rceil \times \Big( (M \times K)\,\tau_{q \to p} + (K \times N)\,\tau_{p \to q} + \lambda_{q \to p} + \lambda_{p \to q} \Big)$$

[Figure: measured vs. forecasted times for the multiplication and redistribution of matrices A and B under distributions Ga, Gb and possible virtual grids Gv1, Gv2.]

• F. Suter. Parallélisme mixte et prédiction de performances sur réseaux hétérogènes de machines parallèles. PhD thesis, 2002.
• E. Caron, F. Desprez, M. Quinson, and F. Suter. Performance Evaluation of Linear Algebra Routines for Network Enabled Servers. Parallel Computing, special issue on Clusters and Computational Grids for scientific computing (CCGSC'02), 2003.

GRAS overview

• development on a simulator (SimGrid [CLM03]) and immediate deployment
• target: distributed event-based applications, C language
• 10,000 lines of code; Linux, Solaris
• future: message expressivity, even higher performance, interoperability

[Architecture diagram:]
Built-in modules: leader election, host management, bandwidth test, ...
Communications: syscalls virtualization (virtualizes expensive code, simulates its execution span), messages and callbacks, data representation; runs on TCP (reality) or SimGrid (simulation), constituting a portability layer between Linux, Solaris and SimGrid.
Grounding features: logs and logs control, error handling, data structures, configuration, locks, conditional execution.

Hypotheses on the routing

Hypothesis 1: routing is consistent
• 1-to-N: no merge after a branch
• N-to-1: no split after a join
Hypothesis 2: routing is symmetric

Algorithm for cliques of trees

1. Initialization: $i \leftarrow 0$; $C_i \leftarrow H$; $E_i \leftarrow \emptyset$; $V_i \leftarrow \emptyset$
2. Classes lookup: let $h_1, \ldots, h_p$ be the classes of $\perp$ over $C_i$, with a representative $l_j \in h_j$ for each; $C_{i+1} \leftarrow \{l_1, \ldots, l_p\}$
3. Graph update: $V_{i+1} \leftarrow V_i$; $E_{i+1} \leftarrow E_i$; for each class $h_j$ and each $v \in h_j$, do $E_{i+1} \leftarrow E_{i+1} \cup \{(v, l_j)\}$ and $V_{i+1} \leftarrow V_{i+1} \cup \{v\}$
4. Interference matrix update: let $l_\alpha, l_\beta, l_\gamma, l_\delta \in C_{i+1}$ represent $h_\alpha, h_\beta, h_\gamma, h_\delta$ respectively; for each $m_\alpha \in h_\alpha$, $m_\beta \in h_\beta$, $m_\gamma \in h_\gamma$, $m_\delta \in h_\delta$:
$$I(C_{i+1}, \parallel)(l_\alpha, l_\beta, l_\gamma, l_\delta) = I(C_i, \parallel)(m_\alpha, m_\beta, m_\gamma, m_\delta)$$
5. Iterate steps 2-4 until $C_{i+1} = C_i$.
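A compact C sketch of steps 2-3 (the classes lookup and the leader wiring); the matrix restriction of step 4 and the outer iteration are omitted, and the flat layout of the interference matrix is an assumption.

```c
/* I is the n^4 interference matrix: I[((a*n+u)*n+b)*n+v] == 1 when the
 * flows (a->u) and (b->v) interfere. */
static int totally_interferes(const char *I, int n, int a, int b)
{
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++)
            if (!I[((size_t)(a*n + u)*n + b)*n + v]) return 0;
    return 1;   /* a "perp" b: interference whatever the peers are */
}

/* One pass: assign each host to the class of the first earlier host it is
 * equivalent to. Fills leader[v] with v's representative (an edge of the
 * tree under construction) and returns the number of classes; by the
 * representativity theorem, the choice of representative does not matter. */
static int classes(const char *I, int n, int *leader)
{
    int nclasses = 0;
    for (int v = 0; v < n; v++) {
        leader[v] = v;
        for (int w = 0; w < v; w++)
            if (leader[w] == w && totally_interferes(I, n, v, w)) {
                leader[v] = w;          /* v hangs off representative w */
                break;
            }
        if (leader[v] == v) nclasses++;
    }
    return nclasses;
}
```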

ALNeM: example of execution

[Figure sequence: successive steps of an ALNeM topology reconstruction.]
