Testbeds for Reproducible Research - Lucas Nussbaum (PowerPoint presentation)
  1. Testbeds for Reproducible Research
     Lucas Nussbaum <lucas.nussbaum@loria.fr>
     Lucas Nussbaum, Testbeds for reproducible research, 1 / 26

  2. Outline
     1. Presentation of Grid'5000
     2. A look at two recent testbeds: CloudLab, Chameleon


  4. The Grid'5000 testbed
     - World-leading testbed for HPC & Cloud
       - 10 sites: Lille, Luxembourg, Reims, Nancy, Rennes, Lyon, Grenoble, Bordeaux, Toulouse, Sophia
       - 1200 nodes, 7900 cores
       - Dedicated 10 Gbps backbone network
       - 550 users and 100 publications per year
     - Not a typical grid / cluster / Cloud:
       - Used by CS researchers for HPC / Clouds / Big Data research
       - No users from computational sciences
       - Design goals:
         - Large-scale, shared infrastructure
         - Support high-quality, reproducible research on distributed computing

  5. Outline
     1. Description and verification of the environment
     2. Resources selection and reservation
     3. Reconfiguring the testbed to meet experimental needs
     4. Monitoring experiments, extracting and analyzing data


  7. Description and verification of the environment
     Typical needs:
     - How can I find suitable resources for my experiment?
     - How sure can I be that the actual resources will match their description?
     - What was the hard drive on the nodes I used six months ago?
     Workflow (from the slide's diagram):
     - Description of resources: Reference API (nodes description)
     - Selection and reservation of resources: OAR (driven by OAR commands and API requests, using OAR properties)
     - Verification of resources: g5k-checks (API requests)
     - High-level tools and users build on these interfaces

  8. Description of resources
     - Describing resources is needed to understand results
       - Detailed description on the Grid'5000 wiki
       - Machine-parsable format (JSON)
       - Archived (what was the state of the testbed 6 months ago?)
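To make the "machine-parsable format" point concrete, here is a minimal sketch of working with a JSON node description of the kind the Reference API exposes. The JSON shape, field names and values below are simplified assumptions for illustration, not the actual API schema:

```python
import json

# Hypothetical, simplified node description (the real Reference API
# exposes far more fields and a different structure).
node_description = json.loads("""
{
  "uid": "node-1",
  "architecture": {"nb_cores": 4},
  "storage_devices": [{"device": "sda", "model": "ExampleDisk 500GB"}],
  "network_adapters": [{"device": "eth0", "rate": 1000000000}]
}
""")

def disk_models(desc):
    """Answer questions like: what was the hard drive on this node?"""
    return [d["model"] for d in desc["storage_devices"]]

print(disk_models(node_description))
```

Because descriptions are archived, the same query can be run against the description as it was six months ago.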


  10. Verification of resources
     - Inaccuracies in resource descriptions have dramatic consequences:
       - Mislead researchers into making false assumptions
       - Generate wrong results, possibly leading to retracted publications!
     - They happen frequently: maintenance, broken hardware (e.g. RAM)
     - Our solution: g5k-checks
       - Runs at node boot (can also be run manually by users)
       - Retrieves the current description of the node from the Reference API
       - Acquires information on the node using OHAI, ethtool, etc.
       - Compares it with the Reference API
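The comparison step can be sketched as follows. This is an illustrative stand-in for the idea, not the real g5k-checks code; the field names and values are assumptions:

```python
# Reference description of the node (what the testbed claims).
reference = {"ram_gb": 16, "nb_cores": 4, "eth0_rate_mbps": 1000}

# What the booted node actually reports (as gathered via OHAI, ethtool,
# etc.); here, half the RAM is missing, e.g. a broken DIMM.
discovered = {"ram_gb": 8, "nb_cores": 4, "eth0_rate_mbps": 1000}

def find_mismatches(reference, discovered):
    """Return {field: (expected, actual)} for every differing field."""
    return {
        key: (reference[key], discovered.get(key))
        for key in reference
        if discovered.get(key) != reference[key]
    }

for field, (expected, actual) in find_mismatches(reference, discovered).items():
    print(f"MISMATCH {field}: reference says {expected}, node reports {actual}")
```

Flagging such a node before the experiment starts is exactly what prevents the "wrong results" scenario above.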

  11. Outline (section 2 of 4: Resources selection and reservation)

  12. Resources selection and reservation
     - The roots of Grid'5000 are in the HPC community, so the obvious idea was to use an HPC resource manager
     - OAR (developed in the context of Grid'5000): http://oar.imag.fr/
     - Supports resource properties (≈ tags)
       - Can be used to select resources (multi-criteria search)
       - Generated from the Reference API
     - Supports advance reservation of resources
       - In addition to the batch mode of typical HPC resource managers
       - Request resources at a specific time
       - On Grid'5000, used for a special policy: large experiments during nights and week-ends, experiment preparation during the day

  13. Using properties to reserve specific resources
     Reserving two nodes for two hours; nodes must have a GPU and power monitoring:
       oarsub -p "wattmeter='YES' and gpu='YES'" -l nodes=2,walltime=2 -I
     Reserving one node on cluster a, and two nodes with a 10 Gbps network adapter on cluster b:
       oarsub -l "{cluster='a'}/nodes=1+{cluster='b' and eth10g='Y'}/nodes=2,walltime=2"
     Advance reservation of 10 nodes on the same switch with support for Intel VT (virtualization):
       oarsub -l "{virtual='ivt'}/switch=1/nodes=10,walltime=2" -r '2014-11-08 09:00:00'
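The oarsub invocations above embed property filters as SQL-like strings. In scripted experiments, a small helper (hypothetical, not part of OAR) can compose such a filter from keyword arguments, which keeps the quoting consistent; the property names wattmeter and gpu are taken from the first example above:

```python
def oar_property_filter(**properties):
    """Build an oarsub -p expression like "wattmeter='YES' and gpu='YES'"."""
    return " and ".join(f"{name}='{value}'" for name, value in properties.items())

expr = oar_property_filter(wattmeter="YES", gpu="YES")
print(expr)  # wattmeter='YES' and gpu='YES'

# The result would then be passed to oarsub, e.g.:
# oarsub -p "wattmeter='YES' and gpu='YES'" -l nodes=2,walltime=2 -I
```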

  14. Visualization of usage

  15. Outline (section 3 of 4: Reconfiguring the testbed to meet experimental needs)

  16. Reconfiguring the testbed
     Typical needs:
     - How can I install $SOFTWARE on my nodes?
     - How can I add $PATCH to the kernel running on my nodes?
     - Can I run a custom MPI to test my fault tolerance work?
     - How can I experiment with that Cloud/Grid middleware?
     - Can I get a software environment that is stable over time for my experiment?

  17. Reconfiguring the testbed
     - Operating system reconfiguration with Kadeploy:
       - Provides a Hardware-as-a-Service Cloud infrastructure
       - Enables users to deploy their own software stack & get root access
       - Scalable, efficient, reliable and flexible: 200 nodes deployed in ~5 minutes (120 s with Kexec)
     - Customize the networking environment with KaVLAN:
       - Deploy intrusive middlewares (Grid, Cloud)
       - Protect the testbed from experiments
       - Avoid network pollution
       - Works by reconfiguring VLANs, with almost no overhead
       - Recent work: support for several interfaces
     - VLAN types (from the slide's diagram):
       - default VLAN: routing between Grid'5000 sites
       - global VLANs: all nodes connected at level 2, no routing
       - local, isolated VLAN: only accessible through an SSH gateway connected to both networks
       - routed VLAN: separate level-2 network, reachable through routing

  18. Creating and sharing Kadeploy images
     - Avoid manual customization:
       - Easy to forget some changes
       - Difficult to describe
       - The full image must be provided
       - Cannot really serve as a basis for future experiments (similar to binary vs source code)
     - Kameleon: reproducible generation of software appliances
       - Uses recipes (high-level description)
       - Persistent cache to allow re-generation without external resources (Linux distribution mirror): self-contained archive
       - Supports Kadeploy images, LXC, Docker, VirtualBox, qemu, etc.
       - http://kameleon.imag.fr/
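The persistent-cache idea can be sketched in a few lines: the first build fetches external resources and stores them, and later rebuilds are served entirely from the cache, so the appliance can be regenerated without network access. This illustrates the concept only; it is not Kameleon's actual code, and the URL and fetcher are made up:

```python
def cached_fetch(url, cache, fetcher):
    """Return the resource for `url`, downloading it only on a cache miss."""
    if url not in cache:
        cache[url] = fetcher(url)
    return cache[url]

cache = {}
downloads = []

def fake_fetcher(url):          # stands in for a real HTTP download
    downloads.append(url)
    return f"contents of {url}"

# First build: hits the (fake) network and populates the cache.
cached_fetch("http://mirror.example/base.tar.gz", cache, fake_fetcher)
# Rebuild: served entirely from the cache, no new download.
cached_fetch("http://mirror.example/base.tar.gz", cache, fake_fetcher)

print(len(downloads))  # 1
```

Shipping the cache alongside the recipe is what makes the archive self-contained and the build reproducible.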

  19. Changing experimental conditions
     - Reconfigure experimental conditions with Distem:
       - Introduce heterogeneity in a homogeneous cluster (e.g. per-core CPU performance limits)
       - Emulate complex network topologies
     - http://distem.gforge.inria.fr/
     [Figure: example topology of five virtual nodes with per-link bandwidth and latency settings, e.g. 1 Mbps / 30 ms, 100 Mbps / 3 ms, and per-core CPU performance limits]
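When a link is emulated with a given bandwidth and latency, a simple first-order model for the time to transfer a message is latency + size / bandwidth. A quick sketch, using illustrative numbers of the kind shown in the slide's topology (this is a back-of-the-envelope model, not how Distem computes anything):

```python
def transfer_time_s(size_bytes, bandwidth_bps, latency_s):
    """One-way transfer time over an emulated link, ignoring protocol overhead."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# 1 MB over an emulated 1 Mbps / 30 ms link:
t = transfer_time_s(1_000_000, 1_000_000, 0.030)
print(f"{t:.2f} s")  # 8.03 s
```

Such a model is useful as a sanity check that the emulated conditions behave as configured.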

  20. Outline (section 4 of 4: Monitoring experiments, extracting and analyzing data)

  21. Monitoring experiments
     Goal: enable users to understand what happens during their experiment
     - Power consumption
     - CPU, memory, disk
     - Network backbone
     - Internal networks

  22. Kwapi: a new framework to monitor experiments
     - Initially designed as a power consumption measurement framework for OpenStack, then adapted to Grid'5000's needs and extended
     - For energy consumption and network traffic
     - Measurements taken at the infrastructure level (SNMP on network equipment, power distribution units, etc.)
     - High frequency (aiming at 1 measurement per second)
     - Data visualized using a web interface
     - Data exported as RRD, HDF5 and via the Grid'5000 REST API
     [Figure: global power consumption (W) from Jan 29 to Feb 19, 2015, up to ~8000 W, distinguishing night/weekend from weekday usage]
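At one measurement per second, long-term storage formats such as RRD typically hold downsampled series. A minimal sketch of that aggregation step, turning 1 Hz power samples into per-minute averages (the sample data is made up for illustration, and this is not Kwapi's actual code):

```python
def per_minute_averages(samples_w):
    """samples_w: power readings in watts, one per second."""
    return [
        sum(samples_w[i:i + 60]) / len(samples_w[i:i + 60])
        for i in range(0, len(samples_w), 60)
    ]

# Two minutes of fake data: 60 s at 3000 W, then 60 s at 5000 W.
samples = [3000.0] * 60 + [5000.0] * 60
print(per_minute_averages(samples))  # [3000.0, 5000.0]
```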
