

SLIDE 1

PlanetLab VINI: A Virtualized Network Infrastructure

Marc E. Fiuczynski, Ph.D.
Princeton University, Research Scholar
PlanetLab Consortium, R&D Staff Member
(mef@cs.princeton.edu)

SLIDE 2

What is PlanetLab?

  • Consortium: Academic, Government, Industry
    – Formally formed January 2004, hosted by Princeton U.
    – Several hundred member universities
    – HP and Intel as founding industrial members
      • AT&T, France Telecom, Polish Telecom, Google, …
    – United States Government funded (NSF and DARPA)

SLIDE 3

What is PlanetLab?

  • Facility: Planetary-scale “overlay” & “underlay” network

– 800+ Linux-based servers at 400+ sites in 40+ countries

SLIDE 4

PlanetLab Facility Today

  • 800+ servers at 400+ sites in 40+ countries
  • Co-located throughout the world at universities and companies
  • Co-located at network crossroads (Internet2, RNP, CERNET, …)
SLIDE 5

What is PlanetLab?

  • Research Community: Distributed Sys. & Networking

– Researchers can get a set of “virtual machines” across these servers (a SLICE)
– In a SLICE, researchers can deploy and evaluate …
  • … distributed systems services and applications: “The next Internet will be created as an overlay in the current one”
  • … network architectures and protocols: “The new Internet will be created in parallel next to the current one”

SLIDE 6

Example Network Services

  • Scalable Large-File Transfer: CoBlitz—Princeton, LoCI—Tennessee
  • Content Distribution: Coral—NYU, CoDeeN—Princeton, CobWeb—Cornell
  • Distributed Hash Tables: OpenDHT—UC Berkeley; Chord-MIT
  • Routing Overlays: I3 Internet Indirection Infrastructure—UC Berkeley
  • Multicast Delivery Nets: End System Multicast—CMU, Tmesh-U. Michigan
  • Serverless Email: ePOST—Rice University
  • Publish-Subscribe News Access: CorONA—Cornell
  • Robust DNS Resolution: CoDNS—Princeton, CoDoNs—Cornell
  • Mobile Access: DHARMA—U. of Pennsylvania
  • Location/Anycast Services: OASIS—NYU, Meridian—Cornell
  • Internet Measurement: ScriptRoute—U. of Maryland
  • The above services communicate with >1M real users and transmit ~4TB of data per day

SLIDE 7

What is PlanetLab?

  • Software Platform: to create private PlanetLab networks

– Software, called MyPLC, is open source
– Manages a set of (remote) machines
– Manages distributed virtualization (SLICES) across machines
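The slice abstraction above can be sketched as a toy model: one virtual machine per node, grouped under a slice name. This is an illustrative sketch only, not the real MyPLC API (which is an XML-RPC service); all hostnames and the slice name are made up.

```python
# Toy model of PlanetLab's "slice" abstraction: a named set of virtual
# machines, one per participating node. Illustrative sketch only; NOT the
# real MyPLC API. Hostnames and the slice name are hypothetical.

class Node:
    def __init__(self, hostname):
        self.hostname = hostname
        self.vms = {}          # slice name -> virtual machine (here, a dict)

class PlanetLab:
    def __init__(self, hostnames):
        self.nodes = {h: Node(h) for h in hostnames}
        self.slices = {}       # slice name -> list of hostnames

    def create_slice(self, name, hostnames):
        """Instantiate one VM per node: distributed virtualization."""
        for h in hostnames:
            self.nodes[h].vms[name] = {"slice": name, "state": "running"}
        self.slices[name] = list(hostnames)
        return self.slices[name]

plc = PlanetLab(["node1.example.org", "node2.example.org"])
plc.create_slice("princeton_iias", ["node1.example.org", "node2.example.org"])
```

The point of the model: creating a slice touches every machine, which is exactly the management burden MyPLC automates.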

SLIDE 8

PlanetLab Networks

  • Global PlanetLab (operated by the PlanetLab Consortium)
  • Small scale (< 5 sites)

– Intel (Wireless PlanetLab)
– NYU (Medical PlanetLab)
– U. Melbourne (e-science PlanetLab)
– EPFL Switzerland (p2p PlanetLab)
– … others we don’t know about …

  • PlanetLab-EU (EU federated with the “global” PlanetLab)
  • PlanetLab-JP (Japan federated with the “global” PlanetLab -- ‘08)
  • PlanetLab GOLD (Private network to foster commercial tech. transfer)
  • OneLab (Network for European Union Research Institutions)
  • TPLab (Polish Telco network evaluating mature services for commercial use)
  • EverLab (Network for European and Israeli Research Institutions)
  • CoreLab (Private network for Japanese Research Institutions)
  • VINI (US Network for layer-2+ Research)
SLIDE 9

Polish Telecom PlanetLab

  • 11 Sites in Poland
  • 400Mbps - 2Gbps per site
  • CDN deployment (CoBlitz) serving large files to TP broadband customers
  • Large files come from Polish Web portals:
    – Video blogs / news
    – Podcasts / vodcasts
    – Games / game patches
SLIDE 10

EverLab

SLIDE 11

CoreLab

Collaborative Overlay Research Environment

  • Overlay testbed based on a “Private PlanetLab”; provisions resources for mission-critical services
  • Features we would like to have:
    – Custom hardware to optimize overlay forwarding
    – PoP/core collocation (nodes “inside” the network)
  • Target overlay research:
    – Not just distributed-system apps; more on network-core architectures
  • Utilize both private & public environments:
    – Local vs. global / provisioned vs. best-effort
  • 10 sites, 52 servers; multi-homed
  • Sites: U. Tokyo, Tohoku U., Sapporo Medical U., NII, Kyutech, Hiroshima U., Kochi-tech, Osaka U., NICT Koganei, NICT Otemachi

[Map: nodes in Sapporo, Sendai, Tsukuba, Tokyo, Nagano, Kanazawa, Nagoya, Osaka/Keihanna, Kochi, Okayama, Kitakyushu, Fukuoka]

SLIDE 12

VINI: PlanetLab for layer 3 research

  • VINI is a…

– Software Platform: based on PlanetLab

  • New kernel to support improved network virtualization
  • Tools to instantiate virtual topologies

– Facility: Private PlanetLab with its own nodes on NLR and the “new” Internet2 (I2)

[Figure: VINI backbone nodes peering via BGP]

SLIDE 13

VINI

  • OBJECTIVE: Develop a strategy for continually reinventing networking architectures and protocols for the new Internet
  • GOAL: Enable deployment studies of new networking ideas in real networks
  • APPROACH: “PlanetLab + Layer 2”
    – Implement layer 3, etc., in your slice

SLIDE 14

Challenges of today’s Internet

  • Security
    – known vulnerabilities lurking in the Internet
      • DDoS, worms, malware
    – addressing security comes at a significant cost
      • the U.S. federal government spent $5.4B in 2004
      • an estimated $50-100B was spent worldwide on security in 2004
  • Reliability
    – e-Commerce increasingly depends on a fragile Internet
      • much less reliable than the phone network (three vs. five 9’s)
      • risky to use the Internet for mission-critical operations
      • a barrier to ubiquitous VoIP
    – an issue of ease-of-use for everyday users

SLIDE 15

Challenges (cont)

  • Scale & Diversity
    – the whole world is becoming networked
      • sensors, consumer electronic devices, embedded processors
    – assumptions about edge devices (hosts) no longer hold
      • connectivity, power, capacity, mobility, …
  • Performance
    – scientists have significant bandwidth requirements
      • each e-science community covets its own wavelength(s)
    – purpose-built solutions are not cost-effective
      • being on the “commodity path” makes an effort sustainable
SLIDE 16

Two Paths

  • 1. Incremental

– apply point-solutions to the current architecture

  • 2. Clean-Slate

– replace the Internet with a new network architecture

  • We can’t be sure the first path will fail, but…

– point-solutions result in increased complexity

  • making the network harder to manage
  • making the network more vulnerable to attacks
  • making the network more hostile to new applications

– architectural limits may lead to a dead-end

SLIDE 17

Architectural Limits

  • Minimize trust assumptions

– the Internet originally viewed network traffic as fundamentally cooperative, but should view it as adversarial

  • Enable competition

– the Internet was originally developed independent of any commercial considerations, but today the network architecture must take competition and economic incentives into account

  • Allow for edge diversity

– the Internet originally assumed host computers were connected to the edges of the network, but host-centric assumptions are not appropriate in a world with an increasing number of sensors and mobile devices

SLIDE 18

Limits (cont)

  • Design for network transparency

– the Internet originally did not expose information about its internal configuration, but there is value to both users and network administrators in making the network more transparent

  • Enable new network services

– the Internet originally provided only a best-effort packet delivery service, but there is value in making processing capability and storage capacity available in the middle of the network

  • Integrate with optical transport

– the Internet originally drew a sharp line between the network and the underlying transport facility, but allowing bandwidth aggregation and traffic engineering to be first-class abstractions has the potential to improve efficiency and performance

SLIDE 19

Barriers to Second Path

  • Internet has become ossified
    – no competitive advantage to architectural change
    – no obvious deployment path
  • Inadequate validation of potential solutions
    – simulation models too simplistic
    – little or no real-world experimental evaluation
  • Testbed dilemma
    – production testbeds: real users but incremental change
    – research testbeds: radical change but no real users

SLIDE 20

Recommendation from NSF WG

It is time for the research community, federal governments, and the commercial sector to jointly pursue the second path. This involves experimentally validating new network architectures, and doing so in a sustainable way that fosters widespread deployment.

SLIDE 21

Why Now?

  • Active research community
    – scores of architectural proposals
    – ready to step up to the challenge of making it real
  • Enabling technologies
    – OS virtualization and interposition mechanisms
    – overlay networks are maturing
    – high-speed data pipes in the core
    – fast network processors and FPGAs
  • Infrastructure exists
    – PlanetLab (as a starting point)
    – high-speed, geographically dispersed networks serving “real” users
      • National Lambda Rail (NLR), Internet2 in the United States
      • GIGA in Brazil
      • JGN2 in Japan
SLIDE 22

Next Step: Meta Testbed

  • Goals

– support experimental validation of new architectures

  • simultaneously support real users and clean slate designs
  • allow a thousand flowers to bloom

– provide plausible deployment path

  • Key ideas

– virtualization

  • multiple architectures on a shared infrastructure
  • shared management costs

– opt-in on a per-user / per-application basis

  • attract real users
  • demand drives deployment / adoption
SLIDE 23

VINI: Our Meta Testbed approach

  • Infrastructure
    – PlanetLab provides an “access network” with global reach
      • user desktops run a proxy that allows them to opt in
      • treat the nearby PlanetLab node as an ingress router
    – NLR/I2 provides a high-speed backbone in the United States
      • populate with programmable routers
      • extend the slice abstraction to these routers
  • Usage model
    – each architecture (service) runs in its own slice
    – two modes of use
      • short-term experiments
      • long-running stable architectures and services
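The opt-in step can be sketched as follows: the desktop proxy picks the nearby PlanetLab node (lowest round-trip time) as its ingress router. The hostnames and RTT values below are hypothetical stand-ins; a real proxy would measure RTTs with probes rather than hard-code them.

```python
# Sketch of user opt-in: choose the closest PlanetLab node as the ingress
# router. RTTs are hard-coded stand-ins for illustration; a real proxy
# would measure them (e.g., by pinging candidate nodes).
def pick_ingress(rtts_ms):
    """Return the hostname with the smallest measured RTT."""
    return min(rtts_ms, key=rtts_ms.get)

measured = {                                   # hypothetical measurements
    "planetlab1.cs.princeton.edu": 12.0,
    "planetlab2.csail.mit.edu": 25.0,
    "planetlab1.inria.fr": 95.0,
}
ingress = pick_ingress(measured)               # nearest node becomes ingress
```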
SLIDE 24

PlanetLab Node Software Architecture

[Figure: Slice Manager (SM) and virtualization software on x86 server hardware, hosting multiple slices]

SLIDE 25

Slices

SLIDE 28

Extending Slices to a VINI testbed


SLIDE 31

User Opt-in

[Figure: a client behind a NAT and wireless link opting in, reaching a server through the testbed]

SLIDE 32

Internet in a Slice (IIAS)

[Figure: a PlanetLab VM running XORP (routing protocols) in user space over virtual interfaces vif0, vif1, vif2; the kernel holds the IPv4 forwarding table, filters and shapers, and E-GRE tunnels]

  • XORP in a Network Container
    – Adds routes to a copy of the kernel IPv4 forwarding table
    – Kernel forwards packets between virtual interfaces
  • Filters and shapers
    – Add delay and loss, constrain bandwidth
  • Virtual interfaces
    – Appear as Ethernet devices in a slice
    – Reduce MTU for tunneling
  • E-GRE tunnels
    – Hack standard GRE tunnels to preserve MAC headers

SLIDE 33

IIAS Performance

  • Preliminary benchmarks on 3GHz Xeon-based PCs
  • User-level-only solution (presented in SIGCOMM ’06):
    – ~20Kpps IPv4 forwarding through Click in user space
    – ~200Mbps maximum bandwidth
  • New kernel support for virtual network containers:
    – ~500Kpps IPv4 forwarding through a per-slice Network Container
    – Achieves 1Gbps link rate (with plenty of CPU to spare)
    – Comparison point: native Linux forwards at ~750Kpps
  • Summary
    – Get: a big performance improvement over the original IIAS
    – Give up: the ability to modify the data plane
      • Stuck with kernel IPv4/IPv6 functionality
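A back-of-envelope check ties the packet rates above to the bandwidth figures: throughput is packets/second times packet size times 8 bits. The packet sizes below are assumptions chosen to show the numbers are mutually consistent, not measurements from the talk.

```python
# Back-of-envelope: bandwidth (Mbps) = pps * bytes/packet * 8 bits / 1e6.
# Packet sizes are illustrative assumptions, not measured values.
def mbps(pps, packet_bytes):
    return pps * packet_bytes * 8 / 1e6

user_level = mbps(20_000, 1250)   # ~20Kpps of 1250-byte packets -> ~200 Mbps
in_kernel  = mbps(500_000, 250)   # ~500Kpps of 250-byte packets -> ~1 Gbps
```

So the kernel path saturates a 1Gbps link even with small packets, while the user-level path needs large packets just to reach 200Mbps.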
SLIDE 34

Changing the Data Plane

[Figure: the IIAS VM with a user-space Click forwarding path between XORP (routing protocols) and the virtual interfaces vif0, vif1, vif2; filters, shapers, and E-GRE tunnels remain in the kernel]

  • Idea: leverage Click in user space
  • The Click configuration is simpler:
    – Only implements forwarding and data-plane changes
    – Interacts with virtual devices via pcap and raw sockets
  • No performance results yet
    – Open question: does pcap outperform UDP sockets?
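The core of such a user-space data plane is a longest-prefix-match forwarding table mapping destination prefixes to virtual interfaces. The sketch below shows only that lookup logic; the prefixes and interface names are made up, and a real deployment would express this as Click elements reading packets via pcap rather than Python.

```python
# Sketch of a user-space forwarding table: longest-prefix match (LPM)
# from destination address to outgoing virtual interface. Prefixes and
# vif names are hypothetical; a real slice would run Click over pcap.
import ipaddress

class ForwardingTable:
    def __init__(self):
        self.routes = []   # list of (network, out_interface) pairs

    def add_route(self, prefix, vif):
        self.routes.append((ipaddress.ip_network(prefix), vif))

    def lookup(self, dst):
        """Return the vif of the most specific matching route, else None."""
        addr = ipaddress.ip_address(dst)
        best = max((net for net, _ in self.routes if addr in net),
                   key=lambda n: n.prefixlen, default=None)
        if best is None:
            return None
        return next(vif for net, vif in self.routes if net == best)

fib = ForwardingTable()
fib.add_route("10.0.0.0/8", "vif1")    # coarse route
fib.add_route("10.1.0.0/16", "vif2")   # more specific route wins for 10.1.x.x
```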

SLIDE 35

Architectural Thrusts

  • Built-in security

– worm and virus containment, DDoS prevention,…

  • Knowledge/Information/Decision Plane

– manageability, fault & anomaly diagnosis, reliability, …

  • Network service infrastructure

– functionality, evolvability, reliability, heterogeneity,…

  • Naming and Addressing

– mobility, ease-of-use, reliability, evolvability,…

  • Global sensor network

– scalability, heterogeneity, mobility,…

  • e-Science infrastructure

– performance, manageability, ease-of-use, …

  • Optical (MPLS, vLAN, etc.) integration

– performance, evolvability,…

SLIDE 36

Success Scenarios

  • Create a new network architecture
    – convergence of multiple architectural visions
    – approach to deployment succeeds
    – ready for commercialization
  • Meta testbed becomes the new architecture
    – multiple architectures co-exist
    – creates a climate of continual re-invention
  • Gain new insights and architectural clarity
    – ideas retro-fitted into today’s architecture
    – pursuing the second path improves the odds of the first path succeeding

SLIDE 37

Summary

  • PlanetLab provides a blueprint for introducing novel technologies into the Internet
  • VINI is an enhancement to the PlanetLab software to enable deployment studies of new networking ideas in real networks
  • More information / contacts
    – PlanetLab: http://www.planet-lab.org (mef@cs.princeton.edu)
    – VINI: http://www.vini-veritas.net