Collaborative Science Research: Driving Network Innovation at ESnet – PowerPoint PPT Presentation



SLIDE 1

Collaborative Science Research: Driving Network Innovation at ESnet

Presentation to Cisco

Inder Monga
Area Lead, Research and Services
Energy Sciences Network
Lawrence Berkeley National Lab, Berkeley

SLIDE 2

Lawrence Berkeley National Laboratory U.S. Department of Energy | Office of Science

Agenda

  • Introduction to ESnet
  • How ESnet delivers its mission
  • Looking beyond the horizon

2 Inder Monga ESnet

SLIDE 3

President’s National Objectives for DOE

Energy to Secure America’s Future

  • Quickly implement the Economic Recovery Package: create millions of new green jobs and lay the foundation for the future
  • Restore science leadership: strengthen America’s role as the world leader in science and technology
  • Reduce GHG emissions: drive emissions 20 percent below 1990 levels by 2020
  • Enhance energy security: save more oil than the U.S. currently imports from the Middle East and Venezuela combined within 10 years
  • Enhance nuclear security: strengthen non-proliferation activities, reduce global stockpiles of nuclear weapons, and maintain safety and reliability of the U.S. stockpile

First Principle: Pursue material and cost-effective measures with a sense of urgency

SLIDE 4

President’s National Objectives for DOE

Energy to Secure America’s Future

DOE’s Strategic Framework: Science and Discovery at the Core

ESnet exists solely to enable DOE’s science and discovery

  • Single facility linking all 6 disciplines with their global collaborators

Science → Discovery → Innovation → lower GHG emissions | clean, secure energy | economic prosperity | national security

ESnet Mission: Provide DOE with interoperable, effective, and reliable communications infrastructure and leading-edge network services in support of the agency’s missions.

SLIDE 5

The Energy Sciences Network (ESnet) A Department of Energy Facility

  • Tier-1 ISP
  • Science Data Network
  • National fiber footprint
  • International collaborations
  • Multiple 10G waves
  • Distributed team of 35

SLIDE 6

Network Planning Process

1) Exploring the plans and processes of the major stakeholders (the Office of Science programs, scientists, collaborators, and facilities):

1a) Data characteristics of scientific instruments and facilities

  • What data will be generated by instruments and supercomputers coming on-line over the next 5-10 years?

1b) Examining the future process of science

  • How and where will the new data be analyzed and used – that is, how will the process of doing science change over 5-10 years?

2) Understanding all the Internet needs of DOE lab sites

  • Enterprise traffic profile (Web, Video, Email, SaaS…)

3) Observing current and historical network traffic patterns

  • What do the trends in network patterns predict for future network needs?

SLIDE 7

Science Process Evolution: From Vials to Visualization

Instruments and Experiments → Large Simulations → Distributed Data Gathering and Analysis → Collaboration and Sharing → Visualization → Verifiable Results

SLIDE 8

Computational and Data Intensive Science

Scientific data growing exponentially

  • Simulation systems and observational devices growing exponentially in capability
  • Data sets may be large, complex, dispersed, incomplete, and imperfect

Petabyte (PB) data sets common:

  • Climate modeling: estimates put the next IPCC data set in the 10s of petabytes
  • Genomics: JGI alone will have ~1 petabyte of data this year, doubling each year
  • Particle physics: the LHC is projected to produce 16 petabytes of data per year
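The genomics growth claim implies a steep trajectory. A minimal sketch, assuming simple annual doubling from ~1 PB as stated on the slide:

```python
# Projection implied by the slide (assumption: plain annual doubling
# from a ~1 PB starting point; not an official JGI forecast).
def jgi_projection_pb(years: int, start_pb: float = 1.0) -> float:
    """Data volume in petabytes after `years` of annual doubling."""
    return start_pb * 2 ** years

print(jgi_projection_pb(5))  # 32.0 PB five years out
```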

SLIDE 9

Large Science Requirements

Bandwidth – 100+ Gb/s core by 2012, 1 Tb/s on a few links by 2015
Reliability – 99.999% availability for large data centers

  • Large instruments depend on the network, 24 x 7, to accomplish their science

Global Connectivity - worldwide

  • Geographic reach sufficient to connect users and analysis systems to Science facilities

Services

  • Commodity IP is no longer adequate – guarantees are needed
  • Guaranteed bandwidth, traffic isolation, service delivery architecture compatible with Web Services / Grid / “Systems of Systems” application development paradigms
  • Implicit requirement is that the service not have to pass through site firewalls, which cannot handle the required bandwidth (frequently 10 Gb/s)

  • Visibility into the network end-to-end
  • Science-driven authentication infrastructure (PKI)

Assisting users in using the network effectively

  • Performance is always from the application’s perspective
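The 99.999% availability target above is demanding. A quick back-of-envelope check (not from the slide) of what "five nines" allows:

```python
# Allowed annual downtime at a given availability target.
def annual_downtime_minutes(availability: float) -> float:
    """Minutes of permitted downtime per year at a given availability."""
    return (1.0 - availability) * 365.25 * 24 * 60

print(f"{annual_downtime_minutes(0.99999):.1f} minutes/year")  # ~5.3
```

At five nines, a large instrument's network connection may be down only a few minutes per year, which is why redundant edge devices and backup circuits appear throughout this deck.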

SLIDE 10

Planning for Growth – 80% YOY Growth

Log plot of ESnet monthly accepted traffic (terabytes/month), January 1990 – November 2010. Milestones:

  • Aug 1990: 100 GBy/mo
  • Oct 1993: 1 TBy/mo
  • Jul 1998: 10 TBy/mo
  • Nov 2001: 100 TBy/mo
  • Apr 2006: 1 PBy/mo
  • Nov 2010: 10 PBy/mo

  • ESnet traffic increases by 10X every 47 months, on average
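The two growth figures on this slide are consistent with each other, which can be checked directly:

```python
import math

# Sanity check: 80% year-over-year growth vs. "10X every 47 months".
def months_to_10x(yoy_growth: float) -> float:
    """Months needed to grow 10x at a constant year-over-year rate."""
    monthly_factor = (1 + yoy_growth) ** (1 / 12)  # equivalent monthly growth
    return math.log(10) / math.log(monthly_factor)

print(round(months_to_10x(0.80)))  # 47 — matching the observed trend
```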

SLIDE 11

FNAL (LHC Tier 1 site) Outbound Traffic (courtesy Phil DeMar, Fermilab)

Small Number of Large Flows Dominate

  • Red bars = top 1000 site-to-site workflows; orange bars = virtual circuit flows (terabytes/month accepted traffic)
  • Overall ESnet traffic tracks the very large science use of the network
  • Starting in mid-2005, a small number of large data flows dominate the network traffic
  • Note: as the fraction of large flows increases, the overall traffic increases become more erratic – the total tracks the large flows

SLIDE 12

Keep It Simple but Smart principles

  • Managing a nationwide network with < 20 engineers
  • Automation and tools a huge part of the requirements
  • Provisioning, troubleshooting, monitoring, customer support, etc.
  • Network Engineer + Software Developer combos!

(Map scale for comparison: 2750 miles / 4425 km and 1625 miles / 2545 km – roughly Moscow, Cairo, Dublin.)

SLIDE 13

Network Services for Science

  • High-speed data transfer – solving the end-to-end problem: Fasterdata (http://fasterdata.es.net)
  • Hybrid architecture – ESnet4, Advanced Networking Initiative (100G)
  • Automated network resource management – dynamic, virtual circuits for science: OSCARS (http://www.es.net/oscars/)
  • Distributed network monitoring and troubleshooting: perfSONAR (http://perfsonar.net)

SLIDE 14

Hybrid Architecture for ESnet4

  • ESnet IP core (single wave)
  • ESnet Science Data Network (SDN) core (multiple waves)
  • Metro area rings (multiple waves)

Map legend: ESnet sites; ESnet core network connection points; other IP networks; circuit connections to other science networks (e.g. USLHCNet); ESnet sites with redundant ESnet edge devices (routers or switches). Hubs: Seattle, San Francisco/Sunnyvale, Chicago, Atlanta, Washington, New York.

The IP and SDN networks are fully interconnected, and the link-by-link usage management implemented by OSCARS provides policy-based sharing of each network by the other in case of failures.

SLIDE 15

On-Demand Secure Circuit and Advanced Reservation System (OSCARS)

  • Original design goals:
    – User-requested bandwidth between specified points for a specific period of time
      • User request is via Web Services or a Web browser interface
      • Provide traffic isolation
    – Provide the network operators with a flexible mechanism for traffic engineering
      • E.g. controlling how the large science data flows use the available network capacity
  • Learning through customers’ experience:
    – Flexible service semantics
      • E.g. allow a user to exceed the requested bandwidth if the path has idle capacity – even if that capacity is committed (now)
    – Rich service semantics
      • E.g. provide for several variants of requesting a circuit with a backup, the most stringent of which is a guaranteed backup circuit on a physically diverse path (2011)
  • Support the inherently multi-domain environment of large-scale science
    – Interoperate with similar services in other network domains in order to set up cross-domain, end-to-end virtual circuits
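The core of such a request – bandwidth between two endpoints for a time window – could be modeled as a small record. A minimal sketch; the field names and endpoint strings are illustrative assumptions, not the actual OSCARS request schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical illustration only: these field names are NOT the real
# OSCARS Web Services schema, just the shape of the design goal above.
@dataclass
class CircuitReservation:
    src: str             # ingress endpoint (illustrative naming)
    dst: str             # egress endpoint
    bandwidth_mbps: int  # guaranteed rate for the virtual circuit
    start: datetime      # reservation window start
    end: datetime        # reservation window end

req = CircuitReservation(
    src="site-a-rtr:xe-0/0/0",
    dst="site-b-rtr:xe-1/0/0",
    bandwidth_mbps=5000,
    start=datetime(2011, 1, 10, 8, 0),
    end=datetime(2011, 1, 10, 20, 0),
)
print(req.bandwidth_mbps)  # 5000
```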

SLIDE 16

Environment of Science is Inherently Multi-Domain

Inter-domain interoperability is crucial to serving science. In order to set up end-to-end circuits across multiple domains:

1. The domains exchange topology information containing at least potential VC ingress and egress points
2. A VC setup request (via the IDC protocol) is initiated at one end of the circuit and passed from domain to domain as the VC segments are authorized and reserved

Example end-to-end virtual circuit: user source at FNAL (AS3152) [US] → ESnet (AS293) [US] → GEANT (AS20965) [Europe] → DFN (AS680) [Germany] → DESY (AS1754) [Germany] → user destination. Each domain runs a local InterDomain Controller (IDC), and the VC setup request is relayed IDC to IDC. (Not all of the domains shown support the VC service.)
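The domain-to-domain setup pass can be sketched in a few lines. This is an illustrative toy, not the IDC protocol itself; the authorization callback is an invented stand-in for each domain's local policy:

```python
# Illustrative sketch of the setup pass described above (NOT the real
# IDC protocol; `authorize` stands in for each domain's local policy).
domains = ["FNAL", "ESnet", "GEANT", "DFN", "DESY"]

def setup_vc(path, authorize=lambda domain: True):
    """Relay the request domain to domain, reserving one segment each."""
    segments = []
    for domain in path:
        if not authorize(domain):   # any domain may reject the request
            return None             # no end-to-end circuit is formed
        segments.append(f"{domain}:segment")
    return segments                 # end-to-end VC = chain of segments

print(len(setup_vc(domains)))  # 5 segments, one per domain
```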

SLIDE 17

Network Mechanisms Underlying ESnet’s OSCARS

Best-effort IP traffic can use SDN, but under normal circumstances it does not, because the OSPF cost of SDN is very high.

Queueing (RSVP, MPLS, and LDP are enabled on internal interfaces):

  • Bandwidth-conforming VC packets are given MPLS labels and placed in the EF (high-priority) queue
  • Regular production traffic is placed in the BE (standard, best-effort) queue
  • Oversubscribed-bandwidth VC packets are given MPLS labels and placed in the Scavenger (low-priority) queue
  • Scavenger-marked production traffic is placed in the Scavenger queue

Layer 3 VC service: packets matching the reservation-profile IP flow-spec are filtered out (i.e. policy-based routing), “policed” to the reserved bandwidth, and injected into an explicit Label Switched Path (LSP).

Layer 2 VC service: packets matching the reservation-profile VLAN ID are filtered out (i.e. L2VPN), “policed” to the reserved bandwidth, and injected into an LSP.

The LSP between ESnet border (PE) routers is determined using topology information from OSPF-TE. The path of the LSP is explicitly directed to take the SDN network where possible. On the SDN, all OSCARS traffic is MPLS switched (layer 2.5).

OSCARS IDC modules: Notif., AuthN, PSetup, Coord, PCE, Topo, WS API, ResMgr, Lookup, AuthZ, Web.
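The queueing rules above reduce to a simple classification decision. A minimal sketch of that decision table (illustrative only, not router configuration):

```python
# Sketch of the queue-selection policy described above: map a packet's
# classification to one of the three queues (EF / BE / Scavenger).
def select_queue(is_vc: bool, in_profile: bool, scavenger_marked: bool) -> str:
    if is_vc:
        # VC traffic is policed: conforming packets get EF, excess gets Scavenger.
        return "EF" if in_profile else "Scavenger"
    # Production IP traffic: Scavenger if marked, otherwise best effort.
    return "Scavenger" if scavenger_marked else "BE"

print(select_queue(is_vc=True, in_profile=True, scavenger_marked=False))   # EF
print(select_queue(is_vc=True, in_profile=False, scavenger_marked=False))  # Scavenger
print(select_queue(is_vc=False, in_profile=False, scavenger_marked=False)) # BE
```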

SLIDE 18

OSCARS is a Production Service in ESnet

OSCARS is currently being used to support production traffic: ≈ 50% of all ESnet traffic is now carried in OSCARS VCs.

Operational virtual circuit (VC) support:

  • As of 11/2010, there are ~33 (up from 26 in 10/2009) long-term production VCs instantiated
  • 25 VCs supporting HEP: LHC T0–T1 (primary and backup) and LHC T1–T2
  • 3 VCs supporting Climate: NOAA Geophysical Fluid Dynamics Lab and Earth System Grid
  • 2 VCs supporting Computational Astrophysics: OptiPortal
  • 1 VC supporting Biological and Environmental Research: Genomics

Short-term dynamic VCs:

  • Between 1/2008 and 6/2010, there were roughly 5000 successful VC reservations
  • 3000 reservations initiated by BNL using TeraPaths
  • 900 reservations initiated by FNAL using LambdaStation
  • 700 reservations initiated using Phoebus*
  • 400 demos and testing (SC, GLIF, interoperability testing (DICE))

* A TCP path conditioning approach to latency hiding – http://damsl.cis.udel.edu/projects/phoebus/

Helped ESnet win Excellence.gov’s “Excellence in Leveraging Technology” award and InformationWeek’s 2009 “Top 10 Government Innovators” award

SLIDE 19

OSCARS Open-Source Software

(http://code.google.com/p/oscars-idc/)

The code base is undergoing its third rewrite (OSCARS v0.6):

  • Make it more modular and expose internal APIs
  • For example, the ability to plug and play your own PCE
  • Targeted to facilitate research collaborations
  • As the service semantics get more complex (in response to user requirements), focus on “complex, compound network services”
  • Defining “atomic” service functions and building mechanisms for users to compose these building blocks into custom services
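The "plug and play your own PCE" idea can be sketched as a swappable path-computation class. The interface below is an illustrative assumption, not the real OSCARS internal API:

```python
from collections import deque

# Hypothetical sketch of a pluggable Path Computation Engine (PCE):
# a trivial fewest-hop engine using breadth-first search. A researcher's
# custom PCE would expose the same compute() interface (assumed here).
class ShortestHopPCE:
    def compute(self, topology: dict, src: str, dst: str):
        """Return the fewest-hop path from src to dst, or None."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in topology.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no viable path

topo = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(ShortestHopPCE().compute(topo, "A", "D"))  # ['A', 'B', 'D']
```

A bandwidth- or time-constrained PCE would simply replace this class while the rest of the reservation workflow stays unchanged, which is the point of exposing the internal API.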

SLIDE 20

OSCARS Version 0.6 Software Architecture

  • Notification Broker – manage subscriptions; forward notifications
  • AuthN – authentication
  • Path Setup – network element interface
  • Coordinator – workflow coordinator
  • Path Computation Engine – constrained path computations
  • Topology Bridge – topology information management
  • Web Services API – manages external WS communications
  • Resource Manager – manage reservations; auditing
  • Lookup Bridge – lookup service
  • AuthZ* – authorization; costing

*Distinct data plane and control plane functions

External interfaces: Web browser user interface, perfSONAR services, other IDCs, and user apps, via SOAP + WSDL over http/https.

The lookup and topology services are now seconded to perfSONAR

SLIDE 21

Why does the Network seem so slow?

SLIDE 22

Importance of end-to-end network performance for science

Very large files and very large flows

  • 10s to 100s of GB
  • Single flow rates of 100s to 1000s of Mbps
  • Network latency from 10s of msec to over 300 msec

Packet loss must be essentially zero

  • Zero packet loss essential for multi-gigabit performance
  • Latency and packet loss interact in very unpleasant ways
  • Not true for commodity ISP / carrier networks

Large buffers are critical

  • Data center and LAN switch platforms typically have tiny interface buffers
  • When placed in the path of wide-area data transfers, these devices cause performance problems
  • Therefore, data-intensive science cannot simply be put onto commodity data center platforms
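The unpleasant interaction of latency and loss has a standard back-of-envelope form, the Mathis et al. TCP throughput bound (not on the slide, but it quantifies the claim): throughput ≤ MSS / (RTT · √loss).

```python
import math

# Mathis et al. upper bound on single-flow TCP throughput.
def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Approximate max TCP throughput in Mbps for a given MSS, RTT, and loss rate."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss)) / 1e6

# 1460-byte segments on a 100 ms cross-country path:
print(mathis_throughput_mbps(1460, 0.100, 1e-4))  # ~11.7 Mbps at 0.01% loss
print(mathis_throughput_mbps(1460, 0.100, 1e-6))  # ~117 Mbps at 0.0001% loss
```

A hundred-fold reduction in loss buys only ten times the throughput, and at 100 ms RTT even tiny loss rates cap a single flow far below multi-gigabit speeds, which is why "essentially zero" loss is the requirement.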

SLIDE 23

Where are common problems?

Typical path: S (source host) → source campus → regional network → backbone → NREN → destination campus → D (destination host)

  • Congested or faulty links between domains
  • Latency-dependent problems inside domains with small RTT

SLIDE 24

Local testing will not find all problems

Typical path: S (source host) → source campus → regional network → R&E backbone → regional network → destination campus → D (destination host)

  • Performance is good when RTT is < 20 ms
  • Performance is poor when RTT exceeds 20 ms

SLIDE 25

Network performance measurement infrastructure: perfSONAR

  • Multi-service, distributed infrastructure
  • Distributed, rapid troubleshooting and fault isolation
  • Latency and packet loss measurement
  • Collaboration
  • Deployed in a large number of research networks

http://weathermap.es.net

SLIDE 26

perfSONAR Architecture

Architectural layers (top to bottom): human user / client (e.g. part of an application system communication service manager) → performance GUI and event subscription service → service locator, topology aggregator, path monitor, and measurement archive(s) → measurement export services → measurement points (m1…m6), spread across network domains 1–3.

Examples:

  • Real-time end-to-end performance graph (e.g. bandwidth or packet loss vs. time)
  • Historical performance data for planning purposes
  • Event subscription service (e.g. end-to-end path segment outage)

  • The measurement points (m1…m6) are the real-time feeds from the network or local monitoring devices
  • The measurement export service converts each local measurement to a standard format for that type of measurement
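The measurement-export idea above is just normalization: turn a tool-specific reading into one standard record per measurement type. A minimal sketch; the field names are illustrative assumptions, not the perfSONAR schema:

```python
# Hypothetical sketch of "measurement export": normalize a local,
# tool-specific reading into a standard record (field names invented
# for illustration; NOT the actual perfSONAR data model).
def export_measurement(raw: dict) -> dict:
    return {
        "type": raw["kind"],              # e.g. "throughput", "loss"
        "value": float(raw["val"]),       # normalize to a number
        "unit": raw.get("unit", ""),
        "timestamp": raw["ts"],
        "source": raw["host"],
    }

rec = export_measurement(
    {"kind": "throughput", "val": "9400", "unit": "Mbps",
     "ts": "2010-11-01T12:00:00Z", "host": "pt1.example.net"}
)
print(rec["value"])  # 9400.0
```

Once every domain exports the same record shape, archives and GUIs in other domains can consume the data without knowing which local tool produced it, which is what makes the federation useful end to end.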

SLIDE 27

ESnet widely deploys perfSONAR

perfSONAR nodes are deployed next to all backbone routers and at all 10Gb-connected sites

  • 31 locations deployed
  • Full list of active services at http://www.perfsonar.net/activeServices/

Instructions on using these services for network troubleshooting:

  • http://fasterdata.es.net

Federated information is extremely useful to help debug a number of problems

  • The only tool that we have to monitor circuits end-to-end across the networks from the US to Europe

perfSONAR measurement points are deployed at dozens of R&E institutions in the US and more in Europe

  • See https://stats1.es.net/perfSONAR/directorySearch.html

The value of perfSONAR increases as it is deployed at more sites

SLIDE 28

Looking beyond the horizon

Lawrence Berkeley National Lab

SLIDE 29

  • Discovered 16 elements
  • Identified good and bad cholesterol
  • Confirmed the Big Bang and discovered dark energy
  • Turned windows into energy savers
  • Unmasked a dinosaur killer
  • Exposed the radon risk
  • Explained photosynthesis
  • Created the toughest ceramic
  • Pitted cool roofs against global warming
  • Given fluorescent lights their big break
  • Caught malaria in the act
  • Built a better battery
  • Preserved the sounds of yesteryear
  • Fabricated the smallest machines
  • Made appliances pull their weight
  • Brought safe drinking water to thousands
  • Created a pocket-sized DNA sampler
  • Revealed the secrets of the human genome
  • Redefined the causes of breast cancer
  • Given buildings an energy makeover
  • Supercharged the climate model
  • Derailed an ecological danger
  • Helped bring energy efficiency to China
  • Pioneered medical imaging
  • Brought the stars closer

A legacy of improving our lives and understanding the world around us

SLIDE 30

ARRA Advanced Networking Initiative (ANI)

ESnet received ~$62M in ARRA funds from DOE for an Advanced Networking Initiative to:

  • Build an end-to-end 100 Gbps prototype network
  • Handle proliferating data needs between the three DOE supercomputing facilities and the NYC international exchange point
  • Build a network testbed facility for researchers and industry

DOE is also funding $5M in network research using the testbed facility, with the goal of near-term technology transfer to the production ESnet network.

SLIDE 31

ARRA Magellan Initiative

Separately funded: $33 million for Magellan, an associated DOE cloud computing project that will utilize the 100 Gbps network infrastructure

  • Establish a nationwide scientific mid-range distributed computing and data analysis testbed
  • Two sites (NERSC / LBNL and ALCF / ANL) planned
  • Multiple 10’s of teraflops and multiple petabytes of storage, as well as appropriate cloud software tuned for moderate concurrency
  • See http://www.nersc.gov/nusers/systems/magellan/ and http://magellan.alcf.anl.gov/

SLIDE 32

Prototype 100G Topology

(Network map: prototype 100G topology, with the two Magellan sites as endpoints.)

SLIDE 33

Testbed Overview – An Open Facility

Progression:

  • Start out as a tabletop testbed, then move out to the wide area when 100 Gbps is available

Capabilities:

  • Ability to support end-to-end networking, middleware, and application experiments, including interoperability testing of multi-vendor 100 Gbps network components
  • Dynamic network provisioning
  • Plan to acquire dark fiber on a portion of the testbed footprint to enable hybrid (layer 0–3) network research
  • Use virtual machine technology to support protocol and middleware research
  • Detailed monitoring so researchers will have access to all possible monitoring data from the network devices

SLIDE 34

Tabletop: A layered view

(Diagram: the tabletop testbed in layers – Layer 0/1 (WDM/optical), Layer 2/OpenFlow, Layer 3, and compute/storage (VMs, IO testers, app hosts, monitoring hosts), connected by WDM, 10GE, and 1GE links, with research applications on top and monitoring spanning all layers.)

SLIDE 35

Sample Configuration: Multi-Domain Multi-Layer Protection Testing

(Diagram: three domains – North, South, East – each with WDM nodes (north-wdm1/2, south-wdm1/2, east-wdm1/2), OpenFlow switches, and IO testers, plus a production link.)

  • Test inter-domain optical protection schemes
  • Test inter-domain higher-layer (> 1) protection schemes with IO testers

SLIDE 36

ESnet Research

Solving hard problems collaboratively

SLIDE 37

Atomic and Composite Network Services Architecture

Atomic services (AS1–AS4) are used as building blocks for composite services, e.g. S2 = AS1 + AS2, S3 = AS3 + AS4, and S1 = S2 + S3. As service abstraction increases, service usage simplifies.

Network service plane:

  • Service templates pre-composed for specific applications, or customized by advanced users
  • Atomic services used as building blocks for composite services
  • Exposed through a Network Services Interface over the multi-layer network data plane

Examples: a backup circuit – be able to move a certain amount of data in or by a certain time; monitor data sent and/or potential to send data; dynamically manage priority and allocated bandwidth to ensure deadline completion.
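The composition idea above can be sketched as plain function composition. The atomic-service names are borrowed from the next slide (find path, connect, protect); the request/response dictionaries and returned values are invented for illustration:

```python
# Illustrative sketch of composing "atomic" services into a composite
# service (toy code; the dict fields and values are invented).
def compose(*services):
    """Chain atomic services: each one enriches the request and passes it on."""
    def composite(request):
        for svc in services:
            request = svc(request)
        return request
    return composite

# Three hypothetical atomic services as pure functions:
def find_path(req): return {**req, "path": ["pe1", "sdn-core", "pe2"]}
def connect(req):   return {**req, "circuit": "vc-42"}
def protect(req):   return {**req, "backup": "vc-42b"}

# A composite "resilient connection" built from the atomic blocks:
resilient_connection = compose(find_path, connect, protect)
result = resilient_connection({"src": "site-a", "dst": "site-b"})
print(result["backup"])  # vc-42b
```

The user sees one service ("resilient connection") while the provider keeps a small, reusable set of atomic functions, which is the abstraction-versus-simplicity trade the slide describes.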

SLIDE 38

Examples of Composite Network Services

  • LHC: resilient high-bandwidth guaranteed connection (1+1 protection)
  • Protocol testing: constrained-path connection
  • Reduced-RTT transfers: store-and-forward connection

Atomic building blocks: measure, monitor, topology, find path, connect, protect.

SLIDE 39

Atomic Network Services Currently Offered by ESnet OSCARS

Via the Network Services Interface over the multi-layer network data plane:

  • Connection – creates virtual circuits (VCs) within a domain as well as multi-domain end-to-end VCs
  • Path finding – determines a viable path based on time and bandwidth constraints
  • Monitoring – provides critical VCs with production-level support

SLIDE 40

Multi-layer networking

SLIDE 41

Thank You!

Contact: imonga [at] es.net
