The Energy Sciences Network
BESAC, August 2004

William E. Johnston, ESnet Dept. Head and Senior Scientist
Mary Anne Scott, Program Manager, Advanced Scientific Computing Research
- R. P. Singh, Federal Project Manager
- Michael S. Collins, Stan
– BES represented by Nestor Zaluzec, ANL and Jeff Nichols, ORNL
[Diagram: ESnet site connectivity — gateway, border, core, and peering routers, plus DNS.]
[Diagram: ESnet optical ring — routers (RTR) interconnected by 10GE links.]
Optical channels (“lambdas”) are usually used in bi-directional pairs; some carry traffic with no framing (e.g. for digital HDTV).
A ring topology network is inherently reliable: all single-point failures are mitigated by routing traffic in the other direction around the ring.
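The single-failure claim above can be checked with a small sketch. This is purely illustrative (the ring order and routing logic are assumptions, not ESnet's actual routing code): between any two hubs on a ring there are two edge-disjoint directions, so cutting any one link still leaves a path.

```python
# Illustrative model of a hub ring; hub names follow the ESnet backbone map,
# but the ordering and routing logic here are assumptions for demonstration.

RING = ["SEA", "SNV", "ELP", "ALB", "ATL", "DC", "NYC", "CHI"]

def paths(src: str, dst: str, failed_link: frozenset = frozenset()):
    """Yield the clockwise and counter-clockwise paths that avoid a failed link."""
    i, j = RING.index(src), RING.index(dst)
    n = len(RING)
    for step in (1, -1):          # try clockwise, then counter-clockwise
        path, k = [RING[i]], i
        while k != j:
            nxt = (k + step) % n
            if frozenset({RING[k], RING[nxt]}) == failed_link:
                break             # this direction crosses the failed link
            path.append(RING[nxt])
            k = nxt
        else:
            yield path            # reached dst without hitting the failure

# Any single failed link still leaves at least one usable direction
# between every pair of hubs.
cut = frozenset({"SNV", "ELP"})
for a in RING:
    for b in RING:
        if a != b:
            assert any(True for _ in paths(a, b, cut)), (a, b)
```

The two directions around the ring share no links, which is why one failure can never disconnect the ring; a second simultaneous failure could.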
ESnet core: Packet over SONET Optical Ring and Hubs
[Map: the ESnet backbone and its 42 end user sites — Office of Science sponsored (22), NNSA sponsored (12), jointly sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). Sites include TWC, JGI, SNLL, LBNL, SLAC, NERSC, GA, LLNL, SDSC, PNNL, LIGO, INEEL, LANL, SNLA, Allied Signal, PANTEX, ARM, KCP, NOAA, OSTI, ORAU, SRS, ORNL, JLAB, PPPL, MIT, ANL, BNL, FNAL, AMES, NREL, YUCCA MT, BECHTEL, GTN&NNSA, DOE-ALB, and the DC-area sites (ANL-DC, INEEL-DC, ORAU-DC, LLNL/LANL-DC, 4xLAB-DC). The ESnet IP core (Qwest ATM and packet-over-SONET optical ring) connects hubs at SEA, SNV, ELP, ALB, CHI, NYC, DC, and ATL, with peering points and international connections (GEANT, SInet (Japan), Japan–Russia (BINP), CA*net4, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC, TANet2), KDDI (Japan), France, Switzerland, Australia, Singaren). Link types: International (high speed), OC192 (10 Gb/s optical), OC48 (2.5 Gb/s optical), Gigabit Ethernet (1 Gb/s), OC12 ATM (622 Mb/s), OC12, OC3 (155 Mb/s), T3 (45 Mb/s), T1-T3, T1 (1 Mb/s).]
ESnet Peering
[Map: ESnet peering — connections to university, international, and commercial networks, and to Abilene, at high-speed peering points including MAE-E, MAE-W, FIX-W, PAIX-W, PAIX-E, NY-NAP, Chi NAP, Starlight, MANLAN, the distributed 6TAP, EQX-ASH, EQX-SJ, MAX GPOP, PNW-GPOP, and CENIC/CalREN2. Peers include CERN, GEANT, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC, TANet2), SInet (Japan), KEK, Japan–Russia (BINP), CA*net4, Australia, Singaren, KDDI (Japan), France, LANL TECHnet, SDSC, and Abilene plus 7 universities; per-point peer counts range from 1 to 39 (e.g. 26 peers at SNV, 22 at MAX GPOP, 20 and 19 elsewhere).]
[Table: routes per peer autonomous system. Peer AS numbers and names: 1239 SPRINTLINK, 701 UUNET-ALTERNET, 209 QWEST, 3356 LEVEL3, 3561 CABLE-WIRELESS, 7018 ATT-WORLDNET, 2914 VERIO, 3549 GLOBALCENTER, 5511 OPENTRANSIT, 174 COGENTCO, 6461 ABOVENET, 7473 SINGTEL, 3491 CAIS, 11537 ABILENE, 5400 BT, 4323 TWTELECOM, 4200 ALERON, 6395 BROADWING, 2828 XO, 7132 SBC. Route counts, largest to smallest: 63384, 51685, 47063, 41440, 35980, 28728, 19723, 17369, 8190, 5492, 5032, 4429, 3529, 3327, 3321, 2774, 2475, 2408, 2383, 1961.]
Peering routers — path from the ESnet core (start: 134.55.209.5) to Novosibirsk (finish: 194.226.160.10):
134.55.209.5    snv-lbl-oc48.es.net                  ESnet core
134.55.209.90   snvrt1-ge0-snvcr1.es.net             ESnet peering at Sunnyvale
63.218.6.65     pos3-0.cr01.sjo01.pccwbtn.net        AS3491 CAIS Internet
63.218.6.38     pos5-1.cr01.chc01.pccwbtn.net        “
63.216.0.53     pos6-1.cr01.vna01.pccwbtn.net        “
63.216.0.30     pos5-3.cr02.nyc02.pccwbtn.net        “
63.218.12.37    pos6-0.cr01.ldn01.pccwbtn.net        “
63.218.13.134   rbnet.pos4-1.cr01.ldn01.pccwbtn.net  AS3491→AS5568 (Russian Backbone Network) peering point
195.209.14.29   MSK-M9-RBNet-5.RBNet.ru              Russian Backbone Network
195.209.14.153  MSK-M9-RBNet-1.RBNet.ru              “
195.209.14.206  NSK-RBNet-2.RBNet.ru                 “
194.226.160.10  Novosibirsk-NSC-RBNet.nsc.ru         RBN to AS 5387 (NSCNET-2)
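A path like the one above can be summarized by grouping consecutive hops by the operator's registered domain. This sketch is not part of the original slides; the hop data is copied from the traceroute, and the "last two hostname labels identify the operator" heuristic is an assumption for illustration.

```python
# Group consecutive traceroute hops by the hostname's registered domain.
from itertools import groupby

HOPS = [  # (IP, hostname) pairs copied from the traceroute on the slide
    ("134.55.209.5",   "snv-lbl-oc48.es.net"),
    ("134.55.209.90",  "snvrt1-ge0-snvcr1.es.net"),
    ("63.218.6.65",    "pos3-0.cr01.sjo01.pccwbtn.net"),
    ("63.218.6.38",    "pos5-1.cr01.chc01.pccwbtn.net"),
    ("63.216.0.53",    "pos6-1.cr01.vna01.pccwbtn.net"),
    ("63.216.0.30",    "pos5-3.cr02.nyc02.pccwbtn.net"),
    ("63.218.12.37",   "pos6-0.cr01.ldn01.pccwbtn.net"),
    ("63.218.13.134",  "rbnet.pos4-1.cr01.ldn01.pccwbtn.net"),
    ("195.209.14.29",  "MSK-M9-RBNet-5.RBNet.ru"),
    ("195.209.14.153", "MSK-M9-RBNet-1.RBNet.ru"),
    ("195.209.14.206", "NSK-RBNet-2.RBNet.ru"),
    ("194.226.160.10", "Novosibirsk-NSC-RBNet.nsc.ru"),
]

def domain(hostname: str) -> str:
    """Last two labels of the hostname, e.g. 'es.net' or 'rbnet.ru'."""
    return ".".join(hostname.lower().rsplit(".", 2)[-2:])

# Consecutive hops in the same domain form one network segment of the path.
segments = [(dom, [ip for ip, _ in hops])
            for dom, hops in groupby(HOPS, key=lambda h: domain(h[1]))]

for dom, ips in segments:
    print(f"{dom}: {len(ips)} hop(s)")
```

The four resulting segments (es.net, pccwbtn.net, rbnet.ru, nsc.ru) match the four networks annotated on the slide: ESnet, CAIS Internet (AS3491), the Russian Backbone Network (AS5568), and NSCNET.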
Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White
Workshop Panel Chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett
Available at www.es.net/#research
Feature Requirements by Discipline: Characteristics that Motivate High Speed Nets — Climate
- Climate (near term): analysis of model data by selected communities that have high-speed networking (e.g. NCAR and NERSC). Characteristics: distributed computing sites; simulation data, 1-5 PBy/yr (just at NCAR), moved to major users for post-simulation analysis. Networking: data streams for easier site access through firewalls; processing in the network (computing and cache embedded in the net). Middleware: global data catalogues; data transfer that is robust across system/network failures.
- Climate (5 yr): enable the analysis of model data by all of the collaborating community. Characteristics: add simulation elements/components as understanding increases; move large quantities of data. Networking: bandwidth guarantees for distributed simulations. Middleware: catalogues and work planners for reconstituting the data.
- Climate (5-10 yr): integrated climate simulation that includes all high-impact factors. Characteristics: add elements/components, including from other disciplines; this must be done with distributed, multidisciplinary simulation. Networking: support for distributed simulation, with adequate bandwidth and latency for remote analysis and visualization of massive datasets.
(Vision for the Future / Process of Science: analysis was driven by …)
Science Areas — projected end-to-end throughput:
High Energy Physics | today 0.5 Gb/s | 5 yrs 100 Gb/s | 5-10 yrs 1000 Gb/s | high bulk throughput
Climate (Data & Computation) | today 0.5 Gb/s | 5 yrs 160-200 Gb/s | 5-10 yrs N x 1000 Gb/s | high bulk throughput
SNS NanoScience | today not yet started | 5 yrs 1 Gb/s | 5-10 yrs 1000 Gb/s + QoS for control channel | remote control and time-critical throughput
Fusion Energy | today 0.066 Gb/s (500 MB/s burst) | 5 yrs 0.198 Gb/s (500 MB / 20 sec. burst) | 5-10 yrs N x 1000 Gb/s | time-critical throughput
Astrophysics | today 0.013 Gb/s (1 TBy/week) | 5 yrs N*N multicast | 5-10 yrs 1000 Gb/s | computational steering and collaborations
Genomics (Data & Computation) | today 0.091 Gb/s (1 TBy/day) | 5 yrs 100s of users | 5-10 yrs 1000 Gb/s + QoS for control channel | high throughput and steering
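The "today" figures quoted as data volumes can be sanity-checked by converting them to sustained line rates. This back-of-envelope sketch is not from the slides; it assumes 1 TBy = 10**12 bytes.

```python
# Convert a steady data volume into the sustained rate needed to move it.

def sustained_gbps(terabytes: float, seconds: float) -> float:
    """Average rate in Gb/s to move `terabytes` of data within `seconds`."""
    return terabytes * 1e12 * 8 / seconds / 1e9

DAY, WEEK = 86_400, 604_800  # seconds

# Astrophysics: 1 TBy/week -> ~0.013 Gb/s, matching the table.
print(round(sustained_gbps(1, WEEK), 3))
# Genomics: 1 TBy/day -> ~0.09 Gb/s (the table lists 0.091 Gb/s).
print(round(sustained_gbps(1, DAY), 3))
```

The small discrepancy in the genomics figure comes down to rounding conventions; the orders of magnitude agree with the table.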
Annual traffic growth over the past five years has increased from 1.7x per year to just …
ESnet Top 20 Data Flows, 24 hr. avg., 2004-04-20: Fermilab (US) → U. Chicago (US); CEBAF (US) → IN2P3 (FR); INFN Padua (IT) → SLAC (US); Helmholtz-Karlsruhe (DE) → SLAC (US); DOE Lab → DOE Lab; DOE Lab → DOE Lab; SLAC (US) → JANET (UK); Fermilab (US) → JANET (UK); Argonne (US) → Level3 (US); Argonne → SURFnet (NL); IN2P3 (FR) → SLAC (US); Fermilab (US) → INFN Padua (IT).

ESnet Top 10 Data Flows, 1 week avg., 2004-07-01: SLAC (US) → INFN Padua (IT), 5.9 Terabytes; SLAC (US) → IN2P3 (FR), 5.3 Terabytes; SLAC (US) → Helmholtz-Karlsruhe (DE), 0.9 Terabytes; FNAL (US) → Helmholtz-Karlsruhe (DE), 0.6 Terabytes; FNAL (US) → SDSC (US), 0.6 Terabytes; plus further flows of 0.9 and 0.6 Terabytes.
– trouble tickets are by email
– engineering communication is by email
– the engineering database interface is via the Web
Picture detail
- Cisco 7206 AOA-AR1 (low-speed links to MIT & PPPL) ($38,150 list)
- Juniper M20 AOA-PR1 (peering router) ($353,000 list)
- Juniper T320 AOA-CR1 (core router) ($1,133,000 list)
- Juniper OC192 optical ring interface (the AOA end of the OC192 to CHI) ($195,000 list)
- Juniper OC48 optical ring interface (the AOA end of the OC48 to DC-HUB) ($65,000 list)
- AOA performance tester ($4,800 list)
- Qwest DS3 DCX DC/AC converter ($2,200 list)
- Lightwave secure terminal server ($4,800 list)
- Sentry power 48V 30/60 amp panel ($3,900 list)
- Sentry power 48V 10/25 amp panel ($3,350 list)
[Diagram: disaster recovery and stability — remote engineers at LBNL, PPPL, BNL, AMES, and TWC; replicated infrastructure (including DNS) distributed across the SEA, SNV, ELP, ALB, CHI, NYC, DC, and ATL hubs.]
Duplicate infrastructure: currently deploying full replication of the NOC databases and servers, and of the Science Services databases (e.g. name translation, revocation lists), in the NYC Qwest carrier hub.
Reliable operation of the network involves engineers, a 24x7 Network Operations Center, generator-backed power, and geographically distributed network and infrastructure locations.
(“The Spread of the Sapphire/Slammer Worm,” David Moore (CAIDA & UCSD CSE), Vern Paxson (ICIR & LBNL), Stefan Savage (UCSD CSE), Colleen Shannon (CAIDA), Stuart Staniford (Silicon Defense), Nicholas Weaver (Silicon Defense & UC Berkeley EECS), http://www.cs.berkeley.edu/~nweaver/sapphire), Jan. 2003
[Diagram: attack traffic entering via the peering routers toward a site's gateway and border routers.]
- Lab first response – filter incoming traffic at their ESnet gateway router.
- ESnet first response – filters to assist a site.
- ESnet second response – filter traffic from outside of ESnet.
- ESnet third response – shut down the main peering paths and provide only limited-bandwidth paths for specific “lifeline” services.
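The escalating responses above amount to a tiered admission policy. The following sketch is hypothetical (the level names, example hosts, and decision logic are invented for illustration, not ESnet's actual filters): normal filtering drops only identified attack sources, while the final "lifeline" level inverts the default and admits only explicitly whitelisted services.

```python
# Hypothetical model of a tiered attack-response policy.
from enum import Enum

class Response(Enum):
    SITE_FILTER = 1     # Lab filters incoming traffic at its ESnet gateway router
    ESNET_FILTER = 2    # ESnet filters traffic from outside ESnet to assist the site
    LIFELINE_ONLY = 3   # main peerings shut; only limited-bandwidth "lifeline" paths

def admit(src: str, level: Response, blocked: set, lifeline: set) -> bool:
    """Decide whether traffic from `src` is admitted at a given response level."""
    if level is Response.LIFELINE_ONLY:
        return src in lifeline       # default-deny: only lifeline services pass
    return src not in blocked        # default-allow: drop only filtered sources

blocked = {"attacker.example.net"}
lifeline = {"dns.example.gov", "noc.example.gov"}

assert admit("user.example.edu", Response.SITE_FILTER, blocked, lifeline)
assert not admit("attacker.example.net", Response.ESNET_FILTER, blocked, lifeline)
assert not admit("user.example.edu", Response.LIFELINE_ONLY, blocked, lifeline)
assert admit("dns.example.gov", Response.LIFELINE_ONLY, blocked, lifeline)
```

The design point the slide makes is the switch from blacklisting (first and second responses) to whitelisting (third response) as the attack escalates.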
* Report as of July 15, 2004
[Map: ESnet core hubs — New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP) — and DOE sites.]
[Diagram: a science collaboration environment — an instrument plus compute, storage, and cache & compute (C&C) elements interconnected by guaranteed bandwidth paths; the elements need higher bandwidth between them.]
[Map: proposed ESnet architecture — existing hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), and El Paso (ELP); new and possible new hubs, including Atlanta (ATL); Metropolitan Area Rings serving DOE/OSC Labs; and links to Europe and Asia-Pacific.]
[Map: evolving ESnet — the production IP core on Qwest–ESnet hubs and a high-impact science core on NLR–ESnet hubs (SEA, SNV, SDG, ELP, ALB, DEN, CHI, NYC, DC, ATL); MANs with high-speed cross-connects to Internet2/Abilene; major DOE Office of Science sites; lab-supplied links; major international links to Japan, CERN, and Europe; 2.5 Gb/s and 10 Gb/s circuits, with future phases growing from 10 Gb/s to 30 Gb/s and 40 Gb/s.]
http://www.doecollaboratory.org/meetings/hpnpw
http://www.csm.ornl.gov/ghpn/wk2003
http://www.pnl.gov/scales/
http://www.es.net/hypertext/welcome/pr/Roadmap/index.html
http://www.cra.org/Activities/workshops/nitrd
http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf (public report)
http://www.fp-mcs.anl.gov/ascr-july03spw