DYNES: DYnamic NEtwork System
Artur Barczyk, California Institute of Technology / US LHCNet
TERENA e2e Workshop, Prague, November 29th, 2010
1
2
OUTLINE
– What is DYNES
– DYNES Overview
– Deployment Plan
– Status
– A U.S.-wide dynamic network "cyber-instrument" spanning ~40 US universities and ~14 Internet2 connectors
– Extends Internet2's dynamic network service "ION" into U.S. regional networks and campuses; aims to support LHC traffic (also internationally)
– Based on the implementation of the Inter-Domain Circuit protocol developed by ESnet and Internet2; cooperative development also with GEANT, GLIF
– Collaborative team: Internet2, Caltech, Univ. of Michigan, Vanderbilt
– The LHC experiments, astrophysics community, WLCG, OSG, other VOs
– The community of US regional networks and campuses
– Support large, long-distance scientific data flows in the LHC, other programs (e.g. LIGO, Virtual Observatory), & the broader scientific community
– Build a distributed virtual instrument at sites of interest to the LHC but available to the R&E community generally
3
– Project team: Internet2, Caltech, Vanderbilt, Univ. of Michigan
– PI: Eric Boyd (Internet2)
– Co-PIs: Harvey Newman (Caltech), Paul Sheldon (Vanderbilt), Shawn McKee (Univ. of Michigan)
4
– Dynamic circuit services are in production use today by some Tier2s as well as Tier1s
– As higher capacity storage and regional, national and transoceanic 40G and 100 Gbps network links become available and affordable
– This requires an appropriate architecture, and nationwide and international community involvement by:
– The LHC groups at universities and labs
– Campuses, regional and state networks connecting to Internet2
– ESnet, US LHCNet, NSF/IRNC, other major networks in US & Europe
– DYNES will help provide standard services and low-cost equipment to help meet these needs
5
DYNES will support the scientific community at all the campuses served, by coupling to their local analysis systems:
– Dynamic network circuit provisioning: IDC Controller
– Data transport: low-cost IDC-capable Ethernet switch; FDT server for high throughput; low-cost storage array where needed (also non-LHC)
– End-to-end monitoring services
– DYNES extends Internet2's dynamic circuit network ("ION"), plus the standard mechanisms, tools and equipment needed:
– To build circuits with bandwidth guarantees across multiple network domains, across the U.S. and to Europe in the future
– To build a community with high throughput capability, using standardized, common methods
6
7
– A DYNES instrument must provide two basic capabilities at the Tier2s, Tier3s and regional networks:
1. Allocation of guaranteed bandwidth to ensure transfer performance
2. Monitoring of the network and data transfer performance
– The software needed to allocate network resources and monitor the transfer currently exists in ESnet, but is not widespread at the campus and regional level.
– In addition, Tier2 & Tier3 sites require:
3. Hardware at the end sites capable of making use of the allocated bandwidth
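The two capabilities can be pictured with a toy sketch. All names here (`CircuitRequest`, `can_allocate`, `transfer_rate_gbps`) are illustrative only, not part of any real DYNES or OSCARS API:

```python
from dataclasses import dataclass

@dataclass
class CircuitRequest:
    src: str                 # e.g. a Tier2 site
    dst: str                 # e.g. a Tier3 site
    bandwidth_gbps: float    # guaranteed bandwidth requested
    duration_s: int          # reservation length

def can_allocate(req: CircuitRequest, free_capacity_gbps: float) -> bool:
    """Capability 1: admission check before reserving guaranteed bandwidth."""
    return req.bandwidth_gbps <= free_capacity_gbps

def transfer_rate_gbps(bytes_moved: int, elapsed_s: float) -> float:
    """Capability 2: the basic quantity an end-to-end monitor would report."""
    return bytes_moved * 8 / elapsed_s / 1e9
```

A monitor comparing `transfer_rate_gbps` against the reserved bandwidth is exactly the end-to-end check the instrument must support.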
8
Two typical transfers that DYNES supports: one Tier2-Tier3 and another Tier1-Tier2. The clouds represent the network domains involved in such a transfer.
The DYNES instrument at a regional network consists of:
1. An Ethernet switch
2. An Inter-domain Controller (IDC)
The IDC software consists of OSCARS, DRAGON, and perfSONAR. This allows the regional network to provision resources on demand through interaction with the other instruments.
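The essence of on-demand, inter-domain provisioning is that each domain's IDC reserves its own segment of the path, and a failure anywhere undoes the partial reservation. A hypothetical sketch (the function name and the capacity bookkeeping are ours, not the OSCARS/DRAGON interface):

```python
def provision_path(domains, free_gbps, want_gbps):
    """Reserve want_gbps in every domain along the path, or roll back.

    free_gbps maps domain name -> spare capacity; it is mutated in place,
    the way each per-domain IDC would update its own reservation database.
    """
    reserved = []
    for d in domains:
        if free_gbps[d] >= want_gbps:
            free_gbps[d] -= want_gbps
            reserved.append(d)
        else:
            for r in reserved:          # undo the partial reservation
                free_gbps[r] += want_gbps
            return []
    return reserved
```

The all-or-nothing behavior matters: a circuit with guaranteed bandwidth in only some of the domains along the path guarantees nothing end to end.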
Regional networks do not require a disk array or FDT server, because they are providing transport for the Tier2 and Tier3 data transfers, not initiating them.
At the network level, each regional connects the incoming campus connection to the Ethernet switch provided. Optionally, if a regional network already has a qualified switch compatible with the dynamic software that they prefer, they may use that instead of, or in addition to, the provided switch.
Each circuit is carried over a VLAN allocated by OSCARS & DRAGON. The VLAN has quality of service (QoS) parameters set to guarantee the bandwidth requirements of the connection as defined in the VLAN. These parameters are determined by the original circuit request from the researcher / application. Through this VLAN, the regional provides transit between the campus IDCs connected in the same region, or to the global IDC infrastructure.
9
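How the original circuit request might be turned into per-VLAN QoS settings can be sketched as follows; the field names and the 10% burst rule are assumptions for illustration, not actual OSCARS/DRAGON output:

```python
def vlan_qos(vlan_id, requested_gbps):
    """Map a researcher's circuit request onto per-VLAN policer settings."""
    rate_mbps = int(requested_gbps * 1000)
    return {
        "vlan": vlan_id,
        "guaranteed_rate_mbps": rate_mbps,
        "burst_mb": rate_mbps // 10,  # assumed 10% burst allowance
    }
```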
The DYNES instrument at a Tier2 or Tier3 site consists of equipment combining low cost & high performance:
1. An Inter-domain Controller (IDC) server
2. A Fast Data Transfer (FDT) server, with a disk array attached via a Serial Attached SCSI (SAS) controller capable of writing several hundred MBytes/sec to local storage
Sites with 10GE throughput capability will have a dual-port Myricom 10GE network interface in the server.
The FDT server connects to the disk array via the SAS controller and runs FDT software developed by Caltech. FDT is an asynchronous multithreaded system that automatically adjusts I/O and network buffers to achieve maximum network throughput; at the sites, in some cases, the FDT server serves as an aggregator / throughput optimizer, feeding smooth flows over the networks directly to the Tier2 or Tier3 clusters.
The IDC server handles the allocation of network resources on the switch, interactions with other DYNES instruments related to network provisioning, and network performance monitoring. The IDC creates virtual LANs (VLANs) as needed.
10
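The buffer auto-adjustment mentioned above rests on a standard rule of thumb: to keep a fast path full, the sender's buffers must cover at least one bandwidth-delay product (BDP). A small sketch of that arithmetic (the helper names are ours, not FDT's):

```python
def bdp_bytes(bandwidth_gbps, rtt_ms):
    """Bandwidth-delay product: the data 'in flight' on the path, in bytes."""
    return int(bandwidth_gbps * 1e9 / 8 * rtt_ms / 1e3)

def buffer_per_stream_bytes(bandwidth_gbps, rtt_ms, n_streams):
    """With N parallel TCP streams, each needs roughly BDP / N of buffer."""
    return bdp_bytes(bandwidth_gbps, rtt_ms) // n_streams
```

For example, a 10 Gbps transatlantic path with 100 ms RTT needs about 125 MB of buffer in total, which explains why a tool must tune buffers rather than rely on OS defaults.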
DYNES offers several connectivity options for the local sites and the RONs. Two examples:
11
– Connector and campus incorporating DYNES as part of production infrastructure
– Campus using 1 of 2 connections to the regional for DYNES and the other for general-purpose IP connectivity
– stream continuously a list of files
– use independent threads to read and write on each physical device
– transfer data in parallel on multiple TCP streams, when necessary
– use appropriately sized buffers for disk I/O and networking
– resume a file transfer session
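Two of the techniques above, one reader thread per physical device feeding a shared stream of blocks, can be sketched in a few lines; the device and file names are invented for the example:

```python
import queue
import threading

def read_devices(devices):
    """Start one reader thread per device; blocks from all devices land on
    a single shared queue, as a stand-in for feeding parallel TCP senders."""
    out = queue.Queue()

    def reader(dev, files):
        for f in files:
            out.put((dev, f))  # stand-in for reading one block from disk

    threads = [threading.Thread(target=reader, args=item)
               for item in devices.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(out.queue)
```

Keeping readers independent means a slow disk only stalls its own thread, not the whole transfer.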
12
– support for dedicated Mass Storage systems, compression, dynamic circuit setup, …
– Using IDC API
– Limiting transfer rate on end-host
– blocking (1 thread per channel)
– non-blocking (selector + pool of threads)
– support for distributed FS
– network testing (memory-to-memory transfers, –nettest flag)
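One of the options listed above, limiting the transfer rate on the end-host, is commonly implemented with a token-bucket style policer. A minimal sketch, with the clock passed in explicitly so the logic is easy to test (illustrative only, not FDT's actual code):

```python
class TokenBucket:
    """Allow at most rate_bytes_s on average, with bursts up to burst_bytes."""

    def __init__(self, rate_bytes_s, burst_bytes):
        self.rate = rate_bytes_s
        self.burst = burst_bytes
        self.tokens = burst_bytes  # bucket starts full
        self.last = 0.0

    def allow(self, nbytes, now):
        """Refill tokens for elapsed time, then spend them if the send fits."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

A sender would simply delay any chunk for which `allow` returns False, bounding its average rate without touching the kernel.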
13
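The non-blocking transport mode mentioned above (selector + pool of threads) boils down to one selector multiplexing many channels. A stripped-down, single-pass sketch using Python's standard selectors module, with the worker pool omitted:

```python
import selectors
import socket

def drain_ready(socks):
    """Read once from every socket the selector reports as readable."""
    sel = selectors.DefaultSelector()
    for s in socks:
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
    data = {}
    for key, _ in sel.select(timeout=0):
        data[key.fileobj] = key.fileobj.recv(4096)
    sel.close()
    return data
```

In a fuller version, ready channels would be handed to a pool of worker threads instead of being read inline, which is the trade-off against one blocking thread per channel.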
Figure: FDT used to automatically request circuit provisioning between booths (throughput shown in Gbps).
14
15
Artur.Barczyk@cern.ch
16