March 12th, 2012
Fabio Hernandez
fabio@in2p3.fr
CC-IN2P3 / IHEP connectivity issues: progress report

Context: summary of the previous meeting. The network throughput from IHEP to CC-IN2P3 is good enough and stable.
Fabio Hernandez (fabio@in2p3.fr), CAS/IHEP Computing Centre and CNRS/IN2P3 Computing Centre
CERNet is where the Orient link from London terminates in Beijing. Note that this is a separate machine from the Perfsonar host.
[Jérôme]
iperf throughput, averaged over 5 trials (window size: 16 MBytes, transfer time: 60 seconds; date: 12/03/2012; source: Jérôme Bernier):

Sender @ in2p3.fr | Throughput to GEANT (London) [Mbits/sec] | Throughput to IHEP [Mbits/sec] | Throughput to ICEPP (JP) [Mbits/sec]
[table values not preserved in this extraction]

**: 10 Gbps network card; *: 2 x 1 Gbps network card
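The measurement protocol in the caption above can be reproduced with a short script. This is a sketch: the server name is a placeholder, not one of the hosts actually used, and an `iperf -s -w 16M` instance is assumed to be listening on the target.

```python
# Sketch of the reported iperf protocol: 5 trials, 16 MByte TCP window, 60 s each.
import statistics
import subprocess

SERVER = "iperf.example.org"  # placeholder host, not an actual endpoint from the tests

def parse_iperf_mbps(output: str) -> float:
    """Extract the Mbits/sec figure from classic iperf client output."""
    # A report line looks like: [  3]  0.0-60.0 sec  6.54 GBytes   937 Mbits/sec
    for line in reversed(output.splitlines()):
        if "Mbits/sec" in line:
            return float(line.split()[-2])
    raise ValueError("no throughput line found")

def one_trial() -> float:
    """Run a single 60-second iperf trial with a 16 MByte window, reporting in Mbits/sec."""
    out = subprocess.run(
        ["iperf", "-c", SERVER, "-w", "16M", "-t", "60", "-f", "m"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_iperf_mbps(out)

# Uncomment on a host with iperf installed and a reachable server:
# average = statistics.mean(one_trial() for _ in range(5))
```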
Exploiting the available bandwidth:
- Use the data collected by ATLAS for their file transfers: http://bourricot.cern.ch/dq2/ftsmon
- Current configuration of the FTS channel IN2P3-CC to BEIJING-LCG2: 60 simultaneous transfers, 10 streams per transfer
- For comparison, the FTS channel IN2P3-CC to TOKYO: 20 simultaneous transfers, 10 streams per transfer [thanks to Wenjing and Xiaofei]
- We know the network is able to transport data fast in this direction, i.e. to CC-IN2P3, under comparable network conditions
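A quick sanity check on these channel settings, as a sketch; the 646 Mbit/s figure is the Perfsonar measurement for IHEP to CC-IN2P3 quoted elsewhere in this report.

```python
# With 60 simultaneous transfers at 10 TCP streams each, the FTS channel opens
# 600 concurrent streams; the measured path capacity is then sliced very thinly.
transfers = 60            # FTS channel IN2P3-CC to BEIJING-LCG2
streams_per_transfer = 10
path_mbps = 646           # measured IHEP -> CC-IN2P3 throughput (Perfsonar)

total_streams = transfers * streams_per_transfer
print(total_streams)                        # 600
print(round(path_mbps / total_streams, 2))  # 1.08 Mbit/s per stream if all are active
```

With that many streams active at once, each one gets only about 1 Mbit/s of the measured path, which is one reason raising the transfer count alone need not raise the aggregate rate.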
[Bar chart: measured network throughput (average, Mbits/sec), from CC-IN2P3 and to CC-IN2P3, for sites grouped by region (Asia: ASGC, IHEP, TOKYO; North America: BNL, FNAL, TRIUMF; Europe: CERN, CNAF, KIT, NDGF, PIC, RAL, SARA). Date: 10/03/2012. Source: CC-IN2P3's Perfsonar, https://ccperfsonar-lhcopn.in2p3.fr]
ATLAS file transfer rates vs. network throughput to CC-IN2P3
(ATLAS file transfer performance for file sizes > 1 GByte; period: 01/02/2012 to 10/03/2012; source: ATLAS, http://bourricot.cern.ch/dq2/ftsmon)

Site   | Unitary file transfer rate to CC-IN2P3 [MBytes/sec] | Network throughput to CC-IN2P3 [Mbps]
IHEP   | 10 | 646
BNL    | 30 | 608
CNAF   | 19 | 485
TOKYO  | 10 | 438
TRIUMF | 11 | 215
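Converting the unitary rates to Mbit/s shows how much of the measured network throughput the ATLAS transfers actually use. A sketch pairing sites and values as read from the chart data; treat the pairing as an interpretation of this extraction.

```python
# Unitary ATLAS file-transfer rate vs. measured network throughput to CC-IN2P3.
sites = {
    # site: (unitary rate [MBytes/sec], network throughput [Mbps])
    "IHEP":   (10, 646),
    "BNL":    (30, 608),
    "CNAF":   (19, 485),
    "TOKYO":  (10, 438),
    "TRIUMF": (11, 215),
}
for site, (rate_mbytes, net_mbps) in sites.items():
    rate_mbps = rate_mbytes * 8  # 1 MByte/sec = 8 Mbit/sec
    print(f"{site}: {rate_mbps} of {net_mbps} Mbit/s ({100 * rate_mbps / net_mbps:.0f}%)")
```

On these figures IHEP's transfers use about 12% of the measured path capacity while BNL's use close to 40%, which is what motivates looking at round-trip times and FTS channel parameters.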
Observations (chart callouts: "30% better", "3 times higher"):
- The network throughput to CC-IN2P3 is similar for those 2 sites (IHEP and BNL), yet BNL's unitary file transfer rate is 3 times higher than IHEP's.
- Very likely the network distance between BNL and CC-IN2P3 is shorter than between IHEP and CC-IN2P3; if so, we could see a correlation with the round-trip time (it could not be measured using CC-IN2P3's Perfsonar). What are the parameters of the FTS channel BNL to CC-IN2P3?
- We also have to look at the latency between the sites and perform dedicated tests: transfers from IHEP disk to CC-IN2P3 disk using different parameters?
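The round-trip-time hypothesis can be made concrete with the standard TCP relation: a single stream's throughput is bounded by window / RTT. A minimal sketch; the RTT values below are assumptions for illustration only, not measurements.

```python
def single_stream_cap_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on one TCP stream's throughput: window size divided by round-trip time."""
    return window_bytes * 8 / rtt_seconds / 1e6

WINDOW = 16 * 2**20  # 16 MBytes, the window size used in the iperf tests

# Illustrative RTTs only (assumed, not measured):
print(round(single_stream_cap_mbps(WINDOW, 0.300)))  # 447  (~300 ms, a Beijing-Lyon-like path)
print(round(single_stream_cap_mbps(WINDOW, 0.090)))  # 1491 (~90 ms, a BNL-Lyon-like path)
```

With the same window, a shorter RTT raises the per-stream ceiling proportionally, which is one plausible reason a shorter BNL to CC-IN2P3 path would yield higher per-file rates.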