CC-IN2P3 — IHEP connectivity issues: Progress report




Slide 1

March 12th, 2012

Fabio Hernandez

fabio@in2p3.fr

CC-IN2P3 — IHEP connectivity issues

Progress report

Slide 2

Fabio Hernandez

fabio@in2p3.fr — CAS/IHEP Computing Centre, CNRS/IN2P3 Computing Centre

Context

  • Summary of the previous meeting:
    - network throughput IHEP → CC-IN2P3: good enough and stable
    - network throughput CC-IN2P3 → IHEP: very low
    - file transfer rates: still low compared to the network throughput
    - details: http://indico.in2p3.fr/conferenceDisplay.py?confId=6342

  • Today, we summarize the progress collectively made since then


Slide 3


Perfsonar @ IHEP

  • Reconfigured as recommended by Laurent, to capture the results of throughput tests in the direction CC-IN2P3 → IHEP [Fazhi]

  • Now capturing and recording data in this direction as well


Slide 4


Throughput tests CC-IN2P3 → IHEP

  • Machine at CERNet available for throughput tests using iperf [Fazhi]
    - CERNet is where the Orient link London–Beijing lands in Beijing

  • Machine at IHEP ready for manually triggered iperf tests [Fazhi]
    - this is a machine separate from the Perfsonar host

  • Throughput tests from several machines in the IN2P3 network to CERNet and IHEP [Jérôme]
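A sketch of what one manually triggered iperf trial could look like, wrapped in Python for repeatability. The host name is hypothetical (the slides do not name the test machines); the 16 MByte window and 60-second duration are the parameters quoted with the results table.

```python
def iperf_command(server, window="16M", duration_s=60):
    """Build an iperf 2 client command: -w sets the TCP window, -t the test length."""
    return ["iperf", "-c", server, "-w", window, "-t", str(duration_s)]

# Hypothetical IHEP test host; run this once per trial and average the results.
print(" ".join(iperf_command("iperf.ihep.ac.cn")))
# → iperf -c iperf.ihep.ac.cn -w 16M -t 60
```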


Slide 5


Preliminary test results (cont.)

  • Currently performing tests per link segment [Jérôme]
    - CC-IN2P3 → GEANT (London) → IHEP
    - LAL → GEANT (London) → IHEP


Slide 6


Preliminary test results


Sender @ in2p3.fr | Throughput to GEANT (London) [Mbits/sec] | Throughput to IHEP [Mbits/sec] | Throughput to ICEPP (JP) [Mbits/sec]
lallhcone01       | 300 | 730 | 175
ccage             | 480 | 350 | 220
ccxfert02         | 800 | 250 | 100
cclhcone01**      | 500 |  35 | 460
ccirdli001*       | 120 |   5 |   2

iperf throughput averaged over 5 trials; window size: 16 MBytes; transfer time: 60 seconds
date: 12/03/2012; source: Jérôme Bernier
**: 10 Gbps network card    *: 2 × 1 Gbps network card

Slide 7


File transfer rate: can we improve?

  • Investigate whether we can do something to improve the file transfer rates, to exploit the available bandwidth
    - use the data collected by ATLAS for their file transfers: http://bourricot.cern.ch/dq2/ftsmon

  • ATLAS transfers CC-IN2P3 ⟷ IHEP are scheduled by FTS @ CC-IN2P3
    - current configuration of the FTS channel IN2P3-CC — BEIJING-LCG2: 60 simultaneous transfers, 10 streams per transfer
    - for comparison, the FTS channel IN2P3-CC — TOKYO: 20 simultaneous transfers, 10 streams per transfer
    - [Thanks to Wenjing and Xiaofei]

  • Focused for now on file transfers in the direction IHEP → CC-IN2P3, since we know the network is able to transport data fast in this direction

  • Let's compare with the rates observed by other sites while transferring files to CC-IN2P3, under comparable network conditions
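A back-of-the-envelope check on the channel settings above, as a sketch: with 60 simultaneous transfers, even a modest per-file rate adds up to far more than the path can carry (the ~10 MBytes/s unitary rate used here is the figure reported for IHEP later in the deck).

```python
def aggregate_demand_mbps(concurrent_transfers, unitary_rate_mbytes_s):
    """Naive aggregate demand if every concurrent transfer ran at its unitary rate."""
    return concurrent_transfers * unitary_rate_mbytes_s * 8  # MBytes/s -> Mbits/s

# FTS channel IN2P3-CC — BEIJING-LCG2: 60 simultaneous transfers at ~10 MBytes/s each
print(aggregate_demand_mbps(60, 10))
# → 4800 (Mbits/s), well above the few hundred Mbits/s measured on the path
```

So the transfers necessarily contend for the link, and raising the per-transfer rate matters more than raising concurrency.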


Slide 8


CC-IN2P3 Network Throughput


[Bar chart: Measured Network Throughput, in Mbits/sec, from CC-IN2P3 and to CC-IN2P3, for ASGC, IHEP and TOKYO (Asia), BNL, FNAL and TRIUMF (North America), and CERN, CNAF, KIT, NDGF, PIC, RAL and SARA (Europe); average network throughput over 3 months, up to 10/03/2012]

Source: CC-IN2P3's Perfsonar, https://ccperfsonar-lhcopn.in2p3.fr

Slide 9


File transfer rate vs. Network throughput


ATLAS File Transfer Rates vs. Network Throughput to CC-IN2P3

Site   | Unitary file transfer rate to CC-IN2P3 [MBytes/sec] | Network throughput to CC-IN2P3 [Mbps]
IHEP   | 10 | 646
BNL    | 30 | 608
CNAF   | 19 | 485
TOKYO  | 10 | 438
TRIUMF | 11 | 215

ATLAS file transfer performance for file sizes > 1 GByte
Period: 01/02/2012 to 10/03/2012; source: ATLAS, http://bourricot.cern.ch/dq2/ftsmon
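One way to read these numbers: compute the fraction of the measured network throughput that a single transfer actually uses. A sketch; the pairing of rates and throughputs follows the order in which the sites are listed on this slide.

```python
# (site, unitary file transfer rate in MBytes/s, network throughput in Mbps)
sites = [("IHEP", 10, 646), ("BNL", 30, 608), ("CNAF", 19, 485),
         ("TOKYO", 10, 438), ("TRIUMF", 11, 215)]

for name, rate_mbytes_s, tput_mbps in sites:
    fraction = rate_mbytes_s * 8 / tput_mbps  # both sides converted to Mbits/s
    print(f"{name}: {fraction:.0%} of the measured throughput per transfer")
```

BNL and TRIUMF extract roughly 40% of their available throughput per transfer, while IHEP extracts about 12%, which is the gap discussed in the observations that follow.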

Slide 10


Observations

  • IHEP and TOKYO transfer files at the same rate, although IHEP's network throughput is 30% better

  • IHEP and TRIUMF transfer files at the same rate, although IHEP's network throughput is 3 times higher

  • BNL transfers files 3 times as fast as IHEP does, even though the network throughput is similar for those two sites
    - very likely the network distance between BNL and CC-IN2P3 is shorter than between IHEP and CC-IN2P3
    - if so, we could see a correlation with the round-trip time (we could not measure it using CC-IN2P3's Perfsonar)
    - what are the parameters of the FTS channel BNL — CC-IN2P3?

  • Throughput seems not to be the only parameter to look at
    - we also have to look at the latency between the sites, and perform dedicated tests

  • Is it possible to have a GridFTP endpoint at CC-IN2P3, so that we can test transfers from IHEP disk to CC-IN2P3 disk using different parameters?
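The latency remark above can be made concrete with the bandwidth-delay product: a single TCP stream with window W and round-trip time T cannot exceed W/T. A sketch with purely illustrative RTT values, since the slides note the actual RTT could not be measured.

```python
def max_tcp_throughput_mbps(window_mbytes, rtt_ms):
    """Single-stream TCP ceiling: window size divided by the round-trip time."""
    return window_mbytes * 8 / (rtt_ms / 1000.0)

# Illustrative RTTs only; neither value is a measurement from the slides.
for rtt_ms in (150, 300):
    print(f"16 MByte window, {rtt_ms} ms RTT: "
          f"{max_tcp_throughput_mbps(16, rtt_ms):.0f} Mbits/s ceiling")
```

Doubling the RTT halves the per-stream ceiling, which is one reason FTS opens multiple streams per transfer.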


Slide 11


Questions & Comments
