  1. CC-IN2P3 — IHEP connectivity issues
     Progress report
     Fabio Hernandez, fabio@in2p3.fr
     March 12th, 2012

  2. Context
     • Summary of the previous meeting:
       - network throughput IHEP → CC-IN2P3: good enough and stable
       - network throughput CC-IN2P3 → IHEP: very low
       - file transfer rates still low compared to network throughput
       - details: http://indico.in2p3.fr/conferenceDisplay.py?confId=6342
     • Today, we summarize the progress collectively made since then
     Fabio Hernandez, CAS/IHEP Computing Centre and CNRS/IN2P3 Computing Centre, fabio@in2p3.fr

  3. Perfsonar @ IHEP
     • Reconfigured as recommended by Laurent to capture the results of throughput tests in the direction CC-IN2P3 → IHEP [Fazhi]
     • Now capturing and recording data in this direction as well

  4. Throughput tests CC-IN2P3 → IHEP
     • Machine at CERNet available for throughput tests using iperf [Fazhi]
       - CERNet is the location where the Orient link London-Beijing arrives in Beijing
     • Machine at IHEP ready for manually triggered iperf tests [Fazhi]
       - this is a separate machine from Perfsonar
     • Throughput tests from several machines in the IN2P3 network to CERNet and IHEP [Jérôme]
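Collecting the results of these manually triggered tests typically means parsing iperf's summary lines. A minimal sketch of such a parser, assuming the classic human-readable output format of iperf (v2); the function name and the sample line are illustrative, not taken from the actual test setup:

```python
import re

# Matches the summary line of classic iperf (v2) output, e.g.:
#   [  3]  0.0-60.0 sec  1.75 GBytes    250 Mbits/sec
SUMMARY_RE = re.compile(
    r"\[\s*\d+\]\s+[\d.]+-[\d.]+\s+sec\s+"
    r"[\d.]+\s+\w?Bytes\s+([\d.]+)\s+(\w?)bits/sec"
)

# Conversion factors from the reported unit prefix to Mbits/sec.
UNIT_TO_MBPS = {"": 1e-6, "K": 1e-3, "M": 1.0, "G": 1e3}

def parse_throughput_mbps(iperf_output: str) -> float:
    """Return the throughput reported by iperf, normalized to Mbits/sec."""
    m = SUMMARY_RE.search(iperf_output)
    if m is None:
        raise ValueError("no iperf summary line found")
    value, unit = float(m.group(1)), m.group(2)
    return value * UNIT_TO_MBPS[unit]
```

Normalizing every result to Mbits/sec makes runs from different senders directly comparable, whatever unit iperf happens to print.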

  5. Preliminary test results (cont.)
     • Currently performing tests per link segment [Jérôme]
       - CC-IN2P3 → GEANT (London) → IHEP
       - LAL → GEANT (London) → IHEP

  6. Preliminary test results

     Sender @ in2p3.fr | To GEANT (London) [Mbits/sec] | To IHEP [Mbits/sec] | To ICEPP (JP) [Mbits/sec]
     lallhcone01       | 300 | 730 | 175
     ccage             | 480 | 350 | 220
     ccxfert02         | 800 | 250 | 100
     cclhcone01**      | 500 |  35 | 460
     ccirdli001*       | 120 |   5 |   2

     **: 10 Gbps network card; *: 2 x 1 Gbps network card
     iperf throughput averaged over 5 trials; window size: 16 MBytes; transfer time: 60 seconds
     Date: 12/03/2012. Source: Jérôme Bernier

  7. File transfer rate: can we improve?
     • Investigate whether we can do something to improve the file transfer rates, so as to exploit the available bandwidth
       - use the data collected by ATLAS for their file transfers: http://bourricot.cern.ch/dq2/ftsmon
     • ATLAS transfers CC-IN2P3 ⟷ IHEP are scheduled by FTS @ CC-IN2P3
       - current configuration of the FTS channel IN2P3-CC — BEIJING-LCG2: 60 simultaneous transfers, 10 streams per transfer
       - for comparison, the FTS channel IN2P3-CC — TOKYO: 20 simultaneous transfers, 10 streams per transfer [thanks to Wenjing and Xiaofei]
     • Focused for now on file transfers in the direction IHEP → CC-IN2P3, since we know the network is able to transport data fast in this direction
     • Let's compare with the rates observed by other sites while transferring files to CC-IN2P3, under comparable network conditions
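A back-of-envelope sketch of why these channel settings matter: the aggregate rate a channel can sustain is roughly the unitary per-file rate times the number of simultaneous transfers, capped by what the network path can carry. The function and the figures in the example are illustrative assumptions, not measurements from these slides:

```python
def channel_aggregate_mbps(unitary_rate_mbytes_s: float,
                           simultaneous_transfers: int,
                           network_throughput_mbps: float) -> float:
    """Rough estimate of the aggregate channel rate in Mbits/sec:
    per-file rate (converted to Mbits/sec) times the number of
    concurrent files, capped by the measured network throughput."""
    aggregate_mbps = unitary_rate_mbytes_s * 8 * simultaneous_transfers
    return min(aggregate_mbps, network_throughput_mbps)

# Hypothetical example: 10 MBytes/sec per file and 60 concurrent
# transfers would nominally give 4800 Mbits/sec, so the channel is
# limited by the network path (here assumed at 646 Mbits/sec).
print(channel_aggregate_mbps(10, 60, 646))
```

The point of the sketch: once concurrency saturates the path, raising the number of simultaneous transfers further cannot help, and per-file behaviour (latency, streams) becomes the knob to turn.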

  8. CC-IN2P3 network throughput
     [Bar chart: measured network throughput (Mbits/sec), from CC-IN2P3 and to CC-IN2P3, for ASGC, IHEP, TOKYO (Asia), BNL, FNAL, TRIUMF (North America), CERN, CNAF, KIT, NDGF, PIC, RAL, SARA (Europe); average network throughput over 3 months, up to 10/03/2012]
     Source: CC-IN2P3's Perfsonar, https://ccperfsonar-lhcopn.in2p3.fr

  9. File transfer rate vs. network throughput
     [Bar chart: ATLAS file transfer rates vs. network throughput to CC-IN2P3, for IHEP, BNL, CNAF, TOKYO, TRIUMF; unitary file transfer rate to CC-IN2P3 in MBytes/sec and network throughput to CC-IN2P3 in Mbps]
     ATLAS file transfer performance for file sizes > 1 GByte; period: 01/02/2012 to 10/03/2012
     Source: ATLAS, http://bourricot.cern.ch/dq2/ftsmon
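The chart deliberately mixes two units: file transfer rates in MBytes/sec and network throughput in Mbits/sec. A small helper to put both on the same scale and compute the fraction of the available bandwidth a site actually uses; the example figures are hypothetical, for illustration only:

```python
def mbytes_s_to_mbps(rate_mbytes_s: float) -> float:
    """Convert a file transfer rate from MBytes/sec to Mbits/sec
    (1 byte = 8 bits)."""
    return rate_mbytes_s * 8

def bandwidth_utilization(file_rate_mbytes_s: float,
                          network_mbps: float) -> float:
    """Fraction of the measured network throughput actually used
    by file transfers."""
    return mbytes_s_to_mbps(file_rate_mbytes_s) / network_mbps

# Hypothetical example: a site transferring files at 10 MBytes/sec over
# a path measured at 400 Mbits/sec uses only 20% of the bandwidth.
print(bandwidth_utilization(10, 400))  # 0.2
```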

  10. Observations
     • IHEP and TOKYO transfer files at the same rate, yet IHEP's network throughput is 30% better
     • IHEP and TRIUMF transfer files at the same rate, yet IHEP's network throughput is 3 times higher
     • BNL transfers files 3 times as fast as IHEP does, even though the network throughput is similar for those 2 sites
       - very likely the network distance between BNL and CC-IN2P3 is shorter than between IHEP and CC-IN2P3
       - if so, we could see a correlation with the round-trip time (could not measure it using CC-IN2P3's Perfsonar)
       - what are the parameters of the FTS channel BNL — CC-IN2P3?
     • Throughput seems not to be the only parameter to look at: we also have to look at the latency between the sites, and perform dedicated tests
     • Is it possible to have a GridFTP endpoint at CC-IN2P3 so that we can test transfers from IHEP disk to CC-IN2P3 disk using different parameters?
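The latency point above follows from the TCP steady-state bound: a single stream's throughput cannot exceed window / RTT, so at equal throughput capacity a longer round-trip time means slower per-file transfers. A minimal sketch; the RTT values are hypothetical, for illustration only:

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput (Mbits/sec):
    at most one window of data can be in flight per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# With the 16 MByte window used in the iperf tests (slide 6), a shorter
# round-trip time allows proportionally higher per-stream throughput.
# RTT figures below are hypothetical, for illustration only.
for label, rtt_ms in [("shorter path, ~80 ms RTT", 80.0),
                      ("longer path, ~240 ms RTT", 240.0)]:
    bound = max_tcp_throughput_mbps(16 * 1024 * 1024, rtt_ms)
    print(label, round(bound, 1), "Mbits/sec")
```

This is why measuring the round-trip time per site, and tuning window size or stream count accordingly, is the natural next step suggested by the observations above.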

  11. Questions & Comments
