

  1. ATLAS ROOT I/O pt 2 • Atlas Hot Topics (with reference to CHEP presentations) • Big data interlude (not ATLAS) • ROOT I/O Monitoring and Testing on ATLAS • Atlas feature requests / fixes. Wahid Bhimji, 6th Dec 2013, ROOT I/O Workshop

  2. Hot topics

  3. Old (current) ATLAS data flow (some simplification!). See my talk at the last CHEP, "ATLAS ROOT-based data formats" (CHEP 2012). [Diagram: bytestream RAW (not ROOT) → Reco in the Athena software framework → ESD / AOD / TAG (ROOT with POOL persistency) → Reduce → dESD / dAOD → D3PD: ROOT ntuples with only primitive types and vectors of those → non-Athena analysis with user code (standard tools and examples). Users also analyse AOD/ESD directly in Athena.]
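Because the D3PDs hold only primitive types and vectors of those, they can be read with plain ROOT and no ATLAS software at all. A minimal sketch of such a reader; the file name, tree name ("physics") and branch names are hypothetical:

```cpp
// Minimal pure-ROOT reader for a flat D3PD-style ntuple.
// File name, tree name and branch names below are hypothetical.
#include <vector>
#include "TFile.h"
#include "TTree.h"

void readD3PD() {
   TFile* f = TFile::Open("user.someone.NTUP.root");   // hypothetical file
   TTree* t = static_cast<TTree*>(f->Get("physics"));  // hypothetical tree name

   // Branches are only primitive types and vectors of primitives.
   Int_t el_n = 0;
   std::vector<float>* el_pt = 0;
   t->SetBranchAddress("el_n",  &el_n);
   t->SetBranchAddress("el_pt", &el_pt);

   for (Long64_t i = 0; i < t->GetEntries(); ++i) {
      t->GetEntry(i);
      for (Int_t j = 0; j < el_n; ++j) {
         // ... use el_pt->at(j) ...
      }
   }
   f->Close();
}
```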

  4. A problem with this is ✤ Much heavy-I/O activity is not centrally organised ✤ It runs using users' own pure-ROOT code, so it is up to them to optimise. By wallclock time, Analysis = 22%; by number of jobs, Analysis = 55%.

  5. New (future) ATLAS Model: see Paul Laycock's CHEP talk ✤ xAOD: easier to read in pure ROOT than the current AOD ✤ Reduction framework: centrally controlled and does the heavy lifting ✤ Common analysis software

  6. New (future) ATLAS Model ✤ Of interest to this group ✤ New data structure: the xAOD ✤ Opportunities for I/O optimisations that don't have to be in each user's code: reduction framework and common analysis tools ✤ A step on the way: NTUP_COMMON ✤ Previously many physics groups had their own (large) D3PDs overlapping in content, using more space than need be ✤ The new common D3PD solves that issue, but it has a huge number (10k+) of branches. Not an optimal solution; we will go to the xAOD. (A branch-selection sketch follows below.)
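For what it's worth, the usual way to cope with such wide ntuples is to disable everything and re-enable only the branches an analysis needs, so the I/O scales with what is actually read. A minimal sketch (file, tree and branch names are hypothetical):

```cpp
// Sketch: restrict reading of a very wide (10k+ branch) ntuple to the few
// branches actually used. File, tree and branch names are hypothetical.
#include <vector>
#include "TChain.h"

void readWideNtuple() {
   TChain chain("physics");               // hypothetical tree name
   chain.Add("NTUP_COMMON.example.root"); // hypothetical input file

   chain.SetBranchStatus("*", 0);         // switch every branch off ...
   chain.SetBranchStatus("el_*", 1);      // ... then enable only what is needed
   chain.SetBranchStatus("jet_pt", 1);

   std::vector<float>* jet_pt = 0;
   chain.SetBranchAddress("jet_pt", &jet_pt);

   for (Long64_t i = 0; i < chain.GetEntries(); ++i) {
      chain.GetEntry(i);                  // reads only the enabled branches
      // ... analysis ...
   }
}
```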

  7. ATLAS Xrootd Federation (FAX): see Ilija Vukotic's CHEP talk. Aggregating storage in a global namespace with transparent failover. Of interest to this group: ✤ WAN reading requires decent I/O, and it is working out OK for us (though not yet exposed to random users, except as a fallback)
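From the client side the federation just looks like an xrootd URL, so a pure-ROOT job can read over the WAN with no code change beyond the file name. A sketch, with a purely hypothetical redirector host and path:

```cpp
// Sketch: open a file through the FAX federation instead of a local replica.
// The redirector host, file path and tree name are hypothetical placeholders.
#include "TFile.h"
#include "TTree.h"

void readViaFAX() {
   // TFile::Open understands root:// URLs; the federation redirector
   // locates a replica and redirects the client transparently.
   TFile* f = TFile::Open(
      "root://fax-redirector.example.org//atlas/rucio/some.dataset/file.root");
   if (!f || f->IsZombie()) return;       // could fall back to a local copy here

   TTree* t = static_cast<TTree*>(f->Get("physics"));  // hypothetical tree name
   // ... read as usual; a TTreeCache matters even more over the WAN ...
   f->Close();
}
```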

  8. Atlas Rucio: see Vincent Garonne's CHEP talk ✤ Redesign of the data management system. [Architecture slide highlights: better management of users, physics groups, activities, data ownership, permissions, quotas; data hierarchy with metadata support (files grouped into datasets, datasets/containers grouped into containers); open and standard technologies (RESTful https+json APIs, Python clients and CLIs, WSGI server, http caching); token-based authentication on top of X509/GSS; daemons for popularity, accounting, conveyor, reaper; a move towards open and widely adopted data-access protocols (federations, cloud storage); Rucio Storage Elements (RSE), FTS3, networking.] See also Mario Lassnig's CHEP poster and lightning talk. Of interest to this group: • Supports a container file (metalink) for multiple sources / failover • Possible http-based "federation"

  9. Big data interlude (not ATLAS)

  10. CHEP theme was "Big Data..." ✤ "Big data" in industry means Hadoop and its successors ✤ A number of CHEP presentations used these for physics event I/O (i.e. beyond the log mining / metadata use cases that came before), e.g.: ✤ Ebke and Waller: Drillbit column store ✤ My "Hepdoop" poster; Maaike Limper's poster and lightning talk ✤ Many on using Hadoop processing with ROOT files ✤ Most don't see explicit performance gains from using Hadoop ✤ My impression: the scheduling tools and the HDFS filesystem are very mature; the data structures less so (for our needs), but there is much of interest from Dremel

  11. Big data opportunities ✤ Opportunities to benefit from the growth of "big data" ✤ "Impact" of our work ✤ Sharing technologies / ideas; gaining something from the others ✤ As Fons said, parts of Dremel "sound pretty much like ROOT", but that should make us sad as well as proud ✤ It would be great if we made ROOT usable by, and used by, these communities and their products usable by us: some areas are friend trees, a modular ROOT "distribution", chunkable ROOT files, etc. ✤ I realize this requires manpower, but it would be surprising if the hype can't get us some money for transferring LHC big-data expertise to industry

  12. ATLAS Monitoring and Testing

  13. What testing / monitoring we have ✤ Still have Hammercloud ROOT I/O tests (see previous meeting): regularly submitting single tests to sites, with the code, release, ROOT source and dataset defined in SVN; the stats are uploaded to an Oracle DB and mined with command-line, web-interface and ROOT-script tools. Now also testing FAX / WAN (via curl). ✤ Ad hoc Hammercloud stress tests with real analysis codes (also for FAX). ✤ But also now have server-side detailed xrootd records • for federation traffic • also local traffic for sites that use xrootd • regular monitoring, but the records can also be mined. [Diagram: sites running dCache, DPM, Castor and plain xrootd report via the xrootd monitor / xroot4j over UDP to collectors at CERN and SLAC, which feed detailed and summary streams through ActiveMQ to consumers: Popularity DB, SSB, Dashboard, MonaLisa.]

  14. From mining of xrootd records from the Edinburgh site (ECDF), all jobs: jobs have up to a million read operations • Most jobs have 0 vector operations (so they are not using TTC) • Some jobs do use it (but these are mostly tests, shaded blue in the plot) • We need to be able to switch on TTreeCache for our users!

  15. Old slide from CHEP 2012 to remind of the impact of TTC. [Plot: CPU efficiency (0.5 to 1.0) for reading 100% of events, with a 300 MB TTreeCache vs. no TTC, for dCache local copy, DPM local copy, dCap direct, Lustre, xrootd/EOS and GPFS.] ✤ TTC essential at some sites ✤ Users still don't set it ✤ Different optimal values per site ✤ Ability to set it in the job environment would be useful
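For reference, enabling TTC in user code is only a couple of lines; the sketch below also takes the cache size from a hypothetical environment variable, in the spirit of the "set it in the job environment" wish, so a site or pilot wrapper could tune the per-site optimum without touching the user's code:

```cpp
// Sketch: enable TTreeCache on a chain, with the size taken from a
// hypothetical environment variable so a site/job wrapper can tune it.
#include <cstdlib>
#include "TChain.h"

void enableTTC(TChain& chain) {
   Long64_t cacheSize = 30 * 1024 * 1024;             // 30 MB fallback
   if (const char* env = std::getenv("MY_TTC_SIZE"))  // hypothetical variable
      cacheSize = std::atol(env);

   chain.SetCacheSize(cacheSize);        // creates and attaches the TTreeCache
   chain.AddBranchToCache("*", kTRUE);   // cache all branches ...
   chain.SetCacheLearnEntries(10);       // ... or let it learn from the first entries
}
```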

  16. Oct HC FAX stress tests, US cloud: see Johannes Elmsheuser et al.'s CHEP poster. [Diagram: event rates between the MWT2, BNL, AGLT2 and SLAC sites; width and numbers give the event rate, green = 100% success.] Remote reading is working well in these tests (which do use TTreeCache!). (Johannes Elmsheuser, Friedrich Hönig (LMU München), "HammerCloud status and FAX tests", 23/10/2013.)

  17. ROOT I/O Feature Requests ✤ TTreeCache switch-on / configuration in the environment (including in ROOT 5) ✤ This is useful even if TTC is on by default or in the new framework ✤ Choices (multiple trees etc.), but are there blockers? ✤ Support for the new analysis model: generally for the xAOD as it develops ✤ Specific Reflex feature, "rules for the production of dictionaries for template types with markup", in ROOT 6 (already on the todo list, I hear) ✤ Advice on handling trees with 5000+ branches ✤ The planned http access would benefit from TDavixFile: is it in now?
