Global Software Distribution with CernVM-FS




  1. Global Software Distribution with CernVM-FS. Jakob Blomer, CERN. CCL Workshop on Scalable Computing, October 19th, 2016.

  2.–5. The Anatomy of a Scientific Software Stack (In High Energy Physics)
  The stack, from the rapidly changing top layer to the stable base:
  • My Analysis Code: < 10 Python classes (changing)
  • CMS Software Framework: O(1000) C++ classes
  • Simulation and I/O Libraries: ROOT, Geant4, MC-XYZ
  • CentOS 6 and Utilities: O(10) libraries (stable)
  How to install it, and re-install it after every change, on...
  • my laptop: compile into /opt, ∼ 1 week
  • my local cluster: ask the sys-admin to install into /nfs/software, > 1 week
  • someone else's cluster: ?

  6. Beyond the Local Cluster: the Worldwide LHC Computing Grid
  • ∼ 200 sites, ranging from 100 to 100 000 cores
  • Different countries, institutions, batch schedulers, operating systems, ...
  • Augmented by clouds, supercomputers, and LHC@Home

  7. What about Docker?
  Example: R in Docker
    $ docker pull r-base            → a 1 GB image (the container "app" plus libraries plus Linux)
    $ docker run -it r-base
    $ ... (fitting tutorial)        → only ∼ 30 MB of the image actually used
  It is hard to scale Docker:
  • an iPhone app is ∼ 20 MB, changes every month, and phones update staggered;
  • a Docker image is ∼ 1 GB, changes twice a week, and servers update synchronized.
  Moreover, your preferred cluster or supercomputer might not run Docker.

  8. A File System for Software Distribution
  The software file system sits alongside the basic system utilities on top of the OS kernel and is backed by a global HTTP cache hierarchy:
  • worker node's memory buffer: megabytes
  • worker node's disk cache: gigabytes
  • central web server: the entire software stack, terabytes
  Pioneered by CCL's GROW-FS for CDF at the Tevatron; refined in CernVM-FS, in production for CERN's LHC and other experiments.
  1. Single point of publishing
  2. HTTP transport, access and caching on demand
  3. Important for scaling: bulk meta-data download (not shown)
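  On the client side, this cache hierarchy is configured with a handful of parameters. A minimal sketch of /etc/cvmfs/default.local, using standard CernVM-FS client settings and a placeholder proxy host (squid.example.org is not from the slides):
    # /etc/cvmfs/default.local (illustrative values)
    CVMFS_REPOSITORIES=icecube.opensciencegrid.org
    CVMFS_HTTP_PROXY="http://squid.example.org:3128"   # site caching proxy, placeholder host
    CVMFS_QUOTA_LIMIT=20000                            # local disk cache limit in MB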

  9. One More Ingredient: Content-Addressable Storage
  The publisher's read/write file system (the master source) is transformed into content-addressed objects; these are moved by HTTP transport, caching and replication, and finally appear as a read-only file system on the worker nodes.
  Two independent issues:
  1. How to mount a file system (on someone else's computer)?
  2. How to distribute immutable, independent objects?

  10. Content-Addressable Storage: Data Structures
  Example repository /cvmfs/icecube.opensciencegrid.org: the file amd64-gcc6.0/4.2.0/ChangeLog is compressed and hashed (SHA-1) into an object such as 806fbb67373e9...
  Object store:
  • compressed files and chunks
  • de-duplicated
  File catalogs:
  • directory structure, symlinks
  • content hashes of regular files
  • digitally signed ⇒ integrity, authenticity
  • time to live
  • partitioned / Merkle hashes (possibility of sub-catalogs)
  ⇒ Immutable files, trivial to check for corruption, versioning
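  The transformation itself is simple to sketch in shell. This is only an illustration of content addressing and de-duplication, not the exact CernVM-FS tool chain: gzip stands in for zlib compression, and the data/<first two hex digits>/<rest of hash> layout mirrors the convention used by CernVM-FS object stores:
    # compress the file, hash the result, store it under its own hash
    gzip -c ChangeLog > ChangeLog.z
    HASH=$(sha1sum ChangeLog.z | cut -d' ' -f1)     # e.g. 806fbb67373e9...
    mkdir -p data/${HASH:0:2}
    cp ChangeLog.z data/${HASH:0:2}/${HASH:2}       # identical content maps to the same object
  Because the object name is derived from the content, publishing the same file twice stores it only once, and corruption is detected simply by re-hashing.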

  11.–12. Transactional Publish Interface
  A union file system (AUFS or OverlayFS) overlays a read/write scratch area on top of the read-only CernVM-FS mount; the read/write interface stores the result in a file system or S3.
  Reproducible: as in git, you can always come back to a published state.
  Publishing new content:
    [ ~ ]# cvmfs_server transaction icecube.opensciencegrid.org
    [ ~ ]# make DESTDIR=/cvmfs/icecube.opensciencegrid.org/amd64-gcc6.0/4.2.0 install
    [ ~ ]# cvmfs_server publish icecube.opensciencegrid.org
  Uses the cvmfs-server tools and an Apache web server.
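  Since every publish creates a new immutable revision, earlier states stay reachable. Assuming the standard cvmfs_server tagging and rollback commands (the tag name below is a placeholder, not taken from the slides), returning to an older revision looks roughly like this:
    [ ~ ]# cvmfs_server tag -l icecube.opensciencegrid.org                        # list named revisions
    [ ~ ]# cvmfs_server rollback -t some-older-tag icecube.opensciencegrid.org    # go back to that state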

  13.–18. Content Distribution over the Web (built up over several slides)
  Server side: stateless services.
  • Worker nodes fetch over HTTP from caching proxies in their data center: O(100) nodes per proxy server
  • Caching proxies fetch over HTTP from the web servers: O(10) data centers per server
  • Load balancing and failover among the caching proxies
  • Mirror servers, selected by Geo-IP, with failover among the mirrors
  • Optionally, a prefetched cache on the worker nodes
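  On the client, proxy groups and mirror lists express exactly this load balancing and failover. A sketch with placeholder host names; the @fqrn@ macro expands to the repository's fully qualified name:
    # two load-balanced site proxies in one group, then a DIRECT fallback
    CVMFS_HTTP_PROXY="http://squid1.example.org:3128|http://squid2.example.org:3128;DIRECT"
    # ordered list of mirror servers; the client fails over from one to the next
    CVMFS_SERVER_URL="http://stratum1-a.example.org/cvmfs/@fqrn@;http://stratum1-b.example.org/cvmfs/@fqrn@"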

  19. Mounting the File System. Client: Fuse
  Available for RHEL, Ubuntu, OS X; Intel, ARM, Power. Works on most grids and virtual machines (cloud).
  Data path: an open(/ChangeLog) call passes through glibc and the kernel VFS (with its inode and dentry caches) to the Fuse module, which hands it via /dev/fuse to the user-space CernVM-FS process (libfuse); CernVM-FS fetches the object with an HTTP GET, inflates and verifies it against its SHA-1 hash, and returns a file descriptor.
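  In practice the Fuse client is usually driven by autofs, so simply accessing the mount point triggers the mount; cvmfs_config probe (see the client tools in the backup slides) verifies that this works. The repository name is the example used elsewhere in the talk:
    $ ls /cvmfs/icecube.opensciencegrid.org            # autofs mounts the repository on first access
    $ cvmfs_config probe icecube.opensciencegrid.org   # mount and access the repository as a health check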

  20. Mounting the File System. Client: Parrot
  Available for Linux / Intel. Works on supercomputers, opportunistic clusters, and in containers.
  Data path: inside the Parrot sandbox, the open(/ChangeLog) call is intercepted at the syscall level by Parrot and libparrot, so no kernel module is needed; libcvmfs then performs the HTTP GET, inflates and verifies the object against its SHA-1 hash, and returns a file descriptor.
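  A typical unprivileged invocation might look like the following sketch. It assumes the repository is covered by Parrot's built-in CernVM-FS defaults (otherwise it can be added via the PARROT_CVMFS_REPO environment variable), and the proxy host is a placeholder:
    $ export HTTP_PROXY=http://squid.example.org:3128   # site proxy, placeholder host
    $ parrot_run ls /cvmfs/icecube.opensciencegrid.org  # no root privileges, no Fuse module required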

  21. Scale of Deployment
  • > 350 million files under management
  • > 50 repositories
  • Installation service provided by OSG and EGI

  22. Docker Integration (under construction, funded project)
  Today the Docker daemon pulls and pushes whole container images from and to a Docker registry; an improved Docker daemon would instead use file-based transfer backed by the CernVM File System.

  23. Client Cache Manager Plugins (under construction)
  The cvmfs/fuse and libcvmfs/parrot clients talk through a transport channel (TCP, socket, ...) to a cache manager acting as a key-value store; third-party plugins built against a C library can provide backends such as memory, Ceph, RAMCloud, ...
  Draft C interface:
    void cvmfs_add_refcount(struct hash object_id, int change_by);
    int  cvmfs_pread(struct hash object_id, int offset, int size, void *buffer);

    // Transactional writing in fixed-sized chunks
    int  cvmfs_start_txn(struct hash object_id, int txn_id, struct info object_info);
    int  cvmfs_write_txn(int txn_id, void *buffer, int size);
    int  cvmfs_abort_txn(int txn_id);
    int  cvmfs_commit_txn(int txn_id);

  24. Summary
  CernVM-FS:
  • Global, HTTP-based file system for software distribution
  • Works great with Parrot
  • Optimized for small files and heavy meta-data workloads
  • Open source (BSD), used beyond high-energy physics
  Use cases:
  • Scientific software
  • Distribution of static data, e.g. conditions and calibration data
  • VM / container distribution, cf. CernVM
  • Building block for long-term data preservation
  Source code: https://github.com/cvmfs/cvmfs
  Downloads: https://cernvm.cern.ch/portal/filesystem/downloads
  Documentation: https://cvmfs.readthedocs.org
  Mailing list: cvmfs-talk@cern.ch

  25. Backup Slides

  26. CernVM-FS Client Tools
  Mount helpers:
  • Set up the environment (number of file descriptors, access rights, ...)
  • Used by autofs on /cvmfs
  • Used by /etc/fstab or by mount as root: mount -t cvmfs atlas.cern.ch /cvmfs/atlas.cern.ch
  Fuse module:
  • Normal namespace: /cvmfs/<repository>, e.g. /cvmfs/atlas.cern.ch
  • Private mount as a user possible
  • One process per Fuse module plus a watchdog process
  • Cache on local disk, LRU managed
  • NFS export mode
  • Hotpatch functionality: cvmfs_config reload
  Diagnostics:
  • Nagios check available
  • cvmfs_config probe
  • cvmfs_config chksetup
  • cvmfs_fsck
  • cvmfs_talk, connect to a running instance
  Parrot:
  • Built in by default
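  Putting the diagnostics together, a quick health check of a mounted repository could look like the following sketch, using the tools listed above and the ATLAS repository from the slide (cvmfs_talk typically needs root to reach the instance's socket):
    $ cvmfs_config chksetup                         # verify the client configuration
    $ cvmfs_config probe atlas.cern.ch              # mount and access the repository
    $ sudo cvmfs_talk -i atlas.cern.ch cache size   # ask the running instance about its cache usage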
