

  1. Open Science Grid: one grid among many. Ruth Pordes, Fermilab, May 3rd 2006.

  2. Of course a special grid … it's the people (some of them at the consortium meeting in Jan 06).

  3. With special partners…

  4. The Open Science Grid Consortium brings the grid service providers (middleware developers; cluster, network and storage administrators; local-grid communities) and the grid consumers (from global collaborations to the single researcher, through campus communities to under-served science domains) into a cooperative to share and sustain a common heterogeneous distributed facility in the US and beyond. Grid providers serve multiple communities; grid consumers use multiple grids.

  5. The Open Science Grid Consortium leadership: I am the Executive Director. Miron Livny is Manager of the OSG Distributed Facility: head of the Condor Project and the Virtual Data Toolkit, coordinator of the US federation in EGEE, and a member of the EGEE/gLite design team. Bill Kramer is Chair of the Science Council and head of the Berkeley Lab NERSC supercomputing facility. Ian Foster is co-PI of the OSG proposal, responsible for Globus and computer-science research contributions and partnerships. Harvey Newman represents advanced network project contributions and collaborations. Alan Blatecky is Engagement Coordinator for new communities. Experiment software leadership: the US ATLAS and US CMS software and computing leaders, LIGO, CDF, D0, STAR, etc.

  6. The OSG Eco-System: interdependence with international and national infrastructures (EGEE, TeraGrid); a growing number of campus grids (GLOW, GROW, GRASE, FermiGrid, Crimson Grid, TIGRE); and end-user integrated distributed systems (the LIGO Data Grid, the CMS and ATLAS distributed analysis systems, the Tevatron SAMGrid, and the STAR Data Grid).

  7. What is Open Science Grid?
     - A high-throughput distributed facility: shared opportunistic access to existing clusters, storage and networks; owner-controlled resources and usage policies.
     - Supporting science: a 5-year proposal submitted to NSF and DOE (should hear in June).
     - Open and heterogeneous: research groups transitioning from and extending (legacy) systems to grids; experiments developing new systems; application computer scientists looking for real-life use of technology, integration and operation; university researchers.

  8. What is Open Science Grid? Blueprint principles (June 2004): preserve site autonomy and shared grid use with local access; VO-based environment and services; recursive principles throughout, supporting a "grid of grids".

  9. First & foremost - delivery to the WLCG schedule for LHC science And soon a third: Naregi 5/3/06 9

  10. OSG: more than a US grid. Korea; Brazil (D0, STAR, LHC); Taiwan (CDF, LHC).

  11. OSG on one day last week: 50 clusters, used locally as well as through the grid; 5 large disk or tape stores; 23 VOs; more than 2000 jobs running through the grid, with about 500 jobs waiting. Work routed from the local UWisconsin campus grid, LHC, Run II and bioinformatics applications.

  12. The trend? OSG 0.4.0 deployment.

  13. While LHC physics drives the schedule and performance envelope: 1 GigaByte/sec.

  14. OSG also serves other stakeholders:
     - Gravitational-wave and other legacy physics experiments. E.g. from the OSG proposal, LIGO: with an annual science run of data collected at roughly a terabyte of raw data per day, this will be critical to the goal of transparently carrying out LIGO data analysis on the opportunistic cycles available on other VOs' hardware.
     - The opportunity to share use of a "standing army" of resources, e.g. the Genome Analysis and Database Update system.
     - Interfacing existing computing and storage facilities and campus grids to a common infrastructure, e.g. the FermiGrid strategy: allow opportunistic use of otherwise dedicated resources; save effort by implementing shared services; work coherently to move all of our applications and services to run on the Grid.
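As a quick, illustrative sanity check of the LIGO figure quoted above (the arithmetic is mine, not from the slides): roughly one terabyte of raw data per day corresponds to a sustained average rate of about 12 MB/s that an opportunistic analysis site would need to absorb.

```python
# Back-of-the-envelope: sustained rate implied by ~1 TB of raw data per day.
# Illustrative arithmetic only; the 1 TB/day figure comes from the OSG proposal
# quote on the slide above.
bytes_per_day = 1e12                 # ~1 TB (decimal)
seconds_per_day = 24 * 60 * 60
rate_mb_s = bytes_per_day / seconds_per_day / 1e6
print(f"~1 TB/day ≈ {rate_mb_s:.1f} MB/s sustained")   # ≈ 11.6 MB/s
```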

  15. OSG also serves other stakeholders (same content as the previous slide, with an added overlay): three examples of interoperation follow, including the Genome Analysis and Database Update system reported in OSG news.

  16. Grid Laboratory of Wisconsin (GLOW):

  17. GLOW to OSG and the football pool problem:
     - Routing jobs from the "lan-grid" local security, job and storage infrastructure to the "wan-grid".
     - Middleware development from the CMS DISUN outreach program.
     - The goal of the application is to determine the smallest "covering code" of ternary words of length six (or, in football pool terms, how many lottery tickets one would have to buy to guarantee that no more than one prediction is incorrect). Even after decades of study, only fairly weak bounds are known on this value. Solutions to this problem have applications in data compression, coding theory and statistical designs. A small sketch of the problem for tiny word lengths is shown below.
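To make the covering-code statement concrete, here is a small brute-force sketch (mine, not the GLOW/DISUN application code) that finds the minimum-size covering code of Hamming radius one for ternary words of very small lengths. The length-six instance run on OSG is far beyond this naive search; the sketch only illustrates what is being asked.

```python
# Illustrative brute force for the "football pool" covering-code problem:
# find the fewest ternary words of length n such that every length-n ternary
# word differs from some chosen word in at most one position.
# Only feasible for tiny n; the slide's n = 6 case is an open research problem.
from itertools import combinations, product


def hamming(a, b):
    """Number of positions where two equal-length tuples differ."""
    return sum(x != y for x, y in zip(a, b))


def covers(code, words):
    """True if every word lies within Hamming distance 1 of some codeword."""
    return all(any(hamming(w, c) <= 1 for c in code) for w in words)


def min_covering_code(n):
    """Smallest covering code of radius 1 over ternary words of length n."""
    words = list(product(range(3), repeat=n))
    for k in range(1, len(words) + 1):            # try growing code sizes
        for code in combinations(words, k):       # exhaustive: tiny n only
            if covers(code, words):
                return k, code
    return len(words), tuple(words)


if __name__ == "__main__":
    for n in (1, 2, 3):                           # n = 3 already takes a few seconds
        k, _ = min_covering_code(n)
        print(f"length {n}: minimum covering code size {k}")
    # Expected: 1, 3, 5.  For length 6 only bounds are known, as the slide notes.
```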

  18. Opportunistic routing from GLOW to OSG.
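The real GLOW-to-OSG path uses the Condor-based infrastructure behind the campus grid; purely as a toy illustration of the overflow idea (fill local slots first, then route the remainder opportunistically to the wide-area grid), a sketch might look like this. Every name and number below is invented.

```python
# Toy illustration of opportunistic overflow routing from a campus grid
# ("lan-grid") to a wide-area grid ("wan-grid").  NOT the actual GLOW/OSG
# mechanism, which is Condor-based; job names and slot counts are invented.

def route_jobs(jobs, local_free_slots):
    """Keep jobs on free local slots; overflow the rest to the grid."""
    local, grid = [], []
    for job in jobs:
        if local_free_slots > 0:
            local.append(job)
            local_free_slots -= 1
        else:
            grid.append(job)      # opportunistic: run wherever an OSG site accepts it
    return local, grid


if __name__ == "__main__":
    jobs = [f"fp6_chunk_{i:04d}" for i in range(2500)]     # hypothetical work units
    local, grid = route_jobs(jobs, local_free_slots=800)   # hypothetical capacity
    print(f"{len(local)} jobs stay on the campus grid, {len(grid)} overflow to OSG")
```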

  19. TeraGrid: through high-performance network connections, TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the (US) country. CDF Monte Carlo jobs running on the Purdue TeraGrid resource are able to access OSG data areas and be accounted to both grids. http://www.nsf.gov/news/news_images.jsp?cntn_id=104248&org=OLPA

  20. Genome Analysis and Database Update system: runs across TeraGrid and OSG; uses the Virtual Data System (VDS) for workflow and provenance. It passes through public DNA and protein databases for new and newly updated genomes of different organisms and runs BLAST, Blocks and Chisel; the resulting database has 1200 users. The request is 1000 CPUs for 1-2 weeks, once a month, every month. On OSG at the moment: more than 600 CPUs and 17,000 jobs a week.
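A quick consistency check of the throughput quoted above (my arithmetic, not from the slides): 17,000 jobs a week spread over roughly 600 CPUs is about 28 jobs per CPU per week, i.e. each job occupies a slot for roughly six hours on average if the slots are kept busy.

```python
# Back-of-the-envelope check of the GADU numbers quoted on the slide above.
jobs_per_week = 17_000
cpus = 600                                     # "more than 600 CPUs" on OSG
hours_per_week = 7 * 24
jobs_per_cpu = jobs_per_week / cpus            # ≈ 28 jobs per CPU per week
avg_job_hours = hours_per_week / jobs_per_cpu  # ≈ 5.9 hours per job slot
print(f"{jobs_per_cpu:.0f} jobs/CPU/week, ~{avg_job_hours:.1f} h per job at full occupancy")
```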

  21. Interoperation and commonality with EGEE: OSG sites publish information to the WLCG BDII so that Resource Brokers can route jobs. Further common areas: operations, security, middleware.
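Because the BDII is an LDAP server publishing GLUE-schema records, a resource broker (or a user) can discover OSG compute elements with an ordinary LDAP query. Below is a minimal sketch using the python-ldap package; the BDII hostname is a placeholder, and the attribute names are standard GLUE 1.2 compute-element attributes.

```python
# Minimal sketch: query a WLCG BDII (an LDAP server) for GLUE compute-element
# records published by OSG sites.  Requires the python-ldap package; the
# endpoint below is a placeholder, not a real host.
import ldap

BDII_URL = "ldap://bdii.example.org:2170"   # placeholder BDII endpoint

con = ldap.initialize(BDII_URL)
con.simple_bind_s()                         # anonymous bind is typical for a BDII
results = con.search_s(
    "o=grid",                               # conventional GLUE base DN
    ldap.SCOPE_SUBTREE,
    "(objectClass=GlueCE)",                 # compute-element records
    ["GlueCEUniqueID", "GlueCEStateFreeCPUs", "GlueCEStateWaitingJobs"],
)
for dn, attrs in results:
    ce = attrs.get("GlueCEUniqueID", [b"?"])[0].decode()
    free = attrs.get("GlueCEStateFreeCPUs", [b"?"])[0].decode()
    waiting = attrs.get("GlueCEStateWaitingJobs", [b"?"])[0].decode()
    print(f"{ce}: {free} free CPUs, {waiting} waiting jobs")
```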

  22. OSG middleware layers:
     - VO-specific applications, services and interfaces: LHC services and interfaces; Tevatron (CDF, D0); the LIGO Data Grid.
     - OSG Release Cache: VDT plus configuration, validation and VO management.
     - Virtual Data Toolkit (VDT) common services: NMI plus VOMS, MonALISA, Clarens, AuthZ, etc.
     - Infrastructure from the NSF Middleware Initiative (NMI): Condor, Globus, MyProxy.


  24. Virtual Data Toolkit V1.3.10b: a collection of components to integrate into a distributed system; easy to download, install and use. Components: Apache HTTPD 2.2.0; Apache Tomcat 5.0.28; Clarens 0.7.2; ClassAds 0.9.7; Condor/Condor-G 6.7.18; DOE and LCG CA Certificates v4 (includes LCG 0.25 CAs); DRM 1.2.10; EDG CRL Update 1.2.5; EDG Make Gridmap 2.1.0; Fault Tolerant Shell (ftsh) 2.0.12; Generic Information Provider 1.0.15 (Iowa, 15-Feb-2006); gLite CE Monitor 1.6.0 (INFN prerelease from 2005-11-15); Globus Toolkit, pre-web-services 4.0.1; Globus Toolkit, web-services 4.0.1; GLUE Schema 1.2 draft 7; GSI-Enabled OpenSSH 3.6; GUMS 1.1.0; Java SDK 1.4.2_10; jClarens 0.6.1; jClarens Discovery Services registration scripts 20060206; jClarens Web Service Registry 0.6.1; JobMon 0.2; KX509 20031111; MonALISA 1.4.12; MyProxy 3.4; MySQL 4.1.11; Nest 0.9.7-pre1; Netlogger 3.2.4; PPDG Cert Scripts 1.7; PRIMA Authorization Module 0.3; PRIMA Authorization Module for GT4 Web Services 0.1.0; pyGlobus gt4.0.1-1.13; pyGridWare gt4.0.1a; RLS 3.0.041021; SRM Tester 1.1; UberFTP 1.18; Virtual Data System 1.4.4; VOMS 1.6.10.2; VOMS Admin 1.2.10-r0 (client 1.2.10, interface 1.0.2, server 1.2.10). Slide legend: "Common with EGEE/WLCG".
