  1. Multi-Scale Computing: Integrating Local, Regional, National and International Grids through the TIGRE Project. Alan Sill, Ph.D., TIGRE Senior Scientist, High Performance Computing Center; Adjunct Professor of Physics, Texas Tech University

  2. Goals of the TIGRE Project. Provide a grid infrastructure that enables integration of computing systems, storage systems, databases, visualization labs and displays, and even instruments and sensors across Texas. Facilitate new academic-government-private industry research partnerships by dramatically enhancing both computational capabilities and research infrastructure. Address research areas of interest to the State of Texas in which a manifold increase in computing power, data access, and collaboration is necessary. Demonstrate new, preferred, enhanced, or increased computing and storage-handling capabilities offered by a statewide grid infrastructure.

  3. Background and History. At SuperComputing 1998, representatives from five Texas institutions met and agreed to cooperate and exchange notes on a variety of topics of interest in computing. High Performance Computing Across Texas (HiPCAT) was born as a result. TIGRE was proposed as one of HiPCAT's first projects (Rice, Texas Tech, U. of Houston, Texas A&M, and UT/TACC) and was combined by the Legislature into one funding bill with LEARN (the Lone Star Education and Research Network): $2.5M for TIGRE grid software development and $7.3M for LEARN fiber optic networking.

  4. Project Organization. Targeted application areas were selected by the steering committee, which also has responsibility for the overall direction of the project. These initial areas are: Biosciences and Medicine, Air Quality Modeling, and Energy Exploration. Technical implementation is carried out by the development team, consisting of two people at each of the five primary TIGRE institutions. Technical work is organized by the development team into activities to meet project milestones, with work targeted to meet these milestones carried out in collaboration across institutions.

  5. Funding: Texas Enterprise Fund. Created as a tool to “create jobs in Texas.” Accompanied by a formal Economic Development Agreement with the State. The TEF intends to “create quality jobs and leverage private investment” to strengthen the economic future of the state. This mandate differs from that of many grids.

  6. Project Progress to Date. The goal is to achieve a “quick build” toward working status.
     Year 1:
       Q1: Project plan ✔; Web site ✔; Certificate Authority ✔; Minimum testbed requirements ✔; Select 3 driving applications ✔
       Q2: Alpha-quality user portal ✔; Required services for TIGRE specified
       Q3: Define server software stack ✔; Distribution mechanism ✔; Simple demo of 1 TIGRE application ✔; Finalized procedures and policies to join TIGRE
       Q4: Alpha client software stack and installation method distributed ✔
     Year 2:
       Q1: Alpha customer management services system deployed and demonstrated; applications in three application areas (in progress)
       Q2: Project-wide global grid scheduler deployed (in progress)
       Q3: Stable software status (only bug fixes after this)
       Q4: Complete hardening of software; complete documentation; demonstrate TIGRE at SC
     So far, making excellent progress!

  7. Details of TIGRE Software. Based on the VDT, working in close cooperation with OSG. Based on GRAM4 (Web Services); pre-WS only upon request. Client and server software stacks are available. The goal is “one page” installation instructions that can be implemented quickly by newcomers. Uses a simplified set including GSI-OpenSSH, omitting much monitoring in favor of GPIR for lightweight status reporting. Installed on systems at all five primary TIGRE institutions; also running at several other locations throughout the state. Authentication is via X.509 (IGTF; the TACC CA was recently accredited); authorization is mostly local, via grid-mapfiles. (TTU uses GUMS/PRIMA.)
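As a rough illustration of what grid-mapfile authorization does, the sketch below (a conceptual stand-in, not the actual Globus gridmap code) maps an authenticated certificate subject DN to a local account; the file path and DN shown are hypothetical examples.

```python
# Conceptual sketch: mapping an X.509 subject DN to a local account, in the
# style of a grid-mapfile. This mimics (but is not) the Globus lookup itself;
# the path and DN below are hypothetical examples.

import shlex

def load_gridmap(path="/etc/grid-security/grid-mapfile"):
    """Parse lines of the form: "/DN/of/user" localaccount"""
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = shlex.split(line)  # handles the quoted DN
            if len(parts) >= 2:
                mapping[parts[0]] = parts[1]
    return mapping

def authorize(subject_dn, gridmap):
    """Return the local account for an authenticated DN, or None if unmapped."""
    return gridmap.get(subject_dn)

if __name__ == "__main__":
    gridmap = {  # in place of load_gridmap(), one hypothetical example entry
        "/DC=org/DC=example/OU=People/CN=Jane Researcher": "tigreusr",
    }
    print(authorize("/DC=org/DC=example/OU=People/CN=Jane Researcher", gridmap))
```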

  8. Server and Client Software Stack Contents. Contents of the client software stack: Globus 4.x pre-web services and web services clients, GSI-OpenSSH client, UberFTP, MyProxy client, and Condor-G. Contents of the cluster/compute server stack: Globus 4.x GRAM4 (web services) server, GSI-OpenSSH server (note: as of this version, works with PRIMA), and GPIR monitoring (added manually after the VDT pieces). Storage is not yet resolved in TIGRE, but initial investigations using SRM/DRM, SRB/iRODS, etc. are in progress.
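To show how the client-stack pieces fit together, here is a minimal sketch of submitting a job through Condor-G to a GRAM4 (WS GRAM) endpoint, assuming a valid grid proxy has already been obtained (for example via MyProxy). The host name, scheduler type, and job details are hypothetical, and the exact grid_resource syntax may vary by Condor-G version.

```python
# Hedged sketch: composing a minimal Condor-G submit description for a
# WS GRAM (GRAM4) endpoint and handing it to condor_submit.
# The endpoint, factory type, and job are illustrative assumptions only.

import subprocess
import tempfile

submit_description = """\
universe      = grid
grid_resource = gt4 https://grid.example.edu:8443/wsrf/services/ManagedJobFactoryService PBS
executable    = /bin/hostname
output        = job.out
error         = job.err
log           = job.log
queue
"""

with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
    f.write(submit_description)
    submit_file = f.name

# Assumes a valid proxy already exists (e.g., obtained via myproxy-logon).
subprocess.run(["condor_submit", submit_file], check=True)
```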

  9. TIGRE User Portal: http://tigreportal.hipcat.net

  10. Application Progress: Biosciences and Medicine. The initial demo was “UltraScan”: an analysis tool for reducing data from ultracentrifuge biomolecular optical spectra. Researchers (Borries Demeler, UTHSC-San Antonio, with postdoc Emre Brookes): “We can do science we never did before!”

  11. Other Biosciences and Medicine Progress. Beginning exploration of cancer radiotherapy modeling involving MD Anderson Cancer Center (Houston), Joe Arrington Cancer Center, and TTUHSC Southwest Cancer Center (Lubbock). Computationally intensive, and a good fit to grid computing and secure data transfer methods; also a good fit to educational medical physics modeling needs. Identity management, security, privacy, license, and export control requirements! One of our most exciting near-term applications.

  12. Application Progress: Air Quality Modeling. One of the most computationally intensive application areas in Texas. Researchers have been identified at Texas A&M, U. Houston, and TTU; application areas are diverse and hard to separate from other general atmospheric modeling topics. A large amount of data is generated; archival, redundancy, and local access to data are among the relevant issues. Relevant code (WRF, MM5, etc.) exists at most institutions. Decided to start with data redundancy aspects while investigating more general code and framework needs. Goal: a 31-day pooled cache of modeling output results (~2 TB) to be shared among participating institutions.
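A minimal sketch of the kind of rolling 31-day cache this implies, assuming each model run's output lands in a shared directory; the directory path and pruning policy are illustrative assumptions, not TIGRE's actual implementation.

```python
# Minimal sketch of a rolling 31-day cache of modeling output, assuming each
# run's output is dropped into a shared directory and judged by its
# modification time. Paths and the retention window are illustrative only.

import os
import time

CACHE_DIR = "/shared/tigre/airq-cache"   # hypothetical shared cache location
RETENTION_DAYS = 31

def prune_cache(cache_dir=CACHE_DIR, retention_days=RETENTION_DAYS):
    """Delete cached output files older than the retention window."""
    cutoff = time.time() - retention_days * 24 * 3600
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)

if __name__ == "__main__":
    prune_cache()
```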

  13. Application Progress: Energy Exploration. Also an existing area of computationally intensive use within the state. Complications stem from industry sensitivities to shared access to resources and the need to protect proprietary data. Research topics nonetheless exist in abundance. Working with different researchers at all five TIGRE primary institutions. Beginning work with Kalman-filter-based modeling methods for oil reservoir seismic data analysis. Some license management issues, minimized by the degree to which researchers already own or have access to licenses.
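For orientation, the sketch below shows the basic scalar Kalman filter predict/update cycle; the actual reservoir work would use a much richer state (typically an ensemble formulation), and the numbers here are made-up demonstration values.

```python
# Illustrative only: one predict/update step of a scalar Kalman filter.
# Not the researchers' reservoir code; parameters below are made up.

import numpy as np

def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=0.25):
    """Advance a scalar state estimate x with variance P given measurement z."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = 5.0
    x, P = 0.0, 1.0
    for _ in range(20):
        z = truth + rng.normal(scale=0.5)    # noisy synthetic measurement
        x, P = kalman_step(x, P, z)
    print(f"estimate after 20 measurements: {x:.3f} (variance {P:.4f})")
```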

  14. Authentication and Authorization. Identity management and local authorization for virtual organizations within institutional boundaries is a very hot topic among institutional CIOs and the security organizations that serve them. Within Texas, we have one large TIGRE participant, the UT System, that has adopted Shibboleth for its own internal IdM federated across campuses, others that are looking at it and related products, and some that are not showing any signs of adopting it, as far as we can tell. Any solution we adopt has to work with both Shibbolized and non-Shibbolized campuses. So far we use X.509 certificates in combination with local gridmap or GUMS/PRIMA authorization; TACC is working on an IGTF Member Integrated Credential Service (“MICS”) profile. Other related work is in progress.

  15. Storage Topics. TIGRE needs to work with a wide variety of storage models and data file sizes, compositions, and types. The project formulation did not address this topic specifically, and the atmospheric modeling application area has driven us to consider it more quickly than we expected. So far we have been working to understand options for SRM-interfaced storage and tools for grid-authenticated parallel transfer, access, and storage. Plain GridFTP, srmcp, srmcopy, RFT, and GSI interfaces to SRB and/or iRODS are being studied and tested. GridImage from the NCI caBIG (caGrid) project is also of interest for DICOM image transfers in radiotherapy applications.
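As one concrete example of grid-authenticated parallel transfer, the sketch below drives a GridFTP copy from a script; the endpoints are hypothetical, a valid proxy is assumed to already exist, and the -p (parallel streams) option is recalled from the GT4-era globus-url-copy and may differ by version.

```python
# Rough sketch: invoking a parallel, GSI-authenticated GridFTP transfer from
# a script. Endpoints are hypothetical; the -p (parallel streams) option is
# recalled from GT4-era globus-url-copy and may differ by version.
# Assumes a valid grid proxy is already in place.

import subprocess

SRC = "gsiftp://data.example.edu/archive/airq/2007-06-01.nc"          # hypothetical
DST = "gsiftp://cache.example.edu/shared/airq-cache/2007-06-01.nc"    # hypothetical

def parallel_copy(src, dst, streams=4):
    """Copy one file over GridFTP using multiple parallel TCP streams."""
    subprocess.run(["globus-url-copy", "-p", str(streams), src, dst], check=True)

if __name__ == "__main__":
    parallel_copy(SRC, DST)
```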

  16. Lessons Learned. It is important to note that the applications being pursued in this project are the result of the steering committee's selection of areas, not (although they are compatible) of the natural selection processes that tend to create collaboration-oriented VOs. People tend to need a reason to collaborate. The availability of resources on which to run is more of a problem (in the chicken-or-egg sense) when trying to create sub-VOs and collaborations from scratch. In each case we have been able to make progress by putting people into the field to learn the needs of researchers and to try to pull people and resources together to pursue common needs.

  17. Outreach. Outreach consists of pursuing needs beyond your comfort zone (by definition; consider the meaning of the word). In a grid context, this means doing things with the tools that we have not done before. Explaining has to be balanced with learning on the part of both the grid development team and the researchers. When the tools do not do what we want, we have to make them do what we (and the researchers) want them to. In many cases, this involves outreach to university decision makers (IT CIOs, Presidents, Provosts) as well as to the researchers themselves, to explain the need for new methods of working together as well as for networks, computers, etc.
