

SLIDE 1

National Center for Supercomputing Applications

A Glimpse of NCSA’s Role in Support of Research

Radha Nandkumar
Program Director, International Affiliations & Campus Relations
National Center for Supercomputing Applications
radha@ncsa.uiuc.edu
U.S.-South America Workshop: Mechanics and Advanced Materials – Research and Education, August 2-6, 2004

SLIDE 2

Begin with Thanks

  • Organizers – Profs. Borges, Dumont, Espinosa, Paulino, and Rochinha

– For the invitation
– Regular engagements
– Making everyone feel welcome, special, and important

  • NSF

– Vision, encouragement and funding

  • Other Brazilian Collaborators

– Bruno Schulze, LNCC – Vinod Rebello and Christina Maria Boeres, UFF – for their invitations, hosting the visit, and for several of their visits to NCSA to seal our collaborations.

  • Brazilian Colleagues and Community

– Hospitality and warmth

  • Other participants in this workshop for interactions and continued discussions

  • Added bonus and a sign of success – events such as this workshop are also enabling newer collaborations “between” U.S. researchers

SLIDE 3

A little bit about myself

  • Professional preparation

– Nearly 2 decades of experience in HPC and computational science at NCSA; among the staff since inception!
– Completed an Executive MBA recently (May ’02)
– Ph.D. from the University of Illinois, Urbana-Champaign

  • Condensed Matter Physics in Astrophysical Systems (neutron stars) – Thesis Advisor: Prof. David Pines

– Observational X-ray Astronomy (in India) and Cosmic ray physics (Univ. of Chicago) prior to the above.

  • Current Activities

– Enabling international partnerships and collaborations for NCSA
– Enabling and monitoring interdisciplinary computational science research at NCSA

SLIDE 4

Presentation Outline

  • Introduction to NCSA
  • Partnerships
  • Recent Changes & Trends
  • Computing Infrastructure
  • Software Infrastructure
  • Sample Applications
SLIDE 5

National Center for Supercomputing Applications

  • NCSA

– a unit of the University of Illinois at Urbana-Champaign
– a U.S. NSF HPC center started in 1985 with international collaboration, now in its 19th year

  • federal, state, university, and industry funded

– Transitioning from SCC & HPCC (12 yrs) to PACI Center (7 yrs) to SCI Center (soon)
– a globally recognized leader in HEC and computational science, scientific visualization, and innovative software

  • Mission

– providing access to leading computing and information technologies

  • for universities and industry
SLIDE 6

Strength of Innovation

  • NCSA Telnet
  • Mosaic
  • Scientific Visualizations
  • Virtual Director
  • HDF, D2K, open source software, CAVElib
  • “In-a-Box” Software Suite – Grid in a Box => NMI
  • High End Computing Center – #4 in Top500
  • LES for the Alliance and the TeraGrid

SLIDE 7

NCSA Personnel in a nutshell

  • Current Direction & Vision

– Rob Pennington, Acting Interim Director – Danny Powell, Executive Director

  • Past Directors

– Founding Director – Prof. Larry Smarr (now in San Diego)
– Second Director – Prof. Daniel Reed (now in North Carolina)

  • A well-known and innovative organization

– 280 FTEs and more than 130 students in 8 buildings
– awaiting completion of a new building that will house most of us

  • New Director search is nearing completion
SLIDE 8

NCSA Satellite Facilities

Alliance Center for Collaboration, Education, Science, and Software – Arlington, VA
Technology, Research, Education, and Commercialization Center – Suburban Chicago

SLIDE 9

NCSA Private Sector Partners

  • Previous participants

– Amoco – American Airlines – Dow Chemical – Eli Lilly – FMC – Kellogg – Phillips Petroleum – Schlumberger – Shell Oil – Tribune Company – United Technologies Corp.

  • Current/recent partners

– Allstate Insurance – Boeing – Caterpillar – Eastman Kodak – J. P. Morgan – Motorola – Sears

SLIDE 10

International Affiliations

  • NCSA’s Affiliates

– COPPE, Brazil (RSN) – LNCC, Brazil (Most recent) – APAC, Australia – CCLRC, UK – KISTI, Korea – Kurchatov Institute, Russia – NCHC, Taiwan – NCSA is a member of PRAGMA – CDAC, India (in discussion) – BII, Singapore (in progress)

SLIDE 11

History and Pathways to Brazil

  • First visit -- NSF/U.S.- Brazilian Collaboration - August, 2002

– Talks on NCSA in multiple locations in Brazil – COPPE, FAPERJ, USP, UBrasilia etc.

  • Visit to NCSA by LNCC faculty – November 2002

– Bruno Schulze, Leon Sinay

  • ACM/IFIP International Middleware Conference 2003 – Rio, June, 2003
  • NSF/U.S.- Brazilian Collaboration Workshop on Advanced Materials - June, 2003
  • Discussions on MOA with COPPE started in July 2003

  • Prof. Rochinha and Prof. Coutinho
  • Discussions on MOA with LNCC – August 2003

  • Prof. Marco Raupp, Prof. Bruno Schulze
  • Visit to NCSA by COPPE faculty –August 2003

  • Prof. Rochinha
  • Visit to NCSA by USP faculty, October 2003

  • Prof. Tereza Christina Carvalho
  • MOA between NCSA and LNCC – December 2003
  • Invitation to NCSA Staff for the LNCC Workshop on Grid Computing

– Highlighted NCSA-LNCC MOA – February 2-5, 2004
– Half a dozen NCSA/UIUC members participated in the workshop, plus our Australian affiliate

  • NSF/U.S.- Brazilian Collaboration Workshop on Advanced Materials - August 2004
  • MOA between NCSA and COPPE – Expected to be completed in August 2004
  • LNCC Computational Science Workshop and Mini-symposium – next week, August 2004
  • Middleware workshop in conjunction with the Middleware Conference – October 2004

SLIDE 12

Recent Trends and Transitions

  • New Program

– Recent CISE reorganization – PACI Program sunset; New SCI Program at NSF

  • NCSA’s new directions

– Cyberinfrastructure in support of research, a new imperative

  • New modalities

– Coopetition to cooperation – NCSA & SDSC maintain their uniqueness while also working together

  • Building Stronger Affiliations
SLIDE 13

International Collaborations Outlook

“The National Science Foundation should establish and lead a large-scale, interagency, and internationally coordinated Advanced Cyberinfrastructure Program (ACP) to create, deploy and apply cyberinfrastructure in ways that radically empower all scientific and engineering research and allied education.”

– NSF’s Blue Ribbon Panel (Atkins Report)

ACP => Shared CyberInfrastructure Program

SLIDE 14

Cyberenvironments

A cyberenvironment is a subset of general CI capabilities and functionality that is designed and built to meet the needs of a particular community. It includes the use of broadly used middleware and networks as well as community-specific facilities, software frameworks, networks, and people. It is a persistent, robust, and supported capability.


SLIDE 15

Our core expertise

  • Development & Deployment of Cyberinfrastructure

– Computing & Grid Infrastructure – Capability and Capacity Computing – Middleware Development and Deployment

  • Access to these environments for empowerment of science and user communities

– Enabling breakthrough scientific discoveries – Increasing knowledge and understanding

  • Community engagement

– Nationally and internationally – Education, Outreach, and Training

  • Building successful collaborations

– To pursue all of the above

SLIDE 16

NCSA focus in Cyberinfrastructure

Stable, robust and supported cyberenvironments for scientific research and communities

  • Community engagement to determine requirements
  • Science drivers to make sure that the requirements are implemented correctly
  • R&D if/as necessary to do the development for the requirements

  • Integration into the production environment
  • Continuing support plan of the products
SLIDE 17

Active Collaborations => Cyberinfrastructure

  • Support the effective use of extant resources by applications scientists and educators

– Example: analysis of large, complex datasets utilizing on-demand and interactive resources to enable data-to-knowledge results

  • Coordinated activities aimed at creating a national cyberinfrastructure

– Partnerships and joint projects such as TeraGrid, NCSA/SDSC collaborations, NMI, NEESgrid, …

  • Encourage and actively participate in interdisciplinary collaborations

SLIDE 18

NCSA/Alliance Multiphase Strategy

  • Multiple user classes

– ISV software, hero calculations, data intensive analysis – distributed resource sharing, parameter studies

  • Four computing approaches

– shared memory multiprocessors

  • 12 32-way IBM p690 systems (2 TF peak)
  • large memory and ISV support

– TeraGrid Itanium2 clusters

  • 64-bit Itanium2/Madison (10.6 TF peak)
  • ETF partners

– IA-32 clusters (>17 TF peak)

  • 32-bit systems for hero calculations
  • dedicated sub-clusters (3 TF each)

– To be allocated for weeks or longer to specific teams

– Alliance Technology Grid & Condor resource pools

  • Complemented by large-scale archives

– ~500 TB secondary and 2 PB tertiary storage

[Photo: NCSA Control Room]

SLIDE 19

NCSA Computing Environment — 32 TF

Platinum – Intel Pentium III 1 GHz IBM cluster, 1,024 processors, 1 TF peak performance, GPFS
Titan – Intel Itanium 800 MHz IBM cluster, 320 processors, 1 TF peak performance, NFS
Copper – IBM POWER4 p690 systems, 384 processors, 2 TF peak performance, GPFS, 24 TB
Mercury, phase 1 – Intel Itanium 2 1.3 GHz IBM cluster, 512 processors, 2.662 TF peak performance, GPFS, 60 TB
Mercury, phase 2 – Intel Itanium 2 1.5 GHz IBM cluster, 1,334 processors, 8 TF peak performance, GPFS, 170 TB
Tungsten – Intel Xeon 3.0 GHz Dell cluster, 2,560 processors, 17.7 TF peak performance, Lustre, 140 TB

Top500 rankings: #135, #111, #99, #35, #4
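The headline figure in the slide title can be sanity-checked by summing the peak ratings of the six systems listed above (a quick arithmetic check, not from the slide itself):

```python
# Sum of peak performance across the six NCSA systems listed above.
peaks_tf = {
    "Platinum": 1.0,          # Pentium III cluster
    "Titan": 1.0,             # Itanium cluster
    "Copper": 2.0,            # POWER4 p690 systems
    "Mercury phase 1": 2.662,
    "Mercury phase 2": 8.0,
    "Tungsten": 17.7,         # Xeon/Dell cluster
}
total_tf = sum(peaks_tf.values())
print(f"aggregate peak: {total_tf:.1f} TF")  # rounds to the ~32 TF headline
```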

SLIDE 20

Extensible TeraGrid Facility

NCSA: Compute Intensive – 10 TF IA-64, 128 large-memory nodes, 230 TB disk storage, 3 PB tape storage, GPFS and data mining
SDSC: Data Intensive – 4 TF IA-64, 1.1 TF Power4, DB2/Oracle servers, 500 TB disk storage, 6 PB tape storage
PSC: Compute Intensive – 6 TF EV68, 71 TB storage; 0.3 TF EV7 shared-memory, 150 TB storage server
ANL: Visualization – 1.25 TF IA-64, 96 viz nodes, 20 TB storage
Caltech: Data collection analysis – 0.4 TF IA-64, IA-32 Datawulf, 80 TB storage

Sites are joined by an extensible backplane network through the LA and Chicago hubs over 30–40 Gb/s links.

+ ETF2 sites: TACC, IU, Purdue and ORNL in FY04

SLIDE 21

Additional ETF Sites - 2004

[Network diagram: the 2004 ETF additions – Purdue (PU), Indiana (IUB, IUPUI), TACC, and ORNL – join the SDSC/Caltech/NCSA/PSC/ANL backplane through the LA, Chicago, and Atlanta hubs over 10–40 Gb/s links]

Contact: Linda Winkler (winkler@mcs.anl.gov)

SLIDE 22

Combine Data Resources Across TG

Per-site storage (home directory, node-local storage, scratch/staging, parallel filesystem, archival system):

– SDSC: 2 TB NFS home; 100 TB QFS; 64 TB GPFS; SAM-FS archive; 6 PB HPSS
– NCSA: 1 TB NFS home; 70 GB/node local; 8 TB TTS scratch; 39 TB GPFS; 1.5 PB UniTree archive
– ANL: 4 TB NFS home; 132 GB/node (IA-32), 64 GB/node (IA-64); 16 TB PVFS
– Caltech: 140 GB NFS home; 70 GB/node local, 35 GB/node; 80 TB PVFS; 1.2 PB HPSS archive
– PSC: 0.5 TB NFS home (TCS); 38 GB/node (TCS); 30 TB PFS scratch; 24 TB SLASH; 3 PB DMF archive

SLIDE 23

Access to HPC Resources

  • Allocation Process

– PIs are U.S. researchers from academic institutions
– Co-PIs/collaborators could be non-U.S. academicians
– Initial small amounts of resources via simple web-based requests to start out
– Larger, subsequent resource allocations by proposal submissions for peer review
– Review boards meet frequently

  • Our HPC User Community

– ~2,000 users per year
– ~500 PI projects
– ~58% of usage by the MPS Division

  • Math & Physical Sciences
  • AST, CHE, DMR, DMS & PHY

[Chart: Materials Research (DMR) projects usage, 1986–2002]

SLIDE 24

Users by Type in 2003

Grad Student 38% Faculty 29% Postdoc 19% Research Staff 10% Undergrad 2% Other 2%

SLIDE 25

Cluster in a Box/OSCAR

  • Community code base with strong support

– Bald Guy Software, Dell, IBM, Intel, Indiana, MSC.Software, NCSA, ORNL, Sherbrooke University

  • Six releases within the past year

– >29,000 downloads during this period

  • Recent additions

– HDF4/HDF5 I/O libraries – OSCAR database for cluster configuration – Itanium2 and Gelato consortium integration – NCSA cluster monitor package (Clumon) – NCSA VMI 2 messaging layer

  • Myrinet, gigabit Ethernet and Infiniband

– PVFS

  • The First Annual OSCAR Symposium

– May 11-14, 2003, Québec, Canada

SLIDE 26

Grid in a Box => NMI

  • Goals

– middleware integration

  • packaging, testing, documentation

– community building

  • instruments, laboratories and data
  • Features and status

– middleware used by many projects – additional funding supporting new sites

www.nsf-middleware.org

SLIDE 27

Display Wall In A Box

  • Current software

– Chromium and Pixel Blaster Movie Player – Argonne movie viewer – VTK geometry viewer and VNC

  • Download via www.ncsa.uiuc.edu

[Photos: rear-projector and LCD-panel display walls]

SLIDE 28

Visualization of Multiple Simulations

Source: Bob Wilhelmson

SLIDE 29

NCSA’s Application Centric Efforts

  • Astronomy and Astrophysics
  • Arts and Humanities
  • Biology
  • Engineering
  • Environmental Sciences
  • Geosciences
  • Integration of Education
SLIDE 30

CI Architecture

[Architecture diagram, layered:]

– Physical resources: network, database, SMP, cluster, mass store, visualization systems
– Core services:
  • Network – allocation, performance monitoring, NWS, VMI, GridSolve, MPI
  • Data movement – GridFTP, metadata, RFT, file systems
  • Security – Kerberos, GSI, KCA, authorization, credential management, OTP, auditing, monitoring, IDS
  • Co-scheduling – compute, data, network
  • Resource management – Condor, Globus, scheduling, on-demand, info services, monitoring
– Application-centric services:
  • Data management – SRB, HDF, semi-automatic metadata generation, data fusion, sensor data framework
  • Data mining – selection, transformations, modeling, presentation, predictive methods, motif analysis
  • Visualization – automatic feature extraction, Mesa, Maya, trajectory viz, professional visualization services, VTK
  • Workflow services – OGRE
  • Portal services – OGCE
  • Collaborative tools – Chef, AG
  • Tools/libraries – J2EE, PyGlobus, CoG, Atlas, Blast, FFT, …
– Cyberenvironments / user applications: Cactus, Bio Portal, Chemistry Portal, Weather Portal, Astro Portal, …
– Foundation: TeraGrid CII and other services

SLIDE 31

Black Hole Collision Problem

1963 – Hahn and Lindquist – IBM 7090, one processor at 0.2 MF – 3 hours
1977 – Eppley and Smarr – CDC 7600, one processor at 35 MF – 5 hours – 300X
1999 – Seidel and Suen, et al. – NCSA SGI Origin, 256 processors at 500 MF each – 40 hours – 30,000X
2001 – Seidel et al. – NCSA Pentium III, 256 processors at 1 GF each – 500,000 hours total plus 500,000 hours at NERSC – ~200X (1,800,000,000X overall)

SLIDE 32

Tornado Modeling – Data to Knowledge

Large complex datasets require data analysis and visualization in the search for understanding

Wilhelmson and Cox

SLIDE 33

Tornado Simulation

Cox and Wilhelmson

SLIDE 34

Tornado Simulation – Ground Level

Cox and Wilhelmson

SLIDE 35

Data Management and Visualization Data Set

– Computations were performed on 16 processors of the IBM p690 (Regatta) supercomputer at NCSA
– ~16,000 time steps, 8.6 days wall-clock time
– Simulation spans 2.8 hours
– 3D volume data available every 2 seconds around the main tornado event, 6300–9600 s (~55 minutes; tornado duration ~40 minutes)
– Data saved in HDF format – compressed (saves ~6x space) and chunked (allows faster reading of partial volumes of data)
– Data volume ~650 GB in compressed and scaled form (~4.0 TB if all data were in 32-bit form)
– Animation time – 30 fps x 2 s, therefore 1 s of animation = 1 minute of real time

Source: Cronce, Gilmore, Romine, Wilhelmson
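The storage and timing figures above check out with quick arithmetic, and the benefit of chunked HDF storage can be sketched with a toy chunk-index calculation (the grid and chunk dimensions below are illustrative assumptions, not from the slide):

```python
# Compression: ~4.0 TB of raw 32-bit data stored in ~650 GB.
raw_tb, compressed_gb = 4.0, 650
ratio = raw_tb * 1000 / compressed_gb
print(f"compression factor ~{ratio:.1f}x")  # consistent with the quoted ~6x

# Animation timing: one saved volume per 2 simulated seconds, played at 30 fps.
sim_s_per_anim_s = 30 * 2
print(sim_s_per_anim_s, "simulated s per animation s")  # 60, i.e. 1 s = 1 min

# Chunked storage: reading a partial volume touches only intersecting chunks.
def chunks_touched(start, stop, chunk):
    """Inclusive chunk-index ranges intersecting [start, stop) per axis."""
    return [(s // c, (e - 1) // c) for s, e, c in zip(start, stop, chunk)]

# Hypothetical 512^3 grid stored as 64^3 chunks: a 60x60x32 subvolume needs
# only a 2 x 2 x 1 block of chunks rather than the whole dataset.
print(chunks_touched((100, 100, 0), (160, 160, 32), (64, 64, 64)))
```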

SLIDE 36

Visualization tools and techniques

  • Visualizations by the Experimental Technologies group of NCSA
  • Trajectories calculated from the model dynamical fields, based on original trajectory code by David Wojtowicz (DAS)

  • Renderings generated with Maya software (professional rendering tool commonly used in animated movies, video games, etc.)

Source: Cronce, Gilmore, Romine, Wilhelmson
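The trajectory idea above – advecting particles through the model's dynamical fields – can be illustrated with a minimal sketch. The original trajectory code by David Wojtowicz is not reproduced here; the analytic rotating wind field and midpoint integrator below are purely illustrative assumptions:

```python
import math

def wind(x, y):
    """Toy velocity field: counter-clockwise solid-body rotation."""
    omega = 0.1  # angular velocity, rad/s (illustrative)
    return -omega * y, omega * x

def advect(x, y, dt, steps):
    """Integrate one particle trajectory with midpoint (RK2) steps."""
    path = [(x, y)]
    for _ in range(steps):
        u1, v1 = wind(x, y)
        u2, v2 = wind(x + 0.5 * dt * u1, y + 0.5 * dt * v1)  # midpoint velocity
        x, y = x + dt * u2, y + dt * v2
        path.append((x, y))
    return path

path = advect(1.0, 0.0, dt=1.0, steps=100)
# Solid-body rotation conserves radius; a decent integrator stays near r = 1.
r_end = math.hypot(*path[-1])
print(f"radius after 100 steps: {r_end:.4f}")
```

In the real pipeline the velocity at each particle position would be interpolated from the model's gridded wind fields rather than evaluated analytically.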

SLIDE 37

MEAD Expedition => LEAD

– Modeling Environment for Atmospheric Discovery

  • cyberinfrastructure for Grid-based parametric studies

– mesoscale convective systems and hurricanes

  • recall the Hurricane Floyd experience

– Features

  • WRF and ROMS coupling

– community atmospheric and ocean models

  • Grid workflow management
  • data management and visualization

– very large computed and derived data sets – high performance parallel I/O using HDF5 – metadata, mining and machine learning

  • model and performance analysis
  • education and outreach

– See www.ncsa.uiuc.edu/Expeditions/MEAD

SLIDE 38

LEAD Project Motivation

  • Each year, mesoscale weather – floods, tornadoes, hail, strong winds, lightning, and winter storms – causes hundreds of deaths, routinely disrupts transportation and commerce, and results in annual economic losses > $13B.

Source: Kelvin Droegemeier

SLIDE 39

CI Underpinnings for LEAD

– On-demand – Real time – Automated/intelligent sequential tasking – Resource prediction/scheduling – Fault tolerance – Dynamic interaction – Interoperability – Grid and Web services – Personal virtual spaces

Source: Kelvin Droegemeier

SLIDE 40

Parallel Multi-Scale and Multi-Physics Simulations

[Diagram: length scales from 10⁻¹⁰ m (micro) through meso to 10² m (macro); time scales from 10⁻¹⁵ s to 10⁶ s]

Source: Keshav Pingali, Cornell

SLIDE 41

Adaptive Software/Understanding Fracture

  • Wide range of length and time scales
  • Macroscopic components used in engineering practice
  • Macroscopic behavior simulated using the finite-element method
  • Homogeneous materials at the macroscale become heterogeneous, polycrystalline assemblies as one zooms down to mesoscales (1–10 microns)
  • Structures at the mesoscale (grain boundaries, dislocation cell structures, coalescing voids, etc.) must be understood in terms of the collective behavior of large numbers of lattice defects (vacancies, dislocations, etc.)
  • Atomistic modeling is required to develop effective descriptions of lattice defects, and collections of such defects, for use at the mesoscale

[Scale bar: 10⁻³, 10⁻⁶, 10⁻⁹ m]

Source: Keshav Pingali, Cornell

SLIDE 42

Multiscale Physics

  • How to do it without losing accuracy?

– QMC /DFT – DFT-MD – SE-MD – FE

  • How to make it parallel? (Load balancing with different methods)

[Diagram: multiscale hierarchy – physical scales (atomic, nanoscale < 100–1000 Å, mesoscale ~10,000 Å, macroscopic) mapped to simulation methods (Car-Parrinello / quantum Monte Carlo, effective-mass Schrödinger equation, classical Monte Carlo) and to compute requirements (MFlop, GFlop, TFlop)]

Integrating what is at the microscopic quantum level with the mesoscopic classical level – Great Challenge. Lots of software and interdisciplinary work needed.

Source – David Ceperley, UIUC
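The load-balancing question above can be made concrete with a toy partitioner: regions simulated with an expensive method (e.g. QMC) cost far more per step than regions treated classically, so work must be assigned by cost rather than by region count. The region names and relative costs below are illustrative assumptions:

```python
import heapq

def balance(regions, nprocs):
    """Greedy longest-processing-time assignment of (name, cost) regions."""
    regions = sorted(regions, key=lambda r: -r[1])   # heaviest first
    heap = [(0.0, p, []) for p in range(nprocs)]     # (load, proc, regions)
    heapq.heapify(heap)
    for name, cost in regions:
        load, p, assigned = heapq.heappop(heap)      # least-loaded processor
        assigned.append(name)
        heapq.heappush(heap, (load + cost, p, assigned))
    return sorted(heap, key=lambda t: t[1])

# Hypothetical relative per-step costs by simulation method.
work = [("qmc-0", 100.0), ("dft-0", 10.0), ("dft-1", 10.0),
        ("md-0", 3.0), ("md-1", 3.0), ("fe-0", 1.0), ("fe-1", 1.0)]
for load, proc, names in balance(work, 2):
    print(f"proc {proc}: load {load:5.1f} {names}")
```

With two processors the single QMC region occupies one processor by itself while all cheaper regions share the other; the residual imbalance shows why equal-sized domain decomposition across methods would idle most processors.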

SLIDE 43

Questions?

http://www.ncsa.uiuc.edu radha@ncsa.uiuc.edu