The Open Science Computing Ecosystem at the Texas Advanced Computing Center (TACC)


SLIDE 1

The Open Science Computing Ecosystem at the Texas Advanced Computing Center (TACC)

Siva Kulasekaran and Doug James

November 13, 2014

siva@tacc.utexas.edu, djames@tacc.utexas.edu

SLIDE 2

The TACC Ecosystem

Vis Lab: Immersive Vis, Collaborative Touch Screen, 3D

Stampede: HPC Jobs, 6,400+ Nodes, 10 PFlops, 14+ PB Storage

Lonestar: HTC Jobs, 1,800+ Nodes, 22,000+ Cores, 146 GB/node

Maverick: Vis & Analysis, Interactive Access, 132 K40 GPUs

Wrangler: Data-Intensive Computations, 10 PB Storage, High IOPS

Corral: Data Collections, 6 PB Storage, Databases, iRODS

Rodeo: Cloud Services, User VMs, XXX VCores, XXX PB

Ranch: Tape Archive, 160 PB Tape, 1 TB Access Cache

Stockyard: Shared Workspace, 20 PB Storage, 1 TB per User, Project Workspace

SLIDE 3

Wrangler Hardware @TACC

Interconnect with 1 TB/s throughput: IB, 120 lanes (56 Gb/s each), non-blocking

High-Speed Storage System: 500+ TB, 1 TB/s, 250M+ IOPS

Access & Analysis System 96 Nodes 128 GB+ Memory Haswell CPUs Mass Storage Subsystem 10 PB (Replicated)

Three primary subsystems:

– A 10 PB, replicated disk storage system.
– An embedded analytics capability of several thousand cores.
– A high-speed global object store: 1 TB/s, 250M+ IOPS.
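
As a rough cross-check of the interconnect numbers above: reading "120 lanes (56 Gb/s)" as 120 links of 56 Gb/s each, a back-of-envelope Python sketch (assuming 8 bits per byte and ignoring encoding overhead) lands on the same order of magnitude as the 1 TB/s the storage system is rated to sustain:

    # Back-of-envelope aggregate bandwidth for the IB fabric,
    # using only the figures quoted on the slide.
    links = 120
    gbps_per_link = 56                   # FDR InfiniBand link rate, Gb/s

    total_gbps = links * gbps_per_link   # 6,720 Gb/s
    total_gbytes = total_gbps / 8        # ~840 GB/s, i.e. ~0.84 TB/s

    print(f"{total_gbps:,} Gb/s aggregate = {total_gbytes:,.0f} GB/s")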

SLIDE 4

Stampede - High Level Overview

  • Base Cluster (Dell/Intel/Mellanox):

– Intel Sandy Bridge processors
– Dell dual-socket nodes w/ 32 GB RAM (2 GB/core)
– 6,400 nodes (102,400 cores, 2.2 PF peak)
– 56 Gb/s Mellanox FDR InfiniBand interconnect

  • Co-Processors:

– 6,880 Intel Xeon Phi coprocessors
– First large-scale production Phi system
– 7.4+ PF peak performance

  • Max Total Concurrency:

– exceeds 500,000 cores
– 1.8M threads (see the sketch after this list)

  • Entered production operations on January 7, 2013
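
The concurrency figures follow from the node counts above; a minimal Python sketch reproducing them (the 61-core, 4-thread-per-core Xeon Phi SE10P configuration is an assumption not stated on this slide, and host hyper-threading is not counted):

    # Reproduce the slide's "max total concurrency" from its own counts.
    host_cores = 6_400 * 16     # dual-socket Sandy Bridge nodes: 102,400 cores
    phi_cores = 6_880 * 61      # Xeon Phi coprocessors (61 cores each, assumed)

    total_cores = host_cores + phi_cores        # 522,080 -> "exceeds 500,000"
    total_threads = host_cores + phi_cores * 4  # 1,781,120 -> ~1.8M threads

    print(f"cores: {total_cores:,}  threads: {total_threads:,}")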

SLIDE 5

TACC Services: Data

  • Online, long-term, high-integrity file or relational storage:

– Corral

  • Online, short-term, high-capacity (PBs), high-speed storage:

– Stockyard, /scratch

  • Offline, long-term, ultra-high-capacity storage (tape):

– Ranch

  • Object store:

– Wrangler (coming soon)

  • Data management and curation, Data analysis and statistics
  • Static and dynamic web services, Database applications

SLIDE 6

Storage Solutions at TACC

Supported Services

  • Storage of input data, results, processed data, interim data products
  • Data management services to allow for controlled sharing of data with colleagues, collaborators, and the public
  • Databases to store and query structured data (see the sketch after this list)
  • GIS extensions for geographically structured data
  • Web services to integrate data with other portals and data gateways
  • Capability to develop storage solutions for PHI data
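
For the database and GIS items above, a minimal sketch of what a query against such a hosted collection might look like, assuming a PostgreSQL/PostGIS setup reachable via psycopg2 (the host, database, table, and column names are illustrative placeholders, not actual TACC endpoints):

    import psycopg2  # third-party driver; pip install psycopg2-binary

    # Placeholder connection details -- substitute your own project's.
    conn = psycopg2.connect(host="corral-db.example.org",
                            dbname="mycollection", user="demo")
    cur = conn.cursor()

    # PostGIS-style spatial query: field sites within 10 km of a point.
    cur.execute("""
        SELECT site_id, name
        FROM field_sites
        WHERE ST_DWithin(
            geom::geography,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            10000)                 -- distance in meters
    """, (-97.74, 30.28))          # lon, lat of a query point
    for site_id, name in cur.fetchall():
        print(site_id, name)
    conn.close()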

Not Supported Services

  • Systems backup and restoration services
  • Administrative data storage

SLIDE 7

TACC Services: Computing

  • Batch (see the job-submission sketch after this list):

– Stampede, Lonestar, Wrangler

  • Interactive

– Maverick

  • Short-term, ephemeral virtual machines:

– Rodeo (and soon-to-be-announced systems)

  • Persistent VMs (for datasets hosted at TACC):

– Available: contact us

  • Hadoop:

– Rustler, Wrangler

  • Computing research platforms:

– Chameleon and others
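
For the batch systems above, work is submitted through a scheduler rather than run interactively; a minimal sketch of scripted submission, assuming a Slurm-style sbatch front end (the queue, allocation, and executable names are placeholders, not values from this deck):

    import subprocess
    import textwrap

    # Minimal Slurm batch script; "normal", "MyProject", and
    # "./my_mpi_app" are placeholders to adapt to your own account.
    script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH -J demo          # job name
        #SBATCH -N 2             # number of nodes
        #SBATCH -n 32            # total MPI tasks
        #SBATCH -p normal        # queue (placeholder)
        #SBATCH -t 00:30:00      # wall-clock limit
        #SBATCH -A MyProject     # allocation (placeholder)
        ibrun ./my_mpi_app       # ibrun is TACC's MPI launcher
    """)

    # With no filename argument, sbatch reads the script from stdin.
    result = subprocess.run(["sbatch"], input=script, text=True,
                            capture_output=True, check=True)
    print(result.stdout.strip())  # e.g. "Submitted batch job 123456"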

SLIDE 8

TACC Services: Other

  • Data visualization
  • Ticket support
  • Training
  • Web portal/gateway development and hosting
  • HPC collaboration and consulting (optimization, parallelization, workflow support, porting, etc.)

  • Proposal support

SLIDE 9

TACC User Portal

Easy to get started! portal.tacc.utexas.edu

SLIDE 10

For more information: www.tacc.utexas.edu

Siva Kulasekaran and Doug James siva@tacc.utexas.edu djames@tacc.utexas.edu