SLIDE 1

Cray Data Virtualization Service

A Method for Heterogeneous File System Connectivity

David Wallace and Stephen Sugiyama

SLIDE 2

Cray Data Virtualization Service

  • A little background and history
  • Cray DVS Support in CLE 2.1
  • What if…


SLIDE 3

DVS: Background and history

Concept derived from Cray T3E system call forwarding

  • Focused on I/O forwarding aspects

Initial work focused on clusters

Some design objectives:

  • Provide a low(er) cost alternative to having HBAs on all nodes in a cluster
  • Utilize the bandwidth and capacity of a cluster's high-speed network
  • Provide global access to file systems resident on I/O nodes
  • Provide high-performance, parallel file system access
  • Target the I/O patterns of High Performance Computing applications: very large block sizes, sequential access, low data re-use


SLIDE 4

Serial DVS: Multiple Clients to a Single Server

[Diagram: client nodes n1–n5 each mount /dvs and reach a single DVS server (IO1) over a high-speed network (Ethernet, Quadrics, Myrinet); IO1 exports /dvs backed by SAN or direct-attached storage (/dev/sda1)]

  • DVS uses a client-server model
  • The DVS server(s) stack on the local file system(s) managing the disks (e.g. EXT3, XFS)
  • Serial DVS:
  • All requests are routed through a single DVS server
  • Provides functionality similar to NFS

SLIDE 5

Single DVS Server to a Single I/O Device

[Diagram: a client node mounts /dvs from a single DVS server (IO1) backed by SAN or direct-attached storage (/dev/sda1)]

An open, read, or write request is passed to the VFS and intercepted by the DVS client, which forwards the request to the DVS server.

  • On the server, the request is passed to the local file system (e.g. EXT3, XFS)
  • Metadata and locking operations are local
  • Data is read from / written to disk
  • Uses standard Linux buffer management
  • Local cache
  • I/O readahead / write-behind
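To make the forwarding idea concrete, here is a minimal user-space sketch in Python. It is illustrative only: the real DVS client and server live in the kernel below the VFS, and every name in this sketch (dvs_client_read, dvs_server_handle, etc.) is hypothetical rather than part of Cray's implementation.

import os

# Hypothetical sketch of DVS-style I/O forwarding; not Cray code.
# The "client" packages an I/O request and hands it to the "server",
# which applies it to the local file system and returns the result.

def dvs_server_handle(request):
    """Server side: perform the forwarded operation on the local file system."""
    op, path, offset, payload = request
    if op == "write":
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            return os.pwrite(fd, payload, offset)   # the local FS and page cache do the real I/O
        finally:
            os.close(fd)
    if op == "read":
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.pread(fd, payload, offset)    # payload is the byte count here
        finally:
            os.close(fd)
    raise ValueError("unsupported operation: %s" % op)

def dvs_client_write(path, offset, data, forward=dvs_server_handle):
    """Client side: intercept the request and forward it to the server."""
    return forward(("write", path, offset, data))

def dvs_client_read(path, offset, length, forward=dvs_server_handle):
    return forward(("read", path, offset, length))

# The application sees ordinary file semantics; the forwarding is transparent.
dvs_client_write("/tmp/dvs_demo", 0, b"hello")
print(dvs_client_read("/tmp/dvs_demo", 0, 5))       # b'hello'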

SLIDE 6

DVS: Multiple I/O node support

[Diagram: clients A–D each run a DVS client and mount /dvs; a DVS storage network (virtualized file system) connects them to DVS servers on SIO nodes SIO-0 through SIO-3, each exporting /dvs]

SLIDE 7

DVS: Parallel file system

[Diagram: clients A–D; a write('/x/y/z') I/O request with pieces A, B, C, and D is forwarded as parallel write('/x/y/z') calls to the DVS servers on SIO-0 through SIO-3]

In aggregate, the DVS servers provide:

  • Buffer cache
  • Readahead
  • Write behind
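As a rough sketch of how one logical write could be decomposed across multiple DVS servers, the Python fragment below assumes a simple round-robin stripe distribution; the stripe size, server names, and distribution policy are hypothetical, and the slides do not describe Cray's actual algorithm.

STRIPE_SIZE = 1 << 20                        # hypothetical 1 MiB stripe
SERVERS = ["SIO-0", "SIO-1", "SIO-2", "SIO-3"]

def split_write(offset, data, stripe=STRIPE_SIZE, servers=SERVERS):
    """Yield (server, file_offset, chunk) for each stripe the write touches."""
    pos = 0
    while pos < len(data):
        abs_off = offset + pos
        server = servers[(abs_off // stripe) % len(servers)]   # round-robin by stripe index
        room = stripe - (abs_off % stripe)                      # bytes left in this stripe
        chunk = data[pos:pos + min(room, len(data) - pos)]
        yield server, abs_off, chunk
        pos += len(chunk)

# A 4 MiB write('/x/y/z') starting at offset 0 sends one 1 MiB piece
# (A, B, C, D) to each of the four servers, which then handle them in parallel.
for server, off, chunk in split_write(0, b"x" * (4 << 20)):
    print(server, off, len(chunk))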
SLIDE 8

SO, WHERE ARE WE TODAY?


SLIDE 9

Cray XT Scalable Lustre I/O: Direct Attached

Cray XT Supercomputer

  • Compute nodes
  • Login nodes
  • Lustre OSS
  • Lustre MDS
  • NFS Server
  • Boot/SDB node

[Diagram: the Cray XT system connects over 1 GigE and 10 GigE backbones to backup and archive servers and to the Lustre global parallel file system]

  • Each compute node runs a Lustre client
  • The NFS server allows the Lustre file system to be exported to other systems on the network

SLIDE 10

Cray XT accessing Lustre and Remote Filesystems

Cray XT Supercomputer

  • Compute nodes
  • Login nodes
  • Lustre OSS
  • Lustre MDS
  • NFS Clients
  • Boot/SDB node

[Diagram: the Cray XT system, its Lustre global parallel file system, and a remote system connected over 1 GigE backbones]

  • Each Compute and Login node runs a Lustre client
  • Lustre is accessible by every node within the Cray XT system
  • Each Login node imports the remote filesystem
  • The filesystem is only accessible from Login nodes
  • For global accessibility, Login nodes need to copy files into Lustre

Remote system: the NFS server exports the remote filesystem.

SLIDE 11

Why not use NFS throughout?

Cray XT Supercomputer

  • Compute nodes
  • Login nodes
  • Lustre OSS
  • Lustre MDS
  • NFS Clients
  • Boot/SDB node

[Diagram: the Cray XT system, its Lustre global parallel file system, and a remote system connected over 1 GigE backbones]

  • Each Compute and Login node runs a Lustre client
  • Lustre is accessible by every node within the XT system
  • Each Login AND Compute node imports the remote filesystem
  • The remote filesystem is accessible from all nodes
  • No copies required for global accessibility

Remote system: the NFS server exports the remote filesystem.

SLIDE 12

Issues with NFS

Cray XT systems have thousands of compute nodes

  • A single NFS server typically cannot manage more than 50 clients
  • Could cascade NFS servers:
  • There can only be a single primary NFS server
  • The other servers would run as clients on the incoming side and as servers on the outbound side


  • Cascaded servers would have to run in user space
  • Complicates system administration

Performance impacts:

  • NFS clients run as daemons, introducing OS jitter on the compute nodes
  • The primary NFS server will be overwhelmed
  • The TCP/IP protocol is less efficient than the native protocol of the Cray SeaStar network
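As a rough illustration (the arithmetic is ours, not from the slides): at roughly 50 clients per NFS server, serving a Cray XT system with on the order of 10,000 compute nodes would require about 200 cascaded NFS servers, each adding user-space daemons and administration overhead.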

SLIDE 13

Immediate Need

Access to NFS-mounted file systems (/home on login nodes)

  • For customers migrating from CVN, equivalent functionality to YOD I/O


SLIDE 14

DVS: Support for NFS in CLE 2.1

[Diagram: clients A–D each run a DVS client; the DVS storage network (virtualized file system) connects them to DVS servers on SIO-0 through SIO-3, which are stacked on NFS]

SLIDE 15

Cray DVS – from the admin's point of view

[Diagram: Cray XT4 compute nodes running DVS clients connect over the HSN to an SIO node running a DVS server with an NFS mount from an external NFS server]

The admin mounts file systems in fstab as usual, e.g.:

mount -t dvs /nfs/user/file4
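For illustration only, a DVS client mount entry in /etc/fstab might look like the line below. The node name and paths are hypothetical, and the path= and nodename= option names follow later CLE DVS documentation, so the exact CLE 2.1 syntax may differ:

/dvs-shared  /nfs/user  dvs  path=/dvs-shared,nodename=sio1  0 0

Here /dvs-shared would be the NFS-backed directory on the DVS server node (sio1), and /nfs/user is where compute-node applications see it.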

SLIDE 16

From the User's Point-of-View

[Diagram: an application on a compute node calls open(), read(), write(), and close() on /nfs/home/dbw/file; the file system appears to be local]
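As a small illustration of this transparency, an application on a compute node just uses ordinary file I/O on the DVS-projected path; nothing DVS-specific appears in the code. The path is the example from the slide, and whether it is actually writable is site-specific.

# The DVS-mounted file system looks like any local path to the application.
path = "/nfs/home/dbw/file"          # projected via DVS, served by a remote file system

with open(path, "w") as f:           # ordinary open()/write()/close()
    f.write("written from a compute node\n")

with open(path) as f:                # ordinary open()/read()/close()
    print(f.read())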

SLIDE 17

So what if …

Users have data on one or more file systems (e.g. EXT3, XFS, GPFS, Panasas, NFS) on servers in the data center that they want to access from the compute nodes of the Cray XT system WITHOUT having to copy files?

Then …

You install Cray DVS (Data Virtualization Service) server software on each of the existing file servers, install DVS client software on each of the compute nodes, and this allows the admin to "mount -t dvs" the file systems.

SLIDE 18

DVS Concept in a Cray XT Environment

[Diagram: Cray XT4 compute nodes running DVS clients connect over the SeaStar interconnect to SIO nodes running DVS servers; the SIO nodes are stacked on an NFS client (backed by an NFS server), a PanFS client (with a SAM-QFS server), a GPFS client (backed by a GPFS server), and EXT3/4 or XFS on direct-attached storage (DAS)]

SLIDE 19

Cray DVS - Summary

  • No file copying required!
  • Simplicity of the DVS client allows for larger scale than NFS or cluster file systems
  • DVS can amplify the scale of compute nodes serviced to O(10000)
  • Can project a file system beyond the limit of the underlying clustered file system (e.g. GPFS on Linux is limited to 512 clients)
  • The seek, latency, and transfer time (physical disk I/O) for every I/O node is overlapped (mostly parallel)
  • Every I/O node does read-ahead and write aggregation (in parallel)
  • The effective page cache size is the aggregate size of all of the I/O nodes' page caches
  • Allows the interconnects to be utilized more fully:
  • Multiple I/O nodes can drive a single app node interconnect at its maximum speed
  • Multiple app nodes can drive all of the I/O node interconnects at their maximum speed
  • Takes advantage of RDMA for those interconnects that support it (Cray SeaStar, Quadrics, Myricom)
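For example (illustrative numbers, not from the slides): if each of 16 DVS I/O nodes contributes 8 GB of page cache, the compute nodes effectively see 16 × 8 GB = 128 GB of cache, plus 16 read-ahead and write-aggregation engines working in parallel.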

SLIDE 20

Cray DVS – Initial Customer Usage

ORNL

  • Began field trial in December 2007
  • Installed in production on ~7200 XT3 cores
  • Replacement for Catamount YOD-I/O functionality
  • Mounting NFS-mounted /home file systems on XT compute nodes

CSC – Finland

  • Working with Cray to test DVS with GPFS

  • Installed on TDS and undergoing development and testing

CSCS

  • Early access testing begins in 2Q2008
SLIDE 21

Acknowledgements

David Henseler, Cray Inc.
Kitrick Sheets, KBS Software
Jim Harrell, Cray Inc.
Chris Johns, Cassatt Corp.

The DVS Development Team:

  • Stephen Sugiyama
  • Tim Cullen
  • Brad Stevens
  • And others


SLIDE 22


Thank You