Los Alamos National Laboratory

MarFS: A Near-POSIX Namespace Leveraging Scalable Object Storage

David Bonnie
May 4th, 2016
LA-UR-16-22883

Operated by Los Alamos National Security, LLC for the U.S. Department of Energy's NNSA


Overview

  • What’s the problem?
  • What do we really need?
  • Why existing solutions don’t work
  • Intro to MarFS
    • What is it?
    • How does it work?
  • Status
    • Current
    • Future

What’s the problem?

  • Campaign Storage (Trinity+ version of the storage hierarchy):

    Tier                   Bandwidth                    Residence       Fate
    Memory                 1-2 PB/sec                   hours           overwritten continuously
    Burst Buffer           4-6 TB/sec                   hours           overwritten in hours
    Parallel File System   1-2 TB/sec                   days/weeks      flushed in weeks
    Campaign Storage       100-300 GB/sec               months-years    flushed in months-years
    Archive                10s GB/sec (parallel tape)   forever         kept permanently


What do we really need?

  • Large-capacity storage, long residency
  • No real IOPS requirement for data access
  • “Reasonable” bandwidth for streaming
  • Metadata / tree / permissions management that’s easy for people and existing applications
  • Do we need POSIX?

Why existing solutions don’t work

  • So we need a high-capacity, reasonable-bandwidth storage tier…
  • Parallel tape is hard and expensive
  • Object solutions?
  • Big POSIX is expensive: $$$ hardware
  • Existing solutions don’t make the right compromises (for us)
    • Petabyte-scale files, and bigger
    • Billions of “tiny” files
    • They try to maintain too much of POSIX, which leads to complicated schemes and too many compromises


So what is MarFS?

  • MarFS is a melding of the parts of POSIX we (people) like with scale-out object storage technology
  • Object-style storage is scalable, economical, and safe (via erasure), with simple access methods
  • POSIX namespaces provide people-usable storage
  • Challenges:
    • Objects disallow update-in-place (at least efficiently)
    • Access semantics are totally different (permissions, structure)
    • Namespace scaling to billions/trillions of files
    • Single files in the many-petabyte+ range

So what is MarFS?

  • What are we restricting?
    • No update in place, period
    • Writes only through data movement tools, not a full VFS interface
    • 100% serial writes are possible through FUSE (pipes?)
  • What are we gaining (through the above)?
    • Nice workloads for the object storage layer
    • Full POSIX metadata read/write access, full data read access
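A minimal sketch of the “no update in place” restriction above: a write path that only allows serial appends until the object is sealed, after which it is immutable. The class and method names are illustrative, not MarFS’s actual API.

```python
class WriteOnceFile:
    """Toy model of a MarFS-style write-once object (illustrative only)."""

    def __init__(self):
        self.data = bytearray()
        self.sealed = False

    def append(self, buf):
        # Serial appends are the only allowed write pattern.
        if self.sealed:
            raise PermissionError("object is immutable once written")
        self.data.extend(buf)

    def close(self):
        # Sealing the object makes any further write an error.
        self.sealed = True
```

This is what makes the workload “nice” for the object layer: every object is written exactly once, front to back, and then only read.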

So what is MarFS?

  • Stack overview:
    • Smallish FUSE daemon for interactive metadata manipulation, viewing, etc.
    • Parallel file movement tool (copy/sync/compare/list)
    • A handful of tools for data management (quotas, trash, packing)
    • Library that the above utilize as a common access path
  • Metadata stored in at least one global POSIX namespace
    • Utilizes standard permissions for security, plus xattr/sparse-file support
  • Data stored in at least one object/file storage system
    • Very small files are packed; very large files are split into “nice”-size objects
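The packing/splitting rule in the last bullet can be sketched as a size-based classification. The 256 MiB “nice” object size matches the chunksize shown on the internals slides; the packing threshold below is an assumed value for illustration only.

```python
CHUNK_SIZE = 256 * 1024 * 1024   # "nice" object size (from the chunksize=256M xattr)
PACKED_MAX = 4096                # assumed threshold for "very small" files

def classify(size_bytes):
    """Return the object layout a file of this size would get."""
    if size_bytes <= PACKED_MAX:
        return "Packed"          # many tiny files share one object
    if size_bytes <= CHUNK_SIZE:
        return "Uni"             # one file, one object
    return "Multi"               # one file split into chunk-sized objects
```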

Scaling basics

  • So how does the namespace scale?
    • Up to N-way scaling for individual directories/trees
    • Up to M-way scaling within directories for file metadata
    • Directory MD is abstracted to allow for alternate storage (KVS)
    • We’re using GPFS; lists are easy, so it’s manageable!
  • How does the data movement scale?
    • No real limit on the number of data storage repositories
    • New data can be striped within a repo and across repos
    • Repos can be scaled up and scaled out
    • New repos can be added at any time
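One way to read “M-way scaling within directories for file metadata” is a hash of the file name choosing which of M metadata file systems holds that file’s record. The hash choice (MD5) and M=4 below are illustrative assumptions, not MarFS’s actual scheme.

```python
import hashlib

NUM_MDS = 4  # "M" metadata file systems per namespace (assumed)

def mds_for_file(filename, num_mds=NUM_MDS):
    """Pick the MDS holding this file's metadata by hashing its name."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return h % num_mds
```

Directory metadata stays together, while file records spread evenly, so a huge directory’s files do not pile onto a single metadata server.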

MarFS Scaling

[Figure: N namespaces (Namespace Project A … Namespace Project N), each a directory tree (Dir, DirA, DirA.A, DirA.A.A, DirA.A.A.A, DirA.A.B, DirB) holding files FA-FI. Each namespace's PFS MDS holds directory metadata; file metadata is hashed over M multiple MDS (A.1, A.2 … A.M and N.1, N.2 … N.M), giving N × M MDS file systems (for metadata only). Uni, Multi, and Packed object files are striped across 1 to X object repos (Object Repo A … Object Repo X).]


MarFS Internals Overview: Uni File

[Figure: Metadata side: GPFS metadata file systems /GPFS-MarFS-md1 … /GPFS-MarFS-mdN, aggregated under the top-level /MarFS namespace, each with a trashdir. A UniFile entry in Dir1.1 carries:
  • Attrs: uid, gid, mode, size, dates, etc.
  • Xattrs: objid (repo=1, id=Obj001), objoffs=0, chunksize=256M, Objtype=Uni, NumObj=1, etc.
Data side: Object System 1 … Object System X hold the file's single object, Obj001.]
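The xattr fields above can be modeled as a small key=value record attached to the metadata entry. The comma-separated text encoding here is an assumption for illustration; only the field names and values come from the slide.

```python
def parse_objid(xattr_text):
    """Parse a comma-separated key=value xattr string into a dict."""
    fields = {}
    for part in xattr_text.split(","):
        key, _, value = part.strip().partition("=")
        fields[key] = value
    return fields

uni = parse_objid("repo=1, id=Obj001, objoffs=0, chunksize=256M, Objtype=Uni, NumObj=1")
```

With these fields alone, a reader goes straight from the POSIX metadata entry to the backing object, with no extra lookup service in between.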


MarFS Internals Overview: Multi File

[Figure: Same metadata layout as the Uni case (/GPFS-MarFS-md1 … /GPFS-MarFS-mdN under the top-level /MarFS namespace aggregation, trashdir, Dir1.1). A MultiFile entry carries:
  • Attrs: uid, gid, mode, size, dates, etc.
  • Xattrs: objid (repo=1, id=Obj002.), objoffs=0, chunksize=256M, ObjType=Multi, NumObj=2, etc.
Data side: the file's data spans two objects, Obj002.1 and Obj002.2, in Object System 1 … Object System X.]
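Reading a Multi file then reduces to arithmetic on the chunksize: a logical file offset maps to one numbered object and an offset inside it. The ".N" suffix numbering from 1 follows the Obj002.1/Obj002.2 naming on the slide; the function name is illustrative.

```python
CHUNK = 256 * 1024 * 1024  # chunksize=256M from the xattrs

def locate(offset, base_id="Obj002", chunk=CHUNK):
    """Map a logical file offset to (object name, offset within object)."""
    index = offset // chunk
    return (f"{base_id}.{index + 1}", offset % chunk)
```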


MarFS Internals Overview: Multi File (striped Object Systems)

[Figure: As the Multi case, with the usual attrs (uid, gid, mode, size, dates, etc.), but the xattrs record repo=S, a stripe of repos (objid id=Obj002., objoffs=0, chunksize=256M, ObjType=Multi, NumObj=2, etc.), and the two objects Obj002.1 and Obj002.2 are striped across Object System 1 … Object System X.]


MarFS Internals Overview: Packed File

[Figure: Same metadata layout (/GPFS-MarFS-md1 … /GPFS-MarFS-mdN under /MarFS, trashdir, Dir1.1). A packed file's entry carries:
  • Attrs: uid, gid, mode, size, dates, etc.
  • Xattrs: objid (repo=1, id=Obj003), objoffs=4096, chunksize=256M, Objtype=Packed, NumObj=1, Obj=4 of 5, etc.
Data side: many small files share the single object Obj003 in Object System 1 … Object System X.]
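A packed read then just slices one file's bytes out of the shared object at its recorded objoffs. The in-memory "object store" below is a stand-in for the real object system; the 4096-byte offset matches the objoffs on the slide, while the payload itself is made up.

```python
# Stand-in for an object system: object id -> bytes (illustrative).
object_store = {
    "Obj003": b"\x00" * 4096 + b"tiny file payload" + b"\x00" * 100,
}

def read_packed(obj_id, objoffs, size):
    """Return one packed file's bytes out of the shared object."""
    return object_store[obj_id][objoffs:objoffs + size]
```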


Current Status

  • Where are we now?
    • Open Science runs completed without real issue on a ~4 PB scale system
      • ~2-3 GB/s bandwidth, utilizing Scality RING storage
      • Discovered edge-case bugs with varied workloads
    • Next system is currently being deployed, at ~30 PB scale
      • ~28 GB/s bandwidth, also utilizing Scality RING storage
    • Packing support in the parallel movement utility is in progress

Future work

  • Metadata in-directory scaling
    • A billion files in a directory…
  • Compression / encryption within MarFS
  • Data protection in MarFS: erasure on erasure, dual copy, etc.
  • Other access methods
    • HPSS, Globus, etc.
  • Migration tools (background movement)
    • Dual copy would allow for DR opportunities on tape/offline media, with tools to support this
  • Alternate views of metadata (files by date, related project files, etc.)

Learn more!

  • https://github.com/mar-file-system/marfs
  • https://github.com/pftool/pftool

Open source, BSD license. Partners welcome!


Thanks for your attention!


Backup

MarFS send/receive of objects

[Figure: Three hosts (Host 1, Host 2, Host 3), each running MC Comp processes, send and receive MarFS objects to capacity-balanced components (/c1a, /c1b … /c6a, /c6b). Under each component, scatter directories (/C1a/SC1 … /C1a/SCM, /C1b/SC1 … /C1b/SCM) spread a tree-hashed path to balance objects per directory. Components are grouped into failure domains and balance domains.]
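The spread-tree idea on this backup slide can be modeled as hashing each object id to one capacity-balanced component and one scatter directory under it, so no single directory accumulates too many objects. The component count (six enclosures × two sides) follows the diagram; the number of scatter directories and the hash are illustrative assumptions.

```python
import hashlib

# Capacity-balanced component paths, as in the diagram (/c1a ... /c6b).
COMPONENTS = [f"/c{i}{side}" for i in range(1, 7) for side in "ab"]
SCATTER_DIRS = 16  # "SCM": M scatter dirs per component (assumed)

def place(obj_id):
    """Hash an object id to a component path and scatter directory."""
    h = int(hashlib.md5(obj_id.encode()).hexdigest(), 16)
    component = COMPONENTS[h % len(COMPONENTS)]
    scatter = (h // len(COMPONENTS)) % SCATTER_DIRS + 1
    return f"{component}/SC{scatter}/{obj_id}"
```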