

SLIDE 1

ATG - Informatiseringscentrum - Universiteit van Amsterdam
Systems and Network Engineering Research Group

The CineGRID collaboration

Jeroen Roodhart

University of Amsterdam 12/02/2009

SLIDE 2

Plan

  • What is CineGrid
  • Who is involved
  • Use cases
  • Grand vision
  • Storage in CineGrid context
  • Current setup in Amsterdam
  • Experience
  • Lessons
  • Future
  • Summary
SLIDE 3

What is CineGrid?

"CineGrid is a non-profit international membership organization... CineGrid's mission is to build an interdisciplinary community focused on the research, development and demonstration of networked collaborative tools, enabling the production, use and exchange of very high-quality digital media over high-speed photonic networks." (From the site: http://cinegrid.org)

SLIDE 4

Who is involved

  • Lots of (big) names; you can find them in the members section of the site.
  • More importantly:
SLIDE 5

Use cases

(CdL)

Keio/Calit2 collaboration: trans-Pacific 4K teleconference
(Keio University President Anzai and UCSD Chancellor Fox; used a dedicated 1 Gbps link; Sony, NTT, SGI)

SLIDE 6

Use cases

(CdL)

CineGrid @ SARA

SLIDE 7

Use cases

(CdL)

Holland Festival 2007 – Era la Notte

SLIDE 8

Use cases

  • Scientific visualisation
  • Film editing processes
  • High-definition collaboration environments
  • Medical applications
  • Entertainment venues
    – Dome theatres
    – 4K cinema

SLIDE 9

Grand Vision

(Cees de Laat's "find the beautiful lady on the beach")

[Diagram: RDF describing the infrastructure - content and resource descriptions as RDF/CG, RDF/ST, RDF/NDL, RDF/VIZ and RDF/CPU.
Application: find a video containing x, then transcode it to view on a tiled display.]

(PG&CdL)

SLIDE 10

Grand Vision (ctd.)

  • CineGrid will be using iRODS
  • Intention to place NDL and semantic information within iRODS
    – Storage / content delivery nodes / transcoding
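As a hedged illustration of what placing semantic information within iRODS could look like (the file name, attribute names and values below are invented for this sketch; the deck does not specify a schema), the standard iRODS icommands attach key/value metadata to data objects:

  # Register a clip and tag it (hypothetical names/values):
  > iput era_la_notte_4k.tar
  > imeta add -d era_la_notte_4k.tar resolution 4096x2160
  > imeta add -d era_la_notte_4k.tar frame_rate 24

  # Later, find all 4K material:
  > imeta qu -d resolution = 4096x2160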

SLIDE 11

Storage in CineGrid context

(CdL)

3840 × 2160

SLIDE 12

Storage in CineGrid context (ctd.)

(CdL)

Format    X     Y     Rate (f/s)  Color (bits/pix)  Pixels/frame  MByte/frame  Flow (MByte/s)  Stream (Gbit/s)
720p HD   1280  720   60          24                921,600       2.8          170             1.3
1080p HD  1920  1080  30          24                2,073,600     6.2          190             1.5
2K        2048  1080  24 / 48     36                2,211,840     10           240 / 480       1.9 / 3.8
SHD       3840  2160  30          24                8,294,400     25           750             6.0
4K        4096  2160  24          36                8,847,360     40           960             7.6
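Each row follows directly from resolution, bit depth and frame rate; a minimal sketch of the arithmetic, using the 4K row as an example:

  # Back-of-the-envelope bandwidth for one uncompressed 4K stream:
  > awk 'BEGIN {
      x = 4096; y = 2160; rate = 24; bits = 36   # pixels, frames/s, bits/pixel
      frame_mb = x * y * bits / 8 / 1e6          # ~40 MByte per frame
      flow = frame_mb * rate                     # ~960 MByte/s
      printf "stream: %.1f Gbit/s\n", flow * 8 / 1000   # ~7.6 Gbit/s
    }'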

SLIDE 13

Storage in CineGrid context (ctd.)

  • Buying n x 1 TB disks doesn't work
  • A traditional approach using FC-AL, NFS, CIFS might not be fast enough
  • We may now run into issues with:
    – Complexity
    – Scalability
    – Speed considerations
    – Integrity

SLIDE 14

Why think about storage (ctd.)

  • CineGRID:
    – Large data sets
    – Traditional interconnecting of sites
      • Relaying movies to display sites
      • Display movie (from local cache)
    – Interconnect services
      • Streaming server transcodes 4K movie to lower resolution
      • Display movie stream
  • Both models have different storage requirements

SLIDE 15

Current setup in Amsterdam

(CdL)

SLIDE 16

Current setup in Amsterdam (ctd.)

SLIDE 17

Current setup in Amsterdam (ctd.)

  • Choices:
    – Existing 10GbE interconnect (we're into networking)
    – Sun x4500 "Thumpers"
      • 48 x 1 TB data disks / 2 x Opteron
    – Running OpenSolaris "Nevada"
    – ZFS filesystem
    – Looking into an upgrade to the x4540; more on this later
SLIDE 18

Current setup in Amsterdam (ctd.)

  • About 18 TB in use, 13 TB left
  • RAIDZ1 for now (speed/space considerations)
  • Both Thumpers are synced, using ZFS snapshot streaming
  • 10GbE connection to "streaming node" node41, and (suitcees/node41) to the "OptIPuter net"
  • Syncing of the Thumpers may stop if other use dictates

SLIDE 19

Experience

  • Ease of administration

#!/usr/bin/bash
# Build one pool from 42 data disks: seven raidz2 vdevs, each spread
# over six controllers (c0, c1, c4, c5, c6, c7), plus four hot spares.
for i in 1 2 3 4 5 6 7; do
  if [ $i = "1" ]; then
    zpool_command="zpool create -f mypool raidz2 "   # first vdev creates the pool
  else
    zpool_command="zpool add mypool raidz2 "         # later vdevs extend it
  fi
  $zpool_command c0t${i}d0 c1t${i}d0 c4t${i}d0 c5t${i}d0 c6t${i}d0 c7t${i}d0
done
zpool add mypool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0   # hot spares

Wait two minutes --> approx. 32 TB filesystem
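A quick sanity check of the result uses the standard ZFS status commands (a sketch; output not reproduced here):

  > zpool status mypool   # the seven raidz2 vdevs plus the four spares
  > zpool list mypool     # total size and free space
  > zfs list mypool       # usable capacity as seen by the filesystem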

SLIDE 20

Experience (ctd.)

  • Syncing filesystems

# Make a “point in time” snapshot of the pool
> zfs snapshot mypool@20080521_1

# Stream the snapshot using RBUDP to the other host
> zfs send mypool@20080521_1 | \
    sendstream 192.168.57.25 8000m 8000

# On the other host, receive the stream:
> recvstream 192.168.57.24 8000 | \
    zfs receive mypool/basketcees@20080521_1

This may take longer...
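For the recurring syncs only the delta has to travel. A minimal sketch using ZFS incremental send (the second snapshot name is invented, and this pipes through plain ssh instead of the RBUDP tools above):

  > zfs snapshot mypool@20080522_1
  > zfs send -i mypool@20080521_1 mypool@20080522_1 | \
      ssh otherhost zfs receive mypool/basketcees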

SLIDE 21

Experience (ctd.)

SLIDE 22

Experience (ctd.)

  • Got a Thor to play with :)
    – Can we move the Thumper disks to the Thor?

SLIDE 23

Experience (ctd.)

  • Yes you can!
    – And you'll even keep your ZFS volumes if you export them nicely ;)
      (see the export/import sketch below)
  • You probably don't want to move the OS disks though...
  • So let's look at some tests...
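A minimal sketch of "exporting them nicely", using the pool name from the earlier slides; zpool export/import is the standard ZFS way to move a pool between chassis:

  # On the Thumper, before pulling the disks:
  > zpool export mypool

  # After seating the disks in the Thor:
  > zpool import mypool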
SLIDE 24

Experience (ctd.) Thor

SLIDE 25

Experience (ctd.) Thor

SLIDE 26

Experience (ctd.) Thor

  • Not much faster! But there's some strange variation here...
  • So what if we split per controller...
SLIDE 27

Experience (ctd.) Thor

SLIDE 28

Experience (ctd.) Thor

SLIDE 29

Lessons

  • Raw storage system speed may be enough for uncompressed 4K, but that doesn't scale to concurrent streaming
  • With Thors it can help to think about controller/disk assignment
  • With the ixgb NICs we max out at 6 Gbps on a 10GbE link (comparable using iRODS); this needs to improve to use Thor speed! (see the measurement sketch below)
  • Considering the "CineGRID application":
    – We don't yet have a standard solution from storage to "film display"
    – Conventional streaming tools use the "file system paradigm"
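The deck doesn't name the tool behind the 6 Gbps figure; as a hedged sketch, a multi-stream iperf run is a common way to probe a 10GbE path (addresses reuse the example hosts from the sync slide):

  # Receiving host:
  > iperf -s

  # Sending host: four parallel TCP streams, large window, 30 s
  > iperf -c 192.168.57.25 -P 4 -w 1M -t 30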

SLIDE 30

Future

  • Filesystems
    – Linux: Ext4 and BTRFS look promising
    – Solaris: ZFS remains very strong in benchmarks and usability
  • Clustering
    – Lustre
    – GlusterFS
    – pNFS
  • Networking
    – RDMA/iWARP interconnect (nice if we went 100GbE)

SLIDE 31

Summary

  • Storage backend speed of individual modern systems may be sufficient for one stream: _but_ we will likely want more
  • We need to consider the entire component stack of the CineGRID application
  • We probably need an approach where "streaming nodes" can access data using cluster technology:
    – Fast interconnect (RDMA / QDR InfiniBand)
    – More than one storage server
    – New technologies may lead to more elegant designs (e.g. SSD/ZFS/Lustre)

SLIDE 32

Backup slides

SLIDE 33

Experience (ctd.)

SLIDE 34

Sun x4500 “Thumper” architecture

SLIDE 35

Effective filesystem size

(Using 500 GB disks)

ZFS config       Size (TB)
Z2_14x3          16
Z1_6x7           16
Z2_10x4          14
Z1_2x3x7         2 x 6.2
Z2_6x7           12
mirror_3way      6.7
mirror_3wayx2    2 x 3.1
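Reading the config names as <raid level>_<disks per vdev>x<number of vdevs> (an assumption, but consistent with the disk counts available), the raw capacities line up with the measured sizes once parity, ZFS metadata and TB/TiB overhead are accounted for; a sketch:

  > awk 'BEGIN {
      tb = 0.5   # 500 GB disks
      # raw = vdevs * (disks per vdev - parity disks) * disk size
      printf "Z2_14x3: %.1f TB raw (16 measured)\n", 3 * (14 - 2) * tb
      printf "Z1_6x7:  %.1f TB raw (16 measured)\n", 7 * (6 - 1) * tb
      printf "Z2_10x4: %.1f TB raw (14 measured)\n", 4 * (10 - 2) * tb
    }'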

SLIDE 36

Mirrorset variances

iozone variance data from encore.science.uva.nl (snv_103), taken from the files
iozone_encore.science.uva.nl_snv_103_*_variance_{w,r}.txt:

Write variance:
  mirror_3wayx2 (c0-c2):  256: 10886     1024: 14316.2   4096: 16805.2
  mirror_3wayx2 (c3-c5):  256: 16110.7   1024: 19730.2   4096: 21569.9
  mirror_3way:            256: 110910    1024: 49829.2   4096: 87392.1

Read variance:
  mirror_3wayx2 (c0-c2):  256: 14942.1   1024: 11748.9   4096: 23844.1
  mirror_3wayx2 (c3-c5):  256: 18825.6   1024: 24269.6   4096: 41326.4
  mirror_3way:            256: 27245.5   1024: 53021.4   4096: 41020.8
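The deck doesn't show the iozone invocation behind these files; as a rough sketch (assuming 256/1024/4096 are record sizes in KB, with an invented file path), runs of this shape produce the per-record-size write/read throughput the variances would be computed from:

  # Write (-i 0) and read (-i 1) tests at one record size:
  > iozone -i 0 -i 1 -r 256k -s 4g -f /mypool/iozone.tmp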