Tech Tour: Winning Technology Roadmaps, Sig Knapstad, Cutting Edge (PowerPoint PPT Presentation)



SLIDE 1

TechTour

Winning Technology Roadmaps

Sig Knapstad

Cutting Edge sk@ceag.com

SLIDE 2

SLIDE 3

Avid Nexis E5

SLIDE 4

Spectra BlackPearl


SLIDE 5

Simplified Overview

T950

SLIDE 6

SLIDE 7

Winning Technology Roadmaps

  • What does your storage landscape look like?
  • “Do not delete anything” is the new mantra
  • 30% more storage companies than last year


SLIDE 8

ZETA-WHAT?

SLIDE 9

If you ask Brian or Zeke…

  • Requirements:
  • Storage considerations: speed, cost, capacity
  • Data considerations: business value, recall probability, retention
  • Need to match the storage requirements to the storage resource
  • This has become more complex as new forms of storage have been introduced


SLIDE 10

Storage Architectures Available Today


When to choose Object over SAN or NAS?

SLIDE 11

Block Storage

  • Files are split into evenly sized blocks of data, each with its own address
  • No additional information (metadata)
  • The client is loaded with specialized software that is responsible for presenting the local file system
  • Everything is handled and controlled by the SAN software
  • SAN file system
  • Ideal for high performance
  • The file system is managed by the client
  • Limited scalability
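The splitting described above can be sketched in a few lines of Python. This is an illustration only, not any SAN vendor's implementation: each block gets a numeric address and carries no metadata, and the client-side file system is what puts the blocks back in order.

```python
BLOCK_SIZE = 4096  # bytes per block

def split_into_blocks(data: bytes) -> dict[int, bytes]:
    """Map block address -> fixed-size chunk (the last block may be short)."""
    return {
        addr: data[i:i + BLOCK_SIZE]
        for addr, i in enumerate(range(0, len(data), BLOCK_SIZE))
    }

def reassemble(blocks: dict[int, bytes]) -> bytes:
    # The client-side file system is responsible for ordering the blocks;
    # the storage itself knows nothing about files.
    return b"".join(blocks[addr] for addr in sorted(blocks))
```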


SLIDE 12

File Storage (NAS)

  • Based on a traditional file system comprising files organized in hierarchical directories and subdirectories
  • File-level access to storage, as opposed to block level
  • Storage exposed as a network file system
  • NFS or CIFS/SMB
  • Breaks down at large scale due to the overhead of the underlying file system that must be maintained
  • The file system is managed by the storage server, not the client
  • The NAS system has to manage user privileges


SLIDE 13

Object Storage

  • Object storage, by contrast, doesn’t split files up into raw blocks of data
  • Units of storage are called objects
  • Objects can be grouped in buckets, but are otherwise stored in a flat address space
  • Every object contains three things:
  • The data itself
  • Metadata
  • A globally unique identifier (applications identify the object via this ID)
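The three-part object model above can be sketched as follows. This is an illustration, not any product's API; the `StorageObject` and `Bucket` names are hypothetical.

```python
import uuid

class StorageObject:
    """An object bundles the three parts named above."""
    def __init__(self, data: bytes, metadata: dict):
        self.data = data                    # the data itself
        self.metadata = metadata            # extensible key/value tags
        self.object_id = str(uuid.uuid4())  # globally unique identifier

class Bucket:
    """A flat namespace: just ID -> object, with no directory tree."""
    def __init__(self):
        self._objects = {}

    def put(self, obj: StorageObject) -> str:
        self._objects[obj.object_id] = obj
        return obj.object_id  # applications keep this ID to find the object

    def get(self, object_id: str) -> StorageObject:
        return self._objects[object_id]
```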


SLIDE 14

Object Storage

  • Good for unstructured data
  • No real file system is needed here
  • Unlimited scalability
  • Geographic reach (access anywhere)
  • Single namespace
  • Built-in data resilience
  • Extensible and flexible metadata tagging
  • High throughput, but doesn’t perform well for low-latency, I/O-intensive workloads


  • Direct integration (via REST API)
  • Gateway (allows for authentication)
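Direct integration just means speaking HTTP to the store. A minimal sketch, assuming a hypothetical S3-style endpoint; `objectstore.example.com` is a placeholder, and real stores also require signed authentication headers, which are omitted here.

```python
import urllib.request

# Hypothetical on-prem endpoint -- not a real address.
ENDPOINT = "http://objectstore.example.com"

def build_put_request(bucket: str, key: str, body: bytes) -> urllib.request.Request:
    """Compose (but do not send) an HTTP PUT that stores `body` under bucket/key."""
    url = f"{ENDPOINT}/{bucket}/{key}"
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"Content-Type": "application/octet-stream"},
    )

# Actually sending it would be: urllib.request.urlopen(build_put_request(...))
```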
SLIDE 15

Out with the old, in with the new? Not so fast.

  • Both Block and NAS methods for storing data worked fine for years.
  • So why is there a need for another concept, Object?
  • Both Block and NAS need to implement functionality for users’ access rights, allowing them to make changes to the data.
  • What we now see is that much of the data being archived is immutable unstructured data: content or material that will never be changed again. This is where Object storage shines.

SLIDE 16

Does it ever rain in the cloud?

What if something goes wrong?

SLIDE 17
  • Parity: error correction calculated via an algorithm (RAID); overhead as much as 50%
  • Replication: creating multiple copies in your storage system
  • Erasure Coding: data is broken into fragments that are expanded and encoded with a configurable number of redundant pieces of data and stored across different locations. Efficient, usually around 20% overhead
  • Recovery:
  • With EC, the bigger the cluster, the faster the recovery: the whole cluster is involved and only recovers the data it needs
  • RAID or NAS/SAN: bottlenecked through a single controller

Protection: losing data is like death and taxes
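The fragment-plus-redundancy idea above can be illustrated with the simplest possible code: a single XOR parity fragment. (Production erasure coding uses Reed-Solomon codes so it can tolerate several simultaneous losses; this toy tolerates exactly one.)

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length fragments byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(fragments: list[bytes]) -> bytes:
    """Fold all fragments into one redundant parity fragment."""
    return reduce(xor, fragments)

# Any ONE lost fragment can be rebuilt from the survivors plus the parity,
# because A ^ B ^ C = P implies B = A ^ C ^ P.
```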

SLIDE 18
  • Drive fail probability: 1/1000
  • EC is described as m:n, with m data and n parity
  • So an EC of 10:2 (12 drives) yields a probability of losing data (losing 3 drives) of one in a billion, so we have a durability of…
  • “nine nines”: 99.9999999%
  • Amazon’s durability is “eleven nines”, or 99.999999999%
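The arithmetic above can be checked directly. This uses the slide's simplification that data is lost only when three drives fail independently; real durability models also weigh rebuild windows and correlated failures.

```python
p_drive = 1 / 1000    # per-drive failure probability, from the slide
parity_count = 2      # EC 10:2 survives any 2 drive failures

# Data is lost only if parity_count + 1 drives fail at once.
p_loss = p_drive ** (parity_count + 1)
durability = 1 - p_loss

print(f"P(data loss) ≈ {p_loss:.1e}")  # 1.0e-09
print(f"durability: {durability:.7%}")  # 99.9999999%  ("nine nines")
```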

Nine lives…

SLIDE 19

ACTIVE ARCHIVE

  • Lots of storage choices: tape, cloud, object store; moving data from one type of storage to another while maintaining visibility
  • Automatically optimizes where data is stored to reduce infrastructure costs and management overhead, minimizing silos and maximizing utilization
  • Enhances business continuance through integrated backups, data replication, and multiple disaster recovery strategies
  • Optimizes storage infrastructure and makes informed choices about expansion

GET THE RIGHT DATA IN THE RIGHT PLACE

SLIDE 20

“One Ring to Rule Them All”


Asset Management Layer

SLIDE 21

Industry Trends

  • Your data may be here, over there, up there or down here
  • In addition to having a MAM or PAM control the movement of data, new tools will emerge this Fall from several manufacturers that are sophisticated Global Data Movers
  • Data Movers will cover everything from ingest to tiering, migration and archiving to any storage: disk, tape, WAN, cloud
  • Index where the data is on any storage
  • Smart analytics to clarify storage spending, utilization and duplication
  • Ability to search for assets
  • Policy-based (not always a good thing), with the ability to quarantine files from migration
SLIDE 22

“One Ring to Rule Them All”


DATA MANAGEMENT FABRIC-GDM

SLIDE 23

Industry Trends-Smarter Data Management

  • Eliminate storage silos: globally manage all files across otherwise incompatible storage silos. A Global Namespace creates a unified view across the entire storage environment, simultaneously connecting all files regardless of vendor, platform and protocol.
  • File migrations are automated across all storage with no disruption to users or applications. Intelligent data movement accelerates applications by ensuring files are always where they need to be, when they need to be there. User-defined policies ensure the hot and warm files are always on the right storage tier and cold data is on the lowest-cost storage, on-premises or in the cloud.
  • Data Insights delivers actionable intelligence for predictive, real-time analytics, and verification & reporting that facilitate storage planning and optimize application performance.
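A user-defined tiering policy of the kind described above might look like the following sketch. The tier names and age thresholds are made up for illustration, not from any vendor's policy engine.

```python
from datetime import datetime, timedelta

# Hypothetical hot/warm/cold rules: last-access age decides the tier.
TIER_RULES = [
    (timedelta(days=30), "flash"),   # hot: touched in the last month
    (timedelta(days=180), "nas"),    # warm: touched in the last 6 months
]
COLD_TIER = "object-or-tape"         # cold: everything older

def target_tier(last_access: datetime, now: datetime) -> str:
    """Return the tier a file should live on, given when it was last accessed."""
    age = now - last_access
    for max_age, tier in TIER_RULES:
        if age <= max_age:
            return tier
    return COLD_TIER
```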

SLIDE 24

Industry Trends-Smarter Data Management

  • Query Engine: global queries across all storage
  • Analytics Engine: lifecycle management, QoS, & capacity optimization
  • Visualization Engine: Data Insights & reporting
  • Dynamic Data Mover Engine: migrate, I/O balancing, tiering
  • Metadata Engine: extract, auto-classification, tag, organize & manage
  • Workflow Engine: automated policies based on schedule and triggers
  • Targets: disk, tape, WAN, cloud

SLIDE 25


Image courtesy of Strongbox

Metadata Driven Archives

SLIDE 26

Brand Convergence


SLIDE 27

REpresentational State Transfer


SLIDE 28

SLIDE 29

How do we move data into Object Storage?

SLIDE 30


SLIDE 31

SLIDE 32

SLIDE 33

San Francisco 49ers

Asset Management

REST API

SLIDE 34
  • Now deploying APIs for Google Cloud and Azure
  • Video, speech-to-text, photo recognition, AI
  • REST API at AT&T Park for on-prem Object Storage
  • Game changer for Cutting Edge and our clients
SLIDE 35

Data over the WAN

SLIDE 36


Multi-Site Media Project

Spring 2018

SLIDE 37

Project Overview


  • Goal: increase collaboration between the existing Stadium location and a new office facility located 5 miles away in Oakland
  • The new site is linked to the Stadium by a high-bandwidth data circuit, enabling multi-site workflows where editors can access media at either location
  • Upgrade to Stadium infrastructure included:
  • New versions of Media Composer and Interplay
  • Replace ISIS with new 240TB Avid Nexis storage
  • Add transcoding (Content Agent)
  • Improve browsing (Avid MediaCentral UX)
  • Upgraded archive
  • Add Jack London Square systems:
  • Add edit workstations and browse/edit laptops
  • Add more Avid Nexis storage (240TB)
  • Add transcoding (Content Agent)
  • Enable “bi-directional” workflow between Jack London Square & Stadium
SLIDE 38


10Gb Fibre

SLIDE 39

Workflow Overview


  • Media Ingest from either location
  • Interplay Check-In from either location
  • Transcode to DNxHD 145
  • Browsing using Interplay or MediaCentral UX
  • Editing
  • Using media stored in either location
  • Ability to begin edits at one site, finish at the other
  • Archiving to XenData and Spectralogic T120
  • Ingest and transcode RED 4K media to Avid DNxHD
SLIDE 40


10Gb Fibre

SLIDE 41

Denver Broncos Multi-Site Solution Architecture

Summer 2018

SLIDE 42

Deployment Plan

  • Create an identical storage solution for both locations
  • 2 x 96TB TerraBlock SAN Online Editorial Storage Pools
  • Hub Server
  • 192TB TerraBlock SAN Data Replication & Disaster Recovery Storage Repository
  • Leverage existing technology and IT infrastructure at both locations
  • Establish mirrored media transfers from location to location
  • Use existing (non-dedicated) IT-supplied 10Gb circuit between the facilities
  • 10Gb pipe will support mirroring of assets at night and weekends (avoiding IT bandwidth impacts)
  • End Result:
  • Editors at either location can access all media assets
  • Automatic Disaster Recovery solution for both facilities
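As a rough feasibility check for the night-and-weekend mirroring plan above: at 10 Gb/s one can estimate how much changed media fits into an overnight window. The efficiency figure and window length below are assumptions for illustration, not numbers from the deck.

```python
# Back-of-the-envelope check: capacity of a 10 Gb/s circuit over one night.
link_gbps = 10        # nominal line rate of the IT-supplied circuit
efficiency = 0.7      # assumed usable fraction after protocol overhead
window_hours = 8      # assumed overnight mirroring window

usable_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
window_tb = usable_bytes_per_sec * window_hours * 3600 / 1e12

print(f"{window_tb:.1f} TB per night")  # 25.2 TB per night
```

At roughly 25 TB per night under these assumptions, mirroring the daily change set of a 96TB pool comfortably fits the window, even if a full initial seed would take several nights.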


SLIDE 43

Benefits & Features

  • Key Benefits
  • Increased capacity
  • Higher bandwidth
  • Greater flexibility
  • Disaster Recovery protection of Stadium and Practice Facility editorial assets


SLIDE 44


SLIDE 45

SF GIANTS: Cloud Editorial

  • Demo MC Cloud, Central UX
