StorMagic SvSAN 6.1 Product Announcement Webinar and Live Demonstration – PowerPoint PPT Presentation



SLIDE 1

StorMagic SvSAN 6.1

Product Announcement Webinar and Live Demonstration

Mark Christie – Senior Systems Engineer

SLIDE 2

Introducing StorMagic

What do we do?

StorMagic SvSAN eliminates the need for physical SANs by exposing the storage of an industry standard server as a virtual SAN, thereby dramatically reducing CAPEX and OPEX.

How does SvSAN achieve this?

StorMagic’s virtual SAN converts the internal disk, flash and memory of industry standard servers into robust, cost-effective and flexible shared storage.

Where is this most applicable?

SvSAN is deployed as hyperconverged infrastructure for multi-site enterprises and SMEs, and as a server-based storage array offering an alternative to a traditional physical SAN.

SLIDE 3

1 to Thousands of Sites

Large and small deployments, from enterprises with 1000s of sites to SMEs with a single site

Global Customer Adoption

Across 72 countries, organisations depend on StorMagic for server and storage infrastructure

Global Partner Network

Wherever you are, StorMagic has resellers, integrators, and server partners to meet your needs

Across Many Verticals

Retail, health, government, industrial, education, finance, pharma and many more


SLIDE 4

What is a Storage Array?

Wikipedia definition: A disk array is a hardware element that contains a large group of hard disk drives (HDDs). It may contain several disk drive trays and has an architecture which improves speed and increases data protection. The system is run via a storage controller, which coordinates activity within the unit.

SLIDE 5

Storage Array Components

  • SAN Presentation
    ̶ iSCSI
    ̶ Fibre Channel
  • SAN Switch
    ̶ Ethernet (iSCSI)
    ̶ Fibre Channel
  • Physical Storage Controller
    ̶ CPU
    ̶ Memory
    ̶ Dedicated Storage Hardware
  • Enterprise Drives
    ̶ 10K or 15K SAS
    ̶ SSD

[Diagram: storage array with SSDs behind a storage controller]

SLIDE 6

Utilise an industry standard server as a Virtual SAN

[Diagram: industry standard server running a hypervisor, with a Virtual Storage Appliance presenting the internal SSDs and storage controller as a virtual SAN]

SLIDE 7

Use Cases: Hyperconverged or Server-based Storage Array

  • Hyperconverged
    ̶ Shared storage and compute platform
    ̶ Possible appliance compute/storage scale lock-in
    ̶ Simplified management & scale-out
  • Server-based Storage Array
    ̶ Dedicated shared storage
    ̶ Always scale compute & storage independently
    ̶ Flexible, cost-effective physical SAN alternative

SLIDE 8

StorMagic SvSAN: Overview

“SvSAN turns the internal disk, SSD and memory of industry standard servers into highly available shared storage”

SLIDE 9
SvSAN - Neutral Storage Host (witness)

  • Tie-breaker service for SvSAN mirrors
  • Prevents data inconsistency, AKA ‘split brain’
  • This ensures:
    ̶ In the event of a single failure there is no interruption in service
    ̶ In the event of multiple failures there is no corruption or loss of data
  • Local or Remote
    ̶ Supported as a Windows service, Linux daemon, packaged VM, or on Raspbian (Raspberry Pi)
    ̶ Withstands 3,000ms latency
    ̶ Up to 20% packet loss
    ̶ 9kb/s bandwidth required per SvSAN mirror
  • Single NSH instance for 1000s of mirrors across clusters
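The witness behaviour described above amounts to a majority vote among three parties: the two mirror nodes and the NSH. A minimal sketch, assuming a simple quorum rule (illustrative only; `may_serve_io` is a hypothetical name, not a StorMagic API):

```python
# Illustrative sketch (not StorMagic code): a neutral witness breaks ties in a
# two-node mirror so that at most one side keeps serving I/O after a split.

def may_serve_io(can_reach_partner: bool, can_reach_witness: bool) -> bool:
    """A node may serve I/O only while it holds a majority of the three
    votes: its own, its mirror partner's, and the witness's."""
    votes = 1 + int(can_reach_partner) + int(can_reach_witness)
    return votes >= 2  # majority of 3

# Network split between the two nodes: the witness still sees node A, so
# node A keeps serving while node B stands down -- no split brain.
node_a = may_serve_io(can_reach_partner=False, can_reach_witness=True)
node_b = may_serve_io(can_reach_partner=False, can_reach_witness=False)
print(node_a, node_b)  # True False
```

Under this rule a single failure (one node, or the witness alone) never halts service, while the two sides of a full partition can never both keep writing.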


SLIDE 12

SvSAN - Management & Integration

  • Centralized Management & Monitoring from the Datacenter
  • vCenter StorMagic Integration
    ̶ StorMagic Dashboard
    ̶ Single/Multi VSA Deploy
    ̶ NSH Deploy
    ̶ VSA Restore
    ̶ Create, expand and migrate storage
  • Hyper-V Integration
    ̶ StorMagic Deployment Wizard
  • Monitoring
    ̶ SNMP v2 & v3
    ̶ SMTP
    ̶ System Center Operations Manager
  • Scripting toolbox
    ̶ PowerShell module
    ̶ Deployment, configuration, firmware upgrades
    ̶ Plugin script generation

SLIDE 13

Introducing SvSAN 6.1

Robust

Any Site, Any Network

Cost Effective

Lightest Footprint, Lowest Cost

Flexible

Today’s Needs, Future Proofed

  • I/O performance statistics: Crucial I/O statistics now at your fingertips to better understand your workloads.
  • Multiple VSA GUI deployment & upgrade: Deploy and upgrade multiple VSAs through a single wizard, with an out-of-the-box experience.
  • PowerShell auto-script generation: Deploy VSAs through a GUI and automatically generate a custom PowerShell script.
  • SSD read/write caching: Enable hybrid storage configurations combining the performance of SSDs with the capacity of HDDs.
  • Memory-based read caching: Blazing fast for the most common reads. Modes: most frequently used, read ahead and data pinning.
  • Predictive auto-tiering: Data dynamically cached between storage tiers depending on frequency of access.
SLIDE 14


Host architecture flexibility

  • Underlying RAID configuration
    ̶ RAID 0, 1, 5, 6, 10
  • Create multiple storage pools for shared storage
    ̶ NL-SATA, NL-SAS, SAS, SSD, NVMe
    ̶ RDM, VMDK, VHDX
  • Local storage for hypervisor and VSA
    ̶ Hypervisor system files
    ̶ VSA system drives
  • SSD caching – read/write
    ̶ Optional, advanced feature
    ̶ 1 or more SSDs per server for SSD caching
  • Memory caching – read
    ̶ Optional, advanced feature
    ̶ As little as 1GB RAM per server

SLIDE 15

Host architecture flexibility

[Diagram: example layout – a hypervisor host running SvSAN with a 50GB local datastore, a RAID storage pool (0/1/5/6/10) presented via RDM/VMDK as LUN1 (50GB) and LUN2 (5.95TB), and SSD cache storage presented as LUN3 (600GB)]

SLIDE 16

Host architecture flexibility

[Diagram: two hypervisor hosts running SvSAN, each with the storage layout shown, joined by a synchronous mirror across LUN1 (50GB), LUN2 (5.95TB) and LUN3 (600GB)]

100,000+ IOPS with 2x industry standard servers!

SLIDE 17

SvSAN – SSD Caching

SSD Read/Write Caching

  • Data is acknowledged once written to flash storage, enabling high random I/O performance
  • Sequentially written back to magnetic disks at a later time to minimize disk head movements

Hot Blocks

  • Clean flushed data persists until space is required, enabling a read performance increase

Increase Performance and Efficiency

  • I/O is tracked to promote data into cache tiers over time
  • Less workload on spinning disks enables greater efficiency

Sizing and Configuration

  • Single SSD, or hardware/software RAID for protection
  • Recommended cache capacity: 10% of pool storage
  • Example: 200GB cache for 2TB of pool storage
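The 10% sizing guideline above is simple enough to capture as a helper; a sketch, assuming pool capacity is the only input (the function name is hypothetical, not StorMagic tooling):

```python
# Sketch of the recommended-cache-size rule from the slide (not StorMagic code).

def recommended_ssd_cache_gb(pool_storage_gb: float, ratio: float = 0.10) -> float:
    """Recommended SSD cache capacity: a fixed fraction (default 10%)
    of the storage pool it fronts."""
    return pool_storage_gb * ratio

# The slide's example: a 2TB (2000GB) pool suggests a ~200GB cache.
print(recommended_ssd_cache_gb(2000))  # 200.0
```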
SLIDE 18

SvSAN 6: Memory-based caching

Modern servers now have vast amounts of memory

  • Additional memory can be used to dramatically improve storage performance
  • Frequently read data is cached in memory
  • Read operations are served from memory without ever accessing a disk drive

Read-ahead mode

  • Detects sequential read streams to allow read-ahead, i.e. when a block is read it is highly likely that adjacent blocks will be read
  • Enable for targeted workloads

Most frequently used mode (default)

  • The SvSAN algorithm identifies and stores data based on access patterns
  • Frequently accessed data blocks are stored in memory
  • The default mode benefits all workloads

Data pinning mode

  • Enter a learning phase to capture an identical access pattern
  • Delivers the most efficient read performance
  • Enable for targeted workloads
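The "most frequently used" mode above can be sketched as a fixed-size cache that keeps the blocks with the highest access counts (illustrative only; SvSAN's actual algorithm is not public, and the class name is hypothetical):

```python
# Illustrative "most frequently used" read cache (not SvSAN code): the blocks
# with the highest access counts stay resident in a fixed-size memory cache.
from collections import Counter

class MFUReadCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.counts = Counter()   # accesses seen per block id
        self.cache = set()        # block ids currently held in memory

    def read(self, block: int) -> bool:
        """Record an access; return True if it was a cache hit."""
        hit = block in self.cache
        self.counts[block] += 1
        # Keep only the most frequently accessed blocks resident.
        self.cache = {b for b, _ in self.counts.most_common(self.capacity)}
        return hit

cache = MFUReadCache(capacity_blocks=2)
for block in [1, 1, 2, 3, 1, 2]:
    cache.read(block)
print(sorted(cache.cache))  # [1, 2] -- the two hottest blocks stay cached
```

Re-ranking on every read is deliberately naive; the point is the policy (frequency decides residency), not an efficient implementation.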
SLIDE 19

SvSAN - Predictive automated read caching & tiering

Intelligent read caching algorithm

  • All read I/Os are monitored and analyzed
  • Most frequently used data – “Hot” data
  • Cache tiers are populated based on access frequency

Tiering

  • RAM: Most frequently accessed data
  • SSD/Flash: Next most frequently accessed data
  • HDD: Infrequently accessed data – “Cold” data

Sizing

  • Assign cache sizes to meet requirements
  • Grow caches as working sets change
  • Use any combination of Memory, SSD/Flash and Disk

Play to the strengths

  • Play to the strengths of all mediums
  • Memory: highest IOPS
  • SSD/Flash
  • Magnetic drives: providing a lower price per GB
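The RAM/SSD/HDD ordering above is, at its core, a frequency-ranked assignment. A sketch under that assumption (illustrative; not SvSAN's actual placement logic, and the function name is hypothetical):

```python
# Illustrative tier assignment by access frequency (not SvSAN code): the
# hottest blocks go to RAM, the next hottest to SSD, everything else to HDD.

def assign_tiers(access_counts: dict, ram_blocks: int, ssd_blocks: int) -> dict:
    """Map each block id to 'RAM', 'SSD' or 'HDD' by descending access count."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    tiers = {}
    for rank, block in enumerate(ranked):
        if rank < ram_blocks:
            tiers[block] = "RAM"
        elif rank < ram_blocks + ssd_blocks:
            tiers[block] = "SSD"
        else:
            tiers[block] = "HDD"
    return tiers

counts = {"a": 90, "b": 40, "c": 5, "d": 1}
print(assign_tiers(counts, ram_blocks=1, ssd_blocks=2))
# {'a': 'RAM', 'b': 'SSD', 'c': 'SSD', 'd': 'HDD'}
```

Growing a cache simply raises `ram_blocks` or `ssd_blocks`, which matches the "grow caches as working sets change" point above.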
SLIDE 20

SvSAN – Demo Hardware Configuration

[Diagram: two servers running SvSAN, each with 3x 1.2TB drives in RAID5 (50GB VD0 + 2.13TB VD1), a 200GB SSD and a 32GB SD card, presenting a 2.13TB shared mirrored datastore]

SLIDE 21

Customer data analysis: On-demand consumer service (US)

                      Read   Write
Read/Write ratio      40%    60%
Average per day       9GB    13GB
Average block size    30KB   11KB
Average IOPS          5      15

Workloads

  • Network monitoring for on-demand service
  • Back office apps

Challenge

  • Customer was evaluating hyperconverged solutions
  • Was considering full flash

StorMagic analysis

  • Enabled I/O metadata collection over a period of time in a live POC
    ̶ Distribution of I/O sizes
    ̶ Throughput and IOPS
    ̶ Locality of access

[Chart: throughput and IOPS by time of day (UTC)]

[Chart: locality of access – number of read and write accesses across the address space]

SLIDE 22

Customer data analysis: On-demand consumer service (US), continued

Estimates

  • SSD & memory: 94% of I/O satisfied from read cache when using 2GB memory and a 120GB SSD
  • Memory only: 94% of I/O satisfied from read cache when using 2GB of memory

Testing

  • Replay the exact workload collected from the live environment
  • No best-guess synthetic workload, but the exact patterns from data collection

Conclusion

  • Environment sufficient for the workloads
  • Allocate a small amount of memory to satisfy almost all reads

[Chart: read data serviced by tiers – 94% from cache, 6% from disk]
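Estimates like "94% of reads served from 2GB of memory" follow from how skewed the access distribution is. A sketch of that calculation from per-block read counts (illustrative only; the function name is hypothetical, not StorMagic's analysis tooling):

```python
# Illustrative cache-hit estimate (not StorMagic tooling): given read counts
# per block, what fraction of reads would the hottest `cache_blocks` blocks
# have served if they were all held in cache?

def cached_read_fraction(read_counts: list, cache_blocks: int) -> float:
    ranked = sorted(read_counts, reverse=True)
    return sum(ranked[:cache_blocks]) / sum(ranked)

# A skewed workload: a small hot set absorbs almost all reads.
counts = [500, 300, 140, 30, 20, 10]
print(round(cached_read_fraction(counts, cache_blocks=3), 2))  # 0.94
```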

SLIDE 23

SvSAN 6: Management

Multiple VSA GUI deployment
  ̶ Deploy multiple VSAs through a single wizard
  ̶ Incremental naming and IP addressing

Auto PowerShell script generation
  ̶ Deploy VSAs through a GUI and automatically generate a custom PowerShell script
  ̶ Script variables clearly marked to enable easy editing
  ̶ Scale custom scripts for mass deployments

VSA OOBE (Out-of-box Experience)
  ̶ Stage VSA deployments on hardware
  ̶ Configure VSAs when installed onsite
  ̶ Configure through GUI or scripting

Upgrade multiple VSAs through GUI

  • Use a repository for VSA firmware
  • Select multiple VSAs through the StorMagic dashboard for upgrading
  • Upgrade VSAs immediately or stage firmware for an overnight upgrade
  • StorMagic performs a VSA health check so upgrades do not impact the environment
SLIDE 24

SvSAN 6.1: Standard & Advanced Editions

Feature                                            Standard   Advanced
Minimum cluster size                               2          2
Synchronous Mirroring/High Availability            ✔          ✔
Stretched/Metro Cluster Support                    ✔          ✔
Volume Migration                                   ✔          ✔
VSA Restore 1                                      ✔          ✔
VMware vSphere Storage API (VAAI) Support          ✔          ✔
Centralized management and monitoring              ✔          ✔
Remote shared quorum                               ✔          ✔
I/O performance statistics                         ✔          ✔
Multiple VSA GUI deployment & upgrade              ✔          ✔
PowerShell script generation                       ✔          ✔
SSD Caching                                                   ✔
Memory-based caching - most frequently used mode              ✔
Memory-based caching - read ahead mode                        ✔
Memory-based caching - data pinning mode                      ✔
Predictive cached tiering 2                                   ✔

1 VMware vSphere only

Standard Edition

  • Rich features for Software-Defined Storage

Advanced Edition

  • Adds powerful cached auto-tiering
  • Disk, Flash and Memory

License, Maintenance & Support (M&S)

  • Perpetual License based on capacity
  • 1 year M&S mandated
  • M&S can be extended to 3 or 5 years

Capacity Levels

  • 2TB
  • 6TB
  • 12TB
  • Unlimited TB
SLIDE 25

SvSAN - Hardware Requirements

  • VMware vSphere or Microsoft Hyper-V (2012 R2 and 2016)
  • Any x86 hardware platform supported by the chosen hypervisor
  • SATA, SAS, SSD
  • Hypervisor-supported storage controller
  • Flexible RAID configuration
  • Switched traffic or direct connection

SLIDE 26

SvSAN - VSA Requirements

VSA – Virtual Storage Appliance

  • Linux kernel
  • 500MB boot drive
  • 20GB journal drive
  • 1 vCPU
  • 1GB RAM (Standard), 2GB RAM (Advanced)
  • Up to 32GB per VSA for memory caching
  • 1Gb, 10Gb, 40Gb networking

SLIDE 27

SvSAN 6.1 Predictive Cached Tiering Deployment Examples

  • I am selecting a new server and storage solution
    ̶ Consider a 3-tier architecture or hyperconverged, and the best balance of storage tiers for capacity and performance
    ̶ Solution 1: Hyperconverged with tiering across HDD, SSD and memory
    ̶ Solution 2: Server-based storage array with tiering across HDD, SSD and memory, and separate compute
  • I am considering an all-flash array
    ̶ Know your actual IOPS requirement and decide how important low latency is
    ̶ Solution 1: Memory caching with HDDs for lower capacity and lowest cost
    ̶ Solution 2: Memory and SSD caching with HDDs for higher capacity and balanced cost
    ̶ Solution 3: Memory caching with a full flash array for highest performance and highest cost
  • I am an existing SvSAN user with only HDD
    ̶ Minimally invasive upgrade for maximum ROI
    ̶ Solution 1: Leverage spare memory for memory caching
    ̶ Solution 2: Upgrade memory for memory caching
  • I am an existing SvSAN user with SSD caching or all flash
    ̶ Increase IOPS and reduce latency with SSD and memory read/write caching
    ̶ Solution 1: Add SSD read caching
    ̶ Solution 2: Add SSD and memory read caching through spare memory or a minor upgrade

SLIDE 28

What’s the scoop on General Availability?

StorMagic SvSAN 6.1 will be generally available on or before 27th April 2017. However, you can get your hands on it next week! The second technical preview window is now open: http://stormagic.com/6-1-tech-preview/

SLIDE 29

Q&A and Next Steps

SvSAN Product Information

Product Options

  • SvSAN license: 2, 6, 12 and unlimited TBs
  • License entitlement: 2 mirrored servers
  • Maintenance and support: Platinum (24x7) / Gold (9x5)

For further information, please contact: sales@stormagic.com

Further Reading:

  • An overview of SvSAN - http://stormagic.com/svsan/
  • SvSAN Data Sheet - http://stormagic.com/svsan-data-sheet/
  • SvSAN White Paper - http://stormagic.com/svsan-6/

Download your free trial of SvSAN: stormagic.com/trial