Branch offices and SMBs: choosing the right hyperconverged solution

  1. Branch offices and SMBs: choosing the right hyperconverged solution Presenters: Howard Marks – Chief Scientist, DeepStorage.net; Luke Pruen – Director of Technical Services, StorMagic

  2. Infrastructures for the Remote Office Howard Marks - @DeepStorageNet

  3. Your Not-So-Humble Speaker • 30 years of consulting and writing for the trade press – Now at TechTarget storage sites • Chief Scientist, DeepStorage, LLC – Independent test lab and analyst firm • Co-host, Greybeards on Storage podcast • Executive director, The Other Other Operation Hmarks@DeepStorage.Net @DeepStorageNet

  4. Remote Office IT Challenges • Not just offices: – Plants – Stores – Restaurants – Bank/brokerage branches • Everything’s limited – IT Staff – Budget – Connectivity – Space • Outside the range of 4hr response

  5. Let’s Send Everything to the Cloud • Limited connectivity – Low bandwidth – Not redundant – Now critical to the mission of the office • Hardware interfaces – Time clocks – POS terminals – Bar code/inventory • Line-of-business apps aren’t web-based

  6. Conventional Infrastructure • Typically 4-5 workloads – Plus some batch/transfer jobs • Two virtualization hosts – Plenty of horsepower for failover • Too complex – External disk array – Fibre Channel switches • Too expensive • Too Fragile

  7. Enter Hyperconvergence • Software turns host disks into a shared volume • Replicate for resilience • SSD for acceleration • Lower cost • Major vendors focus on the data center

  8. Remote Office Requirements • Lightweight storage VM – Must run on a 1-socket server – Hypervisor, Windows, etc. cost ~$5,000/socket • Scale down – 2 nodes • Enhanced resiliency – 3-4 day MTTR is common – Don’t expose data for that long • Management options

  9. Split Brains and Witnesses (go out in the noonday sun) • In a split brain, both nodes of a 2-node cluster remain active and accept writes independently • Some HCI solutions have required 3 nodes – Any 2 nodes make a quorum • A witness provides a quorum vote but isn’t a full node – The witness should not run on a cluster node – Host it at HQ, on a PC, or other options
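
The quorum rule behind a witness is plain majority voting: each node and the witness hold one vote, and a partition keeps serving I/O only if it can see more than half of them. A minimal sketch of that arithmetic for a 2-node cluster plus witness (hypothetical names, not any vendor's implementation):

```python
# Minimal quorum sketch: 2 cluster nodes plus an external witness, 3 votes total.
# Names and structure are illustrative only, not a vendor implementation.

TOTAL_VOTES = 3  # node_a + node_b + witness


def has_quorum(reachable_members: set) -> bool:
    """A partition may keep serving I/O only if it holds a strict majority of votes."""
    return len(reachable_members) > TOTAL_VOTES // 2  # needs 2 of 3 votes


# Network split: each node can still see itself, but only node A reaches the witness.
print(has_quorum({"node_a", "witness"}))  # True  -> node A keeps the datastore online
print(has_quorum({"node_b"}))             # False -> node B stops writing, no split brain
```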

  10. Summary • Remote sites have unique needs • Conventional infrastructure – too many too’s • HCI could be a good solution – Be careful of data center-scale products • Look for: – Lightweight software – High resiliency after failures – Witness flexibility – Management flexibility

  11. SvSAN Overview Luke Pruen – Director of Technical Services, StorMagic

  12. StorMagic SvSAN: Overview SvSAN turns the internal disk, SSD and memory of industry-standard servers into highly available shared storage

  13. SvSAN – Features • SvSAN VSA (Virtual Storage Appliance) – Lightweight software-defined storage platform • Synchronous Mirroring – Synchronously mirror your storage between as few as two hosts for high availability and protection of your storage • Stretch Cluster Support – Mirror storage across separate sites to protect against major outages • Centralized, Simplified Management – Control all your SvSAN clusters from one place with simplified management tools • Remote Shared Witness (NSH) – Flexible cluster witness keeps your mirrored storage in sync and highly available • Performance Caching Features – Utilize SSD and system memory to boost performance • Scale Flexibly – Scale up and scale out
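
Synchronous mirroring, as listed above, means a write is acknowledged to the guest only after both copies have been committed, which is what allows either host to fail without losing acknowledged data. A rough, vendor-neutral sketch of that ordering (illustrative Python, not SvSAN code):

```python
# Illustrative synchronous-mirror write path: acknowledge the guest only after
# BOTH replicas have committed the block. Not vendor code; replicas are plain dicts.
from concurrent.futures import ThreadPoolExecutor


def write_block(replica: dict, offset: int, data: bytes) -> bool:
    replica[offset] = data  # stand-in for a durable local write on one host
    return True


def mirrored_write(primary: dict, secondary: dict, offset: int, data: bytes) -> bool:
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(write_block, r, offset, data) for r in (primary, secondary)]
        results = [f.result() for f in futures]  # block until both commits complete
    return all(results)  # only now is the write acknowledged to the VM


node_a, node_b = {}, {}
assert mirrored_write(node_a, node_b, 0, b"payload")
assert node_a[0] == node_b[0]  # both copies are identical before the ack
```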

  14. SvSAN – Neutral Storage Host (witness) • Tie-breaker service for SvSAN mirrors • Prevents data inconsistency, AKA ‘split brain’ • This ensures – In the event of a single failure there is no interruption in service – In the event of multiple failures there is no corruption or loss of data • Local or remote – Supported as a Windows service, Linux daemon, packaged VM, or on Raspbian (Raspberry Pi) – Withstands 3000 ms latency – Up to 20% packet loss – 9 Kbps bandwidth required per SvSAN mirror • Single NSH instance for 1000s of mirrors across clusters [Diagram: two hypervisor hosts, each running VMs and an SvSAN VSA, synchronously mirroring storage, with a remote NSH witness reached over the WAN]

  18. Witness – SvSAN vs vSAN (100-site example), StorMagic SvSAN vs VMware vSAN ROBO:
      • Number of remote nodes per site: 2 vs 2
      • Witness-to-site ratio: 1:1000 vs 1:1
      • Witness nodes required for 100 sites: 1 witness vs 100 witnesses
      • Witness node vCPU: 1 vCPU per witness (100 sites: 1 vCPU) vs 2 vCPU per witness (100 sites: 200 vCPU)
      • Witness node memory: 512 MB per witness (100 sites: 512 MB) vs 8 GB per witness (100 sites: 800 GB)
      • Witness latency allowance: < 3000 ms RTT vs < 500 ms RTT
      • Witness bandwidth: 9 Kbps per mirrored datastore (100 sites: 0.9 Mbps) vs 0.24 Mbps per 10 VMs (100 sites: 24 Mbps)
      • Virtual SAN CPU: 1 vCPU per host vs 10% of the host’s total CPU
      • Virtual SAN memory: 1 GB per host (2 GB per host using SSD caching; optional memory for memory caching) vs dependent on the number of disk groups – each host must contain a minimum of 32 GB to support 7 disk groups
      Reference: VMware Virtual SAN 6.1 Stretched Cluster Bandwidth Sizing Guide – https://www.vmware.com/files/pdf/products/vsan/vmware-virtual-san-6.1-stretched-cluster-bandwidth-sizing.pdf
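
The 100-site totals above are straight multiplication of the per-witness figures; a short script makes the arithmetic explicit (the vSAN bandwidth total assumes roughly 10 VMs per site, which is what the per-10-VM figure implies):

```python
# Reproduce the 100-site witness totals from the comparison slide.
SITES = 100

# SvSAN: a single shared witness can serve up to 1,000 sites.
svsan_vcpu_total = 1 * 1                 # 1 witness x 1 vCPU            -> 1 vCPU
svsan_mem_total_mb = 1 * 512             # 1 witness x 512 MB            -> 512 MB
svsan_bw_total_mbps = SITES * 9 / 1000   # 9 Kbps per mirrored datastore -> 0.9 Mbps

# vSAN ROBO: one witness appliance per site.
vsan_vcpu_total = SITES * 2              # 100 witnesses x 2 vCPU        -> 200 vCPU
vsan_mem_total_gb = SITES * 8            # 100 witnesses x 8 GB          -> 800 GB
vsan_bw_total_mbps = SITES * 0.24        # 0.24 Mbps per ~10 VMs/site    -> 24 Mbps

print(svsan_vcpu_total, svsan_mem_total_mb, svsan_bw_total_mbps)  # 1 512 0.9
print(vsan_vcpu_total, vsan_mem_total_gb, vsan_bw_total_mbps)     # 200 800 24.0
```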

  19. SvSAN – Intelligent automated read caching & tiering • Intelligent read caching algorithm – All read I/Os are monitored and analyzed – The most frequently used data is “hot” data – Cache tiers are populated based on access frequency • Tiering – RAM: most frequently accessed data – SSD/Flash: next most frequently accessed data – HDD: infrequently accessed (“cold”) data • Sizing – Assign cache sizes to meet requirements – Grow caches as working sets change – Use any combination of memory, SSD/Flash and disk • Play to the strengths of each medium – Memory: highest IOPS – SSD/Flash and magnetic drives: progressively lower price per GB
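
Placement in a scheme like this is driven by observed read frequency: the hottest blocks sit in RAM, the next band on SSD/flash, and the cold remainder stays on disk. A toy illustration of frequency-based placement (not StorMagic's actual algorithm; the capacities are made up):

```python
# Toy frequency-based tier placement: hottest blocks to RAM, the next band to SSD,
# and the cold remainder to HDD. Counters and capacities are illustrative only.
from collections import Counter

read_counts = Counter()       # per-block read frequency, updated on every read
RAM_SLOTS, SSD_SLOTS = 2, 3   # tiny capacities, just for the example


def record_read(block_id: str) -> None:
    read_counts[block_id] += 1


def place_blocks() -> dict:
    """Assign each block a tier based on its observed read frequency."""
    ranked = [block for block, _ in read_counts.most_common()]  # hottest first
    placement = {}
    for i, block in enumerate(ranked):
        if i < RAM_SLOTS:
            placement[block] = "RAM"
        elif i < RAM_SLOTS + SSD_SLOTS:
            placement[block] = "SSD"
        else:
            placement[block] = "HDD"
    return placement


for block, reads in [("a", 50), ("b", 40), ("c", 9), ("d", 8), ("e", 7), ("f", 1)]:
    for _ in range(reads):
        record_read(block)

print(place_blocks())  # {'a': 'RAM', 'b': 'RAM', 'c': 'SSD', 'd': 'SSD', 'e': 'SSD', 'f': 'HDD'}
```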

  20. SvSAN – Management & integration • Centralized management & monitoring from the data center • vCenter integration – StorMagic dashboard – Single/multi VSA deploy – NSH deploy – VSA restore – Create, expand and migrate storage • Hyper-V integration – StorMagic deployment wizard • Monitoring – SNMP v2 & v3 – SMTP – System Center Operations Manager • Scripting toolbox – PowerShell module – Deployment, configuration, firmware upgrades – Plugin script generation

  21. SvSAN – Summary • Lightweight solution – Architected for the smallest possible footprint – Minimal resources required for VSAs and witness – Powerful features to meet higher performance requirements • Eliminate downtime – Synchronously mirrored storage across multiple servers – No single point of failure or maintenance downtime – Upgrade and replace hardware with no impact • Flexible witness – Run remotely or locally – Supported on a wide range of platforms – Lowest requirements of any solution • Centrally deploy, manage & monitor – Centrally deployed – Automated through scripting – Central management of thousands of locations

  22. Q&A and Next Steps • Further reading – An overview of SvSAN: http://stormagic.com/svsan/ – SvSAN Data Sheet: http://stormagic.com/svsan-data-sheet/ – SvSAN White Paper: http://stormagic.com/svsan-6/ • Download your free trial of SvSAN: stormagic.com/trial • Product options – SvSAN license: 2, 6, 12 and unlimited TBs – License entitlement: 2 mirrored servers – Maintenance and support: Platinum (24x7) / Gold (9x5) • For further information, please contact: sales@stormagic.com
