  1. Monthly Webinar Series: Understanding storage performance for hyperconverged infrastructure
  Luke Pruen – Technical Services Director
  “Virtual SANs made simple”

  2. Introducing StorMagic
  • Enabling HCI: Pre-configured, certified and supported by major vendors
  • Partner Network: Wherever you are, StorMagic has resellers, integrators, and server partners to meet your needs
  • Global footprint: In 72 countries, customers depend on StorMagic for server and storage infrastructure
  • 30+ verticals: Including retail, financial services, healthcare, government, education, energy, professional services, pharma, and manufacturing
  • Large and Small: Deployments from enterprises with 1000s of sites to SMEs with a single site

  3. What is Hyperconverged Infrastructure?
  “Tightly coupled compute, network and storage hardware that dispenses with the need for a regular storage area network (SAN).”
  Magic Quadrant for Integrated Systems, published October 10th, 2016

  4. There’s a lot of choice out there
  The hyperconverged market
  • The market is maturing, with many options now available
  • Be careful of pursuing a “one size fits all” approach
  Customers
  • Few customers understand their requirements
  • Often blindly deploy over-spec’d solutions
  Our take
  • Customers need to be able to measure their needs more accurately
  • Real-world data often provides surprising insight

  5. Hyperconverged Architectures: Kernel-based
  • Storage “software” is within the hypervisor
  • Pools local server storage where the hypervisor is installed
  • Presents storage over a proprietary mechanism
  • Claims to be more efficient and able to deliver higher performance
    – More efficient because it’s where the hypervisor runs
    – Fewer “hops” to the storage
  [Slide diagram: shared storage pooled from three hosts, each running storage software (SW) in the hypervisor over local SSDs]

  6. Hyperconverged Architectures: VSA-based
  • A virtual storage appliance (VSA) resides on the hypervisor
  • Host storage is assigned to the local VSA
  • Storage is generally presented as iSCSI or NFS
  • Claims to be more flexible than kernel-based models
    – Hypervisor agnostic
    – More storage flexibility
    – Easier to troubleshoot storage issues vs. hypervisor issues
  [Slide diagram: shared storage pooled from the local SSDs of three hosts]

  7. StorMagic SvSAN: Overview
  “SvSAN turns the internal disk, SSD and memory of 2 or more servers into highly available shared storage”

  8. StorMagic SvSAN: Benefits
  Availability – Today’s needs, future proofed; data & operations protected
  • No single point of failure
  • Active/Active synchronous mirroring
  • Local and stretched cluster capable
  • Split-brain risk eliminated
  • One lightweight quorum for all clusters
  • Automated deployment and recovery scripts
  • Eliminate planned and unplanned downtime
  • Non-disruptive upgrades
  Cost-Effective – Lightest footprint, lowest cost; no more physical SANs
  • Lowest CAPEX: start with only 2 servers, existing or new
  • Eliminate appliance overprovisioning
  • Eliminate storage networking components
  • Utilize the power of commodity servers
  • Significantly less CPU and memory
  • Lowest OPEX: reduced power, cooling and spares
  • Lower costs with centralized management
  Flexible – Any site, any network; flexibility and growth
  • Build-your-own hyperconverged: converge compute and storage
  • Hyperconverged or storage-only
  • Leverage any CPU and storage type
  • Hypervisor and server agnostic
  • Configure to precise IOPS & capacity
  • Auto-tier disk, SSD and memory
  Robust – Performance and scale
  • Proven at the IT edge and in the datacenter
  • From the harshest to the most controlled environments
  • Supports mission-critical applications
  • Tolerates poor, unreliable networks
  • Scale-up performance with a 2-node cluster
  • Enterprise-class management: integrates with standard tools
  • Designed for use by any IT professional

  9. Optimizing Storage: All storage is not equal
  Magnetic drives provide poor random performance
  • SATA 7.2k rpm: 75–100 IOPS
  • SAS 10k/15k rpm: 140–210 IOPS
  • Lower cost per GB, but higher cost per IOPS
  Flash and SSDs provide good random performance
  • SSD/Flash: 8.6K to 10 million IOPS
  • Lower cost per IOPS
  • High cost per GB compared to magnetic
  Memory has even better performance
  • Orders of magnitude faster than Flash/SSD
  • Much higher cost per GB compared to SSD/Flash
  • Memory is volatile and typically low in capacity
  *https://en.wikipedia.org/wiki/IOPS
  *https://en.wikipedia.org/wiki/RAM_drive
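
  To make the trade-off above concrete, here is a minimal Python sketch comparing cost per GB against cost per IOPS. All capacities, prices and IOPS figures are illustrative assumptions, not measured or vendor data:

  # Illustrative media comparison: cost per GB vs cost per IOPS.
  # Capacities, prices and IOPS figures are assumptions for the example.
  media = {
      #            (capacity GB, price USD, random IOPS)
      "SATA 7.2k": (4000,  100,     90),
      "SAS 15k":   (600,   200,    200),
      "SATA SSD":  (960,   300,  50000),
  }

  for name, (gb, usd, iops) in media.items():
      print(f"{name:>9}: ${usd / gb:6.3f}/GB  ${usd / iops:8.4f}/IOPS")

  Even with rough numbers, the inversion is clear: magnetic disk wins on cost per GB while SSD wins on cost per IOPS, which is exactly why mixing them via caching or tiering pays off.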

  10. Optimising Storage: The importance of caching
  Virtualized environments suffer from the ‘I/O blender’ effect
  • Multiple virtual machines share a set of disks
  • The result is predominantly random I/O
  • Magnetic drives provide poor random performance
  • SSD & Flash storage is ideal for these workloads, but expensive
  Working sets of data
  • Driven by workloads, which are ever changing
  • Refers to the amount of data most frequently accessed
  • Always relative to a time period
  • Working set sizes evolve as workloads change
  Caching
  • Combats the I/O blender effect without the expense of all-Flash or SSD
  • Working sets of data can be identified and elevated to cache
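
  The working-set effect is easy to sanity-check with a toy simulation. Assuming, purely for illustration, that block popularity follows a Zipf distribution, an ideal cache holding just a few percent of blocks absorbs most reads:

  import random

  # Toy model: 100,000 blocks with Zipf-like popularity (the skew is an
  # assumption, not measured data). We count what fraction of accesses land
  # in the hottest N blocks, i.e. the hit rate of an ideal cache holding
  # exactly the N most popular blocks.
  random.seed(1)
  blocks = 100_000
  weights = [1 / (rank + 1) for rank in range(blocks)]            # Zipf, s=1
  accesses = random.choices(range(blocks), weights=weights, k=200_000)

  for cache_frac in (0.01, 0.05, 0.10):
      cache_size = int(blocks * cache_frac)
      hits = sum(1 for b in accesses if b < cache_size)           # blocks are pre-sorted by popularity
      print(f"cache = {cache_frac:4.0%} of blocks -> hit rate {hits / len(accesses):.0%}")

  Under these assumptions the hottest 1% of blocks serves roughly 60% of reads, which echoes the cache hit rates reported on slide 15.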

  11. Optimising Storage: SSD/Flash caching
  SSD/Flash caching
  • Significantly improves overall I/O performance
  • Reduces the number of I/Os going directly to disk
  • Dynamic cache sizing based on read/write ratio
  Write operations
  • Data is written as variable-sized extents
  • Extents are merged and coalesced in the background
  • Data in cache is flushed to hard disk regularly in small bursts
  Read operations
  • The SvSAN algorithm identifies and promotes data based on access patterns
  • Frequently accessed data blocks are elevated to SSD/Flash
  • Least frequently accessed blocks are aged out
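
  As a generic illustration of write coalescing (a sketch of the technique, not SvSAN’s actual implementation), the Python below merges adjacent write extents so they can be flushed to disk in fewer, larger bursts:

  # Generic write-back coalescing sketch: writes land in cache as
  # (offset, length) extents; overlapping or adjacent extents are merged
  # so the flush to disk happens as fewer, larger sequential I/Os.
  def merge_extents(extents):
      """Coalesce overlapping/adjacent (offset, length) extents."""
      merged = []
      for off, length in sorted(extents):
          if merged and off <= merged[-1][0] + merged[-1][1]:   # touches previous extent
              prev_off, prev_len = merged[-1]
              merged[-1] = (prev_off, max(prev_off + prev_len, off + length) - prev_off)
          else:
              merged.append((off, length))
      return merged

  pending = [(0, 4096), (4096, 4096), (65536, 8192), (8192, 4096)]
  for off, length in merge_extents(pending):
      print(f"flush {length} bytes at offset {off}")   # first three extents merge into one 12 KiB I/O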

  12. Optimising Storage: Cold & hot data
  Intelligent read caching algorithm
  • All read I/Os are monitored and analyzed
  • Most frequently used data is “hot” data
  • Cache tiers are populated based on access frequency
  Tiering
  • RAM: most frequently accessed data
  • SSD/Flash: next most frequently accessed data
  • HDD: infrequently accessed data, the “cold” data
  Sizing
  • Assign cache sizes to meet requirements
  • Grow caches as working sets change
  • Use any combination of memory, SSD/Flash and disk
  Play to the strengths of all mediums
  • Memory: highest IOPS
  • SSD/Flash: high IOPS at a lower cost than memory
  • Magnetic drives: low price per GB
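
  A hypothetical tier-placement routine shows the idea: rank blocks by access frequency, then map the hottest few to RAM, the next band to SSD, and the remainder to HDD. The thresholds and trace below are made up for the example:

  from collections import Counter

  # Illustrative tier placement (not SvSAN's code): rank blocks by read
  # frequency, then assign RAM to the hottest, SSD to the next band, HDD
  # to the rest.
  def place_tiers(access_log, ram_blocks, ssd_blocks):
      ranked = [blk for blk, _ in Counter(access_log).most_common()]
      tiers = {}
      for i, blk in enumerate(ranked):
          tiers[blk] = "RAM" if i < ram_blocks else "SSD" if i < ram_blocks + ssd_blocks else "HDD"
      return tiers

  log = [7, 7, 7, 3, 3, 9, 1, 7, 3, 5]          # hypothetical block access trace
  print(place_tiers(log, ram_blocks=1, ssd_blocks=2))
  # {7: 'RAM', 3: 'SSD', 9: 'SSD', 1: 'HDD', 5: 'HDD'}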

  13. Industry performance numbers
  Lab produced
  • Numbers produced under strict conditions representing peak IOPS
  • Random workloads focus on small block sizes to produce BIG numbers
  • Sequential workloads focus on large block sizes to show BIG throughput
  • Set unrealistic expectations
  Typical benchmark operations
  • All Read: 4KiB 100% random read
  • Mixed Read/Write: 4KiB 70/30 random read/write
  • Sequential Read: 256KiB
  • Sequential Write: 256KiB
  The real world
  • Multiple VMs running numerous mixed workloads
  • AD, DNS, DHCP: low IOPS requirement
  • Database, email and application servers: higher IOPS requirement
  • Generally sharing the same storage subsystem
  Example I/O block sizes (SQL Server I/O block size reference table*)
  Operation                    I/O block size
  Transaction log write        512 bytes – 60 KB
  Checkpoint/Lazywrite         8 KB – 1 MB
  Read-Ahead Scans             128 KB – 512 KB
  Bulk Loads                   256 KB
  Backup/Restore               1 MB
  ColumnStore Read-Ahead       8 MB
  File Initialization          8 MB
  In-Memory OLTP Checkpoint    1 MB
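
  One way to benchmark more realistically than a pure 4KiB 100% random read is a mixed read/write profile. The sketch below writes a fio job file for a 70/30 mix at 4KiB; the filename, size and runtime are placeholders to adapt to your environment:

  # Sketch: generate a fio job closer to mixed VM traffic than a pure
  # 4KiB random-read test. Path, size and runtime are placeholders.
  job = """\
  [mixed-vm-workload]
  ioengine=libaio
  direct=1
  rw=randrw
  rwmixread=70
  bs=4k
  iodepth=32
  numjobs=4
  runtime=300
  time_based=1
  size=10G
  filename=/mnt/datastore/fio-testfile
  """

  with open("mixed.fio", "w") as f:
      f.write(job)
  print("run with: fio mixed.fio")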

  14. How much performance is enough?
  What do you need?
  • Understand and document your storage requirements
  • Measure the IOPS and latency requirements of the current environment
  • What is the expected lifecycle of the solution?
  How do you choose?
  • Don’t base your decision on a 4KiB 100% random read workload
  • Use realistic workloads when evaluating
  • Does the management and functionality meet your needs?
  What matters
  • Meets your current performance & capacity requirements
  • Meets your future performance & capacity requirements
  • Meets your deployment, management and availability requirements
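
  A back-of-envelope way to fold the solution lifecycle into sizing is to project today’s measured peak IOPS forward at an assumed growth rate; the 20% per year below is purely an assumption, so substitute your own measured trend:

  # Back-of-envelope lifecycle sizing sketch. Both numbers are assumptions.
  current_iops = 8_000          # hypothetical measured peak
  growth = 0.20                 # assumed annual workload growth
  for year in range(6):         # e.g. a 5-year hardware lifecycle
      print(f"year {year}: ~{current_iops * (1 + growth) ** year:,.0f} IOPS")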

  15. Customer data analysis
  Real-life data
  • Real customer data collected and analysed
  • Exact data patterns simulated and replayed
  • Accurate performance expectations under their workloads
  Results
  • The average customer would benefit from caching/tiering
  • Up to 70% of I/O satisfied from read cache
  • A small amount of cache makes a big difference
  Conclusion
  • Few customers have performed this exercise
  • Over-provisioned hardware is common
  • Significant cost savings were identified
  Customer examples
  • UK: oil & gas
  • US: on-demand consumer service
  • US: national retailer
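
  The kind of trace replay described above can be approximated with a simulated read cache: feed a captured block trace through an LRU cache of a given size and report the hit ratio. The trace below is a made-up placeholder; in practice it would come from captured customer I/O:

  from collections import OrderedDict

  # Replay a block trace through a simulated LRU read cache and report the
  # hit ratio. Trace and cache size are placeholders for real captured data.
  def lru_hit_ratio(trace, cache_size):
      cache, hits = OrderedDict(), 0
      for block in trace:
          if block in cache:
              hits += 1
              cache.move_to_end(block)         # refresh recency
          else:
              cache[block] = True
              if len(cache) > cache_size:
                  cache.popitem(last=False)    # evict least recently used
      return hits / len(trace)

  trace = [1, 2, 1, 3, 1, 2, 4, 1, 2, 5, 1, 2]
  print(f"hit ratio: {lru_hit_ratio(trace, cache_size=3):.0%}")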
