Introducing Smart Data Acceleration Interface (SDXI)
  1. Introducing Smart Data Acceleration Interface (SDXI)
     Shyamkumar Iyer, Distinguished Member of Technical Staff, Dell Technologies
     Interim Chair, SNIA SDXI TWG
     10-28-2020

  2. What is SNIA? A community of storage professionals and technical experts. SNIA is a non-profit global organization dedicated to developing standards and education programs to advance storage and information technology. snia.org @SNIA

  3. Work Accomplished Through SNIA
     Standards Development and Adoption
     • Accepted and ratified spec development process
     • Submissions for International Standard ratification (ISO/IEC)
     • Develop open source software to accelerate adoption
     Technology Acceleration and Promotion
     • Special Interest Groups to promote emerging technologies
     • Multi-vendor collaboration to accelerate adoption
     • Cross-industry alliances and engagements
     Global Vendor-Neutral Education
     • Host worldwide storage developer conferences
     • Organize storage technology summits
     • Deliver vendor-neutral webcasts and technical podcasts
     • Publish technology white papers, articles and blogs
     • Vendor-neutral plugfests, hack-a-thons, conformance and interoperability testing
     • SNIA GitHub open source repositories

  4. SNIA’s Technical Work is in Eight Focus Areas

  5. Agenda
     • The problem and the need for a solution
     • Introducing SDXI

  6. The problem and the need for a solution

  7. Trends
     • Core counts are increasing to enable compute scaling
     • Compute density is on the rise
     • Converged and hyperconverged storage appliances are enabling new workloads on server-class systems
     • Data locality is important
     • Single-threaded performance is under pressure
     • I/O-intensive workloads can take away available compute CPU cycles
     • Network and storage workloads consume compute cycles: data movement, encryption, decryption, compression

  8. Need for Accelerated Intra-Host Data Movement
     • Each intra-host exchange can comprise multiple memory buffer copies (or transformations)
     • Generally implemented with layers of software stacks: a network/storage stack (e.g. storage VMs), a vSwitch plus hypervisor, and a compute stack (e.g. compute VMs), all inside the host
     • Kernel-to-I/O copies can leverage I/O-specific hardware memory copy
     • But SW-to-SW copies usually rely on per-core, synchronous, software-only (CPU) memory copies
     [Diagram: application workload demands converge on the host. Remote storage-cluster traffic (TCP/IP, RoCE, iWarp, NVMe over 10/25/40/100GbE) is reducing storage network latency while increasing bandwidth demands; local storage (PMEM, NVDIMM, new memory technologies) is reducing storage latencies while increasing the capacity footprint of local storage. Workload congestion arises at each intra-host hop: network/storage stack, vSwitch + hypervisor, compute stack, host network uplink.]
     Accelerating intra-host traffic is now critical to server performance.
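
To make the multiple-copy point concrete, here is a minimal C sketch (the layer names are generic stand-ins, not any specific product's stack) of one intra-host exchange that crosses two software boundaries:

    #include <string.h>
    #include <stdint.h>

    /* Generic layered stack; each boundary crossing re-copies the
     * payload with the CPU. */
    struct buf { uint8_t data[4096]; };

    static void intra_host_exchange(struct buf *dst, const struct buf *src)
    {
        struct buf vswitch_buf, host_buf;

        memcpy(&vswitch_buf, src, sizeof *src);              /* copy 1: guest -> vSwitch     */
        memcpy(&host_buf, &vswitch_buf, sizeof vswitch_buf); /* copy 2: vSwitch -> host stack */
        memcpy(dst, &host_buf, sizeof host_buf);             /* copy 3: host stack -> peer    */
    }

One logical exchange thus costs several full-buffer, synchronous CPU copies; an offload engine would replace these with asynchronous hardware moves.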

  9. Current data movement standard: a stable CPU ISA for SW-based memory copies
     • Takes away from application performance
     • Software overhead to provide context isolation
     • Synchronous SW copies stall applications
     • Less portable to different ISAs (Instruction Set Architectures)
     • Finely tuned CPU data movement algorithms can break with new microarchitectures
     [Diagram: Application (Context A) and Application (Context B) exchange data through DRAM regions in the system physical address space, with the CPU performing the copies across SW context isolation layers.]
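
To ground the last two bullets, the sketch below shows the kind of finely tuned, ISA-specific copy the slide warns about: a minimal x86/SSE2 non-temporal copy (assuming 16-byte-aligned buffers and a length that is a multiple of 16). Exactly this sort of tuning does not carry to other ISAs and can regress on new microarchitectures:

    #include <emmintrin.h>  /* SSE2 intrinsics; x86-only */
    #include <stddef.h>

    /* Non-temporal stores bypass the cache, which helps for large
     * streaming copies on some microarchitectures and hurts on
     * others; the algorithm must be re-tuned per CPU generation. */
    static void tuned_copy(void *dst, const void *src, size_t len)
    {
        __m128i       *d = (__m128i *)dst;
        const __m128i *s = (const __m128i *)src;
        for (size_t i = 0; i < len / 16; i++)
            _mm_stream_si128(&d[i], _mm_load_si128(&s[i]));
        _mm_sfence();  /* order the non-temporal stores before reuse */
    }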

  10. Offload DMA engines: a new concept?
     • Fast DMA offload engines exist, but they come with:
       - Vendor-specific HW
       - Vendor-specific drivers and APIs
       - Vendor-specific work submission/completion models
     • Direct access by user-level software is difficult
     • Limited usage models
     • Vendor-specific DMA states, which make it harder to abstract/virtualize the engine and migrate the work to other hosts
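
The fragmentation can be illustrated with two hypothetical vendor APIs; every name below is invented for this sketch and belongs to no real library:

    #include <stdint.h>
    #include <stddef.h>

    /* Vendor A: ring-based submission with an explicit doorbell. */
    int  vendorA_ring_push(struct vendorA_desc *d);
    void vendorA_doorbell(void);

    /* Vendor B: one-shot submission, completion via callback. */
    int  vendorB_submit_copy(uint64_t src, uint64_t dst, size_t len,
                             void (*done)(void *ctx), void *ctx);

    /* An application must carry a separate code path per vendor, and
     * the in-flight DMA state is opaque, so a hypervisor cannot
     * snapshot it and migrate the workload to another host. */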

  11. Solution Requirements
     1. Need to offload I/O from compute CPU cycles
     2. Need architectural stability
     3. Enable application/VM acceleration, but help migration from existing SW stacks
     4. Create abstractions in the control path for scale and management
     5. Enable performance in the data path with offloads

  12. Emerging Server & Storage Architectures
     Looking into the horizon…
     1. Memory-centric architectures
     2. New memory interconnects
        a. CXL
        b. Gen-Z
     3. Varied memory types
     4. Heterogeneous architectures are becoming mainstream
     5. The need to democratize data movement

  13. Emerging Needs: New Memory Architectures
     [Diagram: the SW-copy picture from slide 9, now with a data mover accelerator (CPU offloaded) alongside CPU Family A. Call-outs: Architectural Stability, Direct User-Mode Acceleration, Security.]

  14. Emerging Needs: New Memory Architectures
     We are entering a tiered memory world!
     [Diagram: as on slide 13, with the system physical address space now spanning DRAM, SCM (Storage Class Memory), MMIO (Memory-Mapped I/O), and CXL/Fabric-Attached Memory/Gen-Z. Call-outs: Architectural Stability, Direct User-Mode Acceleration, Security.]

  15. Architectural Stability
     Standard CPU-agnostic interface
     [Diagram: as on slide 14, with data mover accelerators attached to both CPU Arch A and CPU Arch B behind the same interface.]

  16. Enabling Accelerators
     Standard interface for different accelerators
     [Diagram: accelerators for CPU Family A, CPU Family B, GPU, FPGA, and Smart IO all reach the tiered memory (DRAM, SCM, MMIO, CXL/Fabric-Attached Memory/Gen-Z) through the same interface.]

  17. The need for an industry standard
     1. Leverage a standard specification
     2. Innovate around the spec
     3. Add incremental data acceleration features
     [Diagram: the tiered-memory, multi-accelerator picture from slides 14-16, with call-outs for Architectural Stability and Security.]

  18. Agenda
     • The problem and the need for a solution
     • Introducing SDXI

  19. Introducing SNIA SDXI

  20. Introducing SNIA SDXI TWG
     SDXI Charter:
     • Develop and standardize a memory-to-memory data movement and acceleration interface that is:
       - Extensible
       - Forward-compatible
       - Independent of I/O interconnect technology
     • Dell, AMD, and VMware contributed the starting point for the spec
     • 13 TWG member companies and growing…

  21. Design Tenets
     • Data movement between different address spaces, including user address spaces and different virtual machines
     • Data movement without mediation by privileged software, once a connection has been established
     • Allows abstraction or virtualization by privileged software
     • Capability to quiesce, suspend, and resume the architectural state of a per-address-space data mover, enabling "live" workload or virtual machine migration between servers (see the sketch after this list)
     • Forward and backward compatibility across future specification revisions, with interoperability between software and hardware
     • Ability to incorporate additional offloads in the future, leveraging the architectural interface
     • Concurrent DMA model
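
A rough sketch of what the quiesce/suspend/resume tenet implies for software; every structure and step below is an assumption for illustration, not the SDXI specification's actual state definition:

    #include <stdint.h>

    /* Hypothetical per-address-space data-mover state. */
    struct mover_state {
        uint64_t ring_base;    /* base of the context's descriptor ring */
        uint32_t read_index;   /* next descriptor HW will consume       */
        uint32_t write_index;  /* next slot SW will produce into        */
        uint32_t error_status; /* sticky error bits, if any             */
    };

    /* Migration flow implied by the tenet (steps in comments):
     *   1. quiesce(ctx)       -- stop fetching new descriptors
     *   2. save(ctx, &state)  -- capture the architectural state
     *   3. ...copy state and memory to the destination host...
     *   4. restore(ctx2, &state); resume(ctx2)
     * Because the state is architecturally defined rather than
     * vendor-specific, step 4 can run on a different vendor's device. */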

  22. Baremetal Stack View
     • A user-mode application owns a producer context's descriptor ring in its user address space; a framework-specific interface enables the application with a descriptor ring and context-specific structures
     • An OS-specific interface enables a user-mode driver (library) with direct, secure access to the hardware
     • A kernel-mode application owns a producer context's descriptor ring in kernel address space through the kernel-mode driver
     • The kernel-mode driver (1) initializes the SDXI HW and (2) discovers its capabilities
     A sketch of the producer-side submission flow follows.
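
Below is a minimal C sketch of that flow. The descriptor layout, opcode value, and register names are invented for illustration, not the SDXI specification's actual formats; the point is that submission needs no kernel transition once the ring and doorbell are mapped:

    #include <stdint.h>

    /* Hypothetical fixed-size copy descriptor. */
    struct copy_desc {
        uint32_t opcode;       /* e.g. COPY                               */
        uint32_t flags;
        uint64_t src_addr;     /* source in this context's address space  */
        uint64_t dst_addr;     /* destination address                     */
        uint64_t len;          /* bytes to move                           */
        uint8_t  pad[32];      /* pad to a fixed descriptor size          */
    };

    struct ring {
        struct copy_desc  *slots;     /* mapped into this address space   */
        uint32_t           num_slots;
        uint32_t           write_index;
        volatile uint32_t *doorbell;  /* MMIO page mapped by the driver   */
    };

    /* Producer: fill the next slot, then ring the doorbell. */
    static void submit_copy(struct ring *r, uint64_t src, uint64_t dst,
                            uint64_t len)
    {
        struct copy_desc *d = &r->slots[r->write_index % r->num_slots];
        d->opcode   = 1;  /* hypothetical COPY opcode */
        d->flags    = 0;
        d->src_addr = src;
        d->dst_addr = dst;
        d->len      = len;
        __sync_synchronize();          /* descriptor visible before doorbell */
        *r->doorbell = ++r->write_index;
    }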

  23. Direct HW Access Across Memory Tiers
     DRAM, PMEM, MMIO, and fabric memory are all source and destination memory targets for data transfer in the system physical address space. A minimal request sketch follows.
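
A minimal sketch of the implication, with an invented request shape (not the SDXI descriptor format):

    #include <stdint.h>

    /* Because DRAM, PMEM, MMIO, and fabric-attached memory all appear
     * in the system physical address space, one request shape covers
     * every tier pair, e.g. DRAM -> PMEM or fabric memory -> DRAM;
     * the data mover needs no tier-specific interface. */
    struct xfer_request {
        uint64_t src;   /* source address, in any tier                */
        uint64_t dst;   /* destination address, possibly another tier */
        uint64_t len;   /* bytes to move                              */
    };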

  24. Scale Baremetal Apps: Multi-Address Space
     [Diagram: user-mode applications in address spaces A and B, each with a user-mode driver (library), and a kernel-mode application with the kernel-mode driver, all sharing the SDXI HW through one physical function (PF) and multiple virtual functions (VFs).]
     A sketch of the per-process setup follows.
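
One plausible way this surfaces to software, sketched with an invented device node and mapping layout (not a real driver interface): each process opens its own function-backed context, so address spaces A and B never share a descriptor ring:

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Per-process setup: each address space gets its own VF-backed
     * context; isolation between contexts is enforced by the hardware
     * rather than by software copies. Assumes ring_bytes is
     * page-aligned so it can double as the doorbell mmap offset. */
    int open_mover_context(void **ring, void **doorbell, size_t ring_bytes)
    {
        int fd = open("/dev/sdxi0", O_RDWR);   /* hypothetical node */
        if (fd < 0)
            return -1;

        /* The ring lives in this process's address space... */
        *ring = mmap(NULL, ring_bytes, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
        /* ...and the doorbell page is a per-context MMIO mapping. */
        *doorbell = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, ring_bytes);
        return fd;
    }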

  25. Scale with Compute Virtualization: Multi-VM Address Space
     [Diagram: VM A and VM B each run a user-mode app over a user-mode driver (library), a guest kernel-mode application, and a guest kernel-mode driver bound to an SDXI virtual device. A per-VM connection manager and the hypervisor kernel-mode driver multiplex the virtual devices onto the SDXI device.]
