The Direct Access File System (DAFS)


  1. The Direct Access File System (DAFS)
     Matt DeBergalis, Peter Corbett, Steve Kleiman, Arthur Lent, Dave Noveck, Tom Talpey, Mark Wittle
     Network Appliance, Inc.
     Usenix FAST '03
     Tom Talpey, tmt@netapp.com

  2. Outline
     • DAFS
     • DAT / RDMA
     • DAFS API
     • Benchmark results

  3. DAFS – Direct Access File System
     • File access protocol, based on NFSv4 and RDMA, designed specifically for high-performance data center file sharing (local sharing)
     • Low latency, high throughput, and low overhead
     • Semantics for clustered file sharing environments

  4. DAFS Design Points
     • Designed for high performance
       – Minimize client-side overhead
       – Base protocol: remote DMA, flow control
       – Operations: batch I/O, cache hints, chaining
     • Direct application access to transport resources
       – Transfers file data directly to application buffers
       – Bypasses operating system overhead
       – File semantics
     • Improved semantics to enable local file sharing
       – Superset of CIFS, NFSv3, NFSv4 (and local file systems!)
       – Consistent high-speed locking
       – Graceful client and server failover, cluster fencing
     • http://www.dafscollaborative.org

  5. DAFS Protocol
     • Session-based
     • Strong authentication
     • Optimized message format
     • Multiple data transfer models
     • Batch I/O
     • Cache hints
     • Chaining

  6. DAFS Protocol: Enhanced Semantics
     • Rich locking
     • Cluster fencing
     • Shared key reservations
     • Exactly-once failure semantics
     • Append mode, create-unlinked, delete-on-last-close

  7. DAT – Direct Access Transport
     • Common requirements and an abstraction of services for RDMA (Remote Direct Memory Access)
       – Portable, high-performance transport underpinning for DAFS and applications
       – Defines communication endpoints, transfer semantics, memory description, signalling, etc.
     • Transfer models (sketched in the example after this slide):
       – Send (like traditional network flow)
       – RDMA Write (write directly to advertised peer memory)
       – RDMA Read (read from advertised peer memory)
     • Transport independent
       – 1 Gb/s VI/IP, 10 Gb/s InfiniBand, future RDMA over IP
     • http://www.datcollaborative.org
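The three transfer models above differ mainly in who names the memory: a Send consumes whatever receive descriptor the peer has posted, while RDMA Write and RDMA Read move data to or from a region the peer has registered and advertised. The C sketch below is only an illustration of that division of labor; the types and functions (dat_region, dat_register, dat_send, dat_rdma_write, dat_rdma_read) are local stand-ins, not the real uDAPL or VIPL interfaces.

/*
 * Local stand-ins for DAT concepts; not the real uDAPL/VIPL API.
 * The point is the division of labor: Send delivers a small message into
 * whatever receive descriptor the peer has posted, while RDMA Write and
 * RDMA Read move bulk data directly to/from memory the peer has
 * registered and advertised, with no copy on the peer's CPU.
 */
#include <stdint.h>
#include <stdio.h>

/* A registered ("pre-wired") memory region, as the NIC would see it. */
typedef struct {
    void    *addr;   /* start of the pinned buffer                  */
    size_t   len;    /* length in bytes                             */
    uint64_t rkey;   /* key the remote peer must quote to access it */
} dat_region;

/* Register (pin) a buffer so an RDMA NIC may DMA into or out of it. */
static dat_region dat_register(void *buf, size_t len) {
    dat_region r = { buf, len, 0x1234 }; /* key issued by the NIC in reality */
    return r;
}

/* Send model: small message, consumes a peer receive descriptor; the
 * peer chooses where the data lands. */
static void dat_send(const void *msg, size_t len) {
    (void)msg;
    printf("SEND %zu bytes (lands in a posted receive descriptor)\n", len);
}

/* RDMA Write model: initiator pushes data straight into the advertised
 * peer region; no receive descriptor, no peer-side copy. */
static void dat_rdma_write(const dat_region *peer, const void *src, size_t len) {
    (void)src;
    printf("RDMA WRITE %zu bytes to peer addr %p, rkey %#llx\n",
           len, peer->addr, (unsigned long long)peer->rkey);
}

/* RDMA Read model: initiator pulls data straight out of the advertised
 * peer region into a local buffer. */
static void dat_rdma_read(const dat_region *peer, void *dst, size_t len) {
    (void)dst;
    printf("RDMA READ %zu bytes from peer addr %p, rkey %#llx\n",
           len, peer->addr, (unsigned long long)peer->rkey);
}

int main(void) {
    static char client_buf[64 * 1024];  /* client application buffer     */
    static char server_buf[64 * 1024];  /* what a server might read into */

    /* 1. The client registers its application buffer... */
    dat_region r = dat_register(client_buf, sizeof client_buf);

    /* 2. ...and advertises it by sending the (addr, len, rkey) triple
     *    inline in a request message. */
    dat_send(&r, sizeof r);

    /* 3. The peer can now move bulk data without the client's CPU: */
    dat_rdma_write(&r, server_buf, sizeof server_buf); /* e.g. servicing a direct read  */
    dat_rdma_read(&r, server_buf, sizeof server_buf);  /* e.g. servicing a direct write */
    return 0;
}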

  8. DAFS Inline Read
     [Diagram: inline read message flow between client and server. The client posts a READ_INLINE request from a send descriptor; the server receives it into a receive descriptor, then returns the data from its buffer inline in a REPLY, which lands in the client's receive descriptor and is copied into the application buffer.]

  9. DAFS Direct Read
     [Diagram: direct read message flow. The client posts a READ_DIRECT request from a send descriptor; the server RDMA Writes the data from its buffer directly into the client's application buffer, then sends a REPLY to signal completion.]

  10. DAFS Inline Write
      [Diagram: inline write message flow. The client sends a WRITE_INLINE request carrying the data from the application buffer via a send descriptor; the server receives it into a receive descriptor, moves the data into its buffer, and returns a REPLY.]

  11. DAFS Direct Write
      [Diagram: direct write message flow. The client sends a WRITE_DIRECT request from a send descriptor; the server RDMA Reads the data directly from the client's application buffer into its own buffer, then sends a REPLY.]
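Slides 8–11 contrast inline transfers, where the data rides in the request or reply message itself, with direct transfers, where the client registers and advertises its application buffer and the server moves the data by RDMA. Below is a hypothetical client-side read dispatch under the assumption that small reads go inline and large ones go direct; the names dafs_read_inline and dafs_read_direct, the region_desc layout, and the 8 KB threshold are illustrative, not taken from the DAFS specification.

/* Hypothetical client-side read dispatch: names and the 8 KB threshold are
 * illustrative assumptions, not taken from the DAFS specification. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define INLINE_MAX (8 * 1024)  /* assumed cutoff between inline and direct */

typedef struct { uint64_t addr, len, rkey; } region_desc; /* advertised buffer */

/* Inline read: data returns inside the REPLY and is copied into buf. */
static size_t dafs_read_inline(int fh, void *buf, size_t len, uint64_t off) {
    (void)buf;
    printf("READ_INLINE  fh=%d off=%llu len=%zu (data copied from REPLY)\n",
           fh, (unsigned long long)off, len);
    return len;
}

/* Direct read: the buffer must already be registered; the server RDMA
 * Writes into it and the REPLY only signals completion. */
static size_t dafs_read_direct(int fh, const region_desc *r, uint64_t off) {
    printf("READ_DIRECT  fh=%d off=%llu len=%llu (server RDMA Write, no copy)\n",
           fh, (unsigned long long)off, (unsigned long long)r->len);
    return (size_t)r->len;
}

static size_t dafs_read(int fh, void *buf, size_t len, uint64_t off) {
    if (len <= INLINE_MAX)
        return dafs_read_inline(fh, buf, len, off);

    /* For large transfers, describe the registered buffer so the server
     * can place the data directly into it. */
    region_desc r = { (uint64_t)(uintptr_t)buf, len, 0x1234 /* from registration */ };
    return dafs_read_direct(fh, &r, off);
}

int main(void) {
    static char small[4096], big[256 * 1024];
    dafs_read(3, small, sizeof small, 0);  /* goes inline */
    dafs_read(3, big, sizeof big, 4096);   /* goes direct */
    return 0;
}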

  12. DAFS-enabled Applications
      [Diagram: three client architectures, each layered over a DAFS library, a DAT provider library, an RDMA NIC driver, and an RDMA NIC.]
      • Raw device adapter: kernel-level plug-in that looks like a raw disk. An unchanged application uses standard disk I/O calls, gets very limited access to DAFS features, and sees performance similar to direct-attached disk.
      • Kernel file system: kernel-level plug-in that is a peer to the local file system. An unchanged application uses standard file I/O semantics, gets limited access to DAFS features, and sees performance similar to a local file system.
      • User-level library: a modified application links the DAFS and DAT provider libraries in user space and calls the DAFS API directly, gaining full access to DAFS semantics and the best performance. The paper focuses on this style.

  13. DAFS API
      • File based: exports DAFS semantics
      • Designed for highest application performance
      • Lowest client CPU requirements of any I/O system
      • Rich semantics that meet or exceed local file system capabilities
      • Portable and consistent interface and semantics across platforms
        – No need for different mount options, caching policies, client-side SCSI commands, etc.
        – The DAFS API is completely specified in an open standard document, not in OS-specific documentation
      • Operating system avoidance

  14. The DAFS API
      • Why a new API?
        – Backward compatibility with POSIX is fruitless
          • File descriptor sharing, signals, fork()/exec()
        – Performance
          • RDMA (memory registration), completion groups
        – New semantics
          • Batch I/O, cache hints, named attributes, open with key, delete on last close
        – Portability
          • OS independence and semantic consistency

  15. Key DAFS API Features
      • Asynchronous (see the sketch after this slide)
        – High-performance interfaces support native asynchronous file I/O
        – Many I/Os can be issued and awaited concurrently
      • Memory registration
        – Efficiently prewires application data buffers, permitting RDMA (direct data placement)
      • Extended semantics
        – Batch I/O, delete on last close, open with key, cluster fencing, locking primitives
      • Flexible completion model
        – Completion groups segregate related I/O
        – Applications can wait on specific requests, any of a set, or any number of a set
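One plausible shape for the features listed above: register a buffer once, issue several asynchronous reads against it, then collect completions from a completion group. The names (dafs_mem_register, dafs_async_read, dafs_cg_wait) and the toy completion bookkeeping are stand-ins, not the signatures defined in the DAFS API document.

/* Sketch of the DAFS API completion model with made-up names: register a
 * buffer once, issue several asynchronous reads against it, then collect
 * completions from a completion group. */
#include <stdio.h>
#include <stddef.h>

#define NREQS 4
#define BLK   (64 * 1024)

typedef struct { int outstanding; } completion_group;
typedef struct { void *addr; size_t len; } mem_handle;

static mem_handle dafs_mem_register(void *buf, size_t len) {
    /* Pins the pages so the NIC can place data directly (RDMA). */
    mem_handle h = { buf, len };
    return h;
}

static void dafs_async_read(int fh, mem_handle *h, size_t off_in_buf,
                            size_t len, long long file_off,
                            completion_group *cg) {
    /* Issue only: returns immediately, completion reported via cg. */
    (void)fh; (void)h; (void)off_in_buf; (void)len; (void)file_off;
    cg->outstanding++;
}

static int dafs_cg_wait(completion_group *cg, int min_completions) {
    /* Block until at least min_completions requests in the group finish.
     * Here the "I/O" completes instantly; a real client would sleep. */
    int done = cg->outstanding < min_completions ? cg->outstanding
                                                 : min_completions;
    cg->outstanding -= done;
    return done;
}

int main(void) {
    static char buf[NREQS * BLK];
    completion_group cg = { 0 };

    /* Register (pre-wire) the whole buffer once, up front. */
    mem_handle h = dafs_mem_register(buf, sizeof buf);

    /* Issue NREQS reads back to back without waiting in between. */
    for (int i = 0; i < NREQS; i++)
        dafs_async_read(3 /* file handle */, &h, (size_t)i * BLK, BLK,
                        (long long)i * BLK, &cg);

    /* Wait for all of them; "any of a set" or "any number of a set"
     * would just pass a smaller min_completions. */
    int done = dafs_cg_wait(&cg, NREQS);
    printf("%d of %d reads complete\n", done, NREQS);
    return 0;
}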

  16. Key DAFS API Features
      • Batch I/O (illustrated after this slide)
        – Essentially free I/O: amortizes the cost of I/O issue over many requests
        – Asynchronous notification of any number of completions
        – Scatter/gather of file regions and memory regions independently
        – Support for high-latency operations
        – Cache hints
      • Security and authentication
        – Credentials for multiple users
        – Varying levels of client authentication: none, default, plaintext password, HOSTKEY, Kerberos V, GSS-API
      • Abstraction
        – Server discovery, transient failure and recovery, failover, multipathing
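To make the batch I/O bullet concrete, here is a hypothetical shape for a batched submission in which one call carries many independent (file region, memory region) pairs; the dafs_batch_submit name and the batch_entry layout are assumptions for illustration only, not the interface in the DAFS API specification.

/* Hypothetical shape of a batch I/O submission: one call carries many
 * independent (file region -> memory region) pairs. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int       fh;        /* file handle                    */
    long long file_off;  /* file region: offset and length */
    size_t    len;
    void     *mem;       /* memory region it maps to       */
} batch_entry;

/* Submit the whole batch in one request; completions would be reported
 * asynchronously as they happen (not modeled here). */
static void dafs_batch_submit(const batch_entry *ents, int n) {
    for (int i = 0; i < n; i++)
        printf("batch[%d]: fh=%d file_off=%lld len=%zu -> %p\n",
               i, ents[i].fh, ents[i].file_off, ents[i].len, ents[i].mem);
}

int main(void) {
    static char a[8192], b[8192], c[8192];

    /* File regions and memory regions are scattered/gathered
     * independently: entries may touch different files, offsets need not
     * be contiguous, and buffers need not be adjacent. */
    batch_entry batch[] = {
        { 3, 0,         sizeof a, a },
        { 3, 1 << 20,   sizeof b, b },
        { 4, 64 * 1024, sizeof c, c },
    };
    dafs_batch_submit(batch, 3);
    return 0;
}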

  17. Benchmarks
      • Microbenchmarks to measure throughput and per-operation cost of DAFS versus traditional network I/O
      • Application benchmark to demonstrate the value of modifying an application to use the DAFS API

  18. Benchmark Configuration
      • User-space DAFS library, VI provider
      • NetApp F840 server, fully cached workload
        – Adapters (GbE):
          • Intel PRO/1000
          • Emulex GN9000 VI/TCP
        – NFSv3/UDP, DAFS
      • Sun 280R client
        – Adapters:
          • Sun “Gem 2.0”
          • Emulex GN9000 VI/TCP
      • Point-to-point connections

  19. Microbenchmarks
      • Measures read performance
      • NFS (kernel) versus DAFS (user space)
      • Asynchronous and synchronous
      • Throughput versus block size
      • Throughput versus CPU time
      • DAFS advantages are evident:
        – Increased throughput
        – Constant overhead per operation

  20. Microbenchmark Results
      [Results charts not reproduced in this transcript.]

  21. Application (GNU gzip)
      • Demonstrates the benefit of user I/O parallelism
      • Read, compress, and write a 550 MB file
      • gzip modified to use the DAFS API (loop structure sketched after this slide)
        – Memory preregistration, asynchronous read and write
      • 16 KB block size
      • 1 CPU, 1 process: DAFS advantage
      • 2 CPUs, 2 processes: DAFS 2x speedup
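The slide gives only the ingredients (preregistration plus asynchronous read and write); one plausible way to arrange them is a double-buffered loop that keeps a 16 KB read in flight while the previous block is compressed and written. The sketch below shows that loop shape only: read_async, wait_read, write_async, and compress_block are placeholders, and a real version would also double-buffer the output for the asynchronous write.

/* Double-buffered read/compress/write loop in the style of the modified
 * gzip: one 16 KB read is always in flight while the previous block is
 * being compressed. All I/O and compression calls here are placeholders. */
#include <stdio.h>
#include <stddef.h>
#include <string.h>

#define BLK (16 * 1024)   /* block size used in the benchmark */

/* Placeholder asynchronous I/O: issue returns immediately, wait returns
 * the number of bytes transferred (0 at end of file). */
static void   read_async(int buf_id, long long off)  { (void)buf_id; (void)off; }
static size_t wait_read(int buf_id)                  { (void)buf_id; return 0; }
static void   write_async(const void *out, size_t n) { (void)out; (void)n; }
static size_t compress_block(const char *in, size_t n, char *out) {
    memcpy(out, in, n);   /* stand-in for the deflate step */
    return n;
}

int main(void) {
    static char in[2][BLK];    /* both buffers would be pre-registered    */
    static char out[BLK * 2];  /* output buffer (a real version would     */
                               /* double-buffer this too)                 */
    long long off = 0;
    int cur = 0;

    read_async(cur, off);                    /* prime the pipeline        */
    for (;;) {
        size_t n = wait_read(cur);           /* block for current read    */
        if (n == 0)
            break;                           /* end of the 550 MB input   */

        off += (long long)n;
        read_async(cur ^ 1, off);            /* next read overlaps ...    */

        size_t m = compress_block(in[cur], n, out);  /* ... this work     */
        write_async(out, m);                 /* and the output write      */

        cur ^= 1;                            /* swap buffers              */
    }
    printf("done at offset %lld\n", off);
    return 0;
}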

  22. GNU gzip Runtimes
      [Runtime charts not reproduced in this transcript.]

  23. Conclusion
      • The DAFS protocol enables high-performance local file sharing
      • The DAFS API leverages the benefits of user-space I/O
      • The combination yields significant performance gains for I/O-intensive applications
