

  1. Client-Side Direct I/O for NFS
     Mike Kupfer, kupfer@Eng.Sun.COM
     28 February 1997, Connectathon 1997

  2. Disclaimer
     This is not a product announcement.

  3. Overview
     • Background
     • Changes
     • Performance Results
     • Future Work, Issues

  4. Background

  5. The Benchmark
     What:
     • sequential I/O: mkfile a 60 MB file, then dd it to /dev/null (a minimal C sketch of the read half appears below)
     • unmount on client and server to flush caches
     Why:
     • LADDIS (SPEC SFS) doesn't measure the client
     • LADDIS measures aggregate throughput, not point-to-point
       - expect 6+ MB/s on SS10/20 with Fast Ethernet, only getting 5 MB/s (up from 3.4 MB/s)
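
A minimal user-level sketch of the read half of the benchmark, i.e. the "dd to /dev/null" step with throughput reporting. The 64 KB read size is an assumption, not taken from the slides.

    /* seqread.c: sequentially read a file and report MB/s */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define BUFSZ   (64 * 1024)

    int
    main(int argc, char **argv)
    {
        struct timeval t0, t1;
        long long total = 0;
        double secs;
        ssize_t n;
        char *buf = malloc(BUFSZ);
        int fd;

        if (argc != 2 || buf == NULL || (fd = open(argv[1], O_RDONLY)) < 0) {
            fprintf(stderr, "usage: seqread <file>\n");
            return (1);
        }
        gettimeofday(&t0, NULL);
        while ((n = read(fd, buf, BUFSZ)) > 0)   /* discard, like /dev/null */
            total += n;
        gettimeofday(&t1, NULL);
        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%lld bytes in %.2f s (%.2f MB/s)\n",
            total, secs, total / secs / (1024 * 1024));
        free(buf);
        close(fd);
        return (0);
    }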

  6. Direct I/O
     • bypass the page cache
       - best for large files with no locality of reference
       - avoids page cache overhead
       - avoids polluting the page cache
     • UFS Direct I/O project in Solaris 2.6
       - aimed at databases, decision-support software
       - might help the NFS server; what about the client? (a directio(3C) sketch follows below)
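
The Solaris 2.6 UFS work exposes direct I/O to applications through the advisory directio(3C) call; "make look like UFS" on the Changes slide presumably means the NFS client honors the same interface. A minimal sketch (the file path is just an example):

    #include <sys/types.h>
    #include <sys/fcntl.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("/mnt/bigfile", O_RDONLY);    /* hypothetical path */

        if (fd < 0 || directio(fd, DIRECTIO_ON) != 0) {
            perror("direct I/O setup");
            return (1);
        }
        /* ... large sequential reads now bypass the page cache ... */
        close(fd);
        return (0);
    }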

  7. Direct I/O (cont'd)
     • SGI's Bulk Data Service (BDS)
       - O_DIRECT flag combined with an NFS file (an O_DIRECT sketch follows below)
       - stuffs bytes into a TCP socket connection
       - 60 MB/s over HIPPI (March 1996)
       - uses a private protocol; requires client and server changes
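
BDS clients open files with the O_DIRECT flag. IRIX specifics aside, this sketch shows the flag as it survives on Linux today: buffers and transfer sizes must be block-aligned, here assumed to be 4 KB. The path is again just an example.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        void *buf;
        int fd = open("/mnt/bigfile", O_RDONLY | O_DIRECT);

        if (fd < 0) {
            perror("open O_DIRECT");
            return (1);
        }
        /* O_DIRECT requires an aligned buffer and transfer size. */
        if (posix_memalign(&buf, 4096, 1024 * 1024) != 0)
            return (1);
        if (read(fd, buf, 1024 * 1024) < 0)
            perror("read");
        free(buf);
        close(fd);
        return (0);
    }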

  8. Changes

  9. Overview of Changes
     • API support: make it look like UFS
     • add an array of buffers to the rnode
       - kmem_alloc/kmem_free buffers as needed (see the buffer sketch below)
     • use the buffers instead of a VM segment
     • keep the pipe full
       - readahead and write-behind
       - large transfer sizes
       - safe asynchronous writes
     • transparent to the server except for the larger transfer size
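
An illustrative sketch of per-file direct I/O buffers hung off the rnode, in Solaris kernel style. All names and sizes here are guesses for illustration, not the actual Sun implementation; only kmem_alloc/kmem_free are real kernel interfaces.

    #include <sys/types.h>
    #include <sys/kmem.h>

    #define NFS_DIO_NBUFS   8       /* "how many buffers" is an open
                                       tuning question (slide 13) */

    typedef struct nfs_dio_buf {
        caddr_t     db_addr;        /* kmem_alloc'd data */
        size_t      db_len;         /* transfer size */
        u_offset_t  db_off;         /* file offset covered */
        int         db_busy;        /* async I/O in flight? */
    } nfs_dio_buf_t;

    /* Allocate a buffer lazily, the first time it is needed. */
    static caddr_t
    nfs_dio_getbuf(nfs_dio_buf_t *db, size_t len)
    {
        if (db->db_addr == NULL) {
            db->db_addr = kmem_alloc(len, KM_SLEEP);
            db->db_len = len;
        }
        return (db->db_addr);
    }

    /* Release a buffer, e.g. from kmem reclaim logic (slide 14). */
    static void
    nfs_dio_putbuf(nfs_dio_buf_t *db)
    {
        if (db->db_addr != NULL) {
            kmem_free(db->db_addr, db->db_len);
            db->db_addr = NULL;
        }
    }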

  10. Client Structure
      [diagram: nfs3_read/nfs3_write dispatch either to the VM (page cache) code or, for VNOCACHE files, to the direct I/O code; async threads feed the underlying nfs3read/nfs3write RPC routines] (a dispatch sketch follows below)
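
A guess at the read-path dispatch the diagram implies: files marked VNOCACHE take the direct I/O path, everything else goes through the VM page cache. This is a kernel-context sketch; nfs3_directio_read and nfs3_cached_read are invented names for the two halves of the diagram.

    #include <sys/vnode.h>
    #include <sys/uio.h>
    #include <sys/cred.h>

    extern int nfs3_directio_read(vnode_t *, struct uio *, cred_t *);
    extern int nfs3_cached_read(vnode_t *, struct uio *, cred_t *);

    static int
    nfs3_read(vnode_t *vp, struct uio *uiop, int ioflag, cred_t *cr)
    {
        if (vp->v_flag & VNOCACHE)
            return (nfs3_directio_read(vp, uiop, cr));
        return (nfs3_cached_read(vp, uiop, cr));
    }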

  11. Performance Results

  12. Issues, Future Work

  13. Issues
      • to productize or not to productize
        - verify on UltraSPARC and with other benchmarks
      • API for determining transfer size
      • tuning
        - how many buffers?
        - when to issue COMMIT? (see the sketch below)
      • is MT support too hairy?
        - want a less arcane scheme for iterating over buffers
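
An illustration of the COMMIT tuning question. NFSv3 safe asynchronous writes (slide 9) go out UNSTABLE and must later be COMMITted; committing after every buffer wastes RPCs, while committing too rarely leaves a lot to rewrite if the server crashes. All names and the threshold below are hypothetical.

    #include <sys/types.h>

    #define DIO_COMMIT_BYTES  (512 * 1024)  /* threshold: a tuning knob */

    struct dio_wstate {
        u_offset_t  dw_commit_off;  /* start of uncommitted range */
        size_t      dw_unstable;    /* bytes written UNSTABLE so far */
    };

    extern int nfs3_commit_rpc(u_offset_t, size_t);  /* assumed helper */

    /* Called as each write-behind buffer completes. */
    static void
    dio_write_done(struct dio_wstate *dw, size_t len)
    {
        dw->dw_unstable += len;
        if (dw->dw_unstable >= DIO_COMMIT_BYTES) {
            (void) nfs3_commit_rpc(dw->dw_commit_off, dw->dw_unstable);
            dw->dw_commit_off += dw->dw_unstable;
            dw->dw_unstable = 0;
        }
    }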

  14. Things To Do
      • failover support
      • cache management, error handling
        - make direct I/O consistent with the VM-based code (such as it is)
      • misc. cleanup
        - API for enabling/disabling direct I/O
        - code organization
        - plug into kmem reclaim logic
        - coexistence with mmap
        - etc.

  15. Futures
      • application-directed readahead?
      • page flipping?
      • server-side direct I/O
        - assume the client cache takes most hits for NFS
        - use UFS direct I/O

  16. Conclusions
      • bypassing the page cache is a win for sequential access with no locality of reference
      • the win gets bigger if the file doesn't fit in memory
      • keeping the pipe full is more work, but necessary
