Revisiting Virtual File System for Metadata Optimized Non-Volatile Main Memory File System
Ying Wang, Dejun Jiang, Jin Xiong
Institute of Computing Technology, CAS; University of
Outline
- Background & Motivation
- Design
– Cachelet for metadata caching
– Global hash based metadata index
– Metadata scalability
- Evaluation
- Summary
Background
- Non-Volatile Main Memories(NVMMs) provide low latency,
high bandwidth, byte-addressable and persistent storage
– PCM, MRAM, RRAM, 3D Xpoint[1]
- Intel releases Optane DC Persistent Memory (Optane PMM)
- File system can be directly built on memory
– Software has become the main factor affecting file system performance and scalability [2,3,4,5]
[Figure: the file system is built directly on NVMM, attached to the CPU over the memory bus; I/O bypasses the block stack]

[6]           R lat.   W lat.   R BW       W BW
DRAM          60 ns    69 ns    20 GB/s    ~15 GB/s
Optane PMM    305 ns   81 ns    ~6 GB/s    ~2 GB/s
NVMe SSD      120 us   30 us    2 GB/s     500 MB/s
HDD           10 ms    10 ms    0.1 GB/s   0.1 GB/s
[1] What is Intel Optane DC Persistent Memory. Intel.
[2] Condit, SIGOPS 2009. [3] Wu, SC 2011. [4] Dulloor, EuroSys 2014. [5] Haris, EuroSys 2014.
[6] Data from our evaluation and from “Basic Performance Measurements of the Intel Optane DC Persistent Memory Module”.
Background
- Existing kernel-level NVMM file systems
– Remove the page cache, generic block layer and I/O scheduler layer
– Retain the virtual file system (VFS) [1,2,3,4]
- dentry -> dcache
– Speed up path lookup and maintain a unified namespace
- inode -> icache
– Speed up file metadata access
[Figure: the application sits in user space; VFS and the NVMM file system run in the kernel, directly on NVMM]
Background
- File metadata operation type
– Lookup
- VFS warm cache (cache hit): lookup only in VFS
- VFS cold cache (cache miss): lookup in both VFS and the physical FS, then build the VFS cache
– Update: update both VFS and the physical FS (PFS)

Types    Syscalls   Examples
Lookup   20         open(lookup), stat, access
Update   29         open(create), remove, rename, chown
Motivation
- The latency of NVMM is close to DRAM and supports high
concurrent access
– The metadata performance of the physical file system is close to that of VFS
– The file system requires highly concurrent software support
- Traditional metadata management is not suitable for NVMM file systems
– Two-layer metadata lookup and maintenance overhead
– Low-scalability metadata operations
Motivation
- Two-layer metadata lookup and maintenance
– The latency of NVMM is close to DRAM, yet VFS and the physical file system each maintain a copy of the metadata
- In ext4-dax, metadata overhead accounts for 49.1% of execution time, of which VFS lookup accounts for 21.2%
[Figure: metadata share of total execution time (VFS vs. PFS portions) for 4 KB and 16 KB reads and writes on ext4-dax and NOVA]
Motivation
- Low-scalability metadata operations
– All metadata operations in the physical file system must lock the parent directory
- This limits the scalability of metadata operations such as create file and delete file
– When metadata is added to or deleted from VFS, the VFS lock also limits scalability
[Figure: throughput (M ops/s) of create file, delete file, cold lookup and warm lookup at 1–24 threads]
Motivation
- VFS results in two copy metadata overhead and limits
metadata scalability on NVMM file system
– Can we directly delete the metadata cache in VFS?
- Compared with a warm VFS cache, simply removing the VFS cache yields low performance
[Figure: latency (us) of open, stat and remove under cold and warm cache, broken down into VFS and PFS time]
Contribution
- DirectFS
– Cachelet: a small metadata cache in VFS
– A global hash based metadata index
– Fine-grained flags and atomic writes to improve metadata scalability
[Figure: DirectFS architecture — system calls enter unified FS interfaces (e.g. create, unlink); VFS keeps dcache/icache plus a cachelet index; a Global mindex indexes cachelets, dentries and inodes; DirectFS (flagged by S_DIRECTFS) coexists with physical file systems such as ext4 and NOVA under mount points mnt1 and mnt2]
Cachelet for metadata caching
- VFS cachelet
– Reducing metadata maintenance overhead while keeping metadata lookup performance
- dcache (192 B) + icache (592 B): file metadata, access status, security, …; file name, inode addr., lock, …
- VFS cachelet (128 B): frequently read metadata, simplified access status, security
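The idea can be sketched as follows; the field names and layout are illustrative assumptions for this talk, not DirectFS's actual structure:

```python
from dataclasses import dataclass

@dataclass
class Cachelet:
    name_hash: int        # hash of the file name (lookup key)
    inode_addr: int       # address of the PFS inode in NVMM
    mode: int             # simplified access status
    uid: int              # security: owner
    gid: int              # security: group
    readable: bool = True # fine-grained flag used during create/delete

def stat_from_cachelet(c: Cachelet) -> dict:
    # A warm-cache stat can be served entirely from the ~128 B cachelet,
    # without touching the full dentry/inode pair kept by a stock VFS.
    return {"mode": c.mode, "uid": c.uid, "gid": c.gid}
```

The point of the small size is that a miss costs little to fill and an update touches far fewer bytes than a dentry plus inode.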
Global hash based metadata index
- Global mindex
– A global hash based index (Global mindex) indexes both the metadata cache and the metadata itself
- Reduces metadata lookup overhead
[Figure: the separate VFS index (dcache/icache in DRAM) and PFS index (dentry/inode in NVMM, keyed by inode number) are replaced by a single Global mindex pointing to the cachelet, dentry and inode]
Global hash based metadata index
- File lookup
– VFS warm cache – VFS cold cache
- File metadata update
                     Traditional (VFS + PFS)                DirectFS
Warm-cache lookup    Lookup VFS                             Lookup Global mindex
Cold-cache lookup    Lookup VFS, lookup physical FS,        Lookup Global mindex,
                     build cache in VFS                     build cachelet
Metadata update      Update two indexes,                    Update one index,
                     update two metadata copies             update two small metadata
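A minimal sketch of the two lookup paths; the names (`mindex`, `pfs_lookup`) are illustrative, not the real API:

```python
mindex = {}  # (parent inode number, file name) -> cachelet dict

def pfs_lookup(parent: int, name: str) -> dict:
    # Stand-in for scanning the physical file system's directory in NVMM.
    return {"inode_addr": hash((parent, name)) & 0xFFFF, "readable": True}

def lookup(parent: int, name: str) -> dict:
    key = (parent, name)
    c = mindex.get(key)
    if c is not None and c["readable"]:
        return c                   # warm cache: a single Global mindex probe
    c = pfs_lookup(parent, name)   # cold cache: fall through to the PFS...
    mindex[key] = c                # ...and build the cachelet for next time
    return c
```

Because one index serves both the cache and the persistent metadata, a cold miss costs one extra probe rather than a second full index traversal.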
Metadata scalability
- The VFS lock limits the concurrency of metadata operations within a directory
– Fine-grained flags and atomic writes remove the VFS lock
[Figure: the per-directory VFS lock over dcache/icache is replaced by per-entry flags in the Global mindex, which points to the cachelet, dentry and inode]
Metadata scalability
- Case study: creating a file
- 1. Create the inode, dentry and cachelet; mark the cachelet as unreadable
  - The unreadable flag prevents concurrent readers from finding the half-created file and prevents concurrent creation of the same file
- 2. Atomically update the Global mindex
  - Insert the cachelet, dentry and inode
- 3. Update the inode and cachelet of the parent directory
- 4. Mark the cachelet as readable
- How to guarantee consistency
  – Extended dentry (Edentry): records the update of the parent directory
  – Atomically record the dentry address into the log, reducing contention on log writes

[Figure: the Global mindex points to the new cachelet (DRAM) and the Edentry/inode (NVMM); the dentry address is appended to the log; the cachelet flag transitions unread → read]
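The four steps above can be sketched as follows. A guarded dict insert stands in for the single atomic pointer write into the Global mindex; every name here is an assumption for illustration, not DirectFS's implementation:

```python
import threading

mindex = {}
index_lock = threading.Lock()  # models the atomicity of one index insert,
                               # not a per-directory VFS lock

def create(parent: int, name: str):
    # 1. Build inode, dentry and cachelet; 'unreadable' hides the
    #    half-created file from concurrent readers.
    cachelet = {"name": name, "readable": False}
    # 2. Atomically publish into the Global mindex; a concurrent create of
    #    the same name loses the race here instead of serializing on the
    #    parent directory.
    with index_lock:
        if (parent, name) in mindex:
            return None
        mindex[(parent, name)] = cachelet
    # 3. Update the parent directory's inode and cachelet (elided here).
    # 4. Flip the flag: the file becomes visible to lookups.
    cachelet["readable"] = True
    return cachelet
```

The key property is that only step 2 needs atomicity; steps 1, 3 and 4 proceed without holding any directory-wide lock.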
Metadata scalability
- Case study: deleting a file
– Mark the file as deleting
- Prevent concurrent delete operations
– Mark the file as deletion persistently
- Guarantee consistency
– Mark the file as invalid
- The file cannot be found by other
concurrent metadata operations
– Update the inode and cachelet of the parent directory
– Asynchronously recycle the file data
- Concurrent threads can continue to read and write the file during recycling
[Figure: the Global mindex entry is cleared (NULL) and the deletion record remains in the log]
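The delete steps can be sketched in the same style; the state names and log format are illustrative assumptions:

```python
nvmm_log = []  # stand-in for the persistent log in NVMM

def delete(mindex: dict, parent: int, name: str) -> bool:
    entry = mindex.get((parent, name))
    if entry is None or entry.get("state") == "deleting":
        return False                       # lost the race to another deleter
    entry["state"] = "deleting"            # 1. fence out concurrent deletes
    nvmm_log.append(("D", parent, name))   # 2. persist the deletion mark
    entry["state"] = "invalid"             # 3. hide from concurrent lookups
    del mindex[(parent, name)]             # 4. update the parent directory;
                                           #    data blocks are recycled
                                           #    asynchronously afterwards
    return True
```

Persisting the mark (step 2) before removing the entry means a crash between the steps can be replayed from the log, while the invalid state keeps concurrent lookups from observing a half-deleted file.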
Other design issues
- Support hard link
- Support getcwd
- The design of Global mindex
– How to index metadata and the cache
– How to collect inaccessible metadata
- More concurrency control
– File lookup, delete and rename
- Please refer to the paper
Evaluation
- Platform
– Two NUMA nodes
- Intel Gold 6271 CPU, 24 CPU cores
- 64 GB DRAM, 512 GB Optane PMM
– Evaluation runs only on NUMA node 0 to avoid NUMA effects
- Compared systems
– ext4-dax, NOVA
- Benchmarks
– System calls, small-file read/write, Filebench
System call
- For metadata read operations (stat)
– Cold cache: fewer lookups and lower maintenance overhead; 56% improvement
– Warm cache: the small cachelet plus the Global mindex keep performance similar to VFS
- For metadata write operations
– Lower metadata maintenance overhead; 47% improvement
[Figure: execution time (us) of stat, create, rename and delete under cold and warm cache for ext4-dax, NOVA and DirectFS]
System call
- Scalability
– Fine-grained flags and atomic writes improve metadata scalability
[Figure: throughput (M ops/s) at 1–24 threads for warm-cache lookup and delete; ext4-dax vs. NOVA vs. DirectFS]
Small file read/write
- Optimizing file metadata lookup and updates
– Compared with ext4-dax and NOVA, the throughput of small-file operations increases by 35.6% and 38.3%, respectively
[Figure: throughput (K ops/s) of 1 KB–32 KB reads and writes for ext4-dax (ED), NOVA (N) and DirectFS (D)]
Filebench
- Varmail
– For a single thread, DirectFS increases throughput by 27%
– For multiple threads, DirectFS increases throughput by 66.9%
- Fileserver
– NVMM bandwidth becomes the main factor limiting performance when running multiple threads
[Figure: Varmail and Fileserver throughput (100 K ops/s) at 1–16 threads for ext4-dax, NOVA and DirectFS]
Summary
- The features of NVMM enable file systems to be built on the memory bus, improving file system performance
- Existing NVMM file systems retain VFS
– Two-layer metadata lookup and maintenance overhead
– Low-scalability metadata operations
- DirectFS: a metadata-optimized, high-performance and scalable file system for NVMM
– A small metadata cache (cachelet) in VFS
– A global hash based metadata index
– Fine-grained flags and atomic writes to improve metadata scalability
– Increases application throughput by up to 66.9%
Thanks