
Mirror File System: A Multiple Server File System, John Wong, CTO - PowerPoint PPT Presentation



  1. Mirror File System: A Multiple Server File System. John Wong, CTO, John.Wong@TwinPeakSoft.com, Twin Peaks Software Inc.

  2. Multiple Server File System
  • Conventional file system – EXT3/UFS and NFS – manages files on a single server and its storage devices
  • Multiple server file system – manages files on multiple servers and their storage devices

  3. Problems
  • A single resource is vulnerable
  • Redundancy provides a safety net:
  – Disk level => RAID
  – Storage level => storage replication
  – TCP/IP level => SNDR
  – File system level => CFS, MFS
  – System level => clustering
  – Application level => database replication

  4. Why MFS? • Many advantages over existing technologies

  5. Local File System
  [Diagram: applications in user space issue calls through EXT3 and the disk driver in kernel space to the data on disk.]
  • EXT3 manages files on the local server's storage devices

  6. Network File System
  [Diagram: applications on the client go through the NFS client mount to NFSD and EXT3/UFS on the server.]
  • NFS manages files on a remote server's storage devices

  7. EXT3 | NFS
  [Diagram: a client reaches the server's data either through tools such as rsync and tar or through an NFS client mount; each side holds its own copy of the data.]
  • Applications can use only one of the two file systems at a time, not both

  8. EXT3 + NFS?
  • Combine the two file systems to manage files on both the local and remote servers' storage devices
  – at the same time
  – in real time

  9. MFS = EXT3 + NFS
  [Diagram: on the active MFS server, applications run in user space; in kernel space MFS stacks on EXT3/UFS and NFS, mirroring data to EXT3/UFS on the passive MFS server.]

  10. Building Block Approach
  • MFS is a kernel loadable module, loaded on top of EXT3/UFS and NFS
  • Standard VFS interface
  • Provides complete transparency
  – to users and applications
  – to the underlying file systems

  11. Q & A
  [Diagram: applications on two servers, each running MFS, mirroring Data A and Data B to each other.]

  12. Advantages
  • Building block approach -- builds on the existing EXT3, NFS, NTFS, and CIFS infrastructure
  • No metadata is replicated -- superblocks, cylinder groups, and file allocation maps are not replicated
  • Every file write operation is checked by the file system -- file consistency and integrity
  • Live file replication, not raw data replication -- the primary and backup copies are both live files

  13. Advantages
  • Interoperability -- the two nodes can be different systems, and their storage systems can differ
  • Small granularity -- directory level, not the entire file system
  • One-to-many or many-to-one replication

  14. Advantages
  • Fast replication -- replication happens in the kernel file system module
  • Immediate failover -- no fsck or mount operation needed
  • Geographically dispersed clustering -- the two nodes can be hundreds of miles apart
  • Easy to deploy and manage -- only one copy of MFS, running on the primary server, is needed for replication

  15. Why MFS?
  • Better data protection
  • Better disaster recovery
  • Better RAS
  • Better scalability
  • Better performance
  • Better resource utilization

  16. File System Framework
  [Diagram: user applications issue file-operation system calls (open, close, read, write, creat, lseek, ioctl, link, mkdir, rmdir, mount, umount, statfs, sync, and others) through the system call interface; the VFS/vnode interfaces dispatch them to file systems such as UFS, NFS, PCFS, HSFS, VxFS, and QFS, which store data on disk, optical drive, or the network.]
  Source: Solaris Internals: Core Kernel Architecture, Jim Mauro and Richard McDougall, Prentice Hall

  17. MFS Framework
  [Diagram: the same framework as the previous slide, with MFS inserted at the VFS/vnode interface layer; MFS presents the standard vnode/VFS interface upward and forwards operations to UFS and NFS beneath it.]

  18. Transparency
  • Transparent to users and applications – no recompilation or relinking needed
  • Transparent to existing file structures – same pathname access
  • Transparent to the underlying file systems – UFS, NFS

  19. Mount Mechanism
  • Conventional mount – one directory, one file system
  • MFS mount – one directory, two or more file systems

  20. Mount Mechanism
  # mount -F mfs host:/ndir1/ndir2 /udir1/udir2
  – First mount the NFS file system on a UFS directory
  – Then mount MFS on top of both UFS and NFS
  – The existing UFS tree /udir1/udir2 becomes the local copy of MFS
  – The newly mounted host:/ndir1/ndir2 becomes the remote copy of MFS
  – Same mount options as NFS, except the '-o hard' option is not supported

  21. MFS mfsck Command
  # /usr/lib/fs/mfs/mfsck mfs_dir
  – After an MFS mount succeeds, the local copy may not be identical to the remote copy
  – Use mfsck (the MFS fsck) to synchronize them
  – mfs_dir can be any directory under the MFS mount point
  – Multiple mfsck commands can be invoked at the same time

  22. READ/WRITE Vnode Operations
  • All VFS/vnode operations are received by MFS
  • Read-related operations (read, getattr, ...) go only to the local copy (UFS)
  • Write-related operations (write, setattr, ...) go to both the local (UFS) and remote (NFS) copies simultaneously, using threads

  23. Mirroring Granularity
  • Directory level
  – Mirror any UFS directory instead of the entire UFS file system
  – Directory A mirrored to server A, directory B mirrored to server B
  • Block-level updates
  – Only changed blocks are mirrored

  24. MFS msync Command
  # /usr/lib/fs/mfs/msync mfs_root_dir
  – A daemon that resynchronizes an MFS pair after the remote MFS partner fails
  – On a write failure, MFS logs the name of the file whose write failed and starts a heartbeat thread to detect when the remote MFS server is back online
  – Once the remote server is back online, msync uses the log to copy the missing files to it

  25. Active/Active Configuration
  [Diagram: two active MFS servers; on each, applications run over MFS stacked on UFS and NFS, with Data A and Data B mirrored between the servers.]

  26. MFS Locking Mechanism
  • MFS uses the UFS and NFS file record locks
  • Locking is required for the active/active configuration
  • Locking makes write-related vnode operations atomic
  • Locking is enabled by default
  • Locking is not necessary in the active/passive configuration

  27. Real-Time and Scheduled Replication
  • Real-time – replicate files in real time
  • Scheduled – log each file's path, offset, and size, then replicate only the changed portion of the file

  28. Applications
  • Online file backup
  • Server file backup (active/passive)
  • Server/NAS clustering (active/active)

  29. MFS = NTFS + CIFS
  [Diagram: on a Windows desktop or laptop, applications run over MFS stacked on NTFS and CIFS, mirroring data to NTFS on a remote server.]

  30. Online File Backup
  [Diagram: a folder on a user's desktop or laptop is mirrored by MFS over the LAN or WAN, in real time or at scheduled times, to an ISP server.]

  31. Server Replication
  [Diagram: a primary and a secondary server connected by a heartbeat; applications and email on the primary are mirrored by the Mirror File System. Mirroring paths: /home and /var/spool/mail.]

  32. Enterprise Clusters
  [Diagram: multiple application servers, each running the Mirror File System, all mirroring to a central mirroring path.]
