
Distributed File Storage in Multi-Tenant Clouds using CephFS (presentation transcript)



  1. Distributed File Storage in Multi-Tenant Clouds using CephFS. OpenStack Vancouver 2018, May 23. Patrick Donnelly (CephFS Engineer, Red Hat, Inc.), Tom Barron (Manila Engineer, Red Hat, Inc.), Ramana Raja (CephFS Engineer, Red Hat, Inc.).

  2. How do we solve this?

  3. Ceph Higher Level Components. At the base is RADOS, a software-based, reliable, autonomic, distributed object store comprised of self-healing, self-managing, intelligent storage nodes (OSDs) and lightweight monitors (MONs). LIBRADOS is a library allowing apps direct access to RADOS (C, C++, Java, Python, Ruby, PHP). On top sit three access layers: RGW (object: S3- and Swift-compatible object storage with object versioning, multi-site federation, and replication), RBD (block: a virtual block device with snapshots, copy-on-write clones, and multi-site replication), and CephFS (file: a distributed POSIX file system with coherent caches and snapshots on any directory).
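
  To make the three access layers concrete, here is a minimal command-line sketch; the pool name, image name, and file names are illustrative, and the CephFS mount is shown on the next slide.

      # Object: store and fetch an object in a RADOS pool via the rados CLI
      rados -p mypool put greeting ./hello.txt
      rados -p mypool get greeting ./hello-copy.txt
      # Block: create a 1 GiB RBD image in the same pool (size is in MiB)
      rbd create mypool/myimage --size 1024
      # File: CephFS mounts are shown on the next slide (ceph-fuse / mount -t ceph)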

  4. What is CephFS? CephFS is a POSIX-compatible distributed file system! Three moving parts: the MDS(s), the clients, and RADOS. Files, directories, and other metadata are stored in RADOS; the MDS journals its metadata to RADOS and has no local state. [Diagram: clients send metadata RPCs to ceph-mds and do file I/O directly against RADOS.] Mount using FUSE (ceph-fuse ...) or the kernel client (mount -t ceph ...). Caching is coherent across clients: the MDS issues inode capabilities which enforce synchronous or buffered writes. Clients access data directly via RADOS.
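
  A minimal sketch of the two mount paths named above; the monitor address, client name, and secret file are placeholders.

      # Kernel client
      sudo mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret
      # FUSE client
      sudo ceph-fuse -n client.admin -m 192.168.0.10:6789 /mnt/cephfs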

  5. CephFS native driver deployment. [Deployment diagram] Ceph MON, MGR, MDS, and OSD daemons run on the storage nodes, connected by the storage (Ceph public) network. Controller nodes run the Manila API and Manila share service. Tenant A and Tenant B VMs on the compute nodes have 2 NICs: one on their tenant network, routed to the public OpenStack service API (external) network, and one on a storage provider network that reaches the Ceph public network directly. Open question: Ceph MDS placement, with the MONs, with the Python services, or dedicated? https://docs.openstack.org/manila/ocata/devref/cephfs_native_driver.html
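
  As a rough illustration of the tenant workflow with the native driver, a sketch assuming a share type named cephfstype and a cephx user alice (both illustrative):

      # Create a 1 GiB CephFS share and grant access to a cephx identity
      manila create --share-type cephfstype --name myshare CEPHFS 1
      manila access-allow myshare cephx alice
      # The export location is a CephFS path, e.g. /volumes/_nogroup/<uuid>
      manila share-export-location-list myshare
      # In the tenant VM (second NIC on the Ceph public network), mount with
      # ceph-fuse using alice's keyring, restricted to the share's path
      sudo ceph-fuse /mnt/share -n client.alice -m 192.168.0.10:6789 \
          --keyring=./alice.keyring -r /volumes/_nogroup/<share-uuid>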

  6. CephFS NFS driver (in data plane). [Diagram: OpenStack client/Nova VM connects over NFS to an NFS-Ganesha gateway, which performs native Ceph data and metadata updates against the monitor, metadata, and OSD daemons.] Clients connect to the NFS-Ganesha gateway rather than to the Ceph cluster directly, which gives better security. There is no single point of failure (SPOF) in the Ceph storage cluster itself (HA of MON, MDS, OSD), but NFS-Ganesha must also be made HA to avoid a SPOF in the data plane. NFS-Ganesha active/passive HA (Pacemaker/Corosync) is work in progress.

  7. CephFS NFS driver deployment. [Deployment diagram] Same layout as the native driver deployment: Ceph MON, MDS, and OSD daemons on the storage nodes behind the storage (Ceph public) network, controller nodes running the Manila API and share service, and Tenant A and Tenant B VMs with 2 NICs on the compute nodes, routed to the public OpenStack service API (external) network. The difference is that tenant VMs reach shares over a storage NFS network to the NFS-Ganesha gateway instead of connecting to the Ceph public network directly. The Ceph MDS placement question (with the MONs, with the Python services, or dedicated) remains. https://docs.openstack.org/manila/queens/admin/cephfs_driver.html
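
  A similar sketch for the NFS driver path; the share type name, CIDR, and gateway address are illustrative. The main differences are the NFS protocol, IP-based access rules, and mounting through the NFS-Ganesha gateway rather than the Ceph public network.

      # Create a 1 GiB NFS share and allow a tenant subnet to access it
      manila create --share-type cephfsnfstype --name mynfsshare NFS 1
      manila access-allow mynfsshare ip 10.0.0.0/24
      # The export location is <ganesha address>:/<cephfs path>; mount it over NFS
      manila share-export-location-list mynfsshare
      sudo mount -t nfs 172.24.4.10:/volumes/_nogroup/<share-uuid> /mnt/share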

  8. Future: Ganesha per Tenant. [Deployment diagram] Ceph OSD, MDS, MON, and MGR daemons sit on the Ceph public network alongside a Kubernetes cluster that hosts the NFS-Ganesha gateways. Controller nodes run the Manila API and share service; Tenant A and Tenant B VMs on the compute nodes reach their gateways over their own tenant networks, with routers to the external provider network and the public OpenStack service API (external) network.

  9. [Architecture diagram: Manila, the Ceph MGR, and Rook/Kubernetes-managed NFS-Ganesha gateways.] A Manila CephFS share carries a name, CephFS export paths, a network share (e.g. Neutron ID+CIDR), and a share server count. Manila publishes this intent to the Ceph MGR over a REST API (get/put shares). Scale-out and shares are managed by the MGR; HA is managed by Kubernetes. The MGR spawns Ganesha containers in the tenant network share via Rook/Kubernetes (Kuryr as the network driver), pushes their config, and starts the grace period. Each NFSGW Ganesha gateway gets its share/config data, advertises to the ServiceMap, keeps client state in RADOS, and performs data and metadata I/O against the OSDs and MDS. Each gateway runs as a Kubernetes pod (HA managed by Kubernetes).

  10. Managing Scale-Out: “CephFSShareNamespace”. [Diagram: NFS clients -> Service -> Ganesha NFSGW pods #1 and #2 in StatefulSet #1.] One IP to access the NFS cluster for a tenant, reachable only via the tenant network. The number of pods equals the scale-out for the “CephFSShareNamespace”; dynamically grow/shrink?? A StatefulSet provides stable network identifiers for the pods!
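
  A minimal kubectl sketch of what the StatefulSet plus Service arrangement gives the operator; the object names and label are hypothetical.

      # Stable pod identities from the StatefulSet: nfs-ganesha-a-0, nfs-ganesha-a-1, ...
      kubectl get pods -l app=nfs-ganesha-a
      # Grow or shrink the per-tenant gateway count ("CephFSShareNamespace" scale-out)
      kubectl scale statefulset nfs-ganesha-a --replicas=3
      # The tenant-facing single IP is the Service that fronts the pods
      kubectl get service nfs-ganesha-a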

  11. Thanks! Patrick Donnelly, pdonnell@redhat.com. Thanks to the CephFS team: John Spray, Greg Farnum, Zheng Yan, Ramana Raja, Doug Fuller, Jeff Layton, and Brett Niver. Homepage: http://ceph.com/ Mailing lists/IRC: http://ceph.com/IRC/

