THE POWER OF RED HAT CEPH STORAGE
And how it's essential to your OpenStack environment
Jean-Charles Lopez
Sr. Technical Instructor, Global Storage Consulting Practice
Red Hat, Inc. – jcl@redhat.com
May 2017 – OpenStack Summit, Boston
FILE STORAGE
File systems allow users to organize data using hierarchical folders and files.
OBJECT STORAGE
Object stores distribute data algorithmically throughout a cluster of media, without a rigid structure.
BLOCK STORAGE
Physical storage media appears to computers as a series of sequential blocks of a uniform size.
REPLICATED POOL: the CEPH STORAGE CLUSTER keeps full copies of each stored OBJECT (COPY, COPY, COPY).
ERASURE CODED POOL: the cluster keeps one copy of each OBJECT split into data chunks (1, 2, 3, 4) plus parity chunks (X, Y).
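As a sketch of how the two pool types are created (pool names, PG counts, and the k/m values are examples), the corresponding CLI commands look like this:
  ceph osd pool create rep_pool 128 128 replicated    <- replicated pool
  ceph osd pool set rep_pool size 3                   <- 3 full copies of every object
  ceph osd erasure-code-profile set ec42 k=4 m=2      <- 4 data chunks + 2 parity chunks
  ceph osd pool create ec_pool 128 128 erasure ec42   <- erasure coded pool using that profile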
RBD: A reliable, fully distributed block device with cloud platform integration (used by a HOST/VM)
RGW: A web services gateway for object storage, compatible with S3 and Swift (used by an APP)
LIBRADOS: A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby) (used by an APP)
CEPHFS*: A distributed file system with POSIX semantics and scale-out metadata (used by a CLIENT)
RADOS: A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors
* CephFS is Tech Preview in RHCS 2
RADOS: Reliable Autonomous Distributed Object Store
Software-based, comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors
RADOS CLUSTER
Each storage node in the cluster runs multiple OSDs; every OSD stores its data in a local file system (FS) on top of a physical DISK.
An APPLICATION stores its data as named OBJECTS in the RADOS CLUSTER.
POOLS
OBJECTS written to the CLUSTER are grouped into logical pools (POOL A, POOL B, POOL C, POOL D).
PLACEMENT GROUPS
Within each pool, OBJECTS are hashed into PLACEMENT GROUPS, and each placement group is mapped to a set of OSDs in the CLUSTER.
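A quick way to see this mapping on a running cluster is the osd map subcommand (pool and object names below follow the deck's placeholder style):
  ceph osd map {pool_name} {object_name}    <- prints the placement group and the acting set of OSDs for that object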
LIBRADOS
An APPLICATION links against LIBRADOS and reads and writes OBJECTS in the RADOS CLUSTER directly over a network socket.
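As an illustration, the rados command-line tool, itself built on librados, exercises the same object interface (pool and object names are examples):
  rados -p app_pool put my_object ./payload.bin    <- write an object
  rados -p app_pool get my_object ./payload.out    <- read it back
  rados -p app_pool ls                             <- list the objects in the pool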
RGW
REST clients talk to one or more RADOSGW instances over HTTP; each RADOSGW uses LIBRADOS over a socket to store the objects in the RADOS CLUSTER.
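Creating a gateway user for testing might look like this (uid and display name are examples; the subuser grants Swift-style access):
  radosgw-admin user create --uid=johndoe --display-name="John Doe"
  radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full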
RBD
A VM's virtual disks are served by its HYPERVISOR, which uses LIBRBD to store the corresponding RBD images in the RADOS CLUSTER.
Because the images live in the shared RADOS CLUSTER, a VM can move between HYPERVISOR / LIBRBD stacks (for example during live migration) without copying its disks.
A LINUX HOST can also map RBD images natively through the KRBD kernel module, with no hypervisor involved.
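A minimal sketch of the kernel (KRBD) path, with example pool, image name, and size:
  ceph osd pool create rbd_pool 128 128 replicated
  rbd create rbd_pool/vol01 --size 10240    <- 10 GB image (size in MB)
  rbd map rbd_pool/vol01                    <- maps the image on the Linux host, e.g. as /dev/rbd0
  rbd showmapped                            <- lists the images currently mapped through krbd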
CEPHFS*
A LINUX HOST mounts CephFS through a KERNEL MODULE; file DATA and METADATA are both stored as objects in the RADOS CLUSTER.
* CephFS is Tech Preview in RHCS 2
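A hedged example of the kernel-client mount (monitor address, mount point, and secret file are placeholders):
  mount -t ceph {mon_host}:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret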
CEPH AND OPENSTACK
OPENSTACK consumes the RADOS CLUSTER at several points: KEYSTONE and SWIFT through RADOSGW; GLANCE, CINDER and NOVA through LIBRBD on the HYPERVISOR; and MANILA through CEPHFS*.
* CephFS is Tech Preview in RHCS 2
CONFIGURING GLANCE
On the Ceph admin node, run:
  ceph osd pool create {pool_name} {pg_num}
  ceph auth get-or-create {user_name} ... -o {keyring_file}
  scp {keyring_file} {unix_user}@{glance_node}:{path}          <- Provide read permission for Glance
  scp /etc/ceph/ceph.conf {unix_user}@{glance_node}:{path}     <- Provide read permission for Glance
Add the following to /etc/ceph/ceph.conf on the Glance node:
  [{user_name}]
  keyring = {path}
Edit /etc/glance/glance-api.conf on the Glance node:
  ...
  [glance_store]
  stores = rbd
  default_store = rbd
  show_image_direct_url = true
  rbd_store_user = {user_id}                      <- If the user name is client.{id}, use {id}
  rbd_store_pool = {pool_name}
  rbd_store_ceph_conf = {Ceph configuration file path}
  rbd_store_chunk_size = {integer}                <- Defaults to 8, i.e. images are chunked into 8 MB RADOS objects
  [paste_deploy]
  flavor = keystone
Restart the Glance services.
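A quick sanity check (image file and name are examples; raw images are generally preferred for RBD-backed Glance):
  openstack image create --disk-format raw --container-format bare --file ./cirros.raw cirros-rbd
  rbd -p {pool_name} ls          <- the new image's UUID should be listed here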
CONFIGURING CINDER
On the Ceph admin node, run:
  ceph osd pool create {pool_name} {pg_num}
  ceph auth get-or-create {user_name} ... -o {keyring_file}
  scp {keyring_file} {unix_user}@{cinder_node}:{path}          <- Provide read permission for Cinder
  scp /etc/ceph/ceph.conf {unix_user}@{cinder_node}:{path}     <- Provide read permission for Cinder
Add the following to /etc/ceph/ceph.conf on the Cinder node:
  [{user_name}]
  keyring = {path}
Edit /etc/cinder/cinder.conf on the Cinder node. Note that you can define multiple storage backends (each backend name must also be listed in enabled_backends under [DEFAULT]):
  ...
  [cinder_backend_name]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = {Ceph configuration file path}
  rbd_pool = {pool_name}
  rbd_secret_uuid = {UUID}
  rbd_user = {ceph_userid}
Restart the Cinder services.
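To verify the backend (volume name and size are examples), create a small volume and look for it in the pool:
  openstack volume create --size 1 test-rbd-vol
  rbd -p {pool_name} ls          <- a volume-{UUID} image should appear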
CONFIGURING NOVA: LIBVIRT SECRET
Create a file (e.g. ceph.xml) with the following content on the compute node:
  <secret ephemeral="no" private="no">
    <uuid>{UUID}</uuid>
    <usage type="ceph">
      <name>{username} secret</name>
    </usage>
  </secret>
Run:
  virsh secret-define --file ceph.xml
  virsh secret-set-value --secret {UUID} --base64 $(cat {ceph_user_name}.key)
Synchronize the libvirt secrets across all compute nodes.
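To confirm the secret is registered on a compute node:
  virsh secret-list                 <- the {UUID} should be listed
  virsh secret-get-value {UUID}     <- prints the base64 key that was stored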
CONFIGURING NOVA
Edit /etc/nova/nova.conf on the Nova compute nodes:
  [libvirt]
  libvirt_images_type = rbd
  libvirt_images_rbd_pool = {pool_name}
  libvirt_images_rbd_ceph_conf = {Ceph configuration file path}
  libvirt_disk_cachemodes = "network=writeback"
  rbd_secret_uuid = {UUID}
  rbd_user = {ceph_userid}
(On recent releases the libvirt_ prefix is dropped when these options live in the [libvirt] section, e.g. images_type = rbd.)
Restart the Nova services.
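Once Nova is restarted, a hedged way to confirm ephemeral disks land in Ceph (flavor, image, and network are placeholders) is to boot an instance and list the pool:
  openstack server create --flavor {flavor} --image {image} --nic net-id={network_id} test-rbd-vm
  rbd -p {pool_name} ls          <- a {instance_uuid}_disk image should appear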
RBD CLIENT ADMIN SOCKETS
On each compute node, make sure /etc/ceph/ceph.conf contains:
  [client.{user_name}]
  admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
  log file = /var/log/qemu/qemu-guest-$pid.log
VMs need a restart for the changes to take effect. Then query a socket with:
  ceph --admin-daemon /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok {command}
Useful commands:
  help                              <- List available commands
  perf dump                         <- Dump performance counters
  config show                       <- View all run-time parameters
  config get {parameter}            <- View a specific run-time parameter
  config set {parameter} {value}    <- Modify a specific run-time parameter
CONFIGURING RGW FOR KEYSTONE
On the OpenStack controller node, create a Swift service and endpoint pointing at the RADOS Gateway.
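With the Keystone v3 OpenStack client this could look like the following (region, host, and port are placeholders):
  openstack service create --name swift --description "Swift Service" object-store
  openstack endpoint create --region RegionOne swift public   http://{rgw_host}:{port}/swift/v1
  openstack endpoint create --region RegionOne swift internal http://{rgw_host}:{port}/swift/v1
  openstack endpoint create --region RegionOne swift admin    http://{rgw_host}:{port}/swift/v1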
On your Keystone server, create an NSS database for the Keystone certificates:
  mkdir {certificate_directory}
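The Ceph documentation populates that database roughly as follows (the certificate paths shown are the Keystone PKI defaults and may differ on your deployment):
  openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | certutil -d {certificate_directory} -A -n ca -t "TCu,Cu,Tuw"
  openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | certutil -A -d {certificate_directory} -n signing_cert -t "P,P,P"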
On your Keystone server, copy the NSS database to the RADOS Gateway nodes:
  scp -r {certificate_directory} {ceph_linux_user}@{rgw_node}:{certificate_directory}
On your RADOS Gateway servers, add to /etc/ceph/ceph.conf:
  [{username}]
  rgw_keystone_url = http://a.b.c.d:{port}
  rgw_keystone_admin_user = {admin-user}
  rgw_keystone_admin_password = {admin-password}
  rgw_keystone_admin_tenant = {admin-tenant}
  rgw_keystone_accepted_roles = admin member swiftoperator
  rgw_keystone_token_cache_size = 200
  rgw_keystone_revocation_interval = 300
  nss_db_path = {certificate_directory}
Restart your RADOS Gateways.
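Once the gateways restart, a quick check is to run the Swift client against the new endpoint with a Keystone user (credentials are placeholders):
  swift --os-auth-url http://{keystone_host}:5000/v2.0 --os-username {user} --os-password {password} --os-tenant-name {tenant} stat
  swift --os-auth-url http://{keystone_host}:5000/v2.0 --os-username {user} --os-password {password} --os-tenant-name {tenant} list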