SLIDE 1

One for all!

CEPH and Openstack: A Dream Team

Udo Seidel

GUUG FFG 2015

SLIDE 2

Agenda

  • Openstack
  • CEPH Storage
  • Dream team: CEPH and Openstack
  • Summary
SLIDE 3

Me :-)

  • Teacher of mathematics and physics
  • PhD in experimental physics
  • Started with Linux in 1996
  • Linux/UNIX trainer
  • Solution engineer in HPC and CAx environments
  • @Amadeus → Head of
    – Linux Strategy
    – Server Automation
SLIDE 4

My setup :-D

  • Raspberry Pi2
  • Fedora 21 with custom kernel
  • HDMI2VGA
  • Mini Bluetooth keyboard
  • 10 Ah battery
SLIDE 5

Openstack

SLIDE 6

What?

  • Infrastructure as a Service (IaaS)
  • 'Open source' counterpart to AWS
  • New release every 6 months
    – Current: Juno
    – Next: Kilo
  • Managed by the Openstack Foundation
  • API, API, API! (see the sketch after this list)
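Everything in Openstack is driven through HTTP APIs. A minimal sketch that requests a Keystone v2.0 token with plain HTTP; host, tenant and credentials are placeholders, not values from the talk:

```python
# Hypothetical Keystone v2.0 endpoint and credentials.
import requests

KEYSTONE = 'http://keystone.example.com:5000/v2.0'

body = {
    'auth': {
        'tenantName': 'demo',
        'passwordCredentials': {'username': 'demo', 'password': 'secret'},
    }
}

# POST /v2.0/tokens returns a token that the other service APIs accept.
resp = requests.post(f'{KEYSTONE}/tokens', json=body)
resp.raise_for_status()
token = resp.json()['access']['token']['id']
print('Token:', token)
```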
SLIDE 7

Openstack – High level

  • Network
  • Compute
  • Storage

SLIDE 8

Openstack architecture

SLIDE 9

Openstack Components

  • Keystone – identity
  • Glance – image
  • Nova – compute
  • Cinder – block
  • Swift – object
  • Neutron – network
  • Horizon – dashboard
SLIDE 10

About Glance

  • There since almost the beginning
  • Image store
    – Server images
    – Disk images
  • Several formats
  • Different storage back-ends available (listing sketch below)
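A minimal sketch of talking to the Glance image API with python-glanceclient (v2 API); the endpoint and the Keystone token are placeholders:

```python
# Endpoint and token are placeholders; the token comes from Keystone.
from glanceclient import Client

glance = Client('2', endpoint='http://glance.example.com:9292',
                token='TOKEN_FROM_KEYSTONE')

# List registered images together with their on-disk format.
for image in glance.images.list():
    print(image.name, image.disk_format)
```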
SLIDE 11

Behind Default Glance

  • File back-end
    – Local or shared file system
    – POSIX ?!?
  • Challenges
    – Scalability
    – High availability
SLIDE 12

About Cinder

  • Arrived later than Glance
    – Part of Nova before
    – Separate since the Folsom release
  • Block storage
  • Different storage back-ends possible
SLIDE 13

Behind Default Cinder

  • Logical Volume Manager (LVM)
  • 'Glance-like' challenges
    – Scalability
    – High availability
SLIDE 14

About Swift

  • There since the beginning
  • Meant to replace Amazon S3-style cloud storage
  • Scalable
  • Redundant
  • Object store
SLIDE 15

Behind Swift

  • RESTful API (sketch below)
  • No POSIX-like access
  • No block-level access
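No POSIX, no block device: a Swift object is created with nothing but HTTP verbs. A minimal sketch using requests; the storage URL and token are placeholders returned by the auth step:

```python
# Storage URL and token are placeholders from the authentication step.
import requests

STORAGE_URL = 'http://swift.example.com:8080/v1/AUTH_demo'
HEADERS = {'X-Auth-Token': 'AUTH_tkn_placeholder'}

# PUT creates the container, a second PUT uploads an object into it.
requests.put(f'{STORAGE_URL}/backups', headers=HEADERS).raise_for_status()
requests.put(f'{STORAGE_URL}/backups/hello.txt', headers=HEADERS,
             data=b'hello swift').raise_for_status()
```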
SLIDE 16

Openstack Storage Questions

  • Unification of storage types
  • High availability
  • Scalability
  • Access/APIs
  • Vendor (lock-in)
SLIDE 17

CEPH Storage

SLIDE 18

CEPH – what?

  • Distributed storage system
  • Started as part of PhD studies at UCSC
  • Public announcement: 2006 at the 7th OSDI
  • File system in the Linux kernel since 2.6.34
  • Named after cephalopods
SLIDE 19

CEPH – Releases

  • Release model like the Linux kernel
    – 'Normal' releases
    – Long Term Support (LTS) releases
  • LTS since 2012
  • Recent releases
    – Firefly → 0.80.x
    – Giant → 0.87.x
    – Hammer → 0.93.x
SLIDE 20

CEPH – Commercial

  • Past: Inktank Inc.
    – Acquisition by Red Hat in 2014
  • ICE – Inktank CEPH Enterprise
    – Server: RHEL/CentOS, Ubuntu
    – Client:
      • RHEL
      • S3-compatible applications
      • ...
  • SUSE Storage
SLIDE 21

CEPH – the full architecture

SLIDE 22

OSD failure approach

  • Failure is normal
  • Data distributed and replicated
  • Dynamic OSD landscape
SLIDE 23

Data replication

  • N-way replication
  • Per placement group
  • Failure domains
  • Replication traffic
    – Stays within the OSD network
    – Timing
SLIDE 24

Data distribution

  • File is striped
  • File pieces → object IDs
  • Object ID → placement group
  • Placement group → list of OSDs (see the sketch after this list)
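A conceptual sketch of that chain in Python. The real system hashes with rjenkins and maps placement groups to OSDs with CRUSH; the hash and modulo choices here are simplifications for illustration only:

```python
import hashlib

PG_NUM = 128             # placement groups in the pool (assumed)
OSDS = list(range(12))   # twelve OSDs with IDs 0..11 (assumed)
REPLICAS = 3             # 3-way replication

def stripe(data, object_size=4 * 1024 * 1024):
    """File → fixed-size pieces; each piece becomes a RADOS object."""
    return [data[i:i + object_size] for i in range(0, len(data), object_size)]

def pg_for(object_id):
    """Object ID → placement group: stable hash, then modulo pg_num."""
    return int(hashlib.md5(object_id.encode()).hexdigest(), 16) % PG_NUM

def osds_for(pg):
    """Placement group → ordered OSD list (stand-in for CRUSH)."""
    return [(pg + i) % len(OSDS) for i in range(REPLICAS)]

for idx, piece in enumerate(stripe(b'x' * (10 * 1024 * 1024))):
    oid = f'file1.{idx:08x}'
    pg = pg_for(oid)
    print(oid, '→ pg', pg, '→ osds', osds_for(pg))
```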
SLIDE 25

CRUSH

SLIDE 26

CEPH cluster monitors

  • Track the status of CEPH components
  • First point of contact for clients
  • Monitor the cluster landscape
SLIDE 27

CEPH cluster map

  • Objects: computers and containers
    – Each with ID and weight
  • Container → bucket
  • Maps the physical conditions
  • Reflects the data placement rules
  • Known by all OSDs
SLIDE 28

CEPH - RADOS

  • Reliable Autonomic Distributed Object Storage
  • OSD cluster access
    – Via librados
    – Bindings for C, C++, Java, Python, Ruby, PHP
  • POSIX layer
  • 'Visible' to all CEPH cluster members (librados sketch below)
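A minimal librados sketch via the Python binding (python-rados) shipped with CEPH; the pool name 'data' and the config path are assumptions:

```python
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')    # pool must exist
    try:
        ioctx.write_full('greeting', b'hello rados')  # store an object
        print(ioctx.read('greeting'))                 # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```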
SLIDE 29

CEPH Block Device

  • Aka RADOS Block Device (RBD)
  • Upstream in the Linux kernel since 2.6.37
  • RADOS storage exposed via
    – a simple block device
    – an interface library (librbd; sketch below)
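A minimal sketch of the library route using the Python binding (python-rbd): create an image and write to it like a raw disk. The pool and image name are assumptions:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')              # conventional RBD pool

rbd.RBD().create(ioctx, 'vm-disk', 1024 ** 3)  # 1 GiB image
image = rbd.Image(ioctx, 'vm-disk')
image.write(b'bootsector', 0)                  # offset-based, like a raw disk
print(image.size())
image.close()
ioctx.close()
cluster.shutdown()
```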
SLIDE 30

The RADOS picture

SLIDE 31

CEPH Object Gateway

  • Aka RADOS Gateway (RGW)
  • RESTful API
    – Amazon S3
    – Swift APIs!!
  • Proxies HTTP to RADOS (S3 sketch below)
  • Tested with Apache, nginx and lighttpd
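A minimal S3 sketch against RGW with the boto library (2.x); the host and the RGW user's keys are placeholders:

```python
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',               # the RGW frontend
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket')
bucket.new_key('hello.txt').set_contents_from_string('hello rgw')
print([b.name for b in conn.get_all_buckets()])
```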
SLIDE 32

CEPH File System

  • Yes ..
  • But …
  • Skipped here!
SLIDE 33

CEPH Take Aways

  • Scalable
  • Flexible configuration
  • No SPOF
  • Built on commodity hardware
  • Different interfaces
    – Language bindings
    – Protocols
SLIDE 34

Dream team: CEPH and Openstack

SLIDE 35

Remember: Openstack Storage

  • Unification of storage types
  • High availability
  • Scalability
  • Access/APIs
  • Vendor (lock-in)
SLIDE 36

Why CEPH in the first place?

  • One solution for different storage needs
  • Full-blown storage solution
  • Support
  • Operational model
  • Cloud'ish
  • Separation of duties
SLIDE 37

Integration

  • Focus: RADOS/RBD
  • Two parts
    – Authentication
    – Technical access
  • Both sides must be aware of each other
  • Independent for each of the storage components

SLIDE 38

Authentication

  • CEPH part
    – Keyrings (sketch after this list)
    – Configuration for Glance and Cinder
  • Openstack part
    – Glance and Cinder (and Nova)
    – Keystone
      • Only for Swift
      • Needs RGW
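On the CEPH side this boils down to a named client plus its keyring. A minimal sketch of connecting the way the Cinder driver does; the user name, keyring path and the pre-created client.cinder account are assumptions matching a typical setup:

```python
import rados

# Assumes 'ceph auth get-or-create client.cinder ...' was run beforehand.
cluster = rados.Rados(
    conffile='/etc/ceph/ceph.conf',
    rados_id='cinder',   # authenticate as client.cinder
    conf={'keyring': '/etc/ceph/ceph.client.cinder.keyring'},
)
cluster.connect()
print(cluster.get_fsid())   # cluster UUID proves the handshake worked
cluster.shutdown()
```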

SLIDE 39

Access to RADOS/RBD I

  • Via API/libraries
  • CEPHFS
  • Easy for Glance/Cinder
    – CEPH keyring configuration
    – Update of ceph.conf
    – Update of the API configuration (sketch below)
      • Cinder
      • Glance
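The API configuration update amounts to a handful of RBD options. A sketch applying them with Python's configparser; the option names follow the Juno-era driver documentation, and the pool/user names (volumes/images, cinder/glance) are the usual conventions, not requirements:

```python
from configparser import ConfigParser

cinder = ConfigParser()
cinder.read('/etc/cinder/cinder.conf')
cinder['DEFAULT'].update({
    'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver',
    'rbd_pool': 'volumes',
    'rbd_user': 'cinder',
    'rbd_ceph_conf': '/etc/ceph/ceph.conf',
})
with open('/etc/cinder/cinder.conf', 'w') as f:
    cinder.write(f)

glance = ConfigParser()
glance.read('/etc/glance/glance-api.conf')
glance['DEFAULT'].update({
    'default_store': 'rbd',
    'rbd_store_pool': 'images',
    'rbd_store_user': 'glance',
    'rbd_store_ceph_conf': '/etc/ceph/ceph.conf',
})
with open('/etc/glance/glance-api.conf', 'w') as f:
    glance.write(f)
```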

SLIDE 40

Access to RADOS/RBD II

  • Swift → more work
  • CEPHFS
  • CEPH Object Gateway
    – Web server
    – RGW software
    – Keystone certificates
    – Keystone authentication
    – Endpoint configuration → RGW (swiftclient sketch below)
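Once RGW is registered as the object-store endpoint in Keystone, unmodified Swift clients keep working. A minimal sketch with python-swiftclient; auth URL, tenant and credentials are placeholders:

```python
from swiftclient.client import Connection

conn = Connection(
    authurl='http://keystone.example.com:5000/v2.0',
    user='demo',
    key='secret',
    tenant_name='demo',
    auth_version='2',   # Keystone v2 auth; the endpoint resolves to RGW
)

conn.put_container('backups')
conn.put_object('backups', 'hello.txt', contents=b'served by RGW')
headers, objects = conn.get_container('backups')
print([o['name'] for o in objects])
```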
SLIDE 41

Integration – the full picture

[Diagram: Glance, Cinder and Nova use the CEPH Block Device (via qemu/kvm); Swift clients and Keystone talk to the CEPH Object Gateway; both paths sit on RADOS]

SLIDE 42

Integration pitfalls

  • CEPH versions not in sync
  • Authentication
  • CEPH Object Gateway setup
  • Openstack version specifics
SLIDE 43

CEPH and Openstack – Commercial

  • RHEL Openstack Platform
  • SUSE Openstack Cloud
  • Mirantis Openstack
  • Ubuntu Openstack
SLIDE 44

Why CEPH – reviewed

  • Previous arguments still valid :-)
  • High integration
  • Modular usage
  • No need for POSIX compatible interface
  • Works even with other IaaS implementations
SLIDE 45

Summary

SLIDE 46

Take Aways

  • Openstack storage challenges
  • CEPH
  • Sophisticated storage engine
  • Mature
  • Can be used elsewhere
  • CEPH + Openstack = <3
SLIDE 47

References

  • http://ceph.com
  • http://www.openstack.org
SLIDE 48

Thank you!

SLIDE 49

All for one!

CEPH and Openstack: A Dream Team

Udo Seidel