SLIDE 1

Providing Hybrid Block Storage for Virtual Machines using Object-based Storage

Sixiang Ma*, Haopeng Chen*, Heng Lu*, Bin Wei§, Pujiang He§ *Shanghai Jiao Tong University Email: {masixiang, chen-hp, lu007heng}@sjtu.edu.cn §Intel Asia-Pacific R&D Ltd. Email: {bin.wei, pujiang.he}@intel.com

SLIDE 2

REliable, INtelligent & Scalable Systems

Trends: Virtualization

- Virtualization
  - Key technology for increasing resource sharing
  - 70% of x86 servers are virtualized
- Virtual Block Devices
  - Network-based storage
  - Amazon EBS, Ceph RBD, Sheepdog, GlusterFS, etc.
  - Higher scalability, availability, and manageability than direct-attached disks

SLIDE 3

Trends: SSDs and Hybrid Storage

- SSDs play a critical role in the storage landscape
  - Superior random I/O performance compared to HDDs
  - VMs demanding high storage performance benefit
  - Higher cost per unit of capacity than HDDs
- Hybrid storage systems provide a middle ground
  - Cost savings from HDDs
  - Performance improvement from SSDs

SLIDE 4

Issues: Hybrid Storage System for VMs

- Virtualized workloads
  - Virtual Machine Disk Images (VMDIs)
  - Most I/Os access unstructured data
- High availability guarantees
  - Service Level Agreements (SLAs)
  - Offline methods are infeasible
- Data migration hurts scalability
  - Ideally, a cloud service is expected to expand without limit
  - Data migration creates resource bottlenecks

SLIDE 5

Addressing the Issues using Object-based Storage

- Object-based Storage
  - Objects are logical storage entities with file-like access
  - Object Storage Devices (OSDs) provide a higher-level interface than block storage
  - Clients access data directly from OSDs -> high performance
  - Data (e.g., VMDIs) are striped and randomly distributed among OSDs for load balancing and parallelism
  - No metadata nodes as in file systems -> higher scalability
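As a sketch of how striping maps a disk-image offset to an object: the object size and naming scheme below are illustrative assumptions (4 MB is Ceph's default object size), not MOBBS's exact parameters.

```python
OBJECT_SIZE = 4 * 1024 * 1024  # illustrative: 4 MB objects, as in Ceph's default

def locate(image_id: str, byte_offset: int) -> tuple[str, int]:
    """Map a byte offset in a disk image to (object name, offset in object)."""
    obj_index = byte_offset // OBJECT_SIZE
    return (f"{image_id}.{obj_index:08x}", byte_offset % OBJECT_SIZE)

# A hash of the object name (e.g. CRUSH in Ceph) then picks the OSDs,
# so objects spread pseudo-randomly across the cluster.
print(locate("vmdi-42", 9 * 1024 * 1024))  # ('vmdi-42.00000002', 1048576)
```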

SLIDE 6

What Our Research Focuses On

[Figure: Venn diagram — our work, Hybrid VBDs using Object-based Storage, sits at the intersection of Hybrid Storage Systems, Virtual Block Devices, and Object-based Storage]

SLIDE 7

Background: I/O Virtualization

[Figure: I/O virtualization stack — apps in the VM issue I/O through a controller driver and local file system; the hypervisor's emulated I/O controller forwards it to a virtual block device backed by the underlying storage system (locally attached drives, disk image files, or a network file system)]

Storage back-ends for virtual block devices:

- Local solutions
  - 2a. Direct-Attached Storage (DAS)
  - 2b. Files on local file systems
- Network-based solutions
  - 1a. Network-based file system
  - 2c. Block-based
  - 2d. File-based
  - 2e. Object-based

2e (object-based) is what our work focuses on.

SLIDE 8

System Architecture of MOBBS

[Figure: MOBBS architecture — on the client, a Mapper, Analyzer, and Migrater sit above the Object Client Component; block I/O requests from the hypervisor's emulated I/O controller are translated into object I/O requests via an Extent Table (extent id -> value); Migration Commands are split into Sub-Migration Commands sent to OSD Migraters, which run alongside the OSD Interface and file system on each OSD; Monitors handle failure detection; OSDs are grouped into an SSD pool and an HDD pool]

SLIDE 9

The Hybrid Pool

- Static object placement in current object-based systems
  - One disk image, one storage pool
  - Cannot take advantage of I/O locality
- MOBBS stripes a VMDI into extents (each a multiple of objects) and stores them in different pools
  - Monitors real-time workloads
  - Reorganizes extents between pools dynamically

[Figure: a VMDI striped into extents; each extent's objects are stored in either the HDD pool or the SSD pool of the hybrid pool]
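The extent-table lookup described above can be sketched as follows; the 64 MB extent size, the pool names, and the default placement on HDDs are assumptions for illustration.

```python
EXTENT_SIZE = 64 * 1024 * 1024  # illustrative extent size (a multiple of the object size)

# Extent table: extent id -> pool currently holding that extent.
extent_table = {0: "hdd", 1: "ssd", 2: "hdd"}

def route(byte_offset: int) -> tuple[int, str]:
    """Find which extent a block I/O hits and which pool serves it."""
    extent_id = byte_offset // EXTENT_SIZE
    # Assumption: extents not yet in the table start on HDDs.
    return extent_id, extent_table.get(extent_id, "hdd")

print(route(70 * 1024 * 1024))  # (1, 'ssd')
```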

SLIDE 10

Placement Model: SSDs vs. HDDs

[Figure: read and write bandwidth (MB/s) vs. request size (1 KB to 4 MB) for ssd-pool-seq, ssd-pool-ran, hdd-pool-seq, and hdd-pool-ran]

SSDs excel at:
- Small I/Os
- Random I/Os

SLIDE 11

Placement Model: Pool Identification

- Goal: maximize the share of small and random I/Os served by SSDs
- Calculate a beneficial score (BS) for each I/O
- Calculate a beneficial rate (BR) for each extent
- The higher the BR, the more beneficial it is to store the extent on SSDs
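The slide does not give the BS/BR formulas, so the scoring below is a hypothetical version consistent with the stated goal (small and random I/Os score higher); the base size, weights, and function names are all assumptions.

```python
def beneficial_score(size: int, is_random: bool, base_size: int = 4096) -> float:
    """Hypothetical BS: highest for small I/Os, boosted when the I/O is random."""
    score = base_size / max(size, base_size)  # 1.0 for <= 4 KB, shrinking as size grows
    return score * (2.0 if is_random else 1.0)

def beneficial_rate(ios: list[tuple[int, bool]]) -> float:
    """Hypothetical BR of an extent: average BS over its recent I/Os."""
    if not ios:
        return 0.0
    return sum(beneficial_score(s, r) for s, r in ios) / len(ios)

# Extents would then be ranked by BR; the highest-BR extents go to the SSD pool.
hot = beneficial_rate([(4096, True), (4096, True)])  # small random I/Os
cold = beneficial_rate([(1 << 20, False)])           # one large sequential I/O
assert hot > cold
```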

SLIDE 12

Migration Distribution

- Striping extent migration into object migrations
- The OSD storing an object takes responsibility for the real data migration
  - Reads locally instead of over the network; only one write operation
  - Objects migrate concurrently
  - Little burden on VMs; data migration is absorbed across the OSD cluster

[Figure: an extent migration is split into object migrations fanned out across the OSDs; control I/O carries the migration commands while each OSD performs the data I/O]
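A minimal sketch of the distribution step, under the assumption that the client knows which OSD holds each object; the function name and command shape are illustrative, and the real OSD Migraters are daemons inside the OSDs rather than client-side code.

```python
def split_migration(
    extent_objects: dict[str, str], target_pool: str
) -> dict[str, list[tuple[str, str]]]:
    """Split one extent migration into per-OSD sub-migration commands.

    extent_objects maps object name -> OSD currently storing it; each OSD
    then reads its own objects locally and writes them once to the target pool.
    """
    commands: dict[str, list[tuple[str, str]]] = {}
    for obj, osd in extent_objects.items():
        commands.setdefault(osd, []).append((obj, target_pool))
    return commands

cmds = split_migration(
    {"vmdi.0000": "osd.1", "vmdi.0001": "osd.3", "vmdi.0002": "osd.1"},
    "ssd-pool",
)
# osd.1 migrates two objects while, concurrently, osd.3 migrates one.
```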

SLIDE 13

Implementation Issues

- Ceph 0.72
  - ~2,500 lines of code in the librbd module
  - No modification required in the OSD module
  - OSD Migraters are user-level daemons
- KVM-QEMU
  - Large changes avoided
  - Only 12 lines modified

SLIDE 14

Evaluation: Methodology

Testbed: 6 SSD OSDs + 6 HDD OSDs. Three pools are compared: a Ceph SSD pool, a Ceph hybrid pool, and a MOBBS pool. A VBD is created from each pool and attached to a client VM; workloads are generated with Fio for block I/O and with Filebench on Ext4 for file-system tests.
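The Zipf-skewed random-write workloads used on the next slides could be reproduced with an fio job along these lines; the device path, runtime, and theta value are assumptions, not the paper's exact settings.

```ini
[mobbs-randwrite]
filename=/dev/vdb            ; the attached virtual block device (assumed path)
rw=randwrite
bs=4k
random_distribution=zipf:1.2 ; skew parameter; the slides sweep roughly 1.25-2.25
direct=1
ioengine=libaio
runtime=300
time_based
```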

SLIDE 15

Evaluation: Block I/O Workloads

Increasing skewness of random 4k writes

[Figure: throughput (MB/s, left axis) and SSD ratio (%, right axis) vs. Zipf distribution parameter (1.25 to 2.25) for ceph-ssd-vbd, ceph-hybrid-vbd, and mobbs-vbd]

- MOBBS provides higher throughput than hybrid Ceph
- MOBBS approaches SSD Ceph as the workload becomes more skewed
- SSD usage drops

SLIDE 16

Evaluation: Block I/O Workloads

Different I/O sizes of Zipf-1.5 random writes

[Figure: throughput (MB/s, left axis) and SSD ratio (%, right axis) vs. I/O request size (16 KB to 512 KB) for ceph-ssd-vbd, ceph-hybrid-vbd, and mobbs-vbd]

- Throughput of hybrid Ceph increases as I/O size grows
- MOBBS outperforms hybrid Ceph with small I/Os and matches it with large I/Os
- SSD usage drops as I/O size increases, while both hybrid Ceph and MOBBS approach SSD Ceph

SLIDE 17

Evaluation: File System Ext4

- IOPS of four applications: fileserver, varmail, webserver, videoserver
- No SSD usage for videoserver; equivalent performance

[Figure: IOPS (op/s) for ceph-ssd-vbd, mobbs-vbd, and ceph-hybrid-vbd on fileserver, varmail, webserver, and videoserver; SSD usage per application: 41%, 22%, 28%, 0%]

SLIDE 18

Evaluation: File System Ext4

Average latencies of the four applications:

[Figure: average latency (ms) for ceph-ssd-vbd, mobbs-vbd, and ceph-hybrid-vbd on fileserver, varmail, webserver, and videoserver]

SLIDE 19

Evaluation: File System XFS

- IOPS of four applications: fileserver, varmail, webserver, videoserver
- No SSD usage for videoserver; equivalent performance

[Figure: IOPS (op/s) for ceph-ssd-vbd, mobbs-vbd, and ceph-hybrid-vbd on fileserver, varmail, webserver, and videoserver; SSD usage per application: 37%, 15%, 25%, 0%]

SLIDE 20

Evaluation: File System XFS

Average latencies of the four applications:

[Figure: average latency (ms) for ceph-ssd-vbd, mobbs-vbd, and ceph-hybrid-vbd on fileserver, varmail, webserver, and videoserver]

SLIDE 21

Thank You! Q&A