
Dependability and Security with Clouds-of-Clouds: lessons learned from n years of research
Miguel Correia, Workshop on Dependability and Interoperability in Heterogeneous Clouds (DIHC'13), August 27th 2013, Aachen, Germany


  1. Dependability and Security with Clouds-of-Clouds: lessons learned from n years of research
     Miguel Correia
     Workshop on Dependability and Interoperability in Heterogeneous Clouds (DIHC'13), August 27th 2013, Aachen, Germany
     Outline:
     • Motivation
     • Opportunities and challenges
     • Three example clouds-of-clouds: Storage – DepSky; Processing – BFT MapReduce; Services – EBAWA
     • Conclusions

  2. Motivation
     • Clouds are complex, so they fail
     • These faults can stop services and corrupt state and execution: Byzantine faults

  3. Cloud-of-Clouds
     • The consumer runs a service on a set of clouds forming a virtual cloud, what we call a cloud-of-clouds
     • Related to the notion of federation of clouds: "federation of clouds" suggests a virtual cloud created by providers, whereas "cloud-of-clouds" suggests a virtual cloud created by consumers, possibly for improving dependability and security
     Cloud-of-Clouds dependability + security
     • There is cloud redundancy and diversity, so even if some clouds fail, a cloud-of-clouds that implements replication can still guarantee:
       – Availability: if some clouds stop, the others are still there
       – Integrity: the clouds can vote on which data is correct
       – Disaster tolerance: clouds can be geographically far apart
       – No vendor lock-in: several clouds are used anyway

  4. Opportunities and challenges
     Replication / geo-replication in clouds
     • Provides opportunities and challenges
     • Some data from Amazon EC2:
       – Not different clouds, but close enough
       – Data collected roughly hourly during August 2-15, 2013
       – One micro instance (virtual server) per Amazon region

  5. Geographical redundancy and diversity: Amazon EC2 regions and availability zones
     • Each region is completely independent
     • Each availability zone (AZ) is isolated
     • Note: personal map; positions may not be accurate
     Network redundancy and diversity
     [Figure: AS-level paths between EC2 regions; two labels on an edge mean a different path per direction, and the label that counts is the one closest to the destination]
     • ASes provide another level of diversity (most ISPs have more than one)
     • ISPs observed on August 2nd (a few changes were observed over the two weeks)
     • This is not the complete graph; several edges are missing

  6. Latency: high and variable
     [Figure: RTT (ms, 0-700) between EC2 region pairs (si-sp, ir-sy, ir-to, ir-sp, ir-nc, ir-nv, nc-or), measured August 2-13, 2013]
     • Compare with 0.2 ms in an Ethernet LAN
     Throughput: low and variable
     [Figure: throughput (Mbit/s, 0-100) for the same region pairs over the same period, listed in the opposite order]
     • Important: throughput is higher with better instances (we used micro instances)

  7. Economic cost (data transfer)
     • Cost for data transfer IN to EC2 from the Internet: $0
     • Cost for data transfer OUT from EC2 to the Internet:
     [Figure: data transferred (1 GB to 611 TB, logarithmic vertical axis) versus cost (US$ 0 to about 29,300); data obtained in August 2013 at http://aws.amazon.com/ec2/pricing/]
     CAP theorem
     • It is impossible for a web service to provide all three of the following guarantees:
       – Consistency
       – Availability
       – Partition tolerance
     • Network diversity suggests partitions are unlikely: nodes may get isolated, but not whole sets of nodes from other sets
     • Still, relaxed consistency may be offered if partitions happen
     • This is a current research topic; we won't address it here

  8. Storage – DepSky
     • A (client-side) library for cloud-of-clouds storage: file storage, similar to Amazon S3 (read/write data, etc.)
     • Uses storage clouds as they are: no specific code in the cloud
     • Data is updatable: Byzantine quorum replication protocols are used for consistency
     • Example clouds: Amazon S3, Nirvanix, Rackspace, Windows Azure

  9. Write protocol
     [Figure: the writer sends the file to clouds A-D (WRITE FILE), waits for ACKs, and then writes the metadata (WRITE METADATA) with the new version number]
     Read protocol
     [Figure: the reader requests the metadata from the clouds, picks the highest version number, and then requests the file from one cloud, e.g. the fastest or cheapest one]
     • The file is fetched from other clouds if the signature doesn't match the file
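
To make the two protocols above concrete, here is a minimal sketch of the quorum logic, assuming a hypothetical per-cloud object with put(name, data) and get(name) operations and externally supplied sign/verify functions; it only illustrates the structure of DepSky's write and read, not the actual implementation.

```python
import hashlib
import json

F = 1              # maximum number of faulty clouds tolerated
N = 3 * F + 1      # e.g. 4 clouds: Amazon S3, Nirvanix, Rackspace, Windows Azure
QUORUM = N - F     # acknowledgements needed before a write completes

def dep_write(clouds, unit, version, data, sign):
    """Write the file to a quorum of clouds, then write the signed metadata."""
    digest = hashlib.sha256(data).hexdigest()
    acks = 0
    for cloud in clouds:                   # in the real system these calls run in parallel
        try:
            cloud.put(f"{unit}/value-{version}", data)
            acks += 1
        except Exception:
            pass                           # a faulty or unreachable cloud is simply skipped
    if acks < QUORUM:
        raise RuntimeError("not enough clouds acknowledged the file")
    meta = json.dumps({"version": version, "digest": digest})
    for cloud in clouds:                   # metadata carries the version and the file digest
        try:
            cloud.put(f"{unit}/metadata", meta + "|" + sign(meta))
        except Exception:
            pass

def dep_read(clouds, unit, verify):
    """Read metadata from the clouds, pick the highest version, fetch and check the file."""
    metas = []
    for cloud in clouds:
        try:
            raw = cloud.get(f"{unit}/metadata")
            meta, sig = raw.rsplit("|", 1)
            if verify(meta, sig):          # only correctly signed metadata is considered
                metas.append(json.loads(meta))
        except Exception:
            pass
    newest = max(metas, key=lambda m: m["version"])
    for cloud in clouds:                   # try the fastest or cheapest cloud first
        try:
            data = cloud.get(f"{unit}/value-{newest['version']}")
        except Exception:
            continue
        if hashlib.sha256(data).hexdigest() == newest["digest"]:
            return data                    # one matching copy is enough
    raise RuntimeError("no cloud returned data matching the signed digest")
```

The point on the slide falls out of this structure: because the metadata is signed and binds the file digest, a single matching copy from any one cloud is enough for a read, and the client simply tries other clouds when the digest does not match.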

  10. Limitations of the solution so far
     • Data is accessible by the cloud providers
     • Requires n × |Data| storage space
     Combining erasure codes and secret sharing (for the data only, not the metadata)
     • Write: encrypt the data with a random key K, disperse the ciphertext into fragments F1..F4 with an erasure code, split K into shares S1..S4 with secret sharing, and store one fragment and one share per cloud (cloud A gets F1 and S1, and so on)
     • Read: the inverse process, from any f+1 shares/fragments
     • The data is encrypted, so it can't be read at any single cloud
     • Total storage is only about twice the size of the data, not 4 times
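
Below is a minimal sketch of the confidentiality side of this scheme, using only the Python standard library: the data is encrypted under a random key and the key is split with Shamir secret sharing so that any f+1 clouds can rebuild it while f or fewer learn nothing. The stream cipher is a toy stand-in for AES, and for brevity every cloud receives the whole ciphertext; in DepSky the ciphertext is additionally erasure-coded so that each cloud stores only one fragment and the total stored is about twice |Data| rather than n times.

```python
import hashlib
import os
import secrets

PRIME = 2**521 - 1   # Mersenne prime, comfortably larger than a 128-bit key
F = 1                # tolerated faulty clouds
N = 3 * F + 1        # 4 clouds
K = F + 1            # shares (and, in DepSky, fragments) needed to recover

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with SHA-256 in counter mode; stands in for AES in this sketch."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], pad))
    return bytes(out)

def share_key(key_int: int, k: int = K, n: int = N):
    """Shamir secret sharing: any k of the n shares reconstruct the key."""
    coeffs = [key_int] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, PRIME) for j, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct_key(shares) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Write path: encrypt once, give each cloud one key share (plus, in DepSky,
# one erasure-coded fragment of the ciphertext instead of the full copy used here).
data = b"some business-critical record"
key = os.urandom(16)
ciphertext = toy_encrypt(key, data)
per_cloud = list(zip([ciphertext] * N, share_key(int.from_bytes(key, "big"))))

# Read path: any K = f+1 clouds suffice to rebuild the key and decrypt.
shares = [per_cloud[0][1], per_cloud[2][1]]
recovered = reconstruct_key(shares).to_bytes(16, "big")
assert toy_encrypt(recovered, ciphertext) == data
```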

  11. DepSky latency (100 KB files, clients on PlanetLab nodes)
     • DepSky read latency is close to that of the cloud with the best latency
     • DepSky write latency is close to that of the cloud with the worst latency
     Lessons from DepSky
     • Provides availability, integrity, disaster tolerance, no vendor lock-in, and confidentiality
     • Insights:
       – Some clouds can be faulty, so we need Byzantine quorum system protocols (to reason about subsets of clouds)
       – Signed data allows reading from a single cloud, so reads can be faster or cheaper than average
       – Erasure codes can reduce the size of the data stored
       – Secret sharing can be used to store cryptographic keys in clouds (avoiding the need for a key distribution service)

  12. Processing – BFT MapReduce
     What is MapReduce?
     • A programming model plus an execution environment, introduced by Google in 2004, used for processing large data sets in clusters of servers
     • Hadoop MapReduce is an open-source MapReduce: the most used, and the one we have been using; it includes HDFS, a file system for large files
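
For readers who have not used the model, MapReduce boils down to two user-supplied functions, map and reduce, with the framework doing the grouping in between. The toy, single-process word count below (plain Python, not Hadoop code) shows the shape of a job:

```python
from collections import defaultdict

def map_fn(_, line):
    """Map: emit a (word, 1) pair for every word in an input line."""
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    """Reduce: sum the counts collected for one word."""
    return (word, sum(counts))

def mapreduce(records):
    groups = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):          # map phase
            groups[k].append(v)                  # shuffle: group values by key
    return [reduce_fn(k, vs) for k, vs in groups.items()]   # reduce phase

print(mapreduce([(0, "clouds of clouds"), (1, "clouds fail")]))
# [('clouds', 3), ('of', 1), ('fail', 1)]
```

In Hadoop the same two functions run as many parallel map and reduce tasks over data spread across the cluster's servers, which is what the next figures illustrate.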

  13. MapReduce basic idea
     [Figure: map and reduce tasks spread over the cluster's servers]
     Job submission and execution
     [Figure]

  14. The problem
     • The original Hadoop MapReduce tolerates the most common faults:
       – The job tracker detects and recovers crashed or stalled map/reduce tasks
       – Corrupted files are detected (a hash is stored with each block)
     • But execution can be corrupted and tasks can return wrong output
     • and clouds can suffer outages
     BFT MapReduce
     • Basic idea: replicate each task in different clouds and vote on the results returned by the replicas
     • Inputs are initially stored in all clouds
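
A minimal sketch of the voting step is below. It assumes a hypothetical run_task(cloud, task) call that launches one replica and returns its output; outputs are compared through digests. It only illustrates the idea, not the BFT MapReduce code.

```python
import hashlib
from collections import Counter

F = 1                                      # tolerated faulty or malicious replicas

def digest(output: bytes) -> str:
    return hashlib.sha256(output).hexdigest()

def run_voted(task, clouds, run_task):
    """Run the same task in 2f+1 clouds and accept the majority output."""
    outputs = [run_task(cloud, task) for cloud in clouds[:2 * F + 1]]
    votes = Counter(digest(o) for o in outputs)
    winner, count = votes.most_common(1)[0]
    if count < F + 1:
        raise RuntimeError("no output was produced by f+1 replicas")
    # any replica whose digest matches the winner holds a correct copy
    return next(o for o in outputs if digest(o) == winner)
```

With f+1 matching outputs at least one comes from a correct cloud, so the value can be accepted even if up to f replicas lied or failed.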

  15. Original MR – Map perspective
     [Figure]
     BFT MR – Map perspective
     [Figure: replicas of each map task run in different clouds and their outputs are voted on]

  16. Original MR – Reduce perspective
     [Figure]
     BFT MR – Reduce perspective
     [Figure: replicas of each reduce task run in different clouds and their outputs are voted on]

  17. Deferred execution
     • Faults are uncommon; consider a maximum of f faults
     • The job tracker creates only f+1 replicas, in f+1 clouds (the other f clouds are on standby)
     • If the results differ or one cloud stops, one more replica is requested (up to f additional replicas)
     Distributed job tracker
     • The job tracker controls all task executions in the task trackers (e.g., starting tasks, detecting faults)
       – If the job tracker is in one cloud, separated from many task trackers by the Internet, control operations have high latency and the job tracker is a single point of failure
     • So the job tracker is distributed:
       – Each cloud has one job tracker (JT)
       – Each JT controls the tasks in its own cloud; there is no "remote control"
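
Deferred execution changes the loop in the earlier voting sketch: only f+1 replicas run at first, and an extra replica is launched, one at a time, only when the results disagree or a cloud stops answering. A minimal sketch with the same hypothetical run_task placeholder:

```python
import hashlib
from collections import Counter

def run_deferred(task, clouds, run_task, f=1):
    """Launch f+1 replicas; add one more only when results differ or a cloud stops."""
    votes = Counter()                      # digest of an output -> replicas that produced it
    needed = f + 1                         # replicas we try to run before voting
    for cloud in clouds:                   # the last f clouds act as standby replicas
        try:
            out = run_task(cloud, task)
        except Exception:
            continue                       # stopped cloud: the next cloud takes its place
        d = hashlib.sha256(out).hexdigest()
        votes[d] += 1
        if votes[d] >= f + 1:
            return out                     # f+1 identical results: accept this output
        if sum(votes.values()) >= needed:
            needed += 1                    # results differ: defer to one more replica
    raise RuntimeError("no output matched in f+1 replicas")
```

In the common fault-free case only f+1 replicas ever run, so the cloud-of-clouds costs roughly (f+1) times a single execution instead of (2f+1) times.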

  18. WAN communication
     • All this communication goes through the WAN, which means high delay and monetary cost
       – The data transferred per pair of clouds can even be the size of the split (e.g., megabytes)
     Solution: digest communication
     • When reduce tasks fetch the map task outputs:
       – Intra-cloud fetch: the output is fetched normally, within the same cloud
       – Inter-cloud fetch: only a hash of the output is fetched from the other clouds
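
The same digest trick is what keeps the shuffle cheap across the WAN: a reduce replica fetches the full map output only from a replica in its own cloud and, from the other clouds, fetches just a hash to confirm the local copy. A small sketch, with hypothetical fetch_output and fetch_digest placeholders (they are not Hadoop APIs):

```python
import hashlib

def fetch_map_output(map_replicas, my_cloud, fetch_output, fetch_digest, f=1):
    """Full transfer inside the cloud, hash-only transfers across clouds."""
    local = next(r for r in map_replicas if r["cloud"] == my_cloud)
    data = fetch_output(local)                          # intra-cloud: normal fetch
    local_digest = hashlib.sha256(data).hexdigest()
    remote = (r for r in map_replicas if r["cloud"] != my_cloud)
    matches = 1 + sum(fetch_digest(r) == local_digest for r in remote)  # inter-cloud: hash only
    if matches >= f + 1:
        return data                                     # f+1 clouds vouch for this map output
    raise RuntimeError("local map output does not match enough remote digests")
```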
