CERN, June 2008: large, reliable, and secure distributed online storage

  1. CERN, June 2008

  2. large, reliable, and secure distributed online storage harness idle resources of participating computers

  3. old dream of computer science

  4. “The design of a world-wide, fully transparent distributed file system for simultaneous use by millions of mobile and frequently disconnected users is left as an exercise for the reader.” A. Tanenbaum, Distributed Operating Systems, 1995

  5. lots of research projects: OceanStore (UC Berkeley), PAST (Microsoft Research), CFS (MIT)

  6. we were inspired by them wanted to make it work first step: closed alpha

  7. upload any file in any size access from anywhere share with friends and groups publish to the world

  8. free and simple application Win, Mac, Linux start from the web, no installation required start with 1 GB provided by us if you want more, you can trade or buy storage

  9. online storage with the “power of P2P” fast downloads no file size limit no traffic limit

  10. privacy all files are encrypted on your computer your password never leaves your computer so no one, not even we, can see your files

  11. how does it work?

  12. data stored in the p2p network; users' computers can be offline; how to ensure availability (persistent storage)?

  13. two approaches: 1. make sure the data is always in the network, i.e., move the data when a computer goes offline (a bad idea for lots of data and a high churn rate) 2. introduce redundancy

  14. redundancy = replication? p = node availability, k = redundancy factor, p_rep = 1 - (1 - p)^k = file availability

  15. redundancy = replication? example: p = 0.25, k = 5 → p_rep = 0.763, not enough

  16. redundancy = replication? example: p = 0.25, k = 24 → p_rep = 0.999, unrealistic
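
The replication numbers on slides 14–16 follow directly from p_rep = 1 - (1 - p)^k. A minimal Python sketch to check them:

```python
# Availability of a file stored with k-fold replication on nodes that are
# each online with probability p: the file is unavailable only if all k
# replicas are offline, so p_rep = 1 - (1 - p)^k.

def replication_availability(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

print(replication_availability(0.25, 5))   # ~0.763 -> not enough
print(replication_availability(0.25, 24))  # ~0.999 -> but 24x overhead, unrealistic
```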

  17. erasure codes: encode m fragments into n, need any m out of n to reconstruct; reed-solomon (optimal codes, used in RAID storage systems) vs. low-density parity-check codes, which need (1 + e) * m fragments, where e is a fixed, small constant

  18. availability: p = 0.25, m = 100, n = 517, k = n/m = 5.17 → p_ec = 0.999; k = 5.17 vs. k = 24 using replication
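
With an (m, n) erasure code the file is available whenever at least m of the n fragments sit on reachable nodes, so the availability is a binomial tail sum. A minimal sketch reproducing the slide's numbers:

```python
from math import comb

# Availability with an (m, n) erasure code on nodes that are each online with
# probability p: the file can be rebuilt from any m fragments, so
# p_ec = sum_{i=m}^{n} C(n, i) * p^i * (1-p)^(n-i).

def erasure_availability(p: float, m: int, n: int) -> float:
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(m, n + 1))

# Slide 18: p = 0.25, m = 100, n = 517, i.e. the same expansion factor
# k = n/m = 5.17 that gave only 0.763 with plain replication.
print(erasure_availability(0.25, 100, 517))  # ~0.999
```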

  19.–21. [figure: x/y plot of a curve drawn through d points, illustrating the polynomial view of erasure coding]
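
The figure on slides 19–21 shows the idea behind Reed-Solomon codes: the data defines a polynomial, fragments are points on it, and any m points pin the polynomial down again. A toy sketch over a small prime field; the field size, sample data, and fragment counts here are illustrative, not Wuala's parameters:

```python
# Toy illustration of the "any m out of n" property: m data symbols are the
# coefficients of a degree-(m-1) polynomial over a prime field; evaluating it
# at n > m points gives n fragments, and any m fragments recover the
# polynomial (and hence the data) by Lagrange interpolation.
PRIME = 257  # small prime field, big enough for byte-valued symbols

def encode(data, n):
    # fragment j is the point (x, P(x)) with x = 1..n
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(data)) % PRIME)
            for x in range(1, n + 1)]

def decode(fragments, m):
    pts = fragments[:m]
    coeffs = [0] * m
    for j, (xj, yj) in enumerate(pts):
        # numerator of the Lagrange basis polynomial prod_{k != j} (x - xk)
        basis, denom = [1], 1
        for k, (xk, _) in enumerate(pts):
            if k == j:
                continue
            denom = denom * (xj - xk) % PRIME
            nxt = [0] * (len(basis) + 1)
            for i, b in enumerate(basis):
                nxt[i] = (nxt[i] - xk * b) % PRIME      # multiply by -xk
                nxt[i + 1] = (nxt[i + 1] + b) % PRIME   # multiply by x
            basis = nxt
        scale = yj * pow(denom, PRIME - 2, PRIME) % PRIME  # divide via Fermat inverse
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % PRIME
    return coeffs

data = [104, 105, 33]                    # m = 3 symbols
frags = encode(data, n=7)                # n = 7 fragments
assert decode(frags[4:], m=3) == data    # any 3 of the 7 fragments suffice
```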

  22. alice stores a file roadtrip.mpg

  23. alice drags roadtrip.mpg into wuala

  24. 1. encrypted on alice’s computer (128 bit AES)

  25. 1. encrypted on alice’s computer (128 bit AES) 2. encoded into redundant fragments

  26. 1. encrypted on alice’s computer (128 bit AES) 2. encoded into redundant fragments 3. uploaded into the p2p network

  27. 1. encrypted on alice’s computer (128 bit AES) 2. encoded into redundant fragments 3. uploaded into the p2p network 4. m fragments uploaded onto our servers (bootstrap, backup)
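
A minimal sketch of the store path from slides 24–27, assuming the third-party `cryptography` package for the AES step. The deck only says 128-bit AES; the GCM mode, the plain chunk replication standing in for real erasure coding, and the dict-based "network" are assumptions for illustration, not Wuala's implementation:

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

p2p_network, servers = {}, {}   # stand-ins for the DHT and for Wuala's servers

def store_file(plaintext: bytes, file_key: bytes, m: int, expansion: int):
    # 1. encrypted on alice's computer (128-bit AES; GCM mode is an assumption here)
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(file_key).encrypt(nonce, plaintext, None)

    # 2. encoded into redundant fragments -- the real client uses erasure codes
    #    (slide 17); plain copies of m chunks are used here only as a stand-in
    chunk = max(1, -(-len(ciphertext) // m))               # ceiling division
    chunks = [ciphertext[i:i + chunk] for i in range(0, len(ciphertext), chunk)]
    fragments = chunks * expansion                          # roughly n = m * expansion fragments

    # 3. uploaded into the p2p network (content-addressed by hash)
    for i, frag in enumerate(fragments):
        p2p_network[hashlib.sha256(bytes([i % 256]) + frag).hexdigest()] = frag

    # 4. m fragments also uploaded onto the servers (bootstrap, backup)
    for i, frag in enumerate(fragments[:m]):
        servers[f"fragment-{i}"] = frag

key = AESGCM.generate_key(bit_length=128)
store_file(b"roadtrip.mpg bytes ...", key, m=4, expansion=5)
```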

  28. alice shares the file with bob: alice and bob have a friendship key; alice encrypts the file key and exchanges it with bob; bob wants to download the file

  29. p2p network

  30. 1. download a subset of m fragments from the p2p network

  31. 1. download a subset of m fragments from the p2p network; if necessary, get the remaining fragments from our servers

  32. 1. download a subset of m fragments from the p2p network 2. decode the file

  33. 1. download a subset of m fragments from the p2p network 2. decode the file 3. decrypt the file

  34. 1. download a subset of m fragments from the p2p network 2. decode the file 3. decrypt the file → bob plays roadtrip.mpg
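
The fetch step on slides 30–31 (grab any m fragments from the p2p network, fall back to the servers for the rest) as a small sketch; decoding and decrypting then mirror the store path above. The two dicts and the fragment ids are stand-ins, not Wuala's protocol:

```python
def fetch_fragments(fragment_ids, p2p_network: dict, servers: dict, m: int):
    # 1. download a subset of m fragments from the p2p network ...
    got = {fid: p2p_network[fid] for fid in fragment_ids if fid in p2p_network}
    # ... and, if necessary, get the remaining fragments from the servers
    for fid in fragment_ids:
        if len(got) >= m:
            break
        if fid not in got and fid in servers:
            got[fid] = servers[fid]
    if len(got) < m:
        raise RuntimeError("fewer than m fragments reachable")
    return got   # any m fragments are enough to decode (slide 17)

# example: fragments 0..9 exist, but only a few live in the p2p network right now
p2p = {f"frag-{i}": b"..." for i in (0, 3, 7)}
backup = {f"frag-{i}": b"..." for i in range(4)}
print(sorted(fetch_fragments([f"frag-{i}" for i in range(10)], p2p, backup, m=5)))
```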

  35. p2p network

  36. maintenance p2p network

  37. maintenance alice’s computer checks and maintains her files p2p network

  38. maintenance alice’s computer checks and maintains her files if necessary, it constructs new fragments and uploads them p2p network

  39. maintenance alice’s computer checks and maintains her files if necessary, it constructs new fragments and uploads them p2p network

  40. maintenance alice’s computer checks and maintains her files if necessary, it constructs new fragments and uploads them p2p network
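
A sketch of the maintenance loop from slides 36–40: the owner's client periodically checks how many fragments are still reachable and, if redundancy has dropped, constructs new fragments and uploads them. The threshold handling and the three callables are placeholders, not Wuala specifics:

```python
def maintain(fragment_ids, is_reachable, make_new_fragments, upload, m, target_n):
    """One maintenance pass for a single file, run by the owner's client.

    is_reachable(fid) -> bool, make_new_fragments(count) -> list, and
    upload(fragment) are placeholders for the DHT probe, the erasure
    re-encoding step, and the upload into the p2p network.
    """
    alive = [fid for fid in fragment_ids if is_reachable(fid)]
    if len(alive) < m:
        raise RuntimeError("fewer than m fragments reachable: restore from the servers")
    missing = target_n - len(alive)
    if missing > 0:
        # reconstruct the file from any m live fragments, re-encode, and upload
        # replacements so the redundancy factor is back at target_n / m
        for fragment in make_new_fragments(missing):
            upload(fragment)
    return len(alive), missing
```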

  41. p2p network

  42. put p2p network

  43. put get p2p network

  44. distributed hash table (DHT) put get p2p network
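
A minimal sketch of the put/get interface on slides 42–44, using consistent hashing to map each key onto the responsible node. This is a generic DHT illustration, not Wuala's actual routing, which slides 54–61 refine:

```python
import hashlib
from bisect import bisect_left

def ring_id(name: str) -> int:
    # place nodes and keys on the same circular id space
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class ToyDHT:
    def __init__(self, node_names):
        self.ring = sorted((ring_id(n), n) for n in node_names)
        self.store = {name: {} for name in node_names}

    def _responsible(self, key: str) -> str:
        # first node clockwise from the key's position on the ring
        i = bisect_left(self.ring, (ring_id(key), ""))
        return self.ring[i % len(self.ring)][1]

    def put(self, key: str, value: bytes):
        self.store[self._responsible(key)][key] = value

    def get(self, key: str) -> bytes:
        return self.store[self._responsible(key)][key]

dht = ToyDHT([f"supernode-{i}" for i in range(8)])
dht.put("roadtrip.mpg/fragment-17", b"...")
assert dht.get("roadtrip.mpg/fragment-17") == b"..."
```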

  45. super nodes

  46. storage nodes

  47. client nodes

  48.–52. get [animation: the get request is routed step by step through the overlay to the storage nodes]

  53. download of fragments (in parallel)

  54. routing napster: centralized :-( gnutella: flooding :-( chord, tapestry: structured overlay networks O(log n) hops :-) n = # super nodes vulnerable to attacks (partitioning) :-(

  55. super node connected to direct neighbors plus some random links; random links? piggy-back routing information

  56. number of hops depends on the size of the network (n) and the size of the routing table (R), which itself depends on the traffic; we have lots of traffic due to erasure coding

  57. simulation results: n = 10^6; R = 1,000: < 3 hops; R = 100: ~5 hops; reasonable already with moderate traffic

  58. small world effects (see milgram, watts & strogatz, kleinberg) regular graph high diameter :-( high clustering :-)

  59. small world effects (see milgram, watts & strogatz, kleinberg) regular graph: high diameter :-(, high clustering :-); random graph: low diameter :-), low clustering :-(

  60. small world effects (see milgram, watts & strogatz, kleinberg) regular graph: high diameter :-(, high clustering :-); random graph: low diameter :-), low clustering :-(; mix: low diameter :-), high clustering :-)
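
A small simulation of the "direct neighbors plus some random links" idea from slides 55–60: nodes sit on a ring with links to their immediate neighbors and a few random long-range links, and greedy routing forwards each message to the known node closest to the target. The sizes here are reduced for speed (the slides quote n = 10^6 and R up to 1,000), and uniform random links plus plain greedy forwarding are simplifications of the piggy-backed routing tables:

```python
import random

def ring_distance(a, b, n):
    d = abs(a - b)
    return min(d, n - d)

def build_tables(n, r, near=2):
    # each node knows `near` neighbors on each side plus r random links
    tables = []
    for u in range(n):
        links = {(u + i) % n for i in range(-near, near + 1)} - {u}
        links |= {random.randrange(n) for _ in range(r)} - {u}
        tables.append(links)
    return tables

def greedy_hops(tables, src, dst, n):
    hops, cur = 0, src
    while cur != dst:
        nxt = min(tables[cur], key=lambda v: ring_distance(v, dst, n))
        cur, hops = nxt, hops + 1   # progress is guaranteed by the ring neighbors
    return hops

random.seed(1)
n, r = 10_000, 20
tables = build_tables(n, r)
samples = [greedy_hops(tables, random.randrange(n), random.randrange(n), n)
           for _ in range(200)]
print(sum(samples) / len(samples))   # average hop count drops as r grows
```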

  61. routing table n = 10^9, R = 10,000

  62. incentives, fairness: prevent free-riding; local disk space, online time, upload bandwidth

  63. online storage = local disk space * online time example: 10 GB disk space, 70% online --> 7 GB we have different mechanisms to measure and check these two variables

  64. trading storage only if you want to (you start with 1 GB); you must be online at least 17% of the time (≈ 4 hours a day, running average); storage can be earned on multiple computers
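
The trading rule on slides 63–64 as a short sketch; the disk-space times online-time product and the 17% floor are from the slides, while treating the floor as a hard cut-off is an assumption:

```python
def earned_online_storage(shared_disk_gb: float, online_fraction: float) -> float:
    # online storage = local disk space * online time (slide 63),
    # but only if the machine is online at least 17% of the time (slide 64)
    if online_fraction < 0.17:
        return 0.0
    return shared_disk_gb * online_fraction

print(earned_online_storage(10, 0.70))  # 7.0 GB, the example from slide 63
```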

  65. upload bandwidth the more upload bandwidth you provide, the more download bandwidth you get

  66. “client” storage node asymmetric interest tit-for-tat doesn’t work :-( believe the software? hack it (kazaa lite) :-(

  67. distributed reputation system that is not susceptible to false reports and other forms of cheating; must scale well with the number of transactions (we have lots of small transactions due to erasure coding) Havelaar, NetEcon 2006

  68. 1. lots of transactions “observations” Havelaar, NetEcon 2006

  69. 1. lots of transactions (“observations”) 2. every round (e.g., a week), send observations to pre-determined neighbors (hash code) Havelaar, NetEcon 2006

  70. 1. lots of transactions (“observations”) 2. every round (e.g., a week), send observations to pre-determined neighbors (hash code) 3. discard ego-reports, median, etc. Havelaar, NetEcon 2006

  71. 1. lots of transactions (“observations”) 2. every round (e.g., a week), send observations to pre-determined neighbors (hash code) 3. discard ego-reports, median, etc. 4. next round, aggregate Havelaar, NetEcon 2006

  72. 1. lots of transactions (“observations”) 2. every round (e.g., a week), send observations to pre-determined neighbors (hash code) 3. discard ego-reports, median, etc. 4. next round, aggregate 5. update reputation of storage nodes Havelaar, NetEcon 2006

  73. 1. lots of transactions (“observations”) 2. every round (e.g., a week), send observations to pre-determined neighbors (hash code) 3. discard ego-reports, median, etc. 4. next round, aggregate 5. update reputation of storage nodes; rewarding: upload bandwidth proportional to reputation Havelaar, NetEcon 2006

  74. local approximation of contribution Havelaar, NetEcon 2006
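
A compressed sketch of one Havelaar round as described on slides 68–74: each node accumulates first-hand observations of storage nodes, sends them to a few neighbors chosen by hashing, the receivers drop self-serving reports and take a median, and the aggregated values become the reputations used to allocate upload bandwidth. The neighbor count, the hash-based choice of receivers, and the single-round aggregation are simplifications; see the Havelaar paper (NetEcon 2006) for the real protocol:

```python
import hashlib
from statistics import median
from collections import defaultdict

NODES = [f"node-{i}" for i in range(10)]
NEIGHBORS_PER_ROUND = 3

def receivers(sender: str, round_no: int):
    # 2. pre-determined neighbors, chosen by hashing the sender id and round number
    ranked = sorted(NODES, key=lambda n: hashlib.sha1(f"{sender}:{round_no}:{n}".encode()).hexdigest())
    return [n for n in ranked if n != sender][:NEIGHBORS_PER_ROUND]

def havelaar_round(observations: dict, round_no: int) -> dict:
    """observations[sender][storage_node] = contribution seen by sender this round."""
    inbox = defaultdict(lambda: defaultdict(list))
    # 1.-2. every node sends its observations to its receivers for this round
    for sender, obs in observations.items():
        for r in receivers(sender, round_no):
            for storage_node, value in obs.items():
                if storage_node != sender:          # 3. discard ego-reports
                    inbox[r][storage_node].append(value)
    # 3.-5. each receiver aggregates with a median and updates its reputation table
    reputation = defaultdict(dict)
    for receiver, per_node in inbox.items():
        for storage_node, values in per_node.items():
            reputation[receiver][storage_node] = median(values)
    return reputation   # 5. upload bandwidth is then granted proportional to reputation

obs = {"node-0": {"node-5": 120, "node-0": 999},   # the self-report gets dropped
       "node-1": {"node-5": 80},
       "node-2": {"node-5": 100}}
print(dict(havelaar_round(obs, round_no=1)))
```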

  75.–81. [animation: a “client” fetching fragments from storage nodes; slide 81 adds a “flash crowd”]

  82. content distribution: similar to bittorrent (tit-for-tat), some differences due to erasure codes

  83. encryption 128 bit AES for encryption 2048 bit RSA for authentication all data is encrypted (file + meta data) all cryptographic operations performed locally (i.e., on your computer)
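
Slide 83's building blocks, sketched with the third-party `cryptography` package: a 128-bit AES key encrypts the file locally, and 2048-bit RSA lets alice wrap that file key for bob (slide 28) and sign data for authentication. The GCM mode and the OAEP/PSS paddings are assumptions; the deck only names the algorithms and key sizes, and every operation runs locally:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM   # pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# 128-bit AES: the file is encrypted on alice's computer before upload
file_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, b"contents of roadtrip.mpg", None)

# 2048-bit RSA: bob's keypair; alice wraps the file key for him (slide 28)
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = bob_private.public_key().encrypt(file_key, oaep)

# bob recovers the file key and decrypts locally
recovered_key = bob_private.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"contents of roadtrip.mpg"

# authentication: alice signs, anyone with her public key can verify
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = alice_private.sign(ciphertext, pss, hashes.SHA256())
alice_private.public_key().verify(signature, ciphertext, pss, hashes.SHA256())
```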

  84. access control cryptographic tree structure untrusted storage doesn’t reveal who has access very efficient for typical operations (grant access, move, etc.) Cryptree, SRDS 2006

  85. [diagram: alice's folder tree with root, videos, vacation, roadtrip.mpg, switzerland.mpg, europe.mpg] Cryptree, SRDS 2006

  86. bob doesn’t see that claire also has access, and vice versa [diagram: bob and claire each granted access to part of alice's folder tree] Cryptree, SRDS 2006

  87. bob doesn’t see that claire also has access, and vice versa; granting access to a folder and all its subfolders takes just one operation; all subkeys can be derived from that parent key [diagram: bob, claire, and garfield granted access to parts of alice's folder tree] Cryptree, SRDS 2006
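
The "all subkeys can be derived from that parent key" idea on slide 87, in a much-simplified sketch: deriving each folder's key from its parent's key and name means handing out one folder key implicitly grants the whole subtree, while siblings stay invisible to each other. This is only the derive-downwards intuition, not the actual Cryptree construction (SRDS 2006), which also handles moves and fine-grained revocation:

```python
import hashlib, hmac

def subfolder_key(parent_key: bytes, child_name: str) -> bytes:
    # derive the child's key from the parent's key and the child's name
    return hmac.new(parent_key, child_name.encode(), hashlib.sha256).digest()

root_key = hashlib.sha256(b"alice's root secret (illustrative)").digest()
videos   = subfolder_key(root_key, "videos")
vacation = subfolder_key(videos, "vacation")

# granting bob access to "videos" takes one operation: give him `videos`;
# from it he can derive every subfolder key ...
assert subfolder_key(videos, "vacation") == vacation
# ... but not the keys of siblings like "documents", which need `root_key`
documents = subfolder_key(root_key, "documents")
```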

  88. demo
