
Incorporating P2P Networks in Service Provider Infrastructure



  1. Incorporating P2P Networks in Service Provider Infrastructure
     Alberto Leon-Garcia & Shad Sharma, University of Toronto

     Why P2P for Service Providers?
     - "Virtual distributed servers": autonomous execution of applications on commodity resources
     - P2P innovations & benefits: KaZaA, BitTorrent, Skype
       - Self-organizing, self-managing
       - Reliability
       - Scalability and performance
       - Cost savings
     - P2P broad applicability: not limited to rogue operators
     - Carrier-class challenges: reliability, performance, security

  2. Introduction

     Overlay Topology
     - Application-layer routing: nodes maintain logical neighbours to whom they forward messages

     P2P Applications
     - Content delivery
     - Lookups and search
     - Service virtualization (e.g. a P2P HTTP server)

     Distributed Hashing
     - Hash table: defines a set of buckets that hold objects
     - Hash function: distributes objects into buckets; objects are distributed "uniformly" among buckets
     - Distributed Hash Table (DHT): nodes are the buckets that store objects; the objects are the files, resources, and other things you want to find or store (a small hashing sketch follows this slide)
     - Structured overlays are well suited to providing DHT services: peers are assigned predefined positions and hash values (buckets)
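To make the bucket picture concrete, here is a minimal sketch (not taken from the slides; the class, method, and peer names are assumptions) of a hash function distributing object keys over a set of peers that act as the buckets of a DHT:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

// Minimal illustration of "nodes are the buckets": a uniform hash maps an
// object key onto one of the participating peers, which then stores the object.
public class DistributedHashing {

    // Hash a key to a non-negative long using SHA-1 (any uniform hash works).
    static long hash(String key) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest(key.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF);
            }
            return h & Long.MAX_VALUE;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // The peer responsible for a key is the "bucket" its hash value falls into.
    static String responsiblePeer(String key, List<String> peers) {
        return peers.get((int) (hash(key) % peers.size()));
    }

    public static void main(String[] args) {
        List<String> peers = List.of("peer-0", "peer-1", "peer-2", "peer-3");
        System.out.println(responsiblePeer("movie.mp4", peers));
        System.out.println(responsiblePeer("sip:alice@example.com", peers));
    }
}
```

A structured overlay replaces the fixed peer list with predefined positions and routes each lookup to the responsible peer in a bounded number of hops.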

  3. Introduction

     Unstructured Overlays
     - Newscast: epidemic protocol based on gossiping
       + Robust, reliable, fast insertion and removal
       – Broadcast-based search
     - Montresor dual-layer approach (Newscast substrate): O(n^(1/m)) search time, O(m x n) search messages

     Structured Overlays
     - Chord: structured, DHT-capable overlay
       + Fast & efficient DHT search: O(log_B n) search time, O(log_B n) search messages
       – Rigid finger tables; routing table maintenance required; not robust under churn
     - Kademlia: loosely consistent DHT overlay with relaxed finger tables

     Hybrid Overlay
     - TrebleCast
       + Fast & efficient DHT search
       + Robust, reliable, fast insertion and removal
       + Resilient to churn

     TrebleCast (1)
     - Peers are inserted in order, in a spiral-like fashion
     - Notion of layers: provides data redundancy; data is stored at each layer
     - Peers maintain 4 neighbours: in, out, left, right (sketched in the code after this slide)
     - Successor: the peer responsible for replacing a failed peer; the successor moves "inwards" (closer to the core)
     - The layer is indicative of peer reliability: peers closer to the core are considered more reliable
     [Figure: spiral layout of peer positions 0-80 in concentric layers around core peer 0]
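As a rough illustration of the per-peer state this slide describes, the sketch below keeps the four logical neighbours and a successor that replaces a failed peer by moving inwards. The field names and the simplified replacement logic are assumptions for illustration; they are not taken from the actual TrebleCast implementation.

```java
// Sketch of a TrebleCast-style peer: four logical neighbours plus a successor.
// Spiral maintenance and gossip-based repair are omitted; this only shows the
// bookkeeping named on the slide.
public class TrebleCastPeer {
    int position;             // position along the spiral (lower = closer to the core)
    TrebleCastPeer in;        // neighbour one layer inwards
    TrebleCastPeer out;       // neighbour one layer outwards
    TrebleCastPeer left;      // previous peer on the same layer
    TrebleCastPeer right;     // next peer on the same layer
    TrebleCastPeer successor; // peer responsible for replacing this one if it fails

    TrebleCastPeer(int position) {
        this.position = position;
    }

    // On failure, the successor "moves inwards" to the vacated position,
    // which sits closer to the core and is considered more reliable.
    void handleFailure() {
        if (successor != null) {
            successor.takeOver(this);
        }
    }

    void takeOver(TrebleCastPeer failed) {
        this.position = failed.position;
        this.in = failed.in;
        this.out = failed.out;
        this.left = failed.left;
        this.right = failed.right;
        // Re-linking the rest of the spiral and repairing via Newscast gossip is not shown.
    }
}
```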

  4. TrebleCast (2)
     - Dual-layer approach: Newscast substrate + grid superstructure
     - Adaptable to churn: the superstructure is repaired through gossip messages exchanged at the Newscast substrate
     - Fast adaptive search: search messages are exchanged at the superstructure layer
       - Lookups under static conditions: O(log_B n)
       - Graceful search degradation under increasing churn
     - Flexible data storage policy: choose the location of stored data (at the core, for instance), permitting data redundancy and load balancing
     - Robustness and reliability: build the overlay around a core of reliable, server-like peers

     Implementation
     - TrebleCast implemented in Java; currently used for SIP virtualization
     - May implement any <key, value> pair storage mechanism: register, store, retrieve, delete in O(log n) time (see the interface sketch after this slide)
     - TrebleCast simulator implemented in Java
     - P2P Monitor implemented in Java: monitors peers in a P2P network and allows basic interaction with peers through a virtual console
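The <key, value> storage mechanism listed above can be pictured as a small interface. The signatures below are hypothetical, since the slides do not show the actual TrebleCast API; they simply mirror the four operations named, each completing in O(log n) overlay hops:

```java
// Hypothetical view of the <key, value> storage operations the slide lists.
// Method names and types are assumptions, not the real TrebleCast API.
public interface KeyValueOverlay {

    /** Register (announce) a peer or service identifier in the overlay. */
    void register(String id);

    /** Store a value under a key; the overlay routes it to the responsible peer(s). */
    void store(String key, byte[] value);

    /** Retrieve the value stored under a key, or null if it is absent. */
    byte[] retrieve(String key);

    /** Delete the value stored under a key. */
    void delete(String key);
}
```

For the SIP virtualization use mentioned on the slide, the key could be a SIP address of record and the value its current contact binding, though that particular mapping is an assumption here.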

  5. Pareto Turnover
     - Reliable peers move to the overlay core (peer-lifetime sampling for this model is sketched after this slide)
     - The core is "protected" from churn
     - Improved search time (less routing table maintenance)
     [Figures: overlay snapshots under a high death rate and a low death rate]

     Fast Adaptive Search
     [Figure: adaptive search illustration; no further detail recoverable]
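The high and low death-rate pictures are driven by the peer-lifetime model used to generate churn. The sketch below shows one way to sample exponential (memoryless) and Pareto (heavy-tailed) lifetimes in a churn simulator; the parameter values are illustrative assumptions, not those used in the reported experiments. Under a Pareto model a minority of peers live very long, which is what lets reliable peers accumulate at the core.

```java
import java.util.Random;

// Sampling peer lifetimes for a churn simulator: exponential vs. Pareto.
// Parameter values here are placeholders, not those from the experiments.
public class LifetimeModels {
    private static final Random RNG = new Random();

    // Exponential lifetime with the given mean (inverse-transform sampling).
    static double exponentialLifetime(double mean) {
        return -mean * Math.log(1.0 - RNG.nextDouble());
    }

    // Pareto lifetime with scale xMin and shape alpha (heavy-tailed for small alpha).
    static double paretoLifetime(double xMin, double alpha) {
        return xMin / Math.pow(1.0 - RNG.nextDouble(), 1.0 / alpha);
    }

    public static void main(String[] args) {
        // Draw a lifetime for a newly arrived peer under each model.
        System.out.printf("exponential lifetime: %.1f s%n", exponentialLifetime(600.0));
        System.out.printf("pareto lifetime:      %.1f s%n", paretoLifetime(60.0, 1.5));
    }
}
```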

  6. Static Search Comparison
     [Figure: static search comparison; no further detail recoverable]

     Chord Churn Search Comparison
     - An aggressive repair mechanism was implemented to maintain the Chord structure
     - Search degrades exponentially as the churn rate increases past 10 peers/sec
     [Figure: "Search Time vs. Churn Rate for Chord Networks of Mean Size 10000 (16384 max)"; average search time (# of hops) vs. churn rate (arrival rate, peers/sec, log scale from 10^-1 to 10^3); curves for exponential and Pareto lifetimes]

  7. TrebleCast Churn Search Comparison
     - TrebleCast search degrades under an exponential lifetime distribution
     - Search remains almost constant under a Pareto lifetime distribution
     - Note: the storage policy was chosen so that a core set of reliable peers is responsible for storage
     [Figure: "Search Time vs. Churn Rate for TrebleCast Networks of Mean Size 10000"; average search time (# of hops) vs. churn rate (arrival rate, peers/sec, log scale from 10^-1 to 10^3); curves for exponential lifetime, Pareto lifetime, and exponential lifetime with a bootstrap server]

     Conclusions
     - TrebleCast for the service provider setting
     - Resilient to churn
     - Fast adaptive search: O(log n)
     - Inherent support for data redundancy
     - Flexible data storage & retrieval policy
