

1. EpiChord: Parallelizing the Chord Lookup Algorithm with Reactive Routing State Management
Ben Leong, Barbara Liskov, and Eric D. Demaine
MIT Computer Science and Artificial Intelligence Laboratory
{benleong, liskov, edemaine}@mit.edu

2. Structured Peer-to-Peer Systems
- Large-scale, dynamic network
- Overlay infrastructure: scalable, self-configuring, fault-tolerant
- Every node is responsible for some objects
- Find the node holding a desired object
- Challenge: efficient routing at low cost

3. Address Space
[Figure: circular identifier space with nodes N0, N6, N10, N15, N17, N20, N25, N30, N35, N40, N47, N49, N51, N57, N62 placed around the ring]
Most common: a one-dimensional circular address space.

4. Mapping Keys to Nodes
[Figure: the same ring, with keys K2, K13, K32, K47, K52, K54 placed between nodes]
The successor of a key is its owner.
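To make the successor rule concrete, here is a minimal sketch (not from the talk) of successor-based ownership, assuming the 64-point ring and node set shown in the figures:

```python
from bisect import bisect_left

ID_BITS = 6                  # the figures use a 2^6 = 64-point ring
RING = 1 << ID_BITS

def successor(sorted_node_ids, key):
    """Return the first node at or clockwise after `key`, wrapping around."""
    i = bisect_left(sorted_node_ids, key % RING)
    return sorted_node_ids[i % len(sorted_node_ids)]

nodes = sorted([0, 6, 10, 15, 17, 20, 25, 30, 35, 40, 47, 49, 51, 57, 62])
assert successor(nodes, 2) == 6     # K2 is owned by N6, as in the figure
assert successor(nodes, 32) == 35   # K32 -> N35
assert successor(nodes, 63) == 0    # wraps past N62 back to N0
```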

5. Distributed Hash Tables (DHTs)
- A Distributed Hash Table (DHT) is a distributed data structure that supports a put/get interface
- Store and retrieve {key, value} pairs efficiently over a network of (generally unreliable) nodes
- Keep the state stored per node small because of network churn ⇒ minimize bookkeeping and maintenance traffic
- EpiChord explores the trade-offs in moving from sequential lookup to parallel lookup, and from O(log n) routing state to more than O(log n) state
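The put/get contract fits in a few lines. A sketch, where `route` is a hypothetical placeholder for the routing layer (Chord, EpiChord, ...) that the rest of the talk is about:

```python
import hashlib

def key_id(key: str) -> int:
    """Hash an application key onto a 160-bit circular identifier space."""
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")

class DHTNode:
    def __init__(self):
        self.store = {}              # {key id: value} pairs this node owns

    def put(self, key: str, value) -> None:
        self.route(key_id(key)).store[key_id(key)] = value

    def get(self, key: str):
        return self.route(key_id(key)).store.get(key_id(key))

    def route(self, kid: int) -> "DHTNode":
        """Return the node owning identifier `kid` (the key's successor).
        Placeholder: implemented by the overlay's lookup algorithm."""
        raise NotImplementedError
```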

6. Chord
[Figure: the ring, with one node's fingers pointing at exponentially spaced nodes]
- Each node periodically probes its O(log n) fingers
- Achieves O(log n)-hop lookup performance
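Finger placement in code, reusing the `successor` helper and 64-point example ring from the sketch above: finger i of node n points at the successor of n + 2^i, giving O(log n) entries per node.

```python
def finger_table(node_id, sorted_node_ids, id_bits=6):
    """Chord finger i = successor(node_id + 2^i): O(log n) entries."""
    ring = 1 << id_bits
    return [successor(sorted_node_ids, (node_id + (1 << i)) % ring)
            for i in range(id_bits)]

print(finger_table(0, nodes))   # N0's fingers: [6, 6, 6, 10, 17, 35]
```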

7. Our Goal
We want to do better than O(log n)-hop lookup without adding extra overhead. Use a combination of techniques:
- Piggyback information on lookup messages
- Allow the cache to store more than O(log n) routing state
- Issue parallel queries during lookup

8. Outline
- Parallel Lookup Algorithm
- Reactive Cache Management
- Simulation Results
- Related Work
- Conclusion

9. EpiChord Lookup Algorithm
[Figure: ring of known and unknown nodes; the querying node ("YOU ARE HERE") wants key K2]

10. EpiChord Lookup Algorithm
[Figure: the querying node sends a query for K2 to the best node in its cache]

11. EpiChord Lookup Algorithm
[Figure: p − 1 further queries go out in parallel to the next-best known nodes]

12. EpiChord Lookup Algorithm
[Figure: a reply returns closer candidates N57, N62, N0, N10]

13. EpiChord Lookup Algorithm
[Figure: the newly learned nodes join the querying node's cache]

14. EpiChord Lookup Algorithm
[Figure: a further reply returns N0 and N6, still closer to K2]

15. EpiChord Lookup Algorithm
[Figure: the querying node queries N0 and N6, the closest nodes it now knows to K2]

16. EpiChord Lookup Algorithm
[Figure: the lookup reaches K2's owner ("FOUND K2!!")]

17. EpiChord Lookup Algorithm
- Intrinsically iterative
- Learn about more nodes as replies come back
- Avoid redundant queries: typically 2(p + h) messages per lookup, where h is the hop count
- Additional policies to learn new routing entries:
  - When a node first joins the network, it obtains a cache transfer from its successor
  - Nodes gather information by observing lookup traffic
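A sketch of the p-way iterative lookup, under two stated simplifications: the queries are serialized here rather than dispatched asynchronously, and `send_query` is an assumed RPC that returns either the key's owner or a batch of closer candidate nodes.

```python
def ring_distance(a, b, ring=1 << 6):
    """Clockwise distance from a to b on the identifier circle."""
    return (b - a) % ring

def epichord_lookup(key, cache, p, send_query):
    """Iterative p-way parallel lookup (simplified, synchronous sketch).
    send_query(node, key) -> ("owner", owner_node)
                           | ("closer", [newly_learned_nodes])"""
    best_first = lambda n: ring_distance(n.id, key)  # ids just before the key first
    candidates, queried = sorted(cache, key=best_first), set()
    while True:
        batch = [n for n in candidates if n not in queried][:p]
        if not batch:
            return None                    # cache exhausted: the lookup fails
        for node in batch:                 # conceptually dispatched in parallel
            queried.add(node)
            kind, payload = send_query(node, key)
            if kind == "owner":
                return payload             # reached the key's successor
            # Replies piggyback closer nodes: fold them into the candidates.
            candidates = sorted(set(candidates) | set(payload), key=best_first)
```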

18. Reactive Cache Management
Traditional (active) approach: ping fingers periodically.
Our (reactive) approach:
- Cache entries have a fixed expiration period
- Divide the address space into exponentially smaller slices
- Periodically check that each slice holds sufficiently many (j) unexpired entries
- If not, issue a lookup to the midpoint of the offending slice
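A sketch of the periodic slice check. Cache entries are assumed to carry `.id` and `.expires` fields, and only one ring direction is shown; the full scheme also covers the predecessor side.

```python
import time

def in_arc(x, start, end, ring=1 << 6):
    """True if x lies on the clockwise arc [start, end) of the ring."""
    return (x - start) % ring < (end - start) % ring

def check_slices(node_id, cache, j, lookup, id_bits=6, now=None):
    """Reactive maintenance sketch: slice i is the arc
    [node_id + 2^(i-1), node_id + 2^i), exponentially smaller toward
    the node; any slice with fewer than j live entries is repaired."""
    ring = 1 << id_bits
    now = now if now is not None else time.time()
    for i in range(id_bits, 0, -1):
        start = (node_id + (1 << (i - 1))) % ring
        end = (node_id + (1 << i)) % ring
        live = [e for e in cache
                if in_arc(e.id, start, end) and e.expires > now]
        if len(live) < j:
            span = (end - start) % ring
            lookup((start + span // 2) % ring)  # lookup to the slice midpoint
```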

19. Division of Address Space
- Estimate the number of slices needed from the k successors and k predecessors
- j and k are system parameters; choose k ≥ 2j (e.g., requiring j = 2 entries per slice means maintaining at least k = 4 successors and predecessors)
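One plausible way to turn the neighbor lists into a slice count, purely illustrative since the talk does not give the exact rule: treat the arc spanned by the k successors as a density estimate, then keep as many slices as still leaves the smallest slice expecting at least j nodes.

```python
import math

def estimate_num_slices(node_id, successor_ids, j, id_bits=6):
    """Illustrative estimate only; the authors' exact rule may differ.
    The arc covered by the k successors implies a local node density,
    and hence an estimated network size n_est."""
    ring = 1 << id_bits
    k = len(successor_ids)
    arc = max((successor_ids[-1] - node_id) % ring, 1)
    n_est = k * ring / arc                 # density (k / arc) times ring size
    # Halving slices log2(n_est / j) times keeps >= j expected nodes each.
    return max(1, int(math.log2(max(2.0, n_est / j))))
```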

20. Summary
- Piggyback extra information on lookups
- Allow the cache to hold more than O(log n) state
- Flush out old state with TTLs
- Use cache entries in parallel to avoid timeouts
- Check that cache entries are well distributed; fix if necessary
Now, let's evaluate performance: (i) latency and (ii) cost.

21. Simulation Setup
- Compare EpiChord to the optimal sequential Chord lookup algorithm (base 2)
- What's optimal? We ignore Chord maintenance costs and assume nodes' finger tables are perfectly accurate regardless of node failures
- The competing sequential lookup algorithm is thus a reasonably strong adversary, not just a straw man

22. Simulation Setup
The assumed workloads affect comparisons (Li et al., 2004). Consider two types of workload:
- Lookup-intensive: 200 to 1,200 nodes, r ≈ 1/600, query rate Q ≈ 2 per sec ⇒ rn ≈ 0.3 to 2
- Churn-intensive: 600 to 9,000 nodes, r ≈ 1/600, query rate Q ≈ 0.05 to 0.07 per sec ⇒ rn ≈ 1.0 to 15
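Reading r as a per-node turnover rate of 1/600 per second (an assumption, but the value consistent with both rn ranges above), the churn products check out:

```latex
% assuming r = 1/600 s^{-1} (per-node turnover rate)
rn = \frac{200}{600} \approx 0.3
     \quad\text{to}\quad \frac{1200}{600} = 2
     \qquad \text{(lookup-intensive)}
\\[4pt]
rn = \frac{600}{600} = 1.0
     \quad\text{to}\quad \frac{9000}{600} = 15
     \qquad \text{(churn-intensive)}
```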

23. Hop Count – Lookup-Intensive
[Figure: average number of hops per lookup vs. network size (logscale, 200 to 1,400 nodes) for Chord and 1- to 5-way EpiChord]

24. Latency – Lookup-Intensive
[Figure: average lookup latency (s) vs. network size (logscale) for Chord and 1- to 5-way EpiChord]

25. Messages Sent Per Lookup
[Figure: average number of messages per lookup vs. network size (logscale) for Chord and 1- to 5-way EpiChord]

26. Summary of Results
- Increasing p improves hop count and latency and reduces the lookup failure rate
- Since our approach is iterative, a lookup takes about 2(p + h) messages
- Higher lookup rates yield better overall performance because of caching
- The number of entries returned per query, l, has little effect beyond l = 3, so we set l = 3

27. Related Work
- Chord (Stoica et al., 2001)
- DHash++ (Dabek et al., 2004)
- Kademlia (Maymounkov and Mazieres, 2002)
- Kelips (Gupta et al., 2003)
- One-Hop (Gupta et al., 2004)

28. Conclusion
- Our parallel lookup and reactive routing-state maintenance algorithm trades extra storage for better lookup performance without increasing bandwidth consumption
- Reduces both lookup latency and path length by a factor of 3 over Chord, issuing only 3 queries asynchronously in parallel per lookup and without using more messages
- A parallel lookup strategy is inherently more resilient to timeouts than a sequential one

29. EpiChord: Parallelizing the Chord Lookup Algorithm with Reactive Routing State Management
Ben Leong, Barbara Liskov, and Eric D. Demaine
MIT Computer Science and Artificial Intelligence Laboratory
{benleong, liskov, edemaine}@mit.edu
