SLIDE 1
SLIDE 2
SLIDE 3
SLIDE 4
SLIDE 5
SLIDE 6
SLIDE 7
SLIDE 8
SLIDE 9
SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
SLIDE 14
• Leechers A and B also announce to their peers which chunks they possess.
• Now we show BitTorrent's incentive mechanism, also known as rate-based tit-for-tat. In this case, leecher A makes the first step and offers to unconditionally upload chunks to leecher B for 10 seconds. In BT lingo, this step is called optimistic unchoking.
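The choking side of tit-for-tat can be sketched roughly as follows. This is a simplified model, not a real client: the peer dictionaries, the measured rates, and the 3-slot count are invented for illustration (real clients typically re-evaluate choking every 10 seconds and rotate the optimistic unchoke less often).

```python
import random

def choose_unchoked(peers, regular_slots=3):
    """Rate-based tit-for-tat sketch: unchoke the peers we currently
    download from fastest, plus one random 'optimistic unchoke' that
    is uploaded to unconditionally, so newcomers get a chance to
    prove their upload rate."""
    # Prefer peers that upload to us at the highest rate.
    by_rate = sorted(peers, key=lambda p: p["download_rate"], reverse=True)
    unchoked = by_rate[:regular_slots]
    # Optimistically unchoke one peer that did not earn a slot.
    rest = [p for p in peers if p not in unchoked]
    if rest:
        unchoked.append(random.choice(rest))
    return unchoked

peers = [{"id": i, "download_rate": r}
         for i, r in enumerate([50, 10, 80, 5, 30])]
chosen = choose_unchoked(peers)
print([p["id"] for p in chosen])
```

The random slot is what makes the scheme more than pure reciprocation: without it, a new leecher with nothing to offer could never bootstrap.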
SLIDE 15
SLIDE 16
SLIDE 17
SLIDE 18
The rarest piece may be held by only one peer, so the client picks a random piece instead, which is likely to be available at many peers. In SGM it downloads sub-pieces from multiple peers, as in EGM, but in the other modes it downloads all sub-pieces of a piece from the same peer.
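The random-first versus rarest-first choice can be sketched as below. The `peer_bitfields` structure and the 4-piece random-first threshold are invented example values, not part of any specification.

```python
import random
from collections import Counter

def pick_piece(have, peer_bitfields, pieces_completed):
    """Piece-selection sketch: 'random first' early on (the rarest
    piece might sit on a single slow peer, while a random piece is
    likely held by many), then 'rarest first' to keep piece
    availability across the swarm balanced."""
    counts = Counter()
    for bf in peer_bitfields:
        for piece in bf:
            if piece not in have:
                counts[piece] += 1   # how many peers hold this piece
    needed = list(counts)
    if not needed:
        return None
    if pieces_completed < 4:                     # random-first phase
        return random.choice(needed)
    return min(needed, key=lambda p: counts[p])  # rarest-first phase

bitfields = [{0, 1, 2}, {1, 2}, {2}]   # piece 0 is the rarest
print(pick_piece(have=set(), peer_bitfields=bitfields,
                 pieces_completed=10))  # -> 0
```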
SLIDE 19
The End Game is the name for the final download strategy: the last few pieces of a torrent tend to download quite slowly. To avoid this, many BitTorrent implementations issue requests for the same remaining blocks to all of their peers. When a block comes in from one peer, the client sends CANCEL messages to all the other peers it requested the block from, in order to save bandwidth: it's cheaper to send a CANCEL message than to receive the full block and just discard it. However, there is no formal definition of when to enter End Game Mode. I found two popular definitions:
1. All blocks have been requested.
2. The number of blocks in transit is greater than the number of blocks left, and no more than 20 blocks are left.
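The two informal triggers can be stated directly in code. This is only a sketch of the conditions as described in the notes; the function and parameter names are invented.

```python
def in_endgame(blocks_left, blocks_in_transit, all_requested):
    """Informal endgame triggers: (1) every block has been requested
    at least once, or (2) more blocks are in transit than remain,
    with no more than 20 remaining."""
    if all_requested:
        return True
    return blocks_in_transit > blocks_left and blocks_left <= 20

print(in_endgame(blocks_left=5, blocks_in_transit=10,
                 all_requested=False))   # -> True
print(in_endgame(blocks_left=30, blocks_in_transit=40,
                 all_requested=False))   # -> False
```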
SLIDE 20
SLIDE 21
+ Less infrastructure required
- Single point of failure
- Node joins and leaves cause a lot of churn
- One node may still be congested
- Packets may traverse the same link twice
SLIDE 22
SLIDE 23
SLIDE 24
SLIDE 25
SLIDE 26
SLIDE 27
SLIDE 28
SLIDE 29
SLIDE 30
SLIDE 31
SLIDE 32
SLIDE 33
SLIDE 34
SLIDE 35
SLIDE 36
SLIDE 37
SLIDE 38
SLIDE 39
IDs live in a single circular space.
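The circular ID space can be sketched with a minimal successor function. The node IDs (32, 90, 113) and the 7-bit space are made-up example values.

```python
def successor(node_ids, key, m=7):
    """IDs live on a circle mod 2^m; a key is stored at the first
    node clockwise from it, i.e. the key's successor."""
    space = 2 ** m
    key %= space
    nodes = sorted(n % space for n in node_ids)
    # Walk clockwise: the first node ID >= key is the key's home.
    for n in nodes:
        if n >= key:
            return n
    return nodes[0]   # passed the top of the circle: wrap around

print(successor([32, 90, 113], key=95))    # -> 113
print(successor([32, 90, 113], key=120))   # -> 32 (wraps around)
```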
SLIDE 40
SLIDE 41
SLIDE 42
Always undershoots to the predecessor, so it never misses the real successor. The lookup procedure isn't inherently O(log n); the finger table is what makes it so.
SLIDE 43
Small tables, but multi-hop lookup. Table entries: IP address and Chord ID. Navigate in ID space, routing queries closer to the successor. O(log n) table entries, O(log n) hops. Route to a document between ¼ and ½ …
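A finger-table lookup over made-up node IDs might be sketched like this. It is an iterative simplification (`M = 7` and the node set are assumptions): each node's finger i points at the successor of n + 2^i, and routing always undershoots to the key's predecessor before the final hop.

```python
M = 7                               # ID space is 2^M = 128
SPACE = 2 ** M
NODES = sorted([1, 32, 90, 113])    # hypothetical ring

def key_succ(k):
    """First node at or after key k on the circle (the key's home)."""
    k %= SPACE
    return next((n for n in NODES if n >= k), NODES[0])

def node_succ(n):
    """A node's successor pointer: the next node strictly after it."""
    return next((x for x in NODES if x > n), NODES[0])

def between(x, a, b):
    """x strictly inside the ring interval (a, b)."""
    if a < b:
        return a < x < b
    return x > a or x < b

def in_half_open(x, a, b):
    """x inside the ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b

# Finger i of node n points at the successor of n + 2^i.
fingers = {n: [key_succ(n + 2 ** i) for i in range(M)] for n in NODES}

def find_successor(n, key):
    """Route via fingers toward the key's predecessor ('undershoot'),
    so we can never jump past the key's real successor. Each hop
    roughly halves the remaining distance -> O(log n) hops."""
    while not in_half_open(key, n, node_succ(n)):
        n = next((f for f in reversed(fingers[n]) if between(f, n, key)),
                 node_succ(n))
    return node_succ(n)

print(find_successor(1, 95))    # key 95 is stored at node 113
```

Starting at node 1, the query for key 95 jumps to node 90 (the largest finger preceding the key) and finishes at 113, node 90's successor.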
SLIDE 44
SLIDE 45
SLIDE 46
SLIDE 47
Look up key 2 at node 1: key 2 ≤ successor, so we send the query to the successor directly.
SLIDE 48
SLIDE 49
SLIDE 50
SLIDE 51
Concurrent joins with stabilization are provably eventually consistent.
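A minimal sketch of the periodic stabilization step: each node asks its successor for its predecessor and adopts it if it is closer, then notifies the successor. The `Node` class and the join scenario are invented for illustration.

```python
SPACE = 2 ** 7   # assumed 7-bit ID space

def between(x, a, b):
    """x strictly inside the ring interval (a, b) mod 2^7."""
    if a < b:
        return a < x < b
    return x > a or x < b

class Node:
    def __init__(self, nid):
        self.id = nid
        self.successor = self
        self.predecessor = None

    def notify(self, candidate):
        # Adopt candidate as predecessor if it is closer than the
        # current one (or if we have none yet).
        if (self.predecessor is None
                or between(candidate.id, self.predecessor.id, self.id)):
            self.predecessor = candidate

    def stabilize(self):
        # Ask our successor for its predecessor; if that node sits
        # between us and the successor, it is a newer, closer
        # successor -- adopt it. Then tell the successor about us.
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

# A node joins between two existing nodes; repeated stabilization
# rounds repair the successor/predecessor pointers.
a, b = Node(10), Node(60)
a.successor, b.successor = b, a
a.predecessor, b.predecessor = b, a
c = Node(40)
c.successor = b            # c joined knowing only some successor
for _ in range(3):
    for n in (a, b, c):
        n.stabilize()
print(a.successor.id, c.successor.id)   # -> 40 60
```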
SLIDE 52
SLIDE 53
No problem until the lookup reaches a node that knows of no node < key. There's a replica of K90 at N113, but we can't find it.
SLIDE 54
SLIDE 55
SLIDE 56
SLIDE 57
SLIDE 58
SLIDE 59