

SLIDE 1

SLIDE 2

SLIDE 3

SLIDE 4

SLIDE 5

SLIDE 6

SLIDE 7

SLIDE 8

SLIDE 9

SLIDE 10

SLIDE 11

SLIDE 12

SLIDE 13

SLIDE 14

  • Leechers A and B also announce to their peers which chunks they possess.
  • Now we show BitTorrent's incentive mechanism, also known as rate-based tit-for-tat. In this case, leecher A makes the first step and offers to unconditionally upload chunks to leecher B for 10 seconds. In BT lingo, this step is called optimistic unchoking.
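The rate-based tit-for-tat above can be sketched as a periodic rechoke round: a client keeps uploading to the peers that currently upload to it fastest, plus one randomly chosen choked peer, the optimistic unchoke, which gets an unconditional chance to prove itself. The peer names, rates, and the choice of three regular unchoke slots below are illustrative assumptions, not values from these notes.

```python
import random

def rechoke(download_rates, n_regular=3, rng=random):
    """One choking round (a sketch): unchoke the n_regular peers that
    currently upload to us fastest (tit-for-tat), plus one randomly
    chosen remaining peer as the optimistic unchoke."""
    # Best reciprocators first.
    ranked = sorted(download_rates, key=download_rates.get, reverse=True)
    unchoked = ranked[:n_regular]
    # Optimistic unchoke: a random choked peer gets free upload for a while,
    # so newcomers with nothing to trade yet can bootstrap.
    choked = ranked[n_regular:]
    optimistic = rng.choice(choked) if choked else None
    return unchoked, optimistic

# Hypothetical peers with measured download rates (KiB/s).
rates = {"A": 120, "B": 95, "C": 80, "D": 10, "E": 5}
regular, optimistic = rechoke(rates)
print(regular)      # the three fastest uploaders
print(optimistic)   # one of the remaining peers, chosen at random
```

In real clients the regular slots are re-evaluated every rechoke interval, while the optimistic slot rotates more slowly, so a peer unchoked optimistically has time to start reciprocating.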

SLIDE 15


SLIDE 16

SLIDE 17

SLIDE 18

At the start, the rarest piece may be held by only one peer, so the client picks a random piece instead, which may be at many peers, and downloads its sub-pieces in SGM from multiple peers, as in EGM; in the other modes, it downloads a piece's sub-pieces from the same peer.
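The selection logic described in these notes can be sketched as: pick a random piece when we have nothing yet (so its sub-pieces can be fetched from several peers at once), then switch to rarest-first. The bitfields and piece numbers below are illustrative assumptions.

```python
import random
from collections import Counter

def choose_piece(have, peer_bitfields, rng=random):
    """Sketch of BitTorrent piece selection: random first piece,
    rarest-first afterwards."""
    # Count how many peers hold each piece we still need.
    availability = Counter()
    for bitfield in peer_bitfields:
        for piece in bitfield:
            if piece not in have:
                availability[piece] += 1
    candidates = list(availability)
    if not candidates:
        return None
    if not have:
        # Random-first: the rarest piece may sit on a single peer, while a
        # random piece is likely replicated, so its sub-pieces can be
        # pulled from several peers in parallel.
        return rng.choice(candidates)
    # Rarest-first: prefer the piece held by the fewest peers
    # (piece index breaks ties deterministically).
    return min(candidates, key=lambda p: (availability[p], p))

# Hypothetical peer bitfields: piece 2 is the only one we still need.
peers = [{0, 1, 2}, {1, 2}, {2}]
print(choose_piece({0, 1}, peers))
```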

SLIDE 19

The End Game is the name for the final download strategy: the last few pieces of a torrent tend to download quite slowly. To avoid this, many BitTorrent implementations issue requests for the same remaining blocks to all of their peers. When a block comes in from one peer, the client sends CANCEL messages to all the other peers it requested the block from, in order to save bandwidth: it's cheaper to send a CANCEL message than to receive the full block and just discard it. However, there is no formal definition of when to enter End Game Mode. I found two popular definitions:

  • 1. All blocks have been requested
  • 2. The number of blocks in transit is greater than the number of blocks left, and no more than 20
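The second definition and the CANCEL step can be sketched as two small helpers. The threshold of 20 comes from the notes; reading it as a cap on blocks left is one plausible interpretation, and the peer names are illustrative.

```python
def enter_end_game(blocks_in_transit, blocks_left, cap=20):
    """Definition 2 from the notes: more blocks in transit than blocks
    left, and (on one reading) no more than `cap` blocks remaining."""
    return blocks_in_transit > blocks_left and blocks_left <= cap

def cancels_for(block, source, outstanding):
    """When `block` arrives from `source`, list the peers that should
    receive a CANCEL for their duplicate request -- cheaper than
    receiving the full block again and discarding it."""
    return [peer for peer, b in outstanding if b == block and peer != source]

print(enter_end_game(blocks_in_transit=15, blocks_left=10))   # True
print(enter_end_game(blocks_in_transit=30, blocks_left=25))   # False: too many left

# Hypothetical outstanding duplicate requests (peer, block) during end game.
outstanding = [("p1", 7), ("p2", 7), ("p3", 9)]
print(cancels_for(7, "p1", outstanding))   # block 7 arrived from p1: cancel at p2
```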

SLIDE 20

SLIDE 21

+ Less infrastructure required

  • Single point of failure
  • Node joins and leaves cause much churn
  • One node may still be congested
  • Packets may traverse the same link twice
SLIDE 22

SLIDE 23

SLIDE 24

SLIDE 25

SLIDE 26

SLIDE 27

SLIDE 28

SLIDE 29

SLIDE 30

SLIDE 31

SLIDE 32

SLIDE 33

SLIDE 34

SLIDE 35

SLIDE 36

SLIDE 37

SLIDE 38

SLIDE 39

IDs live in a single circular space.

SLIDE 40

SLIDE 41

SLIDE 42

Lookups always undershoot to the predecessor, so they never miss the real successor. The lookup procedure isn't inherently O(log n); the finger table makes it so.
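The undershoot behaviour can be sketched directly: a successor-pointer-only lookup walks to the node that precedes the key, never past it, and the answer is that node's successor. The 6-bit ring and node IDs below are illustrative assumptions; note this version takes O(n) hops, which is why the finger table is needed for O(log n).

```python
def between(x, a, b, m=2**6):
    """True if x lies in the half-open ring interval (a, b] modulo m."""
    a, b, x = a % m, b % m, x % m
    if a < b:
        return a < x <= b
    return x > a or x <= b  # interval wraps past zero

def find_successor(key, ring):
    """Successor-pointer-only Chord lookup sketch: hop node by node
    until reaching the node that *precedes* the key (the undershoot),
    then return its successor, which is responsible for the key."""
    succ = lambda n: ring[(ring.index(n) + 1) % len(ring)]
    node = ring[0]
    while not between(key, node, succ(node)):
        node = succ(node)          # undershoot: never step past the key
    return succ(node)

ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # hypothetical 6-bit Chord ring
print(find_successor(54, ring))   # node 56 is responsible for key 54
print(find_successor(60, ring))   # wraps around the ring: node 1
```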

SLIDE 43

Small tables, but multi-hop lookup. Table entries: an IP address and a Chord ID. Navigation is in ID space, routing queries closer to the successor. O(log n) table entries, O(log n) hops. Route to a document between ¼ and ½ …
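The table structure described here can be sketched as follows: finger i of node n points at the successor of n + 2^i, so log-many entries let each hop roughly halve the remaining ID-space distance. The ring, the node IDs, and the 6-bit ID space are illustrative assumptions.

```python
def successor_of(x, ring, m=2**6):
    """First node at or clockwise after ID x on the ring."""
    x %= m
    for n in sorted(ring):
        if n >= x:
            return n
    return min(ring)  # wrapped past zero

def finger_table(n, ring, bits=6):
    """Chord finger table sketch: entry i targets successor(n + 2^i).
    With `bits`-bit IDs every node keeps only `bits` = O(log N)
    entries, each one doubling the distance covered."""
    return [successor_of(n + 2**i, ring) for i in range(bits)]

ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # hypothetical 6-bit Chord ring
print(finger_table(8, ring))   # fingers of node 8 at offsets 1,2,4,...,32
```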

SLIDE 44

Small tables, but multi-hop lookup. Table entries: an IP address and a Chord ID. Navigation is in ID space, routing queries closer to the successor. O(log n) table entries, O(log n) hops. Route to a document between ¼ and ½ …

SLIDE 45

SLIDE 46

Lookups always undershoot to the predecessor, so they never miss the real successor. The lookup procedure isn't inherently O(log n); the finger table makes it so.

SLIDE 47

Look up key 2 at node 1: key 2 < the successor's ID, so the query is sent to the successor directly.

SLIDE 48

SLIDE 49

SLIDE 50

SLIDE 51

Concurrent joins and stabilization are provably eventually consistent.
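The repair loop behind this claim can be sketched as the stabilize/notify pair: each node periodically asks its successor for that successor's predecessor, adopts it if it lies in between, and notifies the successor about itself. This is a minimal single-process model under assumed node IDs, not a networked implementation.

```python
class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self
        self.predecessor = None

def between_open(x, a, b, m=2**6):
    """x in the open ring interval (a, b) modulo m."""
    a, b, x = a % m, b % m, x % m
    if a < b:
        return a < x < b
    return x > a or x < b  # interval wraps past zero

def stabilize(n):
    """Periodic repair: if our successor has learned of a closer node,
    adopt it as our new successor, then notify the successor about us."""
    x = n.successor.predecessor
    if x is not None and between_open(x.id, n.id, n.successor.id):
        n.successor = x
    notify(n.successor, n)

def notify(n, candidate):
    """n adopts `candidate` as predecessor if it is closer than the
    current one (or if n has no predecessor yet)."""
    if n.predecessor is None or between_open(candidate.id, n.predecessor.id, n.id):
        n.predecessor = candidate

# A joining node (26) initially knows only its successor (32).
a, b, c = Node(21), Node(26), Node(32)
a.successor, c.predecessor = c, a      # existing ring segment 21 -> 32
b.successor = c                        # 26 joins, pointing at 32
stabilize(b)                           # 32 learns that 26 is its predecessor
stabilize(a)                           # 21 then adopts 26 as its successor
print(a.successor.id, c.predecessor.id)
```

After the two rounds the segment reads 21 → 26 → 32: even though the join ran concurrently with normal operation, repeated stabilization converges to a consistent ring.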

SLIDE 52

SLIDE 53

No problem until the lookup reaches a node that knows of no node < key. There's a replica of K90 at N113, but we can't find it.

SLIDE 54

SLIDE 55

SLIDE 56

Lookups always undershoot to the predecessor, so they never miss the real successor. The lookup procedure isn't inherently O(log n); the finger table makes it so.

SLIDE 57

SLIDE 58

SLIDE 59