1
Structured Overlays:
Eclipse Attacks on Overlay Networks
April 28th, 2006 Wyman Park 4th Floor Conference Room
Presentation by:
Dan Liu & Jay Zarfoss
2
“The idea of churn as shelter from route poisoning attacks is an interesting, if simple, idea.”
3
“The ID of a node can’t be tied to actual data like files that would have to be changed at every epoch.”
“On one hand, for distributed file systems and databases the cost of migrating data across nodes could be high, and induced churn may be inappropriate.”
4
“We would want the authors’ defensive scheme to be able to scale to the level of Kazaa and Napster.”
Structured vs. unstructured overlays is not a fair comparison
5
Timeserver, timeserver, timeserver…
Low-hanging fruit:
“Each node randomly picks a fixed position in the epoch and computes everything (ID update, routing table removals, etc.) related to this.”
Thou shalt not let nodes pick their own identifiers!
6
“My greatest complaint with the analysis is that they evaluate their system exclusively with a very powerful adversary.”
7
“I would have liked to see a more detailed explanation of how the attack on periodic resets + update rate limitation works.”
“…the major component of their approach is the rate limiting rather than the actual churn.”
8
9
Extensions?
“First, rather than storing only the first hops of queries, we store entire paths”
“We would first want to dissect an application for patterns in finding…”
10
SimNet?
“It would have been believable if they had used an established simulator renowned for its real-world network modeling, such as SimNet.”
11
Motivation
routing table poisoning
use of a highly optimized routing table?
node?
12
Eclipse Attacks on Overlay Networks: Threats and Defenses
Atul Singh, Tsuen Ngan, Peter Druschel, Dan Wallach Rice University IEEE Infocom 2006
13
Pastry Node Review
Routing table
– Contains node IDs and IP addresses
Leaf set
– Node IDs that are numerically closest to the local node
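The routing-table placement can be sketched concretely. A minimal illustration of prefix-based slot placement (function names are mine; short hex IDs are used for readability, whereas Pastry IDs are 160-bit):

```python
# Sketch of how a Pastry node (b = 4, so hex digits) indexes its routing
# table: an entry for another node lives in row = length of the shared
# hex-digit prefix with the local ID, column = the other node's next digit.
# Illustrative only, not Pastry's actual implementation.

def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading hex digits two node IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def table_slot(local_id: str, other_id: str):
    """(row, column) where other_id belongs in local_id's routing table."""
    row = shared_prefix_len(local_id, other_id)
    col = int(other_id[row], 16)
    return row, col

local = "01aa2"
for other in ["02bb3", "04de4", "08f45"]:
    print(other, table_slot(local, other))
```

Each candidate above shares only the leading "0" with the local node, so all land in row 1, in the column given by their second digit.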
14
Notion of Eclipse Attack
[Ring diagram: node 33333333 whose overlay links all point to attacker-controlled nodes]
Good nodes see a controlled view of the overlay and have no method to detect this!
15
Worst Case Scenario
– Arbitrary Denial of Service
– Censorship Attack
this global attack on every neighbor set/routing table
16
Eclipse Defenses
17
More on Proximity Constraints
malicious nodes cannot be within a low network delay of all nodes (PNS Defense)
These routing tables will tend to have more good entries
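The PNS bias toward nearby (and therefore typically good) entries can be illustrated with a toy selection rule; node names and latencies below are invented:

```python
# Toy illustration of proximity neighbor selection (PNS): among candidates
# valid for a routing-table slot, keep the one with the lowest measured
# network delay. Because malicious nodes cannot be close to everyone,
# distant attackers tend to lose these contests. Latencies are made up.

def pns_pick(candidates, latency_ms):
    """Return the candidate node with the smallest network delay."""
    return min(candidates, key=lambda node: latency_ms[node])

latency_ms = {"goodA": 12.0, "goodB": 35.0, "evilC": 180.0}
chosen = pns_pick(list(latency_ms), latency_ms)
print(chosen)  # the nearby node wins the slot
```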
18
Simple Observation
[Ring diagram: ID space 0 … 2^160 − 1]
Malicious nodes will have a high in-degree in the overlay (~25), while a correct node has an average in-degree of ~10
19
Effect of Enforcing Degree Bounds
How Do We Enforce Bounds in the Overlay?
20
Enforcing Degree Bounds
Centralized service?
– Dedicated service keeps track of each node’s in-degree
– Single point of failure, availability, and scalability issues
Or a distributed mechanism where everyone checks each other’s back?
21
Every Node Maintains a Backpointer List
[Diagram: a node’s routing table with entries 01aa2, 02bb3, 04de4, 08f45, 10667, 2a534, 4b99c; each of those nodes keeps a backpointer to this node]
22
Checking Backpointer Lists
Each node periodically challenges its neighbors for its backpointer list
– If the returned list exceeds the bound, or the challenger is not on it, the audit fails and the node is removed
– Nodes also audit each member of their backpointer list to make sure each node on the list has a correct neighbor set/routing table size
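A minimal sketch of the audit check just described (names are illustrative; the bound of 16 per row matches the value used later in the evaluation):

```python
# Sketch of the degree-bound audit: auditor x challenges neighbor z for its
# backpointer list and fails the audit if the list exceeds the bound or
# does not contain x. This is a simplified model of the mechanism, not the
# paper's implementation.

IN_DEGREE_BOUND = 16  # bound per routing-table row

def audit(auditor, reported_backpointers, bound=IN_DEGREE_BOUND):
    """Return True if the audited node passes the audit."""
    if len(reported_backpointers) > bound:
        return False  # claims more in-edges than the bound allows
    return auditor in reported_backpointers  # auditor must appear in the list

print(audit("x", ["x", "a", "b"]))               # passes: within bound, x present
print(audit("x", ["a", "b"]))                    # fails: x missing
print(audit("x", [f"n{i}" for i in range(20)]))  # fails: over the bound
```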
23
Fresh and Authentic Replies
– Each node ID is bound to a public key
– The audited node signs the response
– The auditor verifies the signature and freshness before accepting the reply
How Can We Do This Anonymously?
24
Use an Anonymizer Node
[Diagram: nodes x, y, y′, z — x audits z through anonymizer y]
– Case 1: z is malicious, y is correct
– Case 2: z is malicious, y is malicious
– Case 3: z is correct, y is correct
– Case 4: z is correct, y is malicious
How do we know if z should pass the audit?
25
Dissing a Good Node
Probability that a good node is considered malicious (binomial distribution): the chance it answers fewer than k out of n challenges correctly, where each reply arrives only if the anonymizer is correct (probability 1 − f):

P = Σ_{i=0}^{k−1} C(n, i) · (1 − f)^i · f^(n − i)
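This binomial tail can be evaluated directly; a small sketch (assuming, as on the slide, that each challenge to a good node succeeds with probability 1 − f):

```python
# Probability that a good node answers fewer than k of n challenges
# correctly, when each reply survives the anonymizer with probability 1 - f.
from math import comb

def p_good_marked_malicious(f, n, k):
    return sum(comb(n, i) * (1 - f) ** i * f ** (n - i) for i in range(k))

p = p_good_marked_malicious(f=0.20, n=24, k=12)
print(round(1 - p, 4))  # 0.9998 -> a good node almost always passes
```

With f = 0.20, n = 24, k = 12 the false-positive probability is about 0.0002, i.e., good nodes pass with probability .9998.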
26
What If We Vary k?
Probability that a Good Node is Considered Malicious
[Plot: percent of good nodes considered malicious as a function of k (12–24)]
27
Malicious Node Passing an Audit
Probability that a malicious node passes an audit, i.e., answers at least k of n challenges correctly:

P = Σ_{i=k}^{n} C(n, i) · [(1 − f)/r + f·c]^i · [(1 − f)(1 − 1/r) + f·(1 − c)]^(n − i)

where r is the ratio of the malicious node’s actual in-degree to the claimed bound, and c is the probability that a malicious anonymizer colludes with the audited node
28
Malicious Node Passing an Audit
With f = .20, n = 24, k = 12, r = 1.2:
– Malicious node passes with probability 0.966
– Malicious node fails with probability 0.034
– A good node passes with probability .9998 (as we previously saw)
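A simplified numeric check of these numbers, ignoring anonymizer collusion so that the per-challenge pass probability is just (1 − f)/r (a node with r times the allowed in-degree can fit a given auditor into its reported list with probability ~1/r, and the reply survives the anonymizer with probability 1 − f). The slide’s 0.966 comes from the fuller model, so the values agree only approximately:

```python
# Probability that a malicious node with in-degree ratio r passes at least
# k of n audit challenges, under a simplified per-challenge model.
from math import comb

def p_malicious_passes(f, n, k, r):
    p = (1 - f) / r  # simplified per-challenge pass probability
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

p_pass = p_malicious_passes(f=0.20, n=24, k=12, r=1.2)
print(round(p_pass, 3))  # close to the slide's 0.966
```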
29
Choosing the k Value
k = n/2 = 12: good nodes tend to pass and are considered good, but too many malicious nodes also pass and are considered good
k = n = 24: malicious nodes tend to fail and are considered malicious, but too many good nodes also fail and are considered malicious
Pick k between n/2 and n to balance the two error rates
30
Marking Malicious/Correct Suspicious
nodes make it harder to detect them
will also be marked as malicious
31
Picking the Anonymizer Node
32
Evaluation Questions
bounding node degrees?
33
Experimental Setup
– GT-ITM transit-stub network topology
– GT-ITM topology has a good separation of nodes in the delay space
– Pairwise latency values for up to 10,000 real Internet nodes obtained with the King tool
– Pastry settings: b = 4, l = 16, f = 0.2
34
Know Your Enemy
entries referring to malicious nodes
correct nodes to each other
to good nodes whenever possible
35
Effectiveness of PNS Defense
in top row drops from 78% to 41% for a 10,000 node overlay
effective in large
36
PNS with King Latencies
PNS less effective because a large number of nodes are in the same delay band
37
Top Row Comparison
38
Auditing Parameters
2 minutes (staggered)
high churn
39
In-Degree Distribution
40
Reducing Fraction of Malicious Nodes
bound of 16 per row
41
Reducing Fraction of Malicious Nodes
– Higher churn requires more auditing
42
Communication Overhead
Searching for initial anonymizer nodes
43
How Did They Do?
Depends…but looks good
bounding node degrees? Yes!
44
Further Issues…
supernode structures?