Multiple Source Multiple Destination Topology Inference - PowerPoint PPT Presentation




SLIDE 1

Multiple Source Multiple Destination Topology Inference using Network Coding

Pegah Sattari, EECS, UC Irvine

Joint work with Athina Markopoulou, at UCI, and Christina Fragouli, at EPFL, Lausanne

SLIDE 2

Outline

  • Network Tomography
  • Goal, Main Ideas, and Contributions
  • Proposed Approach
  • Conclusion
SLIDE 3

Network Tomography

  • In general:
    – Goal: obtain a detailed picture of a network from end-to-end probes.
    – Infer what? Topology; link-level parameters (loss, delay).
  • Our goal:
    – Topology inference with multiple sources, multiple receivers, and intermediate nodes performing both network coding and multicast.

SLIDE 4

Two bodies of related work

Network Tomography
  • Multicast trees using loss correlations
  • Unicast probes
  • Active probing; reliance on the number, order, delay variance, and loss of received probes, and a heuristic or statistical signal-processing approach.
  • Mostly related: Rabbat, Coates, Nowak, "Multiple-Source Internet Tomography," IEEE JSAC 06.

Inference with Network Coding
  • Passive
    – Failure patterns [Ho et al., ISIT 05]
    – Topology inference [Sharma et al., ITA 07]
    – Bottleneck discovery/overlay management in p2p [Jafarisiavoshani et al., Sigcomm INM 07]
    – Subspace properties [Jafarisiavoshani et al., ITW 07]
  • Active
    – Loss tomography [Gjoka et al., IEEE Globecom 07]
    – Binary tree inference [Fragouli et al., Allerton 06]

SLIDE 5

Main idea 1: Network coding introduces topology-dependent correlation

[Fragouli et al., 2006], [Sharma et al., 2007]

  • Network coding introduces topology-dependent correlation among the content of the probe packets x1 and x2, which can be reverse-engineered to infer the topology.
    – Network coding can make the packets "stay together" and reveal the coding point, which emits x1+x2.
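The correlation idea above can be sketched in a few lines; this is an illustrative model, not the authors' implementation, and the field size is an assumption:

```python
# A minimal sketch of main idea 1: a joining (coding) point adds
# incoming probe vectors over a small finite field F_q, so the
# received content reveals where the two flows merged.
Q = 5  # illustrative field size (prime, so mod-Q arithmetic is a field)

def join(packets, q=Q):
    """Joining point: component-wise sum over F_q of the probes
    that reached it."""
    return [sum(coords) % q for coords in zip(*packets)]

x1 = [1, 0]  # probe content sent by source S1
x2 = [0, 1]  # probe content sent by source S2

# Both probes reach the coding point, so receivers downstream of it
# observe the coded packet x1 + x2 instead of two separate packets.
print(join([x1, x2]))  # [1, 1]
```

A receiver that sees `[1, 1]` rather than a plain `x1` or `x2` has thus learned that a coding point lies on its upstream path.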

SLIDE 6

Main idea 2: General Graphs (DAG)

  • An M-by-N DAG, with a given routing policy that has three properties:
    – A unique path from each source to each destination.
    – All 1-by-2 components have an "inverted Y" shape.
    – All 2-by-1 components have a "Y" shape.
  • Consistent with the routing in the Internet.
  • Logical topology.

[Figure: sources S1, S2 with a branching point B and a joining point J; a graph with redundant degree-2 nodes is not a logical topology.]

SLIDE 7

Main Idea 2, Cont'd: 2-by-2 Components

[Rabbat et al., 2006]

  • A traditional multiple-source, multiple-receiver tomography problem can be decomposed into multiple two-source, two-receiver sub-problems.
  • There are four 2-by-2 types:
    – Type 1: shared (B1=B2=B, J1=J2=J).
    – Type 2: non-shared.
    – Type 3: non-shared.
    – Type 4: non-shared (J1, J2 distinct).

[Figure: the four 2-by-2 configurations of sources S1, S2, branching points B1, B2, joining points J1, J2, and receivers R1, R2.]

SLIDE 8

Main Idea 2, Cont'd: Decomposition into 2-by-2

[Figure: a 2-by-3 topology with branching points B1,2 and B2,3 and joining points J1=J2 and J3, decomposed into its 2-by-2 components over the receiver pairs (R1, R2), (R1, R3), and (R2, R3).]

SLIDE 9

Previous Work: 2-by-2's and Merging

[Rabbat et al., 2006]

[Figure: packets 1 and 2 are sent with a random offset Δ; the measured 2-by-2's are merged into a 2-by-N topology (sources S1, S2; branching points B1,2 through B8,9; joining points J1 through J8; receivers R1 through R9).]

SLIDE 10

Weaknesses of Previous Work

  • In the 2-by-2 inference step, they can only distinguish between type 1 (shared) and types 2, 3, 4 (non-shared).
  • This results in inaccurate identification of the joining-point locations in the merging step.
    – I.e., each joining point is only bounded within a sequence of several consecutive logical links.

SLIDE 11

Our Contributions

  • At the 2-by-2 inference step:
    – Network coding helps us distinguish among all four 2-by-2 types by looking at the packet content.
  • At the merging step:
    – Under the same assumption as in prior work (S1's 1-by-N tree is given), we can localize each joining point, for each receiver, to a single logical link.
    – In addition, we design another merging algorithm without such an assumption, using only the 2-by-2 information.
SLIDE 12

Outline

  • Network Tomography
  • Goal, Main Ideas, and Contributions
  • Proposed Approach
    – Assumptions, Node Operations
    – Step 1: 2-by-2 Components (lossless/lossy)
    – Step 2: Merging Algorithms (two scenarios)
    – Simulation Results
  • Conclusion
SLIDE 13

Assumptions

  • Delay:
    – fixed part (propagation) and random part (queuing); independent across links.
  • Packet loss:
    – both lossless and lossy cases.
  • Coarse synchronization (~5-10 ms) across nodes.
    – achievable via a handshaking scheme, e.g., NTP.
  • We design active probing schemes, i.e., the operation of sources, intermediate nodes, and receivers, which allow topology inference from the observations.
SLIDE 14

Node Operations

  • Sources: synchronized.
    – later relaxed by a large time window W
    – in some algorithms, an artificial offset u
    – up to countMax experiments, spaced by time T
    – S1 sends x1=[1,0], S2 sends x2=[0,1]
  • Joining point: adds and forwards the packets received within W (additions over Fq).
  • Branching point: forwards the single received packet to all interested links downstream (the next hop for at least one source packet in the network code).
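The node roles above can be sketched as follows; the window handling and the function names are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of the node operations described above.
Q = 5     # field size for the additions over F_q
W = 10.0  # time window within which a joining point combines probes

def joining_point(arrivals, q=Q, w=W):
    """Add, over F_q, all probe vectors whose arrival time falls
    within w of the earliest arrival at this node."""
    t0 = min(t for t, _ in arrivals)
    in_window = [p for t, p in arrivals if t - t0 <= w]
    return [sum(c) % q for c in zip(*in_window)]

def branching_point(packet, out_links):
    """Forward the single received packet on every interested
    downstream link (multicast)."""
    return {link: packet for link in out_links}

# S1's and S2's probes arrive 3.2 time units apart: within W, so they
# are coded together and the result is multicast toward both receivers.
coded = joining_point([(0.0, [1, 0]), (3.2, [0, 1])])
print(coded, branching_point(coded, ["to_R1", "to_R2"]))
```

A probe arriving outside the window would instead pass through uncoded, which is exactly what the offset u later exploits.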


SLIDE 16

Node Operations (cont'd)

  • The receivers observe the coded packets c11x1 + c12x2 and c21x1 + c22x2.

SLIDE 17

Outline

  • Network Tomography
  • Goal, Main Ideas, and Contributions
  • Proposed Approach
    – Assumptions, Node Operations
    – Step 1: 2-by-2 Components (lossless/lossy)
    – Step 2: Merging Algorithms (two scenarios)
    – Simulation Results
  • Conclusion
SLIDE 18

Inferring 2-by-2's, No Loss: Distinguishing among {1,4}, 2, and 3

  • One probe distinguishes among the types {1,4}, 2, and 3:
    – Type 1 and Type 4: both receivers observe x1+x2.
    – Type 2: R1 observes x1+x2, R2 observes x1+2x2.
    – Type 3: R1 observes x1+2x2, R2 observes x1+x2.
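A hedged sketch of this one-probe rule: each receiver reports the coefficients (c1, c2) of its received packet c1·x1 + c2·x2, and comparing the two reports separates type 2, type 3, and the pair {1,4}. The function name and the coefficient encoding are illustrative assumptions.

```python
# One-probe classification for the lossless 2-by-2 case sketched above.
def classify_one_probe(r1, r2):
    """r1, r2: coefficient pairs (c1, c2) observed at R1 and R2."""
    if r1 == r2:
        return "{1,4}"   # same coded content at both receivers
    if r1[1] < r2[1]:    # R2's packet carries a larger x2 coefficient
        return "2"
    return "3"

print(classify_one_probe([1, 1], [1, 1]))  # {1,4}
print(classify_one_probe([1, 1], [1, 2]))  # 2
print(classify_one_probe([1, 2], [1, 1]))  # 3
```

Separating type 1 from type 4 then needs the extra offset mechanism of the next slides.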
SLIDE 19

Inferring 2-by-2's, No Loss: Distinguishing between 1 and 4

  • Type 1: J1=J2=J.
  • Type 4: J1, J2 different.
  • Distinguishing them can be achieved by appropriately selecting the offset u.

SLIDE 20

Inferring 2-by-2's, No Loss: Selecting the appropriate offset

[Figure: a type-(4) topology with delays D1, D2 to the joining points J1, J2 and window W; depending on whether D1>D2 or D1<D2, R1 and R2 observe different contents, e.g., R1 observes x1 while R2 observes x1+x2.]

  • If D1>D2, pick the offset from [W-D1, W-D2]; if D1<D2, from [W-D2, W-D1].
  • 2-by-2's: u ∈ [W-D1, W-D2].
  • More general: u ∈ [0, W].
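The offset-selection rule above can be sketched as follows; the delay estimates D1, D2 and the function names are illustrative assumptions, not the paper's procedure:

```python
# Pick the artificial offset u: from the interval between W-D1 and
# W-D2 when delay estimates toward the two joining points are known,
# and from [0, W] in the general case.
import random

def offset_interval(W, D1, D2):
    """Endpoints of [W-D1, W-D2], ordered for either sign of D1-D2."""
    return tuple(sorted((W - D1, W - D2)))

def pick_offset(W, D1=None, D2=None, rng=random):
    if D1 is None or D2 is None:      # general case: u in [0, W]
        return rng.uniform(0, W)
    lo, hi = offset_interval(W, D1, D2)
    return rng.uniform(lo, hi)

print(offset_interval(10.0, 4.0, 1.0))  # (6.0, 9.0)
```

With such a u, probes in a type-4 topology miss each other at one joining point but not the other, so the two receivers observe different contents.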
SLIDE 21

Inferring 2-by-2's, Lossy Case

  • With loss, "meetings" of probes are no longer guaranteed, and observations are no longer predictable (e.g., a receiver may observe x1 instead of x1+x2).
  • There are common observations across all four types.
  • Each experiment might result in different outcomes.
SLIDE 22

Inferring 2-by-2's, Lossy Case: All possible observations

  • There are three groups of observations: (i) at least one receiver does not receive any packet (-); (ii) R1 = R2; (iii) R1 ≠ R2.

SLIDE 23

Inferring 2-by-2's, Lossy Case: Some observations of group (iii) help!

  • E.g., c12-c22 < 0 can only occur for type 2 or 4.
  • c12-c22 > 0 can only occur for type 3 or 4, etc.
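The grouping and the sign rule above can be sketched as follows; `None` marks a lost probe, and the encoding is an illustrative assumption:

```python
# Group an observation and, for group (iii), use the sign of
# c12 - c22 (the x2-coefficients seen at R1 vs. R2) to rule out
# some of the four 2-by-2 types.
def group(r1, r2):
    if r1 is None or r2 is None:
        return "(i)"                    # at least one receiver got nothing
    return "(ii)" if r1 == r2 else "(iii)"

def candidate_types(r1, r2):
    """Types consistent with a group-(iii) observation, else None."""
    if group(r1, r2) != "(iii)":
        return None                     # groups (i)/(ii) are not decisive
    return {2, 4} if r1[1] - r2[1] < 0 else {3, 4}

print(group([1, 1], None))              # (i)
print(candidate_types([1, 1], [1, 2]))  # {2, 4}
print(candidate_types([1, 2], [1, 1]))  # {3, 4}
```

Accumulating such constraints over several experiments narrows the candidate set down to a single type.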
SLIDE 24

Inferring 2-by-2's, Lossy Case: Try to create group (iii) observations!

  • Either naturally (via loss) or artificially (via the offset u).
  • Especially for small loss rates, and as in the lossless case: u ∈ [0, W].
SLIDE 25

Inferring all 2-by-2's in a 2-by-N

  • Important for the merging algorithm.
  • 2 sources multicast x1=[1,0] and x2=[0,1] to N receivers.
  • Additions over a larger field.
  • The algorithms can be applied to any pair of receivers among all "N choose 2" possible pairs.

[Figure: the 2-by-9 example topology (receivers R1 through R9) in which, e.g., one receiver observes x1+2x2 and another x1+x2.]

SLIDE 26

Advantages over Prior Work

  • More accurate:
    – we can distinguish among all four 2-by-2 types.
  • Faster:
    – One observation that uniquely characterizes the 2-by-2 type is sufficient.
    – Unlike [Rabbat et al.], we do not need many experiments for statistical significance.
  • Less bandwidth overhead:
    – Coding avoids duplicate packets crossing the same link.

SLIDE 27

Outline

  • Network Tomography
  • Goal, Main Ideas, and Contributions
  • Proposed Approach
    – Assumptions, Node Operations
    – Step 1: 2-by-2 Components (lossless/lossy)
    – Step 2: Merging Algorithms (two scenarios)
    – Simulation Results
  • Conclusion
SLIDE 28

Merging Step

  • Using the 2-by-2 information, we design two merging algorithms to infer the 2-by-N structure under two scenarios:
    1. Assuming knowledge of a 1-by-N tree topology (e.g., obtained using classic tomography methods).
       – We can solve exactly (previously only approximately solved).
    2. No 1-by-N tree topology is given.
       – We can also solve this case (previously impossible).
  • We then generalize our approach to the M-by-N network.

SLIDE 29

Merging Algorithm 1 (1-by-N given)

  • Given: the 2-by-2's and S1's 1-by-N tree.

[Figure: 2-by-2 components, e.g., over (R1, R2) with J1=J2 and over (R1, R7) with J1, J7, are merged onto S1's 1-by-N tree to recover the full 2-by-9 topology.]

SLIDE 30

Merging Algorithm 2 (no 1-by-N given)

  • Only the 2-by-2's are given.

[Figure: the 2-by-9 topology inferred from the 2-by-2 information alone (branching points B1,2 through B8,9; joining points J1 through J8).]

SLIDE 31

Comparison of the two algorithms

[Figure: the topologies inferred by Merging Alg. 1 and Merging Alg. 2, side by side, on the 2-by-9 example (S1, S2; J1 through J8; B1,2 through B8,9; R1 through R9).]

SLIDE 32

From 2-by-N to M-by-N

  • 2-by-N can be directly extended to M-by-N.
  • Starting from a 2-by-N topology, we add one source at a time to connect the remaining M-2 sources.
    – Assume we have constructed a k-by-N topology, 2<=k<M:
      • To add the (k+1)th source, we perform k experiments.
      • In each experiment, one of the k sources and the (k+1)th source send packets x1 and x2.
  • We then glue these topologies together by following the topological rules previously described.
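The experiment schedule implied above can be sketched as follows; the function name is an illustrative assumption:

```python
# Schedule for growing a 2-by-N inference to M-by-N: the (k+1)th
# source is paired once with each of the k already-placed sources.
def experiment_schedule(M):
    """(existing, new) source pairs probed while adding sources 3..M."""
    pairs = []
    for k in range(2, M):                # a k-by-N topology exists so far
        pairs.extend((s, k + 1) for s in range(1, k + 1))
    return pairs

print(experiment_schedule(4))
# [(1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]
```

The total number of experiments is 2 + 3 + ... + (M-1), i.e., quadratic in the number of sources.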

SLIDE 33

Outline

  • Network Tomography
  • Goal, Main Ideas, and Contributions
  • Proposed Approach
    – Assumptions, Node Operations
    – Step 1: 2-by-2 Components (lossless/lossy)
    – Step 2: Merging Algorithms (two scenarios)
    – Simulation Results
  • Conclusion
SLIDE 34

Simulation Setup

  • An Internet topology [Rabbat et al., 2006] connecting hosts at academic institutions in the US and Europe.

[Figure: the 2-by-9 example topology (S1, S2; B1,2 through B8,9; J1 through J8; R1 through R9).]

SLIDE 35

Simulation Results

[Figure: two plots of % error: vs. the number of experiments (countMax) in the absence of loss, and vs. % loss (same on every link) for countMax = 50, 100, 200 in the presence of loss.]

Absence of loss:
  • Error: type 4 inferred as type 1.
  • Error probability ~0 within countMax ~50.

Presence of loss:
  • Error: types 2, 3, 4 inferred as type 1, or type 4 inferred as type 2 or 3.
  • Error probability decreases rapidly with countMax.
  • Prev. work (type 1 (shared) vs. {2,3,4} (non-shared) only): 1000 probes, loss ~2%, error 5-10%.

SLIDE 36

Conclusion

  • Summary
    – Tomographic techniques for topology inference in a network with network coding.
  • Future directions
    – Likelihood of the observations.
    – Structures larger than 2-by-2:
      • More than two sources and two receivers.
      • Expect a faster merging step at the cost of a more complicated inference step.