Overcoming Delay, Synchronization and Cyclic Paths
Meir Feder, Tel Aviv University (PowerPoint presentation)
SLIDE 1

Overcoming Delay, Synchronization and Cyclic Paths
Meir Feder, Tel Aviv University
Joint work with Elona Erez

SLIDE 2

Root of the problem

  • Much of network coding assumes “instantaneous coding”
  • Instantaneous coding cannot work with cycles
  • Node delay, which may be beneficial for cycles, introduces a synchronization problem in code implementation
  • How to deal with node delay in case of a long input sequence?
  • What about decoding delay?
  • Solution: Convolutional codes


SLIDE 3

Motivating Example

[Figure: a network with source S and sinks t1, · · · , t6. The input streams x1(n) and x2(n) enter at S; the labeled links carry x1(n), x2(n), x1(n) + x2(n) and x1(n) + x2(n − 1).]

  • At n = 4 sink t1 receives x1(0) on both of its incoming links.
  • At time instant n = 5, t1 receives x1(1) and x1(1) + x2(0).
  • The effective decoding delay is 5.
  • Only a single memory element is required.
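The decoding described in these bullets can be sketched in Python (a hypothetical simulation, not code from the talk; the link labels follow the figure): the encoder models the single memory element producing x1(n) + x2(n − 1) over GF(2), and the sink recovers x2 one time step late.

```python
# Hypothetical sketch of the motivating example over GF(2):
# one incoming link of t1 carries x1(n), the other carries
# x1(n) + x2(n-1), produced with a single memory element.

def encode_with_memory(x1, x2):
    """Output x1(n) XOR x2(n-1); one memory element stores x2(n-1)."""
    out, mem = [], 0
    for a, b in zip(x1, x2):
        out.append(a ^ mem)
        mem = b
    return out

def decode_t1(link_a, link_b):
    """link_a(n) = x1(n); link_b(n) = x1(n) XOR x2(n-1)."""
    x1_hat = list(link_a)                      # x1 is read off directly
    # x2(n-1) = link_a(n) XOR link_b(n): recovered one step late
    x2_hat = [a ^ b for a, b in zip(link_a, link_b)][1:]
    return x1_hat, x2_hat
```

With x1 = [1, 0, 1, 1, 0] and x2 = [0, 1, 1, 0, 1], the sink reproduces x1 exactly and x2 up to the last (not yet decodable) symbol, illustrating the one-step decoding delay.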


SLIDE 4

Convolutional Network Codes - Definition

  • Let F(D) be the ring of polynomials over the binary field.
  • Link e is associated with b(e), whose elements are in F(D).

        1

        1

1

t

S

5

t

4

t

3

t

2

t

6

t

        1         1 1         D 1         1

a

v


SLIDE 5

  • Input stream xi(n) can be represented by a power series:

    Xi(D) = Σ_{n=0}^{∞} xi(n) D^n,  i = 1, · · · , h

  • In linear convolutional network codes:

    Ye(D) = Σ_{n=0}^{∞} ye(n) D^n = Σ_{e′∈ΓI(v)} me(e′) Ye′(D) = b(e)^T X(D)

    where ye(n) are the symbols transmitted on link e.

  • To achieve rate h, the global coding vectors on the incoming links to t have to span F[D]^h, where F[D] is the field of rational functions over the binary field.
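As an illustration of the power-series view, here is a minimal sketch (assumed helper names, not from the talk) that represents truncated streams as GF(2) coefficient lists and applies the local encoding rule Ye(D) = Σ me(e′) Ye′(D); multiplying by the polynomial D is exactly a one-symbol delay.

```python
# Truncated power series over GF(2), as coefficient lists [c0, c1, ...].

def poly_mul(p, q, n_max):
    """Multiply two GF(2) polynomials, truncated to degree < n_max."""
    out = [0] * n_max
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b and i + j < n_max:
                    out[i + j] ^= 1
    return out

def poly_add(p, q):
    """Add two equal-length GF(2) coefficient lists (XOR)."""
    return [a ^ b for a, b in zip(p, q)]

def encode_link(coeffs_and_inputs, n_max):
    """Y_e = sum of m_e(e') * Y_e' over the (m, Y) pairs of incoming links."""
    y = [0] * n_max
    for m, y_in in coeffs_and_inputs:
        y = poly_add(y, poly_mul(m, y_in, n_max))
    return y
```

For example, combining one input with coefficient 1 and another with coefficient D ([0, 1]) delays the second stream by one symbol before XORing, matching the node with the single memory element in the motivating example.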


SLIDE 6

Dealing with Cycles

  • Previous Results
  • Precoding
  • Code Construction
  • Algorithm Complexity
  • Decoding Delay


SLIDE 7

Previous Results

  • Ahlswede et al (00): the cyclic network was unrolled into an acyclic layered network.
    – The resulting scheme is time-variant and requires complex encoding/decoding and large delay.
  • Koetter and Médard (02): if each edge has delay, then there exists a time-invariant linear code with optimal rate.
  • Li et al (03): a heuristic code construction for a linear time-invariant code.


SLIDE 8

Line Graph

  • Originally the network is modeled as a directed graph G = (V, E)
  • L(V′, E′) is the line graph with:
    – Vertex set: V′ = E ∪ {s} ∪ T
    – Edge set: E′ = {(e, e′) ∈ E² : head(e) = tail(e′)} ∪ {(s, e) : e outgoing from s} ∪ {(e, ti) : e incoming to ti, 1 ≤ i ≤ d}
  • If there are h edge-disjoint paths between s and t in G, there are h corresponding node-disjoint paths in L.
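The construction above translates directly into code. A minimal sketch, assuming the network is given as a dict mapping each node to its successor list:

```python
# Build the line graph L of a directed graph G = (V, E):
# nodes of L are the edges of G plus the source s and the sinks;
# (e, e') is an edge of L whenever head(e) == tail(e').

def line_graph(G, s, sinks):
    """G: dict node -> list of successor nodes. Returns adjacency of L."""
    E = [(u, v) for u, outs in G.items() for v in outs]
    L = {e: [] for e in E}
    L[s] = [e for e in E if e[0] == s]       # (s, e) for e outgoing from s
    for t in sinks:
        L[t] = []
    for e in E:
        for e2 in E:
            if e[1] == e2[0]:                # head(e) == tail(e')
                L[e].append(e2)
        if e[1] in sinks:                    # (e, t) for e incoming to t
            L[e].append(e[1])
    return L
```

The quadratic edge-pair loop keeps the sketch short; indexing edges by their tail node would make it linear in |E′|.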


SLIDE 9

Recall the following:

  • h: the minimal min-cut between s and any of the sinks T = {t1, · · · , td}
  • F(D): the ring of polynomials with binary coefficients
  • F[D]: the field of rational functions over the binary field
  • v(e): a global coding vector (whose components may be in F[D]) assigned to node e ∈ L
  • The code can be used for multicast if and only if, for all t ∈ T, the global coding vectors incoming into t span F[D]^h


SLIDE 10

Precoding

  • We find a set of nodes ED in L such that, if we eliminate them, there will be no directed cycles.
  • To ensure that each cycle contains at least a single delay, the coding coefficients for this set are restricted to polynomials divisible by D.
  • To maintain the same number of possible coding coefficients, if we choose polynomials with maximal degree M, then for e ∈ ED the maximal polynomial degree is M + 1.


SLIDE 11

  • In order to minimize the delay, it is desirable to minimize |ED|.
  • Finding the minimal ED is the long-standing problem of finding a minimum feedback arc set, which is NP-hard.
  • The best known polynomial-complexity approximation algorithm achieves performance ratio O(log |V| log log |V|).
  • For our purposes, any approximate solution can be used to insert enough delays in the cycles.
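Since any feedback arc set works for the precoding, even a non-minimal one, a simple DFS-based heuristic suffices as a sketch (this is not the O(log |V| log log |V|) approximation algorithm): the back edges of a depth-first search cover every directed cycle, so removing them leaves the graph acyclic.

```python
# Heuristic feedback arc set: collect the back edges of a DFS.
# Deleting (or, here, delaying) these edges breaks every directed cycle.

def dfs_back_edges(G):
    """G: dict node -> list of successors. Returns a feedback arc set."""
    color = {v: 'white' for v in G}
    back = []

    def visit(u):
        color[u] = 'gray'                # u is on the current DFS stack
        for v in G[u]:
            if color[v] == 'gray':       # edge back into the stack: a cycle
                back.append((u, v))
            elif color[v] == 'white':
                visit(v)
        color[u] = 'black'

    for v in G:
        if color[v] == 'white':
            visit(v)
    return back
```

In the precoding, each returned edge would be assigned a coefficient divisible by D, guaranteeing at least one delay per cycle.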


SLIDE 12

Code Construction

  • The code construction goes in steps over the terminals:
    – Let Ll be the sub-graph consisting only of the nodes that participate in the flow from s to tl. Ll is acyclic.
    – Go over the nodes e ∈ Ll in a topological order.
    – Maintain a list of h nodes Cl = {e1,l, · · · , ei,l, · · · , eh,l}, each belonging to a different path.
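The per-sink pass relies on visiting the nodes of Ll in topological order; any standard method works. A sketch using Kahn's algorithm (not from the slides; the independence tests of the construction itself are not shown):

```python
# Kahn's algorithm: repeatedly emit a node with no remaining predecessors.
from collections import deque

def topological_order(G):
    """G: dict node -> list of successors (must be acyclic)."""
    indeg = {v: 0 for v in G}
    for u in G:
        for v in G[u]:
            indeg[v] += 1
    q = deque(v for v in G if indeg[v] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in G[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order
```

Note that Ll is acyclic by construction (after the precoding), so the order always exists.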


SLIDE 13

  • Some definitions:
    – Pj,l: the jth path of the flow from s to tl.
    – pj,l ⊂ Pj,l: the set of nodes following ej,l ∈ Cl (not including ej,l).
    – cj,l: the set of coding coefficients of edges with tail in pj,l and head in L \ pj,l.
    – rl: the union of these sets of coefficients: rl = ∪_{1≤j≤h} cj,l


SLIDE 14

[Figure: the flow from s to tl. The nodes e1,l, · · · , ej,l, · · · , eh,l of Cl lie on distinct paths, followed by eⁿ1,l, · · · , eⁿj,l, · · · , eⁿh,l; the coefficients m on edges outgoing from pj,l (with head outside pj,l) form the set rl.]


SLIDE 15

The partial coding vector - u(e)

  • u(e) is defined for all e ∈ Cl as the global coding vector of e when all the coefficients in rl are set to zero.
  • Vl = {v(e) : e ∈ Cl}, Ul = {u(e) : e ∈ Cl}.
  • Requiring Vl to span F[D]^h is not sufficient.
  • Requiring Ul to span F[D]^h is sufficient.
  • At the end of step l, Vl = Ul.


SLIDE 16

Requiring Vl to span F[D]^h is not sufficient:

[Figure: counterexample. Source s feeds e1,l and e2,l (the current Cl), whose successor nodes eⁿ1,l and eⁿ2,l lead to sink tl via intermediate nodes e1 and e2. The coding vectors v(e1,l), v(e2,l), v(e1), v(e2), v(eⁿ1,l), v(eⁿ2,l) and the coefficients m(e1,l, eⁿ1,l), m(e2,l, eⁿ2,l), m(e1,l, e2) (all equal to 1) are labeled on the graph.]


SLIDE 17

  • The dashed edges are edges in Lk for some k < l.
  • The current edges in Cl are e1,l and e2,l.
  • We have reached eⁿ2,l in the topological order.
  • The previous value m(e2,l, eⁿ2,l) = 0 remains, since v(eⁿ2,l) and v(e1,l) already form a basis.
  • Next we reach eⁿ1,l and we need to determine m(e1,l, eⁿ1,l).
  • But for any value of m(e1,l, eⁿ1,l), we have v(eⁿ1,l) = v(eⁿ2,l) and the new set of vectors cannot be a basis!


SLIDE 18

Returning to the algorithm...

  • The algorithm has reached node ei,l and wishes to continue to eⁿi,l, the following node in Pi,l.
  • So far, the set Ul = {u(e) : e ∈ Cl} is a basis.
  • A new list is generated: Cⁿl = Cl ∪ {eⁿi,l} \ {ei,l}.
  • There is a new set of partial coding vectors: Uⁿl = {uⁿ(e1,l), · · · , uⁿ(eⁿi,l), · · · , uⁿ(eh,l)}.


SLIDE 19

  • The algorithm determines a coding coefficient m(ei,l, eⁿi,l) between node ei,l and eⁿi,l so that Uⁿl will be a basis.
  • Let m′(ei,l, eⁿi,l) be the coding coefficient between ei,l and eⁿi,l before this stage of the algorithm.
    – If with m′(ei,l, eⁿi,l) the set Uⁿl is a basis - done!
    – Otherwise we have the following theorem:
      Theorem 1. Suppose that with m′(ei,l, eⁿi,l) the set Uⁿl is not a basis. Then with any other value m(ei,l, eⁿi,l) the set Uⁿl will be a basis.


SLIDE 20

But what about the previous sinks?

  • Changing m′(ei,l, eⁿi,l) changes the coding vectors incoming at the previous sinks. They may not form a basis anymore!
  • Theorem 2. Let Ck be the set of nodes incoming into the sink tk, k < l. Denote by V′k = {v′(e1,k), · · · , v′(eh,k)}, ej,k ∈ Ck, the set of global coding vectors of Ck with m′(ei,l, eⁿi,l). If V′k is a basis, then at most a single value of the new coefficient m(ei,l, eⁿi,l) will cause the new set Vk = {v(e1,k), · · · , v(eh,k)} not to be a basis.


SLIDE 21

Summing it all up

  • If m′(ei,l, eⁿi,l) must be replaced, pick a new value m(ei,l, eⁿi,l) according to some enumeration.
    – Check whether the independence condition is satisfied for all sinks; otherwise take the next value for m(ei,l, eⁿi,l).
    – Since for each sink only a single choice of m(ei,l, eⁿi,l) is bad, it is sufficient to enumerate over d + 1 coefficients.
  • Step l continues until it reaches the sink tl.
  • The algorithm terminates when it has gone over all d sinks.


SLIDE 22

Computation of transfer functions

  • In the construction algorithm, the transfer function from a certain node to another node has to be computed in each stage.
  • Define for the line graph L the |E| × |E| matrix C, where Ci,j = m(ei, ej) for (ei, ej) ∈ L and zero otherwise.
  • Koetter and Médard (02): the transfer function between ei and ej is the entry Fi,j of the matrix F = (I − C)^(−1) = I + C + C² + · · ·
  • Fi,j can be computed from C with complexity O(|E|²).
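The identity F = (I − C)^(−1) = I + C + C² + · · · can be checked directly for an acyclic line graph, where C is nilpotent and the series terminates after at most |E| terms. A sketch with scalar GF(2) coefficients (a simplification: in the construction the entries of C are polynomials in D):

```python
# F = I + C + C^2 + ... over GF(2), for nilpotent (acyclic) C.

def matmul_gf2(A, B):
    """Multiply two square 0/1 matrices over GF(2)."""
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def transfer_matrix(C):
    """Accumulate I + C + C^2 + ... + C^n; C^(n+1) = 0 for acyclic C."""
    n = len(C)
    F = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # I
    P = [row[:] for row in C]
    for _ in range(n):
        for i in range(n):
            for j in range(n):
                F[i][j] ^= P[i][j]
        P = matmul_gf2(P, C)
    return F
```

For a three-node chain e1 → e2 → e3, F picks up the direct entries of C plus the length-two path entry F[0][2], as expected.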


SLIDE 23

Complexity of Code Construction

  • The complexity of the precoding depends on the specific algorithm chosen.
  • The algorithm begins by finding the d flows from the source s to the sinks, at complexity O(d|E|h).
  • The algorithm has d steps, and in each it may go over all nodes:
    – For each stage, when checking a possible m(·, ·):
      ∗ May compute dh transfer functions at complexity O(dh|E|²)


SLIDE 24

      ∗ Perform an independence test for Ul at complexity O(h), and independence tests for the other sinks at complexity O(dh²)
    – In the average case a constant number of m(·, ·)’s is checked, giving stage complexity O(dh² + dh|E|²) = O(dh|E|²)
    – In the worst case d values are checked - complexity O(d²h|E|²)
  • Total complexity: O(d²h|E|³) in the average case and O(d³h|E|³) in the worst case.


SLIDE 25

Comparison

  • Jaggi et al (03) presented an algorithm for acyclic networks with complexity O(|E|dh²) on average and O(|E|dh² + |E|hd³) in the worst case.
  • Our algorithm can also be used for acyclic networks, at complexity O(|E|d²h² + |E|²) in the average case and O(|E|d³h² + |E|²) in the worst case.


SLIDE 26

A single delay in a cycle

  • In the Koetter and Médard (02) analysis of cyclic networks it is assumed that each node in L has a single delay.
  • But as we have shown, it is sufficient to have only a single delay in each cycle of the network.
  • Song et al (05) showed that for this “asynchronous transmission” the min-cut is an upper bound on the possible rate.
  • Since this bound is achievable, it is tight.


SLIDE 27

Adding and Removing Sinks

  • The existing construction algorithms (for acyclic networks) do not provide a simple way to add and remove sinks.
  • In our algorithm, adding a new sink simply corresponds to a new step of the algorithm, since only coding coefficients in the flow between the source and the new sink might be changed.
  • Removing sinks is analogous to adding sinks.
  • The efficient algorithm for removing and adding sinks can be performed for block or convolutional linear network codes.


SLIDE 28

Decoding Delay of the Sequential Decoder

  • I. Acyclic Networks
  • The delay of the sequential decoder proposed in Erez and Feder (04) is determined by the determinant of the matrix A(D), whose columns are the coding vectors in Vl:
    – If the term with the smallest power in the determinant is D^N, then the delay is at most N.

SLIDE 29

  • For each coding coefficient we can choose from M polynomials.
  • For node e incoming into sink tl, let Pm(e) be the path from s to e in the flow Ll; denote by lm(e) the length of Pm(e).
  • It can be shown that for a random code, for any M > d, the average delay at tl is bounded by

    delay(tl) ≤ Σ_{e∈Γin(tl)} lm(e)

SLIDE 30

Probability Distribution of Delay

  • The cost of the flow is given by lm = Σ_{e∈Γin(tl)} lm(e)
  • For random codes, for large M, the distribution of the delay is:

    P(delay = q) = C(lm + q − 1, q) · (1/2)^(q+lm)

  • The distribution is better for smaller M, as long as M > d.
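The stated distribution is a negative binomial, and two sanity checks follow immediately: it sums to 1 over q, and its mean equals lm, consistent with the average-delay bound of the previous slide. A quick numerical check (assumed flow cost lm = 4 for illustration):

```python
# P(delay = q) = C(lm + q - 1, q) * (1/2)^(q + lm): a negative binomial.
from math import comb

def p_delay(q, lm):
    """Probability that the sequential decoder's delay equals q."""
    return comb(lm + q - 1, q) * 0.5 ** (q + lm)

lm = 4
total = sum(p_delay(q, lm) for q in range(500))       # should be ~1
mean = sum(q * p_delay(q, lm) for q in range(500))    # should be ~lm
```

The truncation at q = 500 is safe here because the tail decays geometrically.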


SLIDE 31

  • II. Cyclic Networks
  • For cyclic networks, the precoding stage adds delays even for block codes ⇒ the sequential decoder is useful both for block and convolutional codes.
  • The elements of A(D) might in general be rational functions, where the denominator of each element is indivisible by D.
  • Therefore the least common multiple of the denominators, denoted lcm, is also indivisible by D.

SLIDE 32

  • If we multiply A(D) by lcm to yield Ã(D), then the determinant is multiplied by lcm^h.
  • A(D) and Ã(D) have the same delay with the sequential decoder.
  • The delay for Ã(D) is determined by the sum of delays accumulated along the h paths between the source and the sink.
  • In comparison to acyclic networks, this delay might increase only by the number of nodes in ED.

SLIDE 33

Proof of Theorems


SLIDE 34

Lemma 1

Lemma 1. Consider a set of nodes {e1, · · · , eh} and their coding vectors W = {w(e1), · · · , w(ei), · · · , w(eh)}, which may be partial or global coding vectors. Consider now the coding vectors of the same set of nodes W̃ = {w̃(e1), · · · , w̃(ei), · · · , w̃(eh)}, obtained when m(ei, e) = 0 for all e ∈ L. The set W is a basis iff the set W̃ is a basis.


SLIDE 35

Proof Outline

  • Split node ei into 3 nodes: etail, emid and ehead, connected by the edges (etail, emid) and (emid, ehead).
  • Gee: the transfer function from ehead to etail in L \ emid.
  • The relation between w(ei) and w̃(ei) is:

    w(ei) = w̃(ei) + Gee w̃(ei) + · · · = (1 / (1 − Gee)) w̃(ei)


SLIDE 36

  • The other vectors are given by:

    w(ej) = w̃(ej) + Fij (1 / (1 − Gee)) w̃(ei),  j ≠ i

    where Fij is the transfer function from ei to ej.
  • The relation between W and W̃ is linear and invertible ⇒ a basis W will be mapped to a basis W̃ and vice versa.


SLIDE 37

Proof of Theorem 1

  • Denote the coding vectors of Cⁿl when all the coefficients in rl are zero by Ũⁿl = {ũⁿ(e1,l), · · · , ũⁿ(eⁿi,l), · · · , ũⁿ(eh,l)}.
  • Assume Ul = {u(e1,l), · · · , u(ei,l), · · · , u(eh,l)} is a basis.
  • After replacing m′(ei,l, eⁿi,l) by m(ei,l, eⁿi,l), the vector ũⁿ(eⁿi,l) equals:

    ũⁿ(eⁿi,l) = ũ′(eⁿi,l) + (m(ei,l, eⁿi,l) − m′(ei,l, eⁿi,l)) u(ei,l)

SLIDE 38

  • Using this relation it can be shown that if Ul is a basis, and if with m′(ei,l, eⁿi,l) the set Ũⁿl is not a basis, then for any other m(ei,l, eⁿi,l) the set Ũⁿl is a basis.
  • From Lemma 1 it follows that the set Uⁿl is also a basis.

SLIDE 39

Proof of Theorem 2

  • Before the replacement of m′(ei,l, eⁿi,l), the set V′k = {v′(e1,k), · · · , v′(eh,k)} is a basis.
  • We want to analyze when, after the replacement by m(ei,l, eⁿi,l), the new set of global coding vectors Vk = {v(e1,k), · · · , v(eh,k)} is also a basis.
  • Assume that the edges outgoing from ei,l, except (ei,l, eⁿi,l), are Γo = {(ei,l, e1), · · · , (ei,l, eq)}.

SLIDE 40

  • The transfer function Gee can be expressed as Gee = G1 + m(ei,l, eⁿi,l) G2.
  • The global coding vector v(ei,l) is given by:

    v(ei,l) = ṽ(ei,l) + Gee ṽ(ei,l) + · · · = (1 / (1 − Gee)) ṽ(ei,l) = (1 / (1 − m(ei,l, eⁿi,l) Q)) y(ei,l)

    where Q = G2/(1 − G1) and y(ei,l) = ṽ(ei,l)/(1 − G1).

SLIDE 41

  • Using the linearity of the code, it can be shown that for 1 ≤ j ≤ h:

    v(ej,k) − v′(ej,k) = f(m(ei,l, eⁿi,l)) Hj y(ei,l)

    where Hj ≡ F1,j + Q F2,j; F1,j is the transfer function from ei,l to ej,k when m(ei,l, e) = 0 for e ∈ Γo \ (ei,l, eⁿi,l), and F2,j is the transfer function when only the coefficient m(ei,l, eⁿi,l) = 0.

SLIDE 42

  • Suppose the representation of y(ei,l) in the basis V′k is:

    y(ei,l) = β1 v′(e1,k) + β2 v′(e2,k) + · · · + βh v′(eh,k)

  • Using this relation and the assumption that V′k is a basis, the set Vk is not a basis only if the following set of equations has a non-trivial solution:

    −(1 / f(m(ei,l, eⁿi,l))) [α1, . . . , αh]^T = A [α1, . . . , αh]^T

    where A is the h × h matrix with entries Aj,i = Hi βj (row j is (H1βj, · · · , Hhβj)).

SLIDE 43

  • A non-trivial solution exists only if the matrix has the eigenvalue:

    λ = −1 / f(m(ei,l, eⁿi,l))

  • The matrix has the eigenvalue 0 with multiplicity h − 1, and

    λ = trace(A) = H1β1 + H2β2 + · · · + Hhβh

    with multiplicity 1.
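The eigenvalue claim follows because A, with entries Aj,i = Hi βj, is a rank-one outer product, so β itself is an eigenvector with eigenvalue trace(A) and the remaining eigenvalues are 0. A small numerical check (arbitrary illustrative values for H and β):

```python
# A[j][i] = H[i] * beta[j] is rank one: eigenvalues are 0 (multiplicity h-1)
# and trace(A) = sum_i H[i] * beta[i].

def outer(beta, H):
    """Build A with A[j][i] = H[i] * beta[j]."""
    return [[h * b for h in H] for b in beta]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

beta = [1.0, 2.0, -1.0]
H = [3.0, 0.5, 2.0]
A = outer(beta, H)
lam = trace(A)
# beta is an eigenvector of A with eigenvalue trace(A):
Ab = [sum(A[i][j] * beta[j] for j in range(len(beta))) for i in range(len(beta))]
```

Here A·β = (Σ Hj βj)·β = trace(A)·β, which is exactly the single non-zero eigenvalue used on the slide.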


SLIDE 44

  • It can be shown that Vk is not a basis only for

    m(ei,l, eⁿi,l) = (m′(ei,l, eⁿi,l) − (1 − m′(ei,l, eⁿi,l) Q) / trace(A)) / (1 − Q (1 − m′(ei,l, eⁿi,l) Q) / trace(A))

    ⇒ for at most a single choice of m(ei,l, eⁿi,l) the set Vk will not be a basis.
