
Lecture 6: Network Information Flow

I-Hsiang Wang

Department of Electrical Engineering, National Taiwan University
ihwang@ntu.edu.tw

December 2, 2014

From Point-to-Point Channel to Multi-Terminal Network

So far we have focused on point-to-point communication systems:

1 Deliver an information source to a destination over a point-to-point communication link in the most efficient way.
2 The source-channel separation theorem suggests that separate source coding and channel coding is optimal in terms of rate.

Now, what if the physical medium carrying the information is no longer a point-to-point link, but a collection of such links that form a network? Observations:

1 A link consists of two terminals: a transmitting terminal and a receiving terminal.
2 A receiving terminal of one link serves as a transmitting terminal of the succeeding link.
3 Each terminal is able to collect the data it receives, process them, and then transmit the processed outcome (coding).

A Noisy Communication Channel

[Figure: W → ENC → X^N → Channel → Y^N → DEC → Ŵ]

Memoryless channel: C = max_{p(x)} I(X; Y)

Memoryless channel with input cost constraint: C = sup_{p(x): E[b(X)] ≤ B} I(X; Y)

Channel capacity is determined by the Channel Coding Theorem.

Representing a Noisy Channel by an Edge

A point-to-point communication link from node i to node j is represented by an edge e := (i, j) with capacity C(e) := C = max_{p(x_i)} I(X_i; Y_j), as determined by the Channel Coding Theorem.

Abstraction: Layering

By the channel coding theorem, over N channel uses the noisy link e = (i, j) can be abstracted as a noiseless bit pipe that reliably carries a message m(i,j) ∈ [1 : 2^{NC(i,j)}].

A Network: Collection of Vertices and Edges

[Figure: a source s and a destination t connected through intermediate nodes i and j; each edge (i, j) is a noiseless link carrying a message m(i,j) at rate C(i, j).]

Graphical Network (1)

We shall focus on graphical networks, consisting of multiple terminals and links, where the terminals are modeled as vertices of a graph and the links are modeled as edges of the graph. Moreover, the links at each terminal are assumed to be orthogonal, whether they are incoming or outgoing.

[Figure: a node k with incoming edges from i and j carrying m(i,k), m(j,k), and outgoing edges to l and n carrying m(k,l), m(k,n); In(k) and Out(k) denote the incoming and outgoing neighbors of k.]

In the network, some terminals, as sources of information, would like to deliver their own messages to their respective destinations. For simplicity, in this lecture we shall focus on graphs without cycles.

Graphical Network (2)

Hence, a network with message set {W1, ..., WK} is specified by

1 An underlying directed acyclic graph (V, E), where V is the collection of vertices (terminals) and E is the collection of edges (links).
2 A capacity function C : E → [0, ∞).
3 Two kinds of special vertices (terminals) in the graph:
  Source vertices S = ∪_{i=1}^{K} Si, where Si ⊂ V for each i ∈ [1 : K]. Each vertex of Si has message Wi to send.
  Destination vertices T = ∪_{i=1}^{K} Ti, where Ti ⊂ V for each i ∈ [1 : K]. Each vertex of Ti would like to decode message Wi.

Example: G = (V, E, C(·)) with S1 = {s1}, S2 = {s1, s2} and T1 = {t1, t3}, T2 = {t2, t3}.

Traffic Patterns

The traffic pattern of a K-message graphical network is determined by the patterns of source vertices {Si}_{i=1}^{K} and destination vertices {Ti}_{i=1}^{K}. In particular, some common traffic patterns of interest are the following:

1 Single-message single-source unicast: K = 1, |S| = 1, |T| = 1.
2 Single-message single-source multicast: K = 1, |S| = 1, |T| ≥ 2.
3 Multi-message multi-source single-group multiple multicast: K ≥ 2, |Si| = 1, |S| = K, Ti = T, ∀ i ∈ [1 : K].
4 Multi-message single-source multi-destination multiple unicast: K ≥ 2, |Si| = |S| = 1, |Ti| = 1, |T| = K, ∀ i ∈ [1 : K].
5 Multi-message multi-source multi-destination multiple unicast: K ≥ 2, |Si| = |Ti| = 1, |S| = |T| = K, ∀ i ∈ [1 : K].

Single Unicast and Single Multicast

Single Unicast: G = (V, E, C(·)) with S1 = {s1}, T1 = {t1}; the source s1 delivers W1 to the single destination t1.

Single Multicast: G = (V, E, C(·)) with S1 = {s1}, T1 = {t1, t2}; the source s1 delivers W1 to both destinations t1 and t2.

Multiple-Access and Broadcast

Multiple Access (multi-message multi-source single-destination multiple unicast): G = (V, E, C(·)) with S1 = {s1}, S2 = {s2}, T1 = T2 = {t3}; the destination t3 decodes both W1 and W2.

Broadcast (multi-message single-source multi-destination multiple unicast): G = (V, E, C(·)) with S1 = S2 = {s1}, T1 = {t1}, T2 = {t2}; the source s1 delivers W1 to t1 and W2 to t2.

K-Unicast

K-Unicast (multi-message multi-source multi-destination multiple unicast): G = (V, E, C(·)) with S1 = {s1}, S2 = {s2}, T1 = {t1}, T2 = {t2}; each source si delivers its own message Wi to its own destination ti.

Lecture Overview

Key question to be answered in this lecture: Given a network, what is the highest data rate (tuple of rates) that can be delivered from the information source(s) to the respective destination(s)?

In general, we have no conclusive answers for K-unicast and more complicated traffic patterns. In this lecture, we will visit the following:

1 Single Unicast: Max-Flow (Routing) = Min-Cut
2 Single Multicast: Max-Flow (Network Coding) = Minimum Min-Cut
3 Multiple Access and Broadcast: extensions of max-flow min-cut

In particular, several achievability and converse ideas will be presented:
Achievability: Ford-Fulkerson algorithm, network coding
Converse: cut-set bound

Below, we formally set up the problem with a single message only (hence a single source), which covers both single unicast and single multicast.

Problem Setup: Single-Message Single-Source Multicast

Given a single-message single-source multicast network G = (V, E, C(·)) with source node S = {s} and destination nodes T = {t1, ..., tK}, a (2^{NR}, N) code over the network consists of:

1 a source encoding function (encoder)
  enc_s^(N) : [1 : 2^{NR}] → ×_{k∈Out(s)} [1 : 2^{NC(s,k)}]
  that maps each source message w ∈ [1 : 2^{NR}] to an outgoing message m(s,k) for each outgoing edge (s, k), k ∈ Out(s).

2 a relay encoding function (encoder) for each node j ∈ V \ {s}
  enc_j^(N) : ×_{i∈In(j)} [1 : 2^{NC(i,j)}] → ×_{k∈Out(j)} [1 : 2^{NC(j,k)}]
  that maps the incoming messages (m(i,j) : i ∈ In(j)) to an outgoing message m(j,k) for each outgoing edge (j, k), k ∈ Out(j).
  Example: a relay j with In(j) = {i1, i2} and Out(j) = {k1, k2, k3} computes m(j,kl) = f_l(m(i1,j), m(i2,j)) for l = 1, 2, 3.

3 a decoding function (decoder) for each destination tk ∈ T
  dec_tk^(N) : ×_{i∈In(tk)} [1 : 2^{NC(i,tk)}] → [1 : 2^{NR}]
  that maps the incoming messages (m(i,tk) : i ∈ In(tk)) to a reconstructed message ŵ ∈ [1 : 2^{NR}].

The error probability is P_e^(N) := Pr{Ŵ ≠ W}. A source data rate R is achievable if there exists a sequence of (2^{NR}, N) codes such that lim_{N→∞} P_e^(N) = 0.
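The setup above can be made concrete with a small, purely illustrative sketch: a network code as one source encoder, one map per relay, and one decoder per destination, run over a DAG in topological order. All class and field names here are hypothetical, not from the lecture.

```python
# A minimal sketch of the (2^{NR}, N) network-code interface above (hypothetical names).
# Messages on edge (i, j) are integers in range(2**(N*C[i, j])).
from typing import Callable, Dict, Tuple

Edge = Tuple[str, str]

class NetworkCode:
    def __init__(self,
                 source_enc: Callable[[int], Dict[Edge, int]],
                 relay_enc: Dict[str, Callable[[Dict[Edge, int]], Dict[Edge, int]]],
                 decoders: Dict[str, Callable[[Dict[Edge, int]], int]],
                 topo_order: list):
        self.source_enc = source_enc   # w -> {(s, k): m_(s,k)}
        self.relay_enc = relay_enc     # j -> ({(i, j): m} -> {(j, k): m})
        self.decoders = decoders       # t -> ({(i, t): m} -> w_hat)
        self.topo_order = topo_order   # nodes in topological order, source first

    def run(self, w: int) -> Dict[str, int]:
        """Propagate messages edge by edge and return each destination's estimate."""
        msgs: Dict[Edge, int] = dict(self.source_enc(w))
        for j in self.topo_order:
            if j in self.relay_enc:
                incoming = {e: m for e, m in msgs.items() if e[1] == j}
                msgs.update(self.relay_enc[j](incoming))
        return {t: dec({e: m for e, m in msgs.items() if e[1] == t})
                for t, dec in self.decoders.items()}
```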

1 Single Unicast and Single Multicast
2 Linear Network Coding and Algorithms
3 Summary

Cut-Set Upper Bound

Definition 1 (Cut, Cut Value, and Min-Cut)
For a network G = (V, E, C(·)):

1 A cut separating two disjoint vertex sets A and B is a partition (U, U^c) of the entire vertex set V such that A ⊆ U and B ⊆ U^c.
2 The value of the cut is defined as C(U) := ∑_{(i,j)∈E, i∈U, j∈U^c} C(i, j).
3 The min-cut separating two disjoint vertex sets A and B is defined as C(A ; B) := min_{U: A⊆U, B⊆U^c} C(U) (a brute-force sketch of this computation follows Lemma 1 below).

Lemma 1 (Cut-Set Upper Bound)
For a single unicast network, if R is achievable, then R ≤ C(s ; t).
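To make Definition 1 concrete, here is a brute-force sketch (hypothetical helper, not from the lecture) that enumerates every cut (U, U^c) with s ∈ U, t ∈ U^c and returns the min-cut value; it is exponential in |V| and only meant for tiny graphs.

```python
# Brute-force min-cut per Definition 1: enumerate all cuts (U, U^c) with s in U, t in U^c.
from itertools import combinations

def min_cut(V, C, s, t):
    """V: list of vertices, C: dict {(i, j): capacity}. Returns C(s ; t)."""
    others = [v for v in V if v not in (s, t)]
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            U = {s, *extra}
            cut_value = sum(c for (i, j), c in C.items() if i in U and j not in U)
            best = min(best, cut_value)
    return best

# Toy 4-node example (illustrative; not a network from the slides):
C = {("s", "a"): 10, ("s", "b"): 9, ("a", "t"): 9, ("b", "t"): 10, ("a", "b"): 1}
print(min_cut(["s", "a", "b", "t"], C, "s", "t"))  # -> 19
```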

pf: By Fano's inequality and the data processing inequality, for every cut U separating s and t we have

N(R − ϵ_N) ≤ I(W ; Ŵ)
         (a)≤ I(W ; {M(i,j) : (i, j) ∈ E, i ∈ U, j ∈ U^c})
            ≤ H({M(i,j) : (i, j) ∈ E, i ∈ U, j ∈ U^c})
         (b)≤ N C(U),

where ϵ_N → 0 as N → ∞, and
(a) is due to the Markov chain W − {M(i,j) : (i, j) ∈ E, i ∈ U, j ∈ U^c} − Ŵ,
(b) is due to the definition of the cut value.

Hence, R ≤ C(U) for every cut U separating s and t. Therefore, if R is achievable, then R ≤ C({s} ; {t}), the min-cut.

Max-Flow Min-Cut Theorem

Theorem 1 (Max-Flow = Min-Cut for Single Unicast)
For a single unicast network, if R < C(s ; t), then R is achievable. Conversely, if R is achievable, then R ≤ C(s ; t).

In other words, the maximum rate of information flow that can be delivered from the source s to the destination t in this single unicast network is equal to the minimum cut. The converse is proved in Lemma 1. In the following, we give an achievability proof based on the random coding argument.

Remark: In fact, R = C(s ; t) can also be achieved, by routing information bits from s to t. We will come back to the algorithmic aspects of the max-flow min-cut theorem later, where we introduce the Ford-Fulkerson algorithm, one of the most well-known achievability proofs of the max-flow min-cut theorem.

Proof of Achievability by Random Coding

Goal: Show that if R < C(s ; t), then there exists a sequence of (2^{NR}, N) network codes such that lim_{N→∞} P_e^(N) = 0. To prove existence, we invoke the random coding argument.

Note: Key differences from the point-to-point channel coding problem:
The received signal at the final destination t (i.e., what it decodes from) is a deterministic function of the source's message W.
Decodability ⟺ invertibility of this function.
There are |E| encoding functions to be specified.

Proof Outline:
1 Random codebook generation: we generate the codebook for each edge according to the uniform distribution.
2 Error probability analysis: we use the concept of distinguishability to form cuts in the graph, so that the min-cut emerges naturally.

Random Codebook Generation

Notation: for better presentation, denote the received messages at node j ≠ s by Fj(w), as a function of the source message w. For the source s, simply set Fs(w) = w. In other words,

Fj(w) := (m(i,j) : i ∈ In(j)) ∀ j ∈ V \ {s},   Fs(w) = w.

The codebook specifies |E| mappings, m(j,k)(Fj) ∈ [1 : 2^{NC(j,k)}], for all (j, k) ∈ E. For each (j, k) ∈ E, we generate the mapping uniformly at random, independently over all possible Fj ∈ ×_{i∈In(j)} [1 : 2^{NC(i,j)}]:

M(j,k) ∼ Unif[1 : 2^{NC(j,k)}].

Remark: You may be a bit confused, as in Lecture 04 we generated the codebook i.i.d. not only across the 2^{NR} rows but also across the N symbols. This is in fact the same as here, because Unif[1 : 2^{NC(j,k)}] = (Unif[1 : 2^{C(j,k)}])^N.
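A small sketch of this random codebook generation (illustrative helper names): for one edge (j, k), every possible tuple of incoming messages is mapped to an independent uniform outgoing message.

```python
# For every edge (j, k), each possible tuple of incoming messages F_j is mapped to an
# independent Unif[1 : 2^{N C(j,k)}] outgoing message.
import random
from itertools import product

def random_edge_codebook(in_alphabets, out_size, rng=random):
    """in_alphabets: sizes of the incoming-message alphabets at node j;
    out_size: 2^{N C(j,k)}. Returns dict: incoming tuple -> random outgoing message."""
    return {f_j: rng.randrange(out_size)
            for f_j in product(*(range(a) for a in in_alphabets))}

# e.g. a relay with two incoming unit-capacity edges, N = 3 channel uses each
book = random_edge_codebook(in_alphabets=[2**3, 2**3], out_size=2**3)
print(len(book))  # 64 entries, one per possible F_j
```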

Error Probability Analysis: Error Events

By the symmetry of the codebook generation, we can assume WLOG that W = 1, and focus on upper bounding the averaged-over-codebook error probability given W = 1, P_1(E) (here E denotes the error event). Observe that

E = ∪_{w=2}^{2^{NR}} E_w,  where the error event E_w := {Ft(w) = Ft(1)}.

For a cut (U, U^c) separating s and t, define the event

E_w^(U) := {Fj(w) ≠ Fj(1), ∀ j ∈ U} ∩ {Fj(w) = Fj(1), ∀ j ∈ U^c}.

In other words, this is the event where ALL nodes on the left of the cut can distinguish message w from 1, and NONE of the nodes on the right can distinguish message w from 1.

Note: the collection of events {E_w^(U) : (U, U^c) is a cut} partitions E_w.

[Figure: the network with a cut (U, U^c); nodes in U are "not confused", nodes in U^c are "confused".]

Hence, by the union of events bound,

P_1(E) ≤ ∑_{w=2}^{2^{NR}} P_1(E_w),  and  P_1(E_w) = ∑_{U⊆V, s∈U, t∈U^c} P_1(E_w^(U)).

Next, let us upper bound P_1(E_w^(U)) for each w ≠ 1 and each s-t cut (U, U^c).

Error Probability Analysis: Distinguishability

Since the graph is acyclic (no cycles), we can topologically sort the nodes in U and label them as U = {1, 2, ..., |U|}, where s = 1.

[Figure: nodes 1, 2, 3 in U are "not confused"; the messages they send across the cut may still be "confusing"; nodes in U^c are "confused".]

With this labeling, ∀ j ∈ [1 : |U|], define the events

A_w^(j) := {Fj(w) ≠ Fj(1)},
B_w^(j) := {M(j,k)(Fj(w)) = M(j,k)(Fj(1)), ∀ k ∈ Out(j) ∩ U^c}.

In other words, A_w^(j) is the event where node j can distinguish message w from 1, while B_w^(j) is the event where a "non-confused" node j sends out "confusing" messages on the edges across the cut (U, U^c). Hence,

E_w^(U) = ∩_{j=1}^{|U|} (A_w^(j) ∩ B_w^(j))  ⟹  P_1(E_w^(U)) = P_1(∩_{j=1}^{|U|} (A_w^(j) ∩ B_w^(j))).

Error Probability Analysis: Finalization (1)

∴ P_1(E_w^(U)) = P_1(∩_{j=1}^{|U|} (A_w^(j) ∩ B_w^(j)))
             = ∏_{j=1}^{|U|} P_1(A_w^(j), B_w^(j) | A_w^(1), B_w^(1), ..., A_w^(j−1), B_w^(j−1))
          (a)≤ ∏_{j=1}^{|U|} P_1(B_w^(j) | A_w^(j), A_w^(1), B_w^(1), ..., A_w^(j−1), B_w^(j−1))
          (b)= ∏_{j=1}^{|U|} P_1(B_w^(j) | A_w^(j))

(a) is due to Pr{A, B | C} = Pr{A | C} · Pr{B | A, C} ≤ Pr{B | A, C}.
(b) is due to the fact that, given that node j is not confused, the event that it encodes confusing messages on its outgoing edges is independent of what happens at the nodes before it in topological order.

Error Probability Analysis: Finalization (2)

By the choice of the uniform random coding ensemble, and recalling that B_w^(j) := {M(j,k)(Fj(w)) = M(j,k)(Fj(1)), ∀ k ∈ Out(j) ∩ U^c}, we have

P_1(E_w^(U)) ≤ ∏_{j=1}^{|U|} P_1(B_w^(j) | A_w^(j)) = ∏_{j=1}^{|U|} ∏_{k∈Out(j)∩U^c} 2^{−NC(j,k)} = 2^{−N ∑_{j∈U, k∈U^c, (j,k)∈E} C(j,k)} = 2^{−NC(U)}.

⟹ P_1(E_w) = ∑_{U⊆V, s∈U, t∈U^c} P_1(E_w^(U)) ≤ ∑_{U⊆V, s∈U, t∈U^c} 2^{−NC(U)} ≤ 2^{|V|−2} · 2^{−NC(s ; t)}

⟹ P_1(E) ≤ ∑_{w=2}^{2^{NR}} P_1(E_w) ≤ 2^{NR} · 2^{|V|−2} · 2^{−NC(s ; t)}.

Hence, as long as R < C(s ; t), P_1(E) → 0 as N → ∞.

Coding Theorem for Single Multicast

A straightforward extension of the previous converse and achievability proofs establishes the following coding theorem for single multicast.

Theorem 2 (Max-Flow = Min Min-Cut for Single Multicast)
For a single multicast network with K destinations T = {t1, ..., tK}, if R < min_{k∈[1:K]} C(s ; tk), then R is achievable. Conversely, if R is achievable, then R ≤ min_{k∈[1:K]} C(s ; tk).

pf: The proof is left as an exercise.

Remark: So far we have extended the proof techniques for achievability (random coding) and converse from the point-to-point noisy channel to the noiseless graphical network. Unlike the noisy channel, in the graphical network we can construct explicit schemes efficiently: by the Ford-Fulkerson algorithm for single unicast, and by the Jaggi et al. algorithm for single multicast.


Routing for Single Unicast

Recall: for a single unicast network, we proved that for any R < C(s ; t) bits/channel use, there exists a sequence of (2^{NR}, N) network codes such that the decoding error probability → 0 as N → ∞.

Note: each relay node's operation is an arbitrary mapping from the received data to the outgoing messages on the outgoing links.

Below we show that we can achieve zero error for any finite N using an explicit construction of routing schemes. We begin by defining flow.

Definition 2 (Flow)
A flow f : E → [0, ∞) with value R is an assignment to edges satisfying
f(i, j) ≤ C(i, j), ∀ (i, j) ∈ E.  (capacity constraint)
∑_{i∈In(j)} f(i, j) = ∑_{k∈Out(j)} f(j, k), ∀ j ∈ V \ {s, t}.  (conservation)
∑_{i∈In(t)} f(i, t) = ∑_{k∈Out(s)} f(s, k) = R.  (value)
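As a quick illustration of Definition 2, the sketch below (hypothetical helper, toy graph) checks the capacity, conservation, and value constraints of a candidate flow.

```python
# Check whether f is a valid flow of value R per Definition 2.
def is_valid_flow(V, C, f, s, t, R, tol=1e-9):
    """C, f: dicts keyed by edge (i, j); returns True iff f is a flow of value R."""
    if any(f.get(e, 0) < -tol or f.get(e, 0) > C[e] + tol for e in C):
        return False                          # capacity constraint
    def net_out(v):
        return (sum(x for (i, j), x in f.items() if i == v)
                - sum(x for (i, j), x in f.items() if j == v))
    if any(abs(net_out(v)) > tol for v in V if v not in (s, t)):
        return False                          # conservation at relays
    return abs(net_out(s) - R) < tol and abs(net_out(t) + R) < tol  # value R

C = {("s", "a"): 10, ("s", "b"): 9, ("a", "t"): 9, ("b", "t"): 10, ("a", "b"): 1}
f = {("s", "a"): 10, ("s", "b"): 9, ("a", "t"): 9, ("a", "b"): 1, ("b", "t"): 10}
print(is_valid_flow(["s", "a", "b", "t"], C, f, "s", "t", R=19))  # True
```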

Routing as a Special Case of Network Code

A valid flow on a single unicast network stands for an explicit routing scheme that delivers NR bits of data w from the source s to the destination t:

At the source s: the NR bits of w are split into independent parts m(s,k) for all k ∈ Out(s), with rate f(s, k) on edge (s, k).

At a relay j: the incoming bits are split into independent parts m(j,k) for all k ∈ Out(j), with rate f(j, k) on edge (j, k).

At the destination t: the received bits are combined back into w.

Finding Max Flow by Solving a Linear Program

The problem of finding the flow with maximum value can be formulated as a linear program (LP) as follows:

maximize R
subject to:
  0 ≤ f(i, j) ≤ C(i, j), ∀ (i, j) ∈ E
  ∑_{i∈In(j)} f(i, j) = ∑_{k∈Out(j)} f(j, k), ∀ j ∈ V \ {s, t}
  ∑_{i∈In(t)} f(i, t) = ∑_{k∈Out(s)} f(s, k) = R

The objective function is R, and the variables are {f(i, j) | (i, j) ∈ E}.

Remark: The max-flow min-cut theorem can alternatively be proved via linear programming duality: the dual of the above max-flow LP is the LP for finding the min-cut. (See El Gamal & Kim, Ch. 15.1.)
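A minimal sketch of this max-flow LP using scipy.optimize.linprog, assuming SciPy is available; the toy graph and helper names are illustrative, and the formulation maximizes the net flow out of s (assuming no edges into s or out of t).

```python
# Max-flow LP: one variable per edge, conservation at relays, box constraints from capacities.
import numpy as np
from scipy.optimize import linprog

def max_flow_lp(V, C, s, t):
    edges = list(C)
    idx = {e: k for k, e in enumerate(edges)}
    # maximize sum of f(s, k)  <=>  minimize its negative
    c = np.zeros(len(edges))
    for (i, j) in edges:
        if i == s:
            c[idx[(i, j)]] = -1.0
    # flow conservation at every relay node
    A_eq, b_eq = [], []
    for v in V:
        if v in (s, t):
            continue
        row = np.zeros(len(edges))
        for (i, j) in edges:
            if j == v:
                row[idx[(i, j)]] += 1.0   # incoming
            if i == v:
                row[idx[(i, j)]] -= 1.0   # outgoing
        A_eq.append(row)
        b_eq.append(0.0)
    bounds = [(0.0, C[e]) for e in edges]
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    return -res.fun

C = {("s", "a"): 10, ("s", "b"): 9, ("a", "t"): 9, ("b", "t"): 10, ("a", "b"): 1}
print(max_flow_lp(["s", "a", "b", "t"], C, "s", "t"))  # ~19.0
```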

Flow and Edge-Disjoint Paths

Finding a flow can be intuitively thought of as finding edge-disjoint paths in the network, where the paths are used to route independent data from the source s to the destination t.

[Figure: an example network from s to t with edge capacities 10, 9, 10, 9, 10, where Max-Flow = Min-Cut = 19; the flow decomposes into edge-disjoint paths with values 9, 9, and 1.]

Naïve Greedy Algorithm is Suboptimal

[Figure: find a path with maximum flow value (10) and remove the used capacity along the path; in the remaining network (capacities 9 and 9 remain on edges that no longer form an s-t path), continue this process until no more path can be found. The result is strictly suboptimal.]

Ford-Fulkerson: Allowing Regrets on Residual Networks

The key issue with the naïve greedy algorithm is that there is no chance for "regret" after choosing a "seemingly best" path. Instead, we allow such a "turning-back" option by labeling the reverse link capacities in the remaining network, which we call the residual network.

[Figure: each edge is labeled with forward/reverse residual capacities (initially 10/0, 9/0, 10/0, 9/0, 10/0); the first augmenting path, of value 10, is selected.]

[Figure: iteration 1 finds the path of maximum flow value (10) and updates the residual network (saturated edges become 0/10, i.e., the capacity moves to the reverse direction). Iteration 2 finds an augmenting path of value 9 on the residual network; after this update no augmenting path remains and the algorithm is done.]

[Figure: the value-10 path and the value-9 augmenting path combine into the final flow, shown with edge flow values 9, 9, 1.]

Ford-Fulkerson Algorithm

The Ford-Fulkerson algorithm selects valid paths (called augmenting paths) on residual networks, following the steps below (a code sketch follows the note at the bottom):

0 Initialize: start with Gf ← G and flow f ← {f(i, j) = 0 | (i, j) ∈ E}.
1 Form the residual network Gf according to the flow f.
2 Find an augmenting path on Gf (using breadth-first search, for example).
3 Stop if none exists; otherwise, set f′ := the maximum flow value along the augmenting path (saturate the path).
4 Set f ← f + f′ and go to Step 1.

Note: Assuming that the edge capacities {C(i, j) | (i, j) ∈ E} are integers, and because the min-cut C(s ; t) < ∞, the algorithm must terminate (since each iteration strictly increases the value of the flow f).
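Here is a compact sketch of the algorithm with BFS augmenting paths (the Edmonds-Karp variant), for integer capacities; the helper names and the toy graph are illustrative.

```python
# Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp), a sketch for integer capacities.
from collections import deque, defaultdict

def max_flow(C, s, t):
    """C: dict {(i, j): capacity}. Returns the max-flow value from s to t."""
    residual = defaultdict(int)
    for (i, j), c in C.items():
        residual[(i, j)] += c              # reverse capacities implicitly start at 0
    value = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for (i, j), r in residual.items():
                if i == u and r > 0 and j not in parent:
                    parent[j] = (i, j)
                    queue.append(j)
        if t not in parent:
            return value                   # no augmenting path: done
        # find the bottleneck along the path, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append(parent[v]); v = parent[v][0]
        bottleneck = min(residual[e] for e in path)
        for (i, j) in path:
            residual[(i, j)] -= bottleneck
            residual[(j, i)] += bottleneck
        value += bottleneck

C = {("s", "a"): 10, ("a", "b"): 10, ("b", "t"): 10, ("s", "b"): 9, ("a", "t"): 9}
print(max_flow(C, "s", "t"))  # 19
```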

Ford-Fulkerson Outputs Max-Flow

Theorem 3 (Ford-Fulkerson Theorem)
Let the Ford-Fulkerson algorithm terminate with flow f and residual network Gf. The following three statements are equivalent:

1 f is a max-flow.
2 Gf does not contain any augmenting path.
3 The value of f equals C(U) (the cut value) for some s-t separating cut (U, U^c).

pf: We shall prove that 1 ⟹ 2, 2 ⟹ 3, and 3 ⟹ 1.
3 ⟹ 1: by Lemma 1.
1 ⟹ 2: by the termination condition of the algorithm.

2 ⟹ 3: Construct a cut (U, U^c) from the residual network Gf by setting U to be the set of nodes reachable from the source s in Gf. By the definition of the residual capacities in Gf, the flow f saturates all the edges across the cut (U, U^c). Hence, the value of the flow f is equal to the cut value C(U).

[Figure: in the final residual network of the running example, U is the set of nodes reachable from s; every edge crossing (U, U^c) is saturated.]

Routing is Suboptimal for Multicast

[Figure: the Butterfly Network, with source s, destinations t1 and t2, and all edge capacities equal to 1.]

The max-flow from s to t1 is 2. By symmetry, the max-flow from s to t2 is also 2. Hence, by Theorem 2, the maximum data rate R of information flow should be 2.

Question: Can we achieve 2 by routing?

Exercise 1
Show that the max flow achievable by routing in the Butterfly Network is 3/2.

[Figure: attempting to route the two bits A and B to both t1 and t2 creates a conflict on the bottleneck edge, which would have to carry A for one destination and B for the other.]

Network Coding is Necessary

To achieve rate 2, we must do network coding on the bottleneck edge.

[Figure: in the Butterfly Network, the bottleneck edge carries A ⊕ B.]

t1 receives {A, A ⊕ B} and hence can decode {A, B}.
t2 receives {B, A ⊕ B} and hence can decode {A, B}.
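A tiny sketch of this butterfly scheme for one-bit messages (illustrative, not from the slides): the bottleneck carries the XOR, and each destination XORs it with the bit it receives directly.

```python
# Butterfly network coding over F_2: the bottleneck carries A XOR B,
# and each destination recovers both bits from its two received bits.
def butterfly(A: int, B: int):
    coded = A ^ B                    # bottleneck edge carries A ⊕ B
    t1 = {"A": A, "B": A ^ coded}    # t1 receives {A, A⊕B} and decodes B = A ⊕ (A⊕B)
    t2 = {"A": B ^ coded, "B": B}    # t2 receives {B, A⊕B} and decodes A = B ⊕ (A⊕B)
    return t1, t2

print(butterfly(1, 0))  # ({'A': 1, 'B': 0}, {'A': 1, 'B': 0})
```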

Linear Network Coding

For simplicity, in the rest of the lecture we shall focus on the case where the edge capacities are integers, i.e., C : E → N. When the edge capacities are integers, we can further assume WLOG that all edge capacities are 1 (split an edge of capacity C(i, j) into C(i, j) parallel unit-capacity edges).

Linear network coding: along each edge (j, k) ∈ E, the message m(j,k) is a linear function of the received data Fj at node j. Since each edge has unit capacity, m(j,k) ∈ F_{2^N} over a block of N channel uses. Hence, the mapping on each edge can be represented by a row vector over F := F_{2^N}: the local coding vector of edge (j, k) lies in F^{1×|In(j)|}.

[Figure: the Butterfly Network with each edge labeled by its local coding vector.]

Source message vector: w ∈ F^{R×1}.

Local linear maps: for an edge e = (j, k) ∈ E, let α_e := [α_e(1) α_e(2) ··· α_e(|In(j)|)] denote the local linear map representing how the message m(j,k) is encoded from the received data at node j.

Global linear maps: for an edge e = (j, k) ∈ E, let β_e := [β_e(1) β_e(2) ··· β_e(R)] denote the global linear map from the source message w to the message m(j,k), i.e., m_e = β_e w.

[Figure: the Butterfly Network with edges labeled e1, ..., e9 and their coding vectors.]

For example, with base field F = F_2 (the binary field) and w = [A B]^T:
α_{e3} = [1], β_{e3} = [1 0], α_{e7} = [1 1], β_{e7} = [1 1].

At a destination node t ∈ T, it collects m(i,t) = β_(i,t) w for all i ∈ In(t) = {i1, ..., il}, and forms the matrix B_t whose rows are β_(i1,t), ..., β_(il,t), such that m_t := [m(i1,t) ··· m(il,t)]^T = B_t w.

For the butterfly example, B_{t1} has rows β_{e3} = [1 0] and β_{e8} = [1 1], while B_{t2} has rows β_{e6} = [0 1] and β_{e9} = [1 1].

End-to-end linear maps: in general, the "height" of the matrix B_t is |In(t)| ≥ R for any achievable R. Since rank(B_t) ≤ R, we can always remove redundant rows and reduce B_t to a square matrix A_t ∈ F^{R×R}.

Decodability at t ⟺ invertibility of A_t ⟺ det(A_t) ≠ 0.
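As a sanity check of the decodability criterion (illustrative code; the transfer matrices follow the reconstruction above), one can test invertibility of A_t over F_2 by computing the determinant mod 2.

```python
# Decodability check for the butterfly code over F_2: det(A_t) mod 2 must be nonzero.
def det2(M):
    """Determinant of a 2x2 matrix over F_2."""
    return (M[0][0] * M[1][1] + M[0][1] * M[1][0]) % 2

A_t1 = [[1, 0], [1, 1]]   # rows: beta_e3, beta_e8  (t1 receives A and A⊕B)
A_t2 = [[0, 1], [1, 1]]   # rows: beta_e6, beta_e9  (t2 receives B and A⊕B)
print(det2(A_t1), det2(A_t2))  # 1 1  -> both destinations can decode

# For contrast, if the bottleneck routed A instead of A⊕B, t2 would see [[0, 1], [1, 0]]
# (still invertible) but t1 would see [[1, 0], [1, 0]], with det 0: not decodable.
```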

[Figure: the Butterfly Network with unknown local coding coefficients α_{e1}, ..., α_{e9} on edges e1, ..., e9.]

Note that det(A_t) is a polynomial in the local coding coefficients α := [α_e | e ∈ E]. Let g_t(x) ∈ F[x] denote this polynomial.¹

Exercise 2
Find g_{t1}(α) and g_{t2}(α) in the above network.

Goal of Network Code Design:
1 Determine the size of the base field F = F_{2^N} (the block length N).
2 Assign local coding coefficients α_e ∀ e ∈ E such that g_t(α) ≠ 0 ∀ t.

¹ To avoid confusion between the polynomial itself and its evaluation at a particular value α, we use x as the dummy variable.

Existence of Good Linear Network Codes

Question: Does there exist an assignment of local coding coefficients α := [α_e | e ∈ E] such that (1) below holds for R = min_{k∈[1:K]} C(s ; tk)?

∀ k ∈ [1 : K], g_{tk}(x)|_{x=α} ≠ 0  ⟺  g(x)|_{x=α} ≠ 0,   (1)

where g(x) := ∏_{k=1}^{K} g_{tk}(x). Note: g(x) ∈ F[x] is also a polynomial in x.

Proposition 1
g(x) is not a zero polynomial when the base field is F = F_2, that is, N = 1.
(A zero polynomial is a polynomial all of whose coefficients are 0 ∈ F.)

pf: For each destination tk ∈ T, there exists a routing solution for R = min_{k∈[1:K]} C(s ; tk), and hence ∃ α^(k) such that g_{tk}(α^(k)) ≠ 0. Therefore, g_{tk}(x) is not a zero polynomial for each destination tk ∈ T ⟹ g(x) is not a zero polynomial.

For simplicity of presentation, relabel the local coding coefficients α = [α_e | e ∈ E] = [α_1 α_2 ··· α_n].

Proposition 1 does not guarantee that we can find some vector α over the binary field F_2 such that g(x)|_{x=α} ≠ 0. For example, x² + x is not a zero polynomial, but (x² + x)|_{x=0} = (x² + x)|_{x=1} = 0 in F_2.

Nevertheless, intuitively, as long as the size of the base field is sufficiently large, we can always meet our goal (think of F as the real field R). This is guaranteed by the following general lemma.

Lemma 2
If p(x) ∈ F_2[x] is a non-zero polynomial, then ∃ N ∈ N and α ∈ (F_{2^N})^n such that p(α) ≠ 0 in F_{2^N}.

pf: For n = 1 it is intuitive, since the number of roots is at most deg(p). The complete proof is by induction on n and can be found in Ch. 15 of El Gamal & Kim.
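A concrete check of the x² + x example and of Lemma 2 (illustrative code): over F_2 the polynomial vanishes at every point, but over F_4 = {0, 1, a, a+1} with a² = a + 1 it does not.

```python
# Evaluate p(x) = x^2 + x over F_2 and over F_4 (elements 0, 1, a, a+1 encoded as 0..3,
# multiplication defined by a^2 = a + 1). Addition in characteristic 2 is XOR.
MUL4 = [[0, 0, 0, 0],
        [0, 1, 2, 3],
        [0, 2, 3, 1],   # a*a = a+1, a*(a+1) = 1
        [0, 3, 1, 2]]

def p(x, mul):          # p(x) = x^2 + x
    return mul[x][x] ^ x

print([p(x, MUL4) for x in (0, 1)])        # F_2: [0, 0]       -> zero at every point
print([p(x, MUL4) for x in (0, 1, 2, 3)])  # F_4: [0, 0, 1, 1] -> nonzero at a and a+1
```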

Random Construction is Good with High Probability

For R = min_{k∈[1:K]} C(s ; tk), if one selects the coding coefficients α uniformly at random from the base field F, then g(x)|_{x=α} ≠ 0 with high probability.

Proposition 2
If the coding coefficients {α_1, ..., α_n} are chosen i.i.d. uniformly at random from F, then
Pr{g(α_1, ..., α_n) ≠ 0} ≥ (1 − d/|F|)^n,
where d is the maximum degree of g in each of the variables.

pf: For n = 1: since the polynomial in α_1 has maximum degree d, it has at most d roots in F, and hence Pr{g(α_1) ≠ 0} ≥ 1 − d/|F|.

Induction on n: suppose the claim holds for all n ≤ n_0. For n = n_0 + 1, rewrite
g(α_1, ..., α_n) = [∑_{j=0}^{d_n} g_j(α_1, ..., α_{n_0}) (x_n)^j]|_{x_n=α_n}
as a polynomial in x_n. Hence,
Pr{g(α_1, ..., α_n) ≠ 0} ≥ Pr{g_{d_n}(α_1, ..., α_{n_0}) ≠ 0} × (1 − d_n/|F|) ≥ (1 − d/|F|)^{n_0+1}.
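A Monte-Carlo illustration of Proposition 2 on the butterfly network (illustrative code; it uses a prime field F_p instead of F_{2^N} only because modular arithmetic is simpler to write, the root-counting argument is the same): draw all local coefficients uniformly at random and estimate how often both destinations' transfer matrices remain invertible.

```python
# Random linear network coding on the butterfly network over a prime field F_p.
# Edge names follow the e1..e9 labeling used in the examples above.
import random

def random_code_ok(p):
    """Draw all local coefficients uniformly from F_p; True iff both B_t are invertible."""
    r = lambda: random.randrange(p)
    scale = lambda a, v: [(a * x) % p for x in v]
    add = lambda u, v: [(x + y) % p for x, y in zip(u, v)]
    b1, b2 = [r(), r()], [r(), r()]                   # global maps on source edges e1, e2
    b3, b4 = scale(r(), b1), scale(r(), b1)           # edges out of the relay fed by e1
    b5, b6 = scale(r(), b2), scale(r(), b2)           # edges out of the relay fed by e2
    b7 = add(scale(r(), b4), scale(r(), b5))          # bottleneck edge e7
    b8, b9 = scale(r(), b7), scale(r(), b7)           # edges out of the node fed by e7
    det = lambda u, v: (u[0] * v[1] - u[1] * v[0]) % p
    return det(b3, b8) != 0 and det(b6, b9) != 0      # decodability at t1 and t2

for p in (2, 5, 17, 101):
    trials = 20000
    ok = sum(random_code_ok(p) for _ in range(trials))
    print(p, ok / trials)   # empirical success probability grows toward 1 as |F| grows
```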

Explicit Construction of Linear Network Code: Field Size

From the proof of Proposition 2, observe that the key to determining the size of the base field F is d, the maximum degree of g in each of the local coding coefficients {α_1, ..., α_n}:

|F| > d  ⟹  ∃ α = [α_1 ··· α_n] such that g(α) ≠ 0.

Note: in an acyclic network, for each destination tk ∈ T, each edge e ∈ E is used only once when computing the end-to-end linear map. Hence, the degree of each α in g_{tk} is at most 1.
⟹ the degree of each α in g is at most K = |T|, the number of destinations.
⟹ d ≤ K ⟹ it suffices to choose |F| = 2^N > K.

Exercise 3
Show that it suffices to choose field size |F| = 2^N ≥ K.

Explicit Construction of Linear Network Code: Algorithm

The Jaggi et al. algorithm consists of two stages:

1 Stage 1: Find R edge-disjoint paths from s to each tk, k ∈ [1 : K], using unicast algorithms such as Ford-Fulkerson. Only the edges in these paths are considered in the second stage.

2 Stage 2: Visit each vertex in a topological order from the source to the destinations, and assign the local coding coefficients on the outgoing edges of the visited vertex. The goal is to ensure that, at the end, the end-to-end matrices {B_{tk}}_{k=1}^{K} are invertible.

To meet this goal, while visiting edges we maintain, for each destination tk, a cut U that augments from step to step, and a matrix B^(k) whose rows are the global linear maps β_e on the edges e of the s → tk paths that cross the cut (U, U^c). We use B_k to denote this edge cut. To ensure that B_{tk} is invertible at the end, we keep B^(k) invertible in each step of the algorithm.

We present the concept of the Jaggi et al. algorithm through an example. The details and some proofs can be found in:

S. Jaggi et al., "Polynomial time algorithms for multicast network code construction," IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 1973-1982, June 2005.

For the Butterfly Network with edges e1, ..., e9, Stage 1 yields the edge-disjoint paths
P^{t1}_1 = (e1, e3), P^{t1}_2 = (e2, e5, e7, e8) for destination t1, and
P^{t2}_1 = (e1, e4, e7, e9), P^{t2}_2 = (e2, e6) for destination t2.

Step 1: the frontier edge cuts are B_1 = {e1, e2} and B_2 = {e1, e2}; the source assigns coefficients on e1 and e2 so that B^(1) and B^(2) are invertible (e.g., e1 carries A and e2 carries B, i.e., β_{e1} = [1 0], β_{e2} = [0 1]).

Step 2: visiting the relay fed by e1 advances the frontiers to B_1 = {e3, e2} and B_2 = {e4, e2}; the local coefficients on e3 and e4 are chosen so that B^(1) and B^(2) remain invertible.

Step 3: visiting the relay fed by e2 advances the frontiers to B_1 = {e3, e5} and B_2 = {e4, e6}; the local coefficients on e5 and e6 are chosen likewise.

Step 4: visiting the bottleneck node advances the frontiers to B_1 = {e3, e7} and B_2 = {e7, e6}; the local coefficient on e7 is chosen as [1 1], giving β_{e7} = [1 1] (the edge carries A ⊕ B), which keeps both B^(1) and B^(2) invertible.

Step 5: visiting the node fed by e7 advances the frontiers to B_1 = {e3, e8} and B_2 = {e9, e6}; the local coefficients on e8 and e9 are assigned, so that at the end B^(1) = B_{t1} and B^(2) = B_{t2} are invertible and both destinations can decode.

Remarks on the Implementation of the Jaggi et al. Algorithm

The key elements in the algorithm are:

1 How to update B^(k) and B_k.
2 How to assign the local coding coefficients when updating B^(k) and B_k.

An update happens when the vertex cut U is augmented by one vertex.

Updating B_k: for each of the R s → tk paths, replace the current edge with the succeeding one in that path if the current one is no longer across the cut (i.e., no longer at the frontier).

Updating B^(k): update the rows corresponding to the replaced edges in B_k, by finding proper local coding coefficients on the new edge such that B^(k) remains invertible.

Jaggi et al. provide efficient mechanisms to check the linear independence of the updated matrices. The efficiency of the search for local coding coefficients depends on the field size.


Summary

Single unicast: Max-Flow = Min-Cut. Routing suffices for single unicast (Ford-Fulkerson algorithm).
Single multicast: Max-Flow = Min Min-Cut. Random linear network coding suffices for single multicast (Jaggi et al. algorithm).