Lecture 1: Introduction. Slides by I-Hsiang Wang, Department of Electrical Engineering, National Taiwan University.


SLIDE 1

Course Information Overview

Lecture 1 Introduction

I-Hsiang Wang

Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw

September 17, 2014

1 / 45 I-Hsiang Wang NIT Lecture 1

SLIDE 2

Network Information Theory

Information Theory is a mathematical theory of information. Information is usually obtained by receiving "messages" (speech, text, images, etc.) from others. What might one care about in a piece of information?
What is the meaning of the message?
How important is the message?
How much information can I get from the message?

SLIDE 3

Network Information Theory

Information Theory is a mathematical theory of information. Information is usually obtained by receiving "messages" (speech, text, images, etc.) from others. What might one care about in a piece of information?
What is the meaning of the message?
How important is the message?
How much information can I get from the message?
Information theory is about the quantification of information.

SLIDE 4

Network Information Theory

Information Theory is a mathematical theory of information (primarily for communication systems) that:
establishes the fundamental limits of communication systems (it quantifies the amount of information that can be delivered from one party to another), and
is built upon probability theory and statistics.
Main concern: the ultimate performance limit (usually the rate of information) as certain resources (usually the total amount of time) scale to the asymptotic regime, given that the desired information is delivered "reliably".

SLIDE 5

Network Information Theory

A network is a collection of terminals connected through communication links that exchange information and communicate with one another, so that certain goals are met. We use graph-theoretic quantities and properties to model terminal interaction, and extend the architecture of point-to-point communication systems.

SLIDE 6

In this course, we will

1 Establish solid foundations and intuitions of information theory, and
2 Explore various topics extending the legacy of information theory to network settings.

We begin with a brief overview of information theory and how the theory extends to multi-user networks.

SLIDE 7

1 Course Information 2 Overview

SLIDE 8

Logistics

1 Instructor: I-Hsiang Wang 王奕翔

Email: ihwang@ntu.edu.tw Office: MD-524 明達館 524 室 Office Hours: 17:30 – 18:30, Tuesday and Wednesday

2 Lecture Time:

13:20 – 14:10 (5) Tuesday, and 10:20 – 12:10 (34) Wednesday

3 Lecture Location: EE2-225 電機二館 225 室 4 Course Website:

http://homepage.ntu.edu.tw/~ihwang/Teaching/Fa14/NIT.html

SLIDE 9

Logistics

5 References

  • A. El Gamal and Y.-H. Kim, Network Information Theory, Cambridge University Press, 2011.
  • T. Cover and J. Thomas, Elements of Information Theory, Wiley-Interscience, 2006.
  • R. Yeung, Information Theory and Network Coding, Springer, 2008.
  • R. Gallager, Information Theory and Reliable Communication, Wiley, 1968.

6 Prerequisites:

Probability, Linear Algebra, Principles of Communications.

SLIDE 10

Grading Policy

1 Grading: Quiz (30%), Homework (40%), Project (30%) 2 Quiz:

In order to make sure that you understand the lecture materials Roughly 2 quizzes in the whole semester

3 Homework:

Roughly 3–4 problems every two weeks; late homework = 0 points. Everyone has to provide the detailed solution for one homework problem, documented in LaTeX. We will provide LaTeX templates, and you should discuss with me the homework problem that you are in charge of, so that we can make sure the solution is correct. This additional effort accounts for part of your homework grade.

SLIDE 11

Project

1 Team: 2 people per team 2 Topic:

A list of potential topics will be announced Meeting with the instructor to decide the topic

3 Final presentation:

Scheduled in the last two weeks Each team has to present the results in 20 minutes to the audience Other people have to give grades on the presentation, which will become part of the final project presentation grades

4 Final report: Each team has to summarize their project in a written report, documented in LaTeX.

SLIDE 12

Course Outline

Measures of information: entropy, mutual information, KL divergence (relative entropy), typical sequences
Point-to-point communication: source coding theorem, channel coding theorem, source-channel separation
Channel with states: compound channel, channel with random states, dirty-paper coding
Network information flow: cut-set bound, network coding
Multiple access channel (MAC): successive interference cancellation, time-sharing, joint decoding
Broadcast channel (BC): superposition coding, capacity of degraded BC, Marton coding scheme

SLIDE 13

Course Outline

Interference channel (IC): Han-Kobayashi coding scheme, capacity of Gaussian IC to within 1 bit, capacity of deterministic IC, interference alignment
Distributed source coding: Slepian-Wolf theorem, Wyner-Ziv theorem, CEO problem
Feedback (if time allows): Schalkwijk-Kailath scheme, feedback in Gaussian IC
Advanced topics (if time allows): error exponents, non-asymptotic information theory, etc.

SLIDE 14

Tentative Schedule

Week | Date | Content | Remark
1 | 9/16, 17 | Introduction |
2 | 9/23, 24 | Measure of Information |
3 | 9/30, 10/1 | Measure of Information, Source Coding | HW1
4 | 10/7, 8 | Source Coding |
5 | 10/14, 15 | Channel Coding | HW2
6 | 10/21, 22 | Channel Coding, Source-Channel Separation |
7 | 10/28, 29 | Channel with States | HW3
8 | 11/4, 5 | Graphical Networks, Max-Flow Min-Cut Theorem | Quiz 1 (11/5)
9 | 11/11, 12 | Max-Flow Min-Cut Theorem, Network Coding | HW4

SLIDE 15

Tentative Schedule

Week | Date | Content | Remark
10 | 11/18, 19 | Multiple Access Channel |
11 | 11/25, 26 | Multiple Access Channel, Broadcast Channel | HW5
12 | 12/2, 3 | Broadcast Channel |
13 | 12/9, 10 | Interference Channel | HW6
14 | 12/16, 17 | Interference Channel, Distributed Source Coding |
15 | 12/23, 24 | No class | Quiz 2 (12/24)
16 | 12/30, 31 | Distributed Source Coding | HW7
17 | 1/6, 7 | Project Presentation |
18 | 1/13, 14 | Project Presentation |

SLIDE 16

1 Course Information 2 Overview

SLIDE 17

Claude Elwood Shannon (1916 – 2001)

SLIDE 18

Shannon’s landmark paper in 1948 is generally considered the “birth” of information theory. In the paper, Shannon made it clear that information theory is about the quantification of information in a communication system. In particular, it focuses on characterizing the necessary and sufficient condition for a destination terminal to be able to reproduce a message generated by a source terminal.

SLIDE 19

Communication System

[Block diagram: Source → Encoder → Channel (+ Noise) → Decoder → Destination]

Above is an abstract model of a communication system:

1 The source would like to deliver some message to the destination, where the message may be speech, image, video, audio, text, etc.
2 The channel is the physical medium that connects the source and the destination, such as cable, optical fiber, EM radiation, etc., and is usually subject to certain noise disturbances.
3 The encoder can carry out any processing of the source output, including compression, modulation, insertion of redundancy, etc.
4 The decoder can carry out any processing of the channel output to reproduce the source message.

SLIDE 20

A primary concern of information theory is the encoder and the decoder, both in terms of:
how the encoder and the decoder function, and
the existence or nonexistence of encoders and decoders that achieve a given level of performance.

SLIDE 21

Prior to the 1948 paper, the design of communication systems followed the analog paradigm: if the source produces an electromagnetic waveform, the destination should try its best to reconstruct this waveform, in order to extract useful information (usually voice). This line of research was based on Fourier analysis and gave birth to sampling theory. Shannon asked: if the receiver knows that a sine wave of unknown frequency is to be communicated, why not simply send the frequency rather than the entire waveform?

Prior to Shannon, theorists and engineers were able to analyze the performance of certain choices of encoders/decoders, but had little knowledge of the ultimate limit. Shannon asked: over all possible encoders/decoders, what is the necessary and sufficient condition for the destination to be able to reconstruct the message sent from the source?

SLIDE 22

Shannon’s View

[Block diagram: Source → Encoder → Channel (+ Noise) → Decoder → Destination]

Key new insights due to Shannon's work:
Shannon: "Information is the resolution of uncertainty." Indeed, the set of possible source outputs, rather than any particular output, is of primary interest.
Introduction of an abstract mathematical model of the communication system based on random processes (hence, a stochastic model).
Creation of the digital paradigm of communication system design, with the bit as the universal currency of information, by proposing and proving the source-channel separation theorem.

SLIDE 23

Stochastic Modeling

[Block diagram: Source → Encoder → Channel (+ Noise) → Decoder → Destination]

The stochastic modeling of a communication system comprises:
Source: model the information source by a random process, where the data to be conveyed is drawn randomly from a given distribution.
Channel: model the noisy channel by a random process, where the impact of noise is drawn randomly from a given distribution.
Why use random processes to model communication systems? Shannon: "The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design."

SLIDE 24

Source-Channel Separation

[Block diagram: Source → Source Encoder → bits → Channel Encoder → Noisy Channel → Channel Decoder → bits → Source Decoder → Destination, with the bits forming a binary interface]

Shannon showed that by splitting the coders into source coders and channel coders, the fundamental limit of the system remains the same. In other words, introducing a digital (binary) interface does not incur any loss of optimality, in terms of whether or not the destination can reproduce the source data. Separation of source coding and channel coding simplifies engineering design into source coder design (data compression) and channel coder design (data transmission).

SLIDE 25

Source Coding (Data Compression)

[Block diagram: Source → Source Encoder → Channel Encoder → Noisy Channel → Channel Decoder → Source Decoder → Destination]

Features of source messages:
Uncertainty: the destination has no idea a priori what message is chosen by the source.
Redundancy: though randomly chosen, some choices are more likely, while others are less likely.
Goal: remove the redundancy of the source message and represent it by a bit sequence, so that it can be delivered to the destination reliably.
Question: what is the minimum rate (bits per time) required for any given source model?

SLIDE 26

Source Coding (Data Compression)

[Same block diagram; the source encoder maps s[1], …, s[N] to bits b[1], …, b[K], and the source decoder outputs ŝ[1], …, ŝ[N]]

Notation:
{s[1], …, s[N]} represents the source message; each s[t] is called a "source symbol".
{b[1], …, b[K]} represents the codeword generated by the source encoder; each bit b[t] is called a "source codeword symbol (bit)".
{ŝ[1], …, ŝ[N]} represents the reproduced source message at the destination.

SLIDE 27

Source Coding (Data Compression)

[Same block diagram as before]

Shannon characterized the necessary and sufficient condition for (lossless) source coding:

A Source Coding Theorem
The destination can reconstruct the source message losslessly ⇐⇒ code rate R := K/N > the entropy rate of the source, H(S).

We will define entropy in Lecture 2; it is a quantity that can be computed from the distribution of the source random process {S[t] | t ∈ ℕ}.
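As a concrete sanity check (my illustration, not from the slides), for a memoryless (i.i.d.) source the entropy rate equals the per-symbol entropy H(S), which can be computed directly from the source distribution:

```python
import math

def entropy(pmf):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# An i.i.d. source over four symbols with a skewed distribution:
pmf = [0.5, 0.25, 0.125, 0.125]
H = entropy(pmf)
print(H)  # 1.75 bits/symbol

# The source coding theorem says any rate R > H (bits per source
# symbol) suffices for lossless reconstruction, while any R < H fails.
# Compare with the 2 bits/symbol a fixed-length code would need:
print(2 - H)  # 0.25 bits/symbol of removable redundancy
```
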

SLIDE 28

Channel Coding (Data Transmission)

[Block diagram: Source → Source Encoder → Channel Encoder → Noisy Channel → Channel Decoder → Source Decoder → Destination]

Features of the noisy channel:
Uniform messages: the input of the channel encoder is assumed WLOG to be a bit sequence with no redundancy, since source coding already removes all redundancy and converts the message to bits.
Noise: the channel input sent by the channel encoder is corrupted by noise randomly, producing the channel output.
Goal: add the minimum redundancy so that messages (bit sequences) can be communicated over the noisy channel and decoded reliably.
Question: what is the maximum rate (bits per time) that can be achieved?

SLIDE 29

Channel Coding (Data Transmission)

[Same block diagram; the channel encoder maps bits b[1], …, b[K] to a codeword x[1], …, x[N], the channel p(y|x) outputs y[1], …, y[N], and the channel decoder outputs b̂[1], …, b̂[K]]

Notation:
{x[1], …, x[N]} represents the codeword; each x[t] is called a "coded symbol".
{b[1], …, b[K]} represents the message; each bit b[t] is called a "data symbol (bit)".
{y[1], …, y[N]} represents the channel output.

SLIDE 30

Channel Coding (Data Transmission)

[Same block diagram as before]

Shannon gave the necessary and sufficient condition for channel coding:

A Channel Coding Theorem
The (channel) decoder can decode the message reliably ⇐⇒ code rate R := K/N < the channel capacity C.

We will define channel capacity later; it is a quantity computed by maximizing the "mutual information" between X and Y, which can be computed from the conditional distribution p(y|x) of the channel.
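For one channel where the maximization has a closed form (an illustration not on the slide), the binary symmetric channel with crossover probability p has the classical capacity C = 1 − H₂(p), attained by a uniform input:

```python
import math

def h2(p):
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover prob p."""
    return 1.0 - h2(p)

print(bsc_capacity(0.0))  # 1.0 (noiseless channel)
print(bsc_capacity(0.5))  # 0.0 (output independent of input)
print(round(bsc_capacity(0.11), 3))  # roughly 0.5
```

So, for instance, a code of rate just below 0.5 bits per channel use can be decoded reliably over a BSC with 11% crossover probability, while no code of higher rate can.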

SLIDE 31

Extending the information theoretic results from point-to-point communication systems to networks is highly non-trivial. An initial step is to consider a graphical network as follows:

[Diagram: a point-to-point communication link (Source → Encoder → Noisy Channel → Decoder → Destination) represented by an edge e with capacity C(e)]

Note: by the Channel Coding Theorem, we convert the noisy channel connecting two terminals into a noiseless link with finite capacity. This models wired networks quite well, and it also shows another level of separation.

SLIDE 32

In 1956, Ford-Fulkerson and Elias-Feinstein-Shannon independently proved the max-flow-min-cut theorem for graphical networks.

[First page of the paper: P. Elias, A. Feinstein, and C. E. Shannon, "A Note on the Maximum Flow Through a Network," IRE Transactions on Information Theory, 1956. Main result: the maximum possible flow from left to right through a network is equal to the minimum value among all simple cut-sets.]

The theorem establishes the maximum data rate that can be delivered from the source to the destination over any given graphical network.

SLIDE 33

Max Flow = Min Cut

Max-Flow Min-Cut Theorem
The destination can decode the message sent from the source reliably ⇐⇒ code rate R < the minimum cut value of the network.

[Example network with edge capacities 2, 5, 3, 3, 1, 3, 5, 2]

SLIDE 34

Max Flow = Min Cut

Max-Flow Min-Cut Theorem
The destination can decode the message sent from the source reliably ⇐⇒ code rate R < the minimum cut value of the network.

[Same example network, now annotated with a flow assignment]

Max flow = 4

SLIDE 35

Max Flow = Min Cut

Max-Flow Min-Cut Theorem
The destination can decode the message sent from the source reliably ⇐⇒ code rate R < the minimum cut value of the network.

[Same example network, now annotated with a minimum cut]

Min cut = 4

SLIDE 36

Max Flow = Min Cut

Max-Flow Min-Cut Theorem
The destination can decode the message sent from the source reliably ⇐⇒ code rate R < the minimum cut value of the network.

[Same example network, annotated with another cut]

Another cut with cut value = 6 (not a min cut)

SLIDE 37

Network Coding

In a single-source, single-destination network, the Ford-Fulkerson and Bellman-Ford algorithms can efficiently find edge-disjoint paths for routing information bits from the source to the destination. Routing (each bit takes a unit-capacity route) is optimal! However, routing is no longer optimal when there are multiple destinations demanding the same source message. Instead of routing, intermediate nodes may need to perform Network Coding to attain the optimal amount of information flow.
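The slides' example network topology is not fully recoverable from this transcript, so the sketch below runs Edmonds-Karp (a BFS-based variant of Ford-Fulkerson) on a small hypothetical capacitated graph; the node names and capacities are illustrative, not the ones from the figure:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: maximum s-t flow in a graph given as cap[u][v]."""
    # Residual capacities, including zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow equals min cut
        # Find the bottleneck capacity along the path, then augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Hypothetical network: source 's', destination 't', two relays.
cap = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
}
print(max_flow(cap, 's', 't'))  # 5
```

Here the cut separating {s, a} from {b, t} has value 2 + 1 + 2 = 5, matching the flow, as the max-flow min-cut theorem guarantees.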

SLIDE 38

In 2000, Ahlswede-Cai-Li-Yeung proved the following theorem:

Multicast Network Coding Theorem
In a multicast network (single source, multiple destinations), all destinations can decode the message sent from the source reliably ⇐⇒ code rate R < the minimum over destinations of the min source-destination cut value.

[First page of the paper: R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network Information Flow," IEEE Transactions on Information Theory, vol. 46, no. 4, July 2000. Abstract, in brief: the paper characterizes the admissible coding rate region for multicasting a single information source over a point-to-point network, a result that can be regarded as the Max-flow Min-cut Theorem for network information flow. Contrary to intuition, it is in general not optimal to treat the information as a "fluid" to be routed or replicated; coding at the nodes (network coding) can in general save bandwidth.]

SLIDE 39

Butterfly Network Example

[Butterfly network: one source, two destinations]

Min cut for Source-Destination 1 = Min cut for Source-Destination 2 = 2.
Question: can we achieve rate 2? With routing, it is impossible due to the bottleneck edge in the middle.

SLIDE 40

Butterfly Network Example

[Butterfly network with coding: the source sends A on one branch and B on the other; the bottleneck edge carries A ⊕ B, which is forwarded to both destinations]

Rate 2 is achievable by XORing the two symbols at the bottleneck edge.
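The XOR trick can be simulated in a few lines; this toy sketch (my illustration, not from the slides) checks that both destinations recover both source bits, achieving rate 2:

```python
def butterfly(a, b):
    """Simulate network coding on the butterfly network for bits a, b.

    The source sends a down the left branch and b down the right
    branch; the single bottleneck edge carries the coded bit a XOR b,
    which is forwarded to both destinations.
    """
    coded = a ^ b  # the bottleneck edge carries one coded bit
    # Destination 1 hears a directly plus the coded bit; Destination 2
    # hears b directly plus the coded bit. Each XORs to recover the rest.
    dest1 = (a, a ^ coded)  # recovers (a, b)
    dest2 = (coded ^ b, b)  # recovers (a, b)
    return dest1, dest2

# Exhaustively verify all four source-bit combinations.
for a in (0, 1):
    for b in (0, 1):
        d1, d2 = butterfly(a, b)
        assert d1 == (a, b) and d2 == (a, b)
print("both destinations recover (a, b) in every case")
```

With pure routing the bottleneck edge could carry only one of a or b per use, so one destination would be stuck at rate 1; the coded bit serves both at once.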

SLIDE 41

Beyond Graphical (Wired) Networks

A graphical network serves as a good model for wired networks, because in wired networks signals from different nodes do not interfere with one another. In other networks, such as wireless networks, signals from different nodes do interfere with one another, and the interaction of signals becomes critical in designing good coding schemes. This line of research falls in the field of Multi-user Information Theory, where most problems deal with small canonical models beyond the point-to-point model. In this course we will introduce four problems: Multiple Access Channel (MAC), Broadcast Channel (BC), Interference Channel (IC), and Distributed Source Coding.

SLIDE 42

Multiple Access Channel

[Diagram: K sources feed Encoders 1, …, K with outputs X1, …, XK; the multiple access channel p(y | x1, …, xK) produces Y, observed by a single decoder at the destination]

The capacity region is completely characterized. This models the cellular uplink system quite well.
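As a concrete instance (standard formulas, not from the slide), the two-user Gaussian MAC with powers P1, P2 and noise power N has the pentagon-shaped capacity region R1 ≤ C(P1/N), R2 ≤ C(P2/N), R1 + R2 ≤ C((P1 + P2)/N), where C(x) = ½ log2(1 + x). Its corner points are achieved by successive interference cancellation:

```python
import math

def C(snr):
    """AWGN capacity in bits per channel use: 0.5 * log2(1 + SNR)."""
    return 0.5 * math.log2(1 + snr)

def gaussian_mac_corners(P1, P2, N):
    """Corner points of the two-user Gaussian MAC capacity region.

    Each corner corresponds to one decoding order of successive
    interference cancellation (SIC): the user decoded first sees the
    other user's signal as extra noise; the user decoded second sees
    a clean channel after the first signal is subtracted.
    """
    # Decode user 1 first (user 2 treated as noise), then user 2 cleanly.
    corner_a = (C(P1 / (N + P2)), C(P2 / N))
    # Decode user 2 first, then user 1 cleanly.
    corner_b = (C(P1 / N), C(P2 / (N + P1)))
    return corner_a, corner_b

a, b = gaussian_mac_corners(P1=1.0, P2=1.0, N=1.0)
# Both corners achieve the same sum rate C((P1 + P2) / N):
print(a, b)
print(round(a[0] + a[1], 6), round(C(2.0), 6))
```

Time-sharing between the two corners traces out the dominant face of the pentagon, which is one way the slides' "successive interference cancellation, time-sharing" bullet fits together.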

SLIDE 43

Broadcast Channel

[Diagram: a single source feeds one encoder with output X; the broadcast channel p(y1, …, yK | x) produces Y1, …, YK, observed by Decoders 1, …, K at Destinations 1, …, K]

The capacity region is unknown in general (open problem). For some important channels, such as the MIMO Gaussian BC, the capacity region is completely characterized. This models the cellular downlink system quite well.

SLIDE 44

Interference Channel

[Diagram: K sources feed Encoders 1, …, K; the interference channel p(y[1:K] | x[1:K]) produces Y1, …, YK, observed by Decoders 1, …, K at Destinations 1, …, K]

The capacity region is unknown in general (open problem). Even for the SISO two-user (K = 2) Gaussian IC, the capacity region is unknown. This is the simplest information-theoretic model that captures the essence of interference. For some special cases, the capacity region is known.

SLIDE 45

Summary

Information theory focuses on the quantitative aspects of information, not the qualitative aspects.
Information theory is mainly about what is possible and what is impossible in communication systems.
In information theory, one investigates problems in communication systems through the lens of probability theory and statistics.
In this course, we mainly focus on discrete-time signals, not continuous-time signals.
Source-channel separation forms the basis of digital communication, where binary digits (bits) become the universal currency of information.
Extending Shannon's principle to networks is a highly non-trivial task, and so far we have taken only initial steps.
