Digital Communication Systems
ECS 452
Asst. Prof. Dr. Prapun Suksompong
prapun@siit.tu.ac.th
Information-Theoretic Quantities
Office Hours: Rangsit Library: Tuesday 16:20-17:20; BKD3601-7: Thursday 16:00-17:00
Coursework will be weighted as follows:
Reference: Elements of Information Theory
By Thomas M. Cover and Joy A. Thomas
2nd Edition (Wiley): Chapters 2, 7, and 8. 1st Edition available at SIIT.
The model considered here is a simplified version of what we have seen earlier in the course.
In the next chapter, we will show how this model can be derived from the digital modulator-demodulator operating over a continuous-time AWGN channel.
The channel input is denoted by a random variable X.
The pmf pX(x) is usually denoted simply by p(x) and usually expressed in the form of a row vector p.
The support of X is often denoted by 𝒳.
The channel output is denoted by a random variable Y.
The pmf pY(y) is usually denoted simply by q(y) and usually expressed in the form of a row vector q.
The support of Y is often denoted by 𝒴.
The channel corrupts X in such a way that when the input is X = x, the output Y is randomly selected from the conditional pmf pY|X(y|x).
This conditional pmf pY|X(y|x) is usually denoted by Q(y|x) and usually expressed in the form of a probability transition matrix Q whose rows are the conditional pmfs.
q = pQ
[Diagram: X → channel Q(y|x) → Y]
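The relation q = pQ can be checked numerically. A minimal sketch (assuming Python with numpy; the example matrix is the one used in a later slide, not part of this one):

```python
import numpy as np

# Input pmf as a row vector p; rows of Q are the conditional pmfs Q(y|x).
p = np.array([0.5, 0.5])            # P[X = 0], P[X = 1]
Q = np.array([[0.6, 0.4],           # Q(y | x = 0)
              [0.4, 0.6]])          # Q(y | x = 1)

# Output pmf: q = p Q. Since each row of Q sums to 1, q also sums to 1.
q = p @ Q
print(q)                            # [0.5 0.5]
```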
Consider a (discrete memoryless) channel whose transition probabilities are Q(y|x). The "information" channel capacity of this channel is defined as

C = max_{p(x)} I(X;Y) = max_p I(p, Q)

where the maximum is taken over all possible input pmf's pX(x).
Remarks:
In the next chapter, we shall give an "operational" definition of channel capacity as the highest rate in bits per channel use at which information can be sent with arbitrarily low probability of error.
Shannon's theorem establishes that the information channel capacity is equal to the operational channel capacity.
Thus, we may drop the word "information" in most discussions of channel capacity.
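The objective I(p, Q) can be evaluated directly from the double sum over the joint pmf. A sketch (assuming numpy; the function name is ours, not from the slides):

```python
import numpy as np

def mutual_information(p, Q):
    """I(X;Y) in bits for input pmf p (row vector) and transition matrix Q."""
    q = p @ Q                                     # output pmf q = pQ
    # I(X;Y) = sum_x sum_y p(x) Q(y|x) log2( Q(y|x) / q(y) )
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = p[:, None] * Q * np.log2(Q / q)
    return np.nansum(terms)                       # treat 0 log 0 terms as 0

Q = np.array([[0.6, 0.4],
              [0.4, 0.6]])
C_uniform = mutual_information(np.array([0.5, 0.5]), Q)
print(round(C_uniform, 3))                        # ≈ 0.029
```

Maximizing this function over p gives the (information) channel capacity C.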
Example: binary symmetric channel with Q = [0.6 0.4; 0.4 0.6].
Capacity of 0.029 bits is achieved by p = [0.5, 0.5].
With input pmf p = [p0, 1 − p0], the output pmf is q = pQ, and
I(X;Y) = H(Y) − H(Y|X) = H(q) − H([0.4, 0.6]).
[Plot: I(X;Y) versus p0; maximum ≈ 0.029 bits at p0 = 0.5]
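The curve on this slide can be reproduced by sweeping p0 over a grid, which also recovers the stated capacity. A sketch (assuming numpy):

```python
import numpy as np

Q = np.array([[0.6, 0.4],
              [0.4, 0.6]])

H = lambda v: -np.sum(v[v > 0] * np.log2(v[v > 0]))   # entropy in bits

def I(p0):
    p = np.array([p0, 1 - p0])
    q = p @ Q                                          # output pmf
    return H(q) - (p0 * H(Q[0]) + (1 - p0) * H(Q[1]))  # H(Y) - H(Y|X)

p0_grid = np.linspace(0.01, 0.99, 99)
vals = [I(p0) for p0 in p0_grid]
best = p0_grid[int(np.argmax(vals))]
print(round(best, 2), round(max(vals), 3))             # ≈ 0.5, ≈ 0.029
```

By symmetry of this Q, the maximum lies at the uniform input, matching the slide.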
Example: binary asymmetric channel with Q = [p 1−p; 1−q q], where (p, q) = (0.9, 0.4), i.e., Q = [0.9 0.1; 0.6 0.4].
Capacity of 0.0918 bits is achieved by p = [0.5380, 0.4620].
[Plot: I(X;Y) versus p0; maximum ≈ 0.0918 bits at p0 ≈ 0.538]
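Unlike the symmetric case, here the maximizing input pmf is not uniform. The same grid sweep locates it numerically (a sketch assuming numpy):

```python
import numpy as np

# Binary asymmetric channel from this example: (p, q) = (0.9, 0.4)
Q = np.array([[0.9, 0.1],
              [0.6, 0.4]])

H = lambda v: -np.sum(v[v > 0] * np.log2(v[v > 0]))   # entropy in bits

def I(a):                        # input pmf (a, 1 - a)
    p = np.array([a, 1 - a])
    return H(p @ Q) - p @ np.array([H(Q[0]), H(Q[1])])

grid = np.linspace(1e-4, 1 - 1e-4, 20001)
vals = np.array([I(a) for a in grid])
print(grid[int(vals.argmax())], vals.max())            # ≈ 0.538, ≈ 0.0918
```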
In general, there is no closed-form solution for capacity. The maximum can be found by standard nonlinear optimization techniques.
A famous iterative algorithm, called the Blahut–Arimoto algorithm, was developed independently by Arimoto and Blahut.
Start with a guess input pmf p0(x). For r > 0, construct pr(x) according to the following iterative prescription:
pr(x) = p_{r−1}(x) e^{D(Q(·|x) ‖ q_{r−1})} / Σ_{x'} p_{r−1}(x') e^{D(Q(·|x') ‖ q_{r−1})}, where q_{r−1} = p_{r−1}Q.
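The iteration above can be sketched in a few lines. This is an illustrative implementation, not the slides' own code (assuming numpy; a fixed iteration count stands in for a proper convergence test):

```python
import numpy as np

def blahut_arimoto(Q, iters=500):
    """Capacity (bits) of a DMC with transition matrix Q (rows = Q(y|x))."""
    nx = Q.shape[0]
    p = np.full(nx, 1.0 / nx)                  # guess p0: uniform input pmf
    for _ in range(iters):
        q = p @ Q                              # current output pmf
        # D(Q(.|x) || q) in bits, one value per input symbol x
        with np.errstate(divide='ignore', invalid='ignore'):
            D = np.nansum(Q * np.log2(Q / q), axis=1)
        c = 2.0 ** D                           # 2^(D in bits) = e^(D in nats)
        p = p * c / np.sum(p * c)              # multiplicative update, renormalized
    q = p @ Q
    with np.errstate(divide='ignore', invalid='ignore'):
        C = np.nansum(p[:, None] * Q * np.log2(Q / q))
    return C, p

# Check against the asymmetric-channel example from the previous slide.
Q = np.array([[0.9, 0.1],
              [0.6, 0.4]])
C, p = blahut_arimoto(Q)
print(round(C, 4), np.round(p, 4))             # ≈ 0.0918, ≈ [0.538 0.462]
```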
Richard E. Blahut
Former chair of the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign.
Best known for the Blahut–Arimoto algorithm (iterative calculation of C).
BS, MEng and PhD