Chapter 1: Introduction to Information Theory
Book: "Information Theory, Inference, and Learning Algorithms" by David MacKay

Outline:
- Noisy channels
- Error-correcting codes
- Examples
- System solution
- Channel models
- Binary symmetric channel
The communication system:

Source → Compressor → Encoder → Channel → Decoder → Decompressor → Destination

- Source coding (data compression): governed by the source entropy
- Channel coding (error correction): governed by the channel capacity
- Decoding and decompression at the receiver
Channel model: additive Gaussian noise. The noise zi is added to the input xi to give the output yi = xi + zi, with

    P(y|x) = (1/√(2πσ²)) exp(−(y − x)²/(2σ²))
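As a quick numeric illustration of this channel model, a minimal NumPy sketch (the function names are ours, not MacKay's):

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_channel(x, sigma):
        """y_i = x_i + z_i with z_i ~ N(0, sigma^2)."""
        return x + rng.normal(0.0, sigma, size=np.shape(x))

    def likelihood(y, x, sigma):
        """P(y|x) = 1/sqrt(2 pi sigma^2) * exp(-(y - x)^2 / (2 sigma^2))."""
        return np.exp(-(y - x)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

    y = gaussian_channel(np.array([+1.0, -1.0]), sigma=0.5)
    print(likelihood(y, +1.0, 0.5))   # likelihood that +1 was sent
    print(likelihood(y, -1.0, 0.5))   # likelihood that -1 was sent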
The noiseless binary channel:

    P(y = 0|x = 0) = P(y = 1|x = 1) = 1

The binary symmetric channel (BSC) flips each bit with probability f:

    P(y = 0|x = 0) = 1 − f    P(y = 0|x = 1) = f
    P(y = 1|x = 0) = f        P(y = 1|x = 1) = 1 − f
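A minimal simulation sketch of the BSC (assuming NumPy; the names are ours):

    import numpy as np

    rng = np.random.default_rng(0)

    def bsc(x, f):
        """Flip each bit of x independently with probability f."""
        return x ^ (rng.random(x.shape) < f)

    x = rng.integers(0, 2, size=100_000)
    y = bsc(x, f=0.1)
    print((x != y).mean())   # empirical flip rate, close to f = 0.1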
A channel over the alphabet A, B, C, …: each symbol is received correctly with probability 1 − f, or as the next symbol in the alphabet with probability f:

    P(y = A|x = A) = 1 − f    P(y = B|x = A) = f
    P(y = B|x = B) = 1 − f    P(y = C|x = B) = f    …
Example: decoding the repetition code R3 over the BSC. The source bit s is encoded as t(s) = sss, and r = r1r2r3 is received. The channel is memoryless, so

    P(r|s) = P(r|t(s)) = ∏(n=1..3) P(rn|tn(s))

With a uniform prior on s, the posterior ratio equals the likelihood ratio:

    P(s = 1|r) / P(s = 0|r) = P(r|s = 1) / P(r|s = 0) = ∏(n=1..3) P(rn|tn(1)) / P(rn|tn(0))

Each factor P(rn|tn(1)) / P(rn|tn(0)) equals (1 − f)/f if rn = 1 and f/(1 − f) if rn = 0, so for f < 1/2 the optimal decoder is a majority vote over r1r2r3.
Likelihood ratio: each received bit contributes a factor

    γ = (1 − f)/f ≫ 1    (for small f)

to P(r | s=1) / P(r | s=0), multiplying it by γ for each received 1 and dividing by γ for each received 0.
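A sketch of the R3 encoder and majority-vote decoder over a simulated BSC (our illustration, not code from the book):

    import numpy as np

    rng = np.random.default_rng(0)
    f = 0.1

    s = rng.integers(0, 2, size=100_000)     # source bits
    t = np.repeat(s, 3)                      # R3 encoding: s -> sss
    r = t ^ (rng.random(t.shape) < f)        # BSC with flip probability f

    # Majority vote = thresholding the likelihood ratio at 1, since each
    # received 1 multiplies P(r|s=1)/P(r|s=0) by (1-f)/f and each 0 divides it.
    s_hat = r.reshape(-1, 3).sum(axis=1) >= 2

    print((s_hat != s).mean())               # empirical pb
    print(3 * f**2 * (1 - f) + f**3)         # exact pb = 3f^2(1-f) + f^3 = 0.028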
Repetition codes RN:
- Rate 1/N, decoded by majority vote; to leading order pb falls like f^((N+1)/2) for odd N, but only at the cost of a vanishing rate (checked numerically below).
- R3: pb(R3) ≈ 3f².
- Cascade R3 ◦ R3 (rate 1/9): pb(R3 ◦ R3) ≈ 3(3f²)² = 27f⁴, versus pb(R9) ∝ f⁵ for R9 at the same rate; R3 ◦ R3 requires less computation to decode.
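A numeric check of how pb trades against rate for RN (our sketch):

    from math import comb

    def pb_repetition(N, f):
        """Exact bit error of majority-vote R_N (odd N): P(> N/2 flips)."""
        return sum(comb(N, k) * f**k * (1 - f)**(N - k)
                   for k in range((N + 1) // 2, N + 1))

    f = 0.1
    for N in (3, 9, 27):
        print(N, 1 / N, pb_repetition(N, f))   # rate 1/N shrinks as pb falls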
[Slide: the (7,4) Hamming code — the generator matrix, pictorial encoding, and the block error probability pB were shown here; the matrix entries did not survive extraction.]
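Since the matrices did not survive, here is a sketch of the standard (7,4) Hamming code in MacKay's convention (parity bits t5 = s1 + s2 + s3, t6 = s2 + s3 + s4, t7 = s1 + s3 + s4, mod 2); the helper names are ours:

    import numpy as np

    # Parity-check matrix H: each column is the syndrome produced by a
    # single flip in the corresponding bit position.
    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]])

    def encode(s):
        """Append the three parity bits to the 4 source bits."""
        p = np.array([s[0] ^ s[1] ^ s[2], s[1] ^ s[2] ^ s[3], s[0] ^ s[2] ^ s[3]])
        return np.concatenate([s, p])

    syndrome_to_bit = {tuple(H[:, j]): j for j in range(7)}

    def decode(r):
        """Syndrome decoding: flip at most one bit, return the 4 source bits."""
        r = r.copy()
        z = tuple(H @ r % 2)
        if any(z):                       # nonzero syndrome: flip the indicated bit
            r[syndrome_to_bit[z]] ^= 1
        return r[:4]

    s = np.array([1, 0, 1, 1])
    r = encode(s)
    r[2] ^= 1                            # one channel flip
    assert np.array_equal(decode(r), s)  # any single error is corrected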
Channel capacity:

    C = max over p(X) of I(X; Y)

Proof idea of the noisy-channel coding theorem:
- Bound the average block error over all random codes.
- Jointly typical sequences: (x, y) is jointly typical if −(1/N) log p(x) ≈ H(X), −(1/N) log p(y) ≈ H(Y), and −(1/N) log p(x, y) ≈ H(X, Y).
- The probability of (x, y) drawn from p(x, y) being jointly typical → 1 for N → ∞.
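A Monte Carlo illustration of this concentration, for X ~ Bernoulli(1/2) sent through a BSC(f) (our sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    f = 0.1

    def logp_xy(x, y):
        """log2 p(x, y) per symbol: p = 1/2 * (f if flipped else 1 - f)."""
        return np.log2(0.5) + np.where(x != y, np.log2(f), np.log2(1 - f))

    H_XY = 1 + (-f * np.log2(f) - (1 - f) * np.log2(1 - f))   # H(X,Y) = H(X) + H(Y|X)

    for N in (100, 10_000, 1_000_000):
        x = rng.integers(0, 2, size=N)
        y = x ^ (rng.random(N) < f)
        print(N, -logp_xy(x, y).mean(), H_XY)   # -(1/N) log2 p(x,y) -> H(X,Y)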
Capacity of the BSC:

    I(X; Y) = H(Y) − H(Y|X)
            = H(Y) − Σi p(x = i) H(Y|x = i)
            = H(Y) − [f log(1/f) + (1 − f) log(1/(1 − f))]
            ≤ 1 − [f log(1/f) + (1 − f) log(1/(1 − f))]

with equality for the uniform input distribution, so C(BSC) = 1 − H2(f); for f = 0.1, C = 0.5310 bits.
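Checking the number (our sketch):

    import numpy as np

    def H2(p):
        """Binary entropy in bits."""
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    print(1 - H2(0.1))   # C(BSC) = 1 - H2(f) = 0.5310 bits for f = 0.1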
Communicating above capacity (tolerating bit errors):
→ Communicate only a fraction 1/R of the source bits and let the receiver guess the missing fraction (1 − 1/R).
→ pb = (1/2)(1 − 1/R).
→ The Shannon limit: rates up to R = C / (1 − H2(pb)) are achievable with bit error pb.
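Comparing the two curves, with rates measured in units of C (our sketch):

    import numpy as np

    def H2(p):
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    for pb in (0.25, 0.1, 0.01):
        R_guess = 1 / (1 - 2 * pb)       # guessing scheme: pb = (1/2)(1 - 1/R)
        R_shannon = 1 / (1 - H2(pb))     # Shannon limit: R = C / (1 - H2(pb))
        print(pb, R_guess, R_shannon)    # the Shannon limit allows the higher rate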