More on the Reliability Function of the BSC
Alexander Barg (DIMACS, Rutgers University) and Andrew McGregor (University of Pennsylvania)
ISIT 2003, Yokohama
Some Definitions
We are communicating over a binary symmetric channel with crossover probability p. We use a length-n binary code C = {x1, x2, …}.
No matter what code we use, there is a nonzero probability of decoding error. For a transmitted codeword x with decoding region D(x),

P_e(x) = P_x({0,1}^n \ D(x)),

where P_x denotes the distribution of the channel output given that x was sent.
The average error probability of decoding is

P_e(C) = (1/|C|) Σ_{x∈C} P_e(x).

We're interested in the reliability function

E(R,p) = lim_{n→∞} −(1/n) log min_{C: Rate(C)>R} P_e(C).

We present a new lower bound for this error probability (and hence a new bound on E(R,p)).
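As a concrete illustration of these definitions, here is a small Python sketch that computes P_e(C) exactly for a toy code by enumerating all channel outputs. The repetition code and the tie-breaking convention are my own illustrative choices, not from the talk.

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two binary tuples."""
    return sum(u != v for u, v in zip(a, b))

def avg_error_prob(code, p):
    """Exact average error probability P_e(C) of minimum-distance
    (= maximum-likelihood) decoding on the BSC with crossover p.
    Enumerates all 2^n outputs, so only sensible for tiny n."""
    n = len(code[0])
    total = 0.0
    for x in code:
        for y in product((0, 1), repeat=n):
            # decode to the closest codeword (ties broken by list order)
            decoded = min(code, key=lambda c: hamming(c, y))
            if decoded != x:
                d = hamming(x, y)
                total += p**d * (1 - p)**(n - d)  # P_x(y) on the BSC
    return total / len(code)

# Toy example: the length-3 repetition code with p = 0.01.
# An error needs at least 2 of 3 bits flipped: 3*p^2*(1-p) + p^3.
print(avg_error_prob([(0, 0, 0), (1, 1, 1)], 0.01))
```

For the repetition code the printed value matches the closed form 3p²(1−p) + p³ ≈ 2.98·10⁻⁴.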
[Figure: bounds on the reliability function E(R,p) plotted against R, for p = 0.01.]
Fix a transmitted codeword x and a distance w. A decoding error occurs whenever the received word is at least as close to some other codeword, so

P_e(x) = P_x({0,1}^n \ D(x)) ≥ P_x({y : d(y,x_j) ≤ d(y,x) for some x_j ∈ C with d(x,x_j) = w}).

Write X_j = {y : d(y,x_j) ≤ d(y,x)} and Y_j = X_j \ ∪_{k: n_k > n_j} X_k, so that the sets Y_j are disjoint. Then

P_e(x) ≥ P_x( ∪_{j: d(x,x_j)=w} Y_j ) = Σ_{j: d(x,x_j)=w} P_x(Y_j).

Moreover,

P_x(Y_j) = P_x(X_j \ ∪_{k: n_k>n_j} X_k) ≥ P_x(X_j) (1 − Σ_{k: n_k>n_j} P_x(X_k | X_j)),

and therefore

P_e(x) ≥ Σ_{j: d(x,x_j)=w} P_x(X_j) (1 − Σ_{k: n_k>n_j} P_x(X_k | X_j)).
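The pairwise term P_x(X_j) depends only on the distance w = d(x,x_j): the output lands in X_j exactly when at least ⌈w/2⌉ of the w coordinates where x and x_j differ are flipped. A minimal sketch (the function name is hypothetical):

```python
from math import comb

def p_closer(w, p):
    """P_x(X_j): probability that the BSC(p) output is at least as
    close to a codeword x_j at distance w as to the transmitted x.
    Only the w coordinates where x and x_j differ matter; y is in
    X_j exactly when at least ceil(w/2) of them are flipped."""
    return sum(comb(w, t) * p**t * (1 - p)**(w - t)
               for t in range((w + 1) // 2, w + 1))

# For w = 3 this is P(>= 2 of 3 flips) = 3*p^2*(1-p) + p^3
print(p_closer(3, 0.01))
```

Note that the "≤" in the definition of X_j means ties count toward X_j, which is why the sum starts at ⌈w/2⌉ rather than strictly above w/2.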
Now look across the entire code. Let X_ij be the set of words at least as close to x_j as to x_i, define Y_ij analogously, and set

K_ij = Σ_{k: n_ik > n_ij} P_i(X_ik | X_ij).

Therefore we have:

P_e(x_i) ≥ Σ_{j: d(x_i,x_j)=w} P_i(Y_ij) ≥ Σ_{j: d(x_i,x_j)=w} P_i(X_ij) (1 − K_ij).

What we do now depends on the values of the K_ij.
Let S be the set of codewords x_j with K_ij > 1/2. If S is small, just remove the codewords in S from the code! Then in the remaining code we have, for all Y_ij, P_i(Y_ij) ≥ (1/2) P_i(X_ij). Hence, modulo constant factors, the average error probability satisfies

P_e(C,p) ≥ A(w) μ(w),

where A(w) is the average number of codewords at distance w and μ(w) = P_i(X_ij) for a pair at distance d(x_i,x_j) = w.
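To see the term A(w)·μ(w) numerically, one can compute the distance distribution A(w) of a small code and multiply by the pairwise error probability μ(w). A sketch using the [7,4] Hamming code (my choice of example; the talk does not single out a specific code):

```python
from itertools import product
from math import comb

# Generator matrix of the [7,4] Hamming code (standard [I | P] form)
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]

code = [tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
        for m in product((0, 1), repeat=4)]

def distance_distribution(code):
    """A(w): average number of codewords at distance w from a codeword."""
    n = len(code[0])
    A = [0] * (n + 1)
    for x in code:
        for y in code:
            A[sum(u != v for u, v in zip(x, y))] += 1
    return [a / len(code) for a in A]

def mu(w, p):
    """Pairwise term mu(w): P(at least ceil(w/2) of w BSC(p) flips)."""
    return sum(comb(w, t) * p**t * (1 - p)**(w - t)
               for t in range((w + 1) // 2, w + 1))

A = distance_distribution(code)
p, w = 0.01, 3                  # w = minimum distance of this code
print(A[w], A[w] * mu(w, p))    # the term A(w)*mu(w) of the bound
```

For this code A(3) = 7, so the dominant term of the bound at the minimum distance is 7·μ(3).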
Consider a codeword x_j such that K_ij > 1/2. Then there must be a distance l that carries a large share of the sum defining K_ij:

K_ij = Σ_{k: n_ik > n_ij} P_i(X_ik | X_ij) = Σ_{l=0}^{n} B(w,l) · |{k : n_ik > n_ij, d(x_j,x_k) = l}|,

where B(w,l) = P_i(X_ik | X_ij) with d(x_i,x_j) = d(x_i,x_k) = w and d(x_j,x_k) = l. The upshot of S being substantial is that we discover a distance l at which the code contains many codewords.
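Since B(w,l) depends only on the triangle geometry (w,w,l) of the three codewords, it can be estimated by Monte Carlo from one explicit configuration. The sketch below assumes x_i = 0 and even l; the construction and function name are illustrative, not from the talk.

```python
import random

def estimate_B(w, l, p, trials=50_000, seed=1):
    """Monte Carlo estimate of B(w,l) = P_i(X_ik | X_ij) on the BSC(p).

    Hypothetical setup: x_i = 0, while x_j and x_k are weight-w words
    whose supports overlap in w - l/2 positions, giving
    d(x_i,x_j) = d(x_i,x_k) = w and d(x_j,x_k) = l (l must be even).
    Coordinates outside both supports are irrelevant, so noise is
    sampled only on the union of the supports."""
    assert l % 2 == 0 and l <= 2 * w
    a = w - l // 2                      # size of the support overlap
    n = 2 * w - a                       # union of the two supports
    supp_j = range(0, w)
    supp_k = range(w - a, n)
    rng = random.Random(seed)
    in_j = in_both = 0
    for _ in range(trials):
        y = [rng.random() < p for _ in range(n)]
        s_j = sum(y[i] for i in supp_j)
        if 2 * s_j >= w:                # y at least as close to x_j as to x_i
            in_j += 1
            s_k = sum(y[i] for i in supp_k)
            if 2 * s_k >= w:            # ... and at least as close to x_k
                in_both += 1
    return in_both / in_j if in_j else float('nan')

print(estimate_B(4, 4, 0.1))
```

As a sanity check, l = 0 makes x_k identical to x_j, so the estimate is exactly 1; growing l pushes the supports apart and the conditional probability drops toward the unconditional one.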
A priori we don’t know whether we required a
But if there existed a nuisance level l1 then
Hence we can repeat the process with this
A priori we don’t know whether we required a
But if there existed a nuisance level l1 then
Hence we can repeat the process with this
e(C, p) ≥ min A(w)µ(w), A(w) B(w,l1 )
A priori we don’t know whether we required a
But if there existed a nuisance level l1 then
Hence we can repeat the process with this
e(C, p) ≥ min A(w)µ(w), A(w) B(w,l1 )
1)
e(C, p) ≥ min A(w)µ(w), A(l ) B(w,l )
It can be shown that, with high probability, the resulting expression improves on the previously known estimates. Using this instead of Litsyn's expression μ gives a new lower bound on the error probability, and hence a new bound on E(R,p).

[Figure: the resulting bound plotted against R.]