Today: Rate and Relative Distance

Madhu Sudan, Fall 2004: Essential Coding Theory (MIT 6.895)
Today:
• Asymptotically good codes.
• Random/greedy codes.
• Some impossibility results.

Recall the four integer parameters of a code:
• (Block) length of the code: n
• Message length of the code: k
• Minimum distance of the code: d
• Alphabet size: q

A code with the above parameters is referred to as an (n, k, d)_q code. If the code is linear, it is an [n, k, d]_q code. (This deviates from standard coding theory, where non-linear codes are referred to by their number of codewords: there, a linear [n, k, d]_q code with the all-zeroes word deleted would be an (n, q^k − 1, d)_q code, while we would write it as an (n, k − ε, d)_q code.)

Today we will focus on the normalizations:
• Rate: R = k/n.
• Relative distance: δ = d/n.

Main question(s): How does R vary as a function of δ, and how does this variation depend on q?

Impossibility result 1: the Singleton Bound

Note: Singleton is a person's name! The name is not related to the proof technique; the result would better be called the "projection bound".

Main result: R + δ ≤ 1. More precisely, for any (n, k, d)_q code, k + d ≤ n + 1.

Proof: Take an (n, k, d)_q code and project it onto some k − 1 coordinates. There are q^k codewords but only q^{k−1} possible projections, so two codewords must project to the same sequence (pigeonhole principle). These two codewords can differ only on the remaining n − (k − 1) coordinates. Thus d ≤ n − k + 1.
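As a sanity check (not part of the original notes), here is a minimal Python sketch that verifies the Singleton bound by brute-force enumeration of the classical [7, 4, 3]_2 Hamming code:

```python
from itertools import product

# Generator matrix of the [7,4,3] binary Hamming code (systematic form).
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]
n, k = 7, 4

def encode(msg):
    # Message (row vector) times G, over F_2.
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(msg) for msg in product((0, 1), repeat=k)]

# For a linear code, minimum distance = minimum weight of a nonzero codeword.
d = min(sum(c) for c in codewords if any(c))

print(f"n={n}, k={k}, d={d}")   # prints: n=7, k=4, d=3
assert k + d <= n + 1           # Singleton bound: 4 + 3 <= 7 + 1
```

Codes that meet the bound with equality (k + d = n + 1) are called MDS codes; Reed–Solomon codes are the standard example.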

Impossibility result 2: the Hamming Bound

Recall from Lecture 1 that Hamming proved a bound for binary codes. Define Vol_q(n, r) to be the volume of a ball of radius r in Σ^n, where |Σ| = q. Then Hamming showed that

  2^k · Vol_2(n, (d − 1)/2) ≤ 2^n.

Asymptotically, R + H_2(δ/2) ≤ 1.

The q-ary generalization is q^k · Vol_q(n, (d − 1)/2) ≤ q^n. Asymptotically, R + H_q(δ/2) ≤ 1, where

  H_q(p) = −p log_q p − (1 − p) log_q(1 − p) + p log_q(q − 1).

Question: Are these bounds in the right ballpark?

If the bounds were tight, they would imply codes of positive rate at relative distance δ arbitrarily close to 1. Is this feasible? We will rule this out in the next few lectures.

If the bounds are merely in the right ballpark, there exist codes of positive rate and positive relative distance. Is this feasible? Yes! Let's show this.

The random code

Recall the implication of Shannon's theorem: one can correct a p fraction of (random) errors with encoding algorithms of rate 1 − H(p). Surely this should give a nice code too? We analyze this below.

Code: Pick K = 2^k codewords c_1, ..., c_K at random from {0, 1}^n, and consider the probability that they are pairwise at distance at least d = δn.

Let X_i be the indicator variable for the event that the codeword c_i is at distance less than d from some codeword c_j with j < i. The probability that X_i = 1 is at most (i − 1) · 2^{H(δ)·n} / 2^n, since the ball of radius d around each earlier codeword contains roughly 2^{H(δ)·n} words. Thus the probability that there exists an i with X_i = 1 is at most Σ_{i=1}^K (i − 1) · 2^{(H(δ) − 1)·n}. This final quantity is roughly 2^{(2R + H(δ) − 1)·n}, so we get codes of rate R with relative distance δ provided 2R + H(δ) < 1.
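The quantity Vol_q(n, r) appears in both the Hamming bound and the random-code analysis above. Here is a short Python sketch (not from the original notes) that computes it and checks the bound; the [7,4,3]_2 Hamming code is in fact a perfect code, meeting the bound with equality:

```python
from math import comb

def vol(q, n, r):
    # Volume of a Hamming ball of radius r in Sigma^n with |Sigma| = q:
    # choose i positions to change, then one of (q-1) wrong symbols at each.
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

q, n, k, d = 2, 7, 4, 3                  # the [7,4,3] binary Hamming code
lhs = q ** k * vol(q, n, (d - 1) // 2)
print(lhs, q ** n)                       # 128 128: a perfect code
assert lhs <= q ** n                     # the Hamming bound holds
```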

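To see the random-code argument in action, here is a quick Monte Carlo sketch (the parameters n, k, d are illustrative choices, not from the notes). With R = 0.2 and δ = 0.125 we have 2R + H(δ) ≈ 0.94 < 1, so almost every trial should produce a code of distance at least d:

```python
import random

random.seed(6895)
n, k, d = 40, 8, 5       # R = k/n = 0.2, delta = d/n = 0.125
K = 2 ** k

def has_distance(words, d):
    # Check all pairs; Hamming distance = number of ones in the XOR.
    return all(bin(words[i] ^ words[j]).count("1") >= d
               for i in range(len(words)) for j in range(i))

trials = 20
good = sum(has_distance([random.getrandbits(n) for _ in range(K)], d)
           for _ in range(trials))
print(f"{good}/{trials} random codes achieved distance >= {d}")
```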
A better random code

The bound we have so far only says that we can get codes of rate 1/2 as the relative distance approaches 0. One would hope to do better. However, we don't know of better ways to estimate either the probability that X_i = 1, or the probability that {∃ i | X_i = 1}.

It turns out that a major weakness is in our interpretation of the results. Notice that if X_i = 1, it does not mean that the code we found is totally bad. It just means that we have to throw out the word c_i from our code. So rather than analyzing the probability that all the X_i's are 0, we should analyze the probability of the event Σ_{i=1}^K X_i ≥ K/2. If we can bound this probability away from 1 for some K, then it means that there exist codes with K/2 codewords that have distance at least d.

Furthermore, if the probability that X_K = 1 is less than 1/10, then the probability that Σ_{i=1}^K X_i > K/2 is at most 1/5 (by Markov's inequality, since E[X_i] ≤ E[X_K] for every i and hence E[Σ_{i=1}^K X_i] ≤ K/10). So it suffices to have E[X_K] ≤ K · 2^{(H(δ) − 1)·n} ≤ 1/10. Thus we get that if R + H(δ) < 1 then there exists a code with rate R and relative distance δ (the surviving K/2 = 2^{k−1} codewords give rate (k − 1)/n ≈ R).

In the Problem Set, we will describe many other proofs of this fact.
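This expurgation argument doubles the achievable rate, from the 2R + H(δ) < 1 of the plain random code to R + H(δ) < 1, i.e., the Gilbert–Varshamov bound. A short Python sketch (not from the notes) tabulates the two rate curves side by side:

```python
from math import log2

def H(p):
    # Binary entropy function H_2(p).
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

print(f"{'delta':>6} {'plain random':>13} {'expurgated':>11}")
for delta in (0.05, 0.10, 0.20, 0.30, 0.40):
    plain = (1 - H(delta)) / 2    # rate from 2R + H(delta) < 1
    gv = 1 - H(delta)             # rate from  R + H(delta) < 1
    print(f"{delta:6.2f} {plain:13.3f} {gv:11.3f}")
```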
