Information Theory


  1. Information Theory • The amount of information in a message is measured by the average number of bits needed to encode all possible messages in an optimal encoding. • In computer systems, programs and text files are usually encoded with 8-bit ASCII codes, regardless of the amount of information in them. • Text files can typically be compressed by about 40% without losing any information. • Amount of information: entropy – a function of the probability distribution over the set of all possible messages.

  2. Entropy: H(X) = Σ_X p(X) log2(1/p(X)), summed over all possible messages X. Example: Suppose there are two possibilities, Male and Female, both equally likely; thus p(Male) = p(Female) = 1/2. Then H(X) = (1/2) log2 2 + (1/2) log2 2 = 1 bit.
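A minimal sketch of this computation (Python; the helper name entropy is our own):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = sum of p * log2(1/p), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Two equally likely messages (Male, Female) carry exactly one bit.
print(entropy([0.5, 0.5]))   # 1.0
```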

  3. Because 1/p(X) decreases as p(X) increases, an optimal encoding uses short codes for frequently occurring messages at the expense of using longer ones for infrequent messages. This principle is applied in Morse code, where the most frequently used letters are assigned the shortest codes. Huffman codes are optimal codes assigned to characters, words, machine instructions, or phrases. Single-character Huffman codes are frequently used to compact large files.
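A sketch of Huffman's construction over symbol probabilities (Python; the probabilities below are illustrative, not taken from the slides):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build an optimal prefix code from symbol probabilities (or counts).

    freqs: dict mapping symbol -> probability.
    Returns: dict mapping symbol -> bit string.
    """
    tiebreak = count()  # avoids comparing code tables when weights are equal
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        # Prefix the lighter subtree's codes with 0, the heavier with 1.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

print(huffman_codes({"A": 0.5, "B": 0.25, "C": 0.25}))
# {'A': '0', 'B': '10', 'C': '11'}  -> code lengths 1, 2, 2
```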

  4. An optimal encoding assigns a 1-bit code to A and 2-bit codes to B and C. For example, A can be encoded with the single bit 0, while B and C can be encoded with two bits each, 10 and 11. Using this encoding, the 8-letter sequence ABAACABC is encoded as the 12-bit sequence 010001101011: A B A A C A B C → 0 10 0 0 11 0 10 11. The average number of bits per letter is 12/8 = 1.5. The above encoding is optimal; the expected number of bits per letter would be at least 1.5 with any other encoding. Note that B, for example, cannot be encoded with the single bit 1, because it would then be impossible to decode the bit sequence 11 (it could be either BB or C). Morse code avoids this problem by separating letters with spaces. Because spaces (blanks) must also be encoded in computer applications, that approach would require more storage in the long run.
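The prefix property (no codeword is a prefix of another) is what allows separator-free decoding; a small sketch using the code table from the slide (Python):

```python
# Code table from the slide: A -> 0, B -> 10, C -> 11.
CODE = {"A": "0", "B": "10", "C": "11"}
DECODE = {bits: sym for sym, bits in CODE.items()}

def encode(text):
    return "".join(CODE[ch] for ch in text)

def decode(bits):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:          # a complete codeword has been read
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)

bits = encode("ABAACABC")
print(bits, len(bits) / 8)   # 010001101011  1.5 bits per letter
print(decode(bits))          # ABAACABC
```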

  5. Example: Suppose all messages are equally likely; that is, p(Xi) = 1/n for i = 1, . . . , n. Then H(X) = n[(1/n) log2 n] = log2 n. Thus, log2 n bits are needed to encode each message; in particular, if there are n = 2^k possible messages, k bits are needed to encode each one. Example: Let n = 1 and p(X) = 1. Then H(X) = log2 1 = 0. There is no information because there is no choice.

  6. Given n, H(X) is maximal for p(X1) = . . . = p(Xn) = 1/n; that is, when all messages are equally likely. H(X) decreases as the distribution of messages becomes more and more skewed, reaching a minimum of H(X) = 0 when p(Xi) = 1 for some message Xi. Example: Suppose X is a 32-bit integer variable. Then X can have at most 32 bits of information. If small values of X are more likely than large ones (as is typical in most programs), then H(X) will be less than 32; and if the exact value of X is known, H(X) will be 0.
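Reusing the entropy helper sketched earlier, a quick illustration of how skew lowers H(X) for n = 4 (the skewed distribution is made up for illustration):

```python
import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

n = 4
print(entropy([1 / n] * n))            # 2.0  = log2(n), the maximum
print(entropy([0.7, 0.1, 0.1, 0.1]))   # ~1.36, skewed: less information
print(entropy([1.0, 0.0, 0.0, 0.0]))   # 0.0, the value is known: no information
```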

  7. The entropy of a message measures its uncertainty: it gives the number of bits of information that must be learned when the message has been distorted by a noisy channel or hidden in ciphertext. Example: If a cryptanalyst knows the ciphertext block "Z$JP7K" corresponds to either the plaintext "MALE" or the plaintext "FEMALE", the uncertainty is only one bit. The cryptanalyst need only determine one character, say the first, and because there are only two possibilities for that character, only the distinguishing bit of that character need be determined. If it is known that the block corresponds to a salary, then the uncertainty is more than one bit, but it can be no more than log2 n bits, where n is the number of possible salaries. Similar reasoning applies to bank PINs.

  8. For a given language, consider the set of all messages N characters long. The rate of the language for messages of length N is defined by r = H(X)/N; • that is, the average number of bits of information in each character. For large N, estimates of r for English range from 1.0 bits/letter to 1.5 bits/letter. • The absolute rate of the language is the maximum number of bits of information that could be encoded in each character, assuming all possible sequences of characters are equally likely. If there are L characters in the language, then the absolute rate is given by R = log2 L, the maximum entropy of the individual characters. For English, R = log2 26 = 4.7 bits/letter. • The actual rate of English is thus considerably less than its absolute rate. The reason is that English, like all natural languages, is highly redundant. For example, the phrase "occurring frequently" could be reduced by 58% to "crng frq" without loss of information.

  9. Statistical Properties (1) • 1. Single-letter frequency distributions: certain letters such as E, T, and A occur much more frequently than others. • 2. Digram frequency distributions: certain digrams (pairs of letters) such as TH and EN occur much more frequently than others. Some digrams (e.g., QZ) never occur in meaningful messages even when word boundaries are ignored (acronyms are an exception). • 3. Trigram distributions: the proportion of meaningful sequences decreases further when trigrams are considered (e.g., BBB is not meaningful). Among the meaningful trigrams, certain sequences such as THE and ING occur much more frequently than others. • 4. N-gram distributions: as longer sequences are considered, the proportion of meaningful messages to the total number of possible letter sequences decreases. Long messages are structured not only according to letter sequences within a word but also by word sequences (e.g., the phrase PROGRAMMING LANGUAGES is much more likely than the phrase LANGUAGES PROGRAMMING).
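A sketch of how such single-letter and digram statistics can be gathered (Python; the sample sentence is only illustrative):

```python
from collections import Counter

def letter_and_digram_counts(text):
    """Count single-letter and digram frequencies, ignoring non-letters."""
    letters = [c for c in text.upper() if c.isalpha()]
    singles = Counter(letters)
    digrams = Counter(a + b for a, b in zip(letters, letters[1:]))
    return singles, digrams

singles, digrams = letter_and_digram_counts(
    "THE RAIN IN SPAIN STAYS MAINLY IN THE PLAIN")
print(singles.most_common(3))   # frequent letters such as I, N, A dominate
print(digrams.most_common(3))   # frequent digrams such as IN, AI, TH dominate
```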

  10. Statistical properties (2) • Programming languages have a similar structure: – Here there is more freedom in letter sequences – e.g., the variable name QZK is perfectly valid – but the language syntax imposes other rigid rules about the placement of keywords and delimiters. • The rate of a language (entropy per character) is determined by estimating the entropy of N-grams for increasing values of N. • As N increases, the entropy per character decreases because there are fewer choices and certain choices are much more likely. The decrease is sharp at first but tapers off quickly; • The rate is estimated by extrapolating for large N.
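A rough sketch of that estimate (Python). A serious estimate needs a very large corpus; the inline sample is only illustrative and badly underestimates higher-order statistics:

```python
import math
from collections import Counter

def ngram_entropy_per_char(text, n):
    """Estimate bits of information per character from n-gram frequencies."""
    letters = "".join(c for c in text.upper() if c.isalpha())
    grams = ["".join(g) for g in zip(*(letters[i:] for i in range(n)))]
    counts = Counter(grams)
    total = len(grams)
    h = sum((c / total) * math.log2(total / c) for c in counts.values())
    return h / n

sample = ("INFORMATION THEORY MEASURES THE AMOUNT OF INFORMATION IN A MESSAGE "
          "BY THE AVERAGE NUMBER OF BITS NEEDED TO ENCODE ALL POSSIBLE MESSAGES")
for n in (1, 2, 3):
    print(n, round(ngram_entropy_per_char(sample, n), 2))
# The per-character estimate falls as n grows; with a large corpus,
# extrapolating for large n gives the rate r of the language.
```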

  11. Rate / Absolute Rate of a Language • Rate of the language for messages of length N: r = H(X)/N, where X is the message – the average number of bits of information in each character – For large N, estimates of r for English range from 1.0 bits/letter to 1.5 bits/letter • Absolute rate of the language: the maximum number of bits of information that can be encoded in each character, assuming all possible sequences of characters are equally likely. • If there are L characters in the language, then R = log2 L; for English, R = log2 26 = 4.7 bits/letter

  12. Statistical Properties (3) • The redundancy of a language with rate r and absolute rate R is defined by • D = R – r • For R = 4.7 and r = 1, D = 3.7, whence the ratio D/R shows English to be about 79% redundant; • For r = 1.5, D = 3.2, implying a redundancy of 68%. • Conservative estimates are often used in practice.
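The arithmetic behind these figures, spelled out (Python):

```python
import math

L = 26                    # letters in the English alphabet
R = math.log2(L)          # absolute rate, about 4.7 bits/letter
for r in (1.0, 1.5):      # estimated rates of English
    D = R - r             # redundancy
    print(f"r = {r}: D = {D:.1f} bits/letter, redundancy {D / R:.0%}")
# r = 1.0: D = 3.7 bits/letter, redundancy 79%
# r = 1.5: D = 3.2 bits/letter, redundancy 68%
```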

  13. Statistical Properties (4) • The uncertainty of messages may be reduced given additional information. • Example: Let X be a 32-bit integer such that all values are equally likely; • thus the entropy of X is H(X) = 32. • Suppose it is known that X is even. • Then the entropy is reduced by one bit, because the low-order bit must be 0.
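A small check of this reasoning (Python; an 8-bit stand-in is used instead of 32 bits so the probability lists stay small, but the argument is identical):

```python
import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

bits = 8                            # stand-in for the 32-bit case
n = 2 ** bits
print(entropy([1 / n] * n))         # 8.0 bits: every value equally likely
even_values = n // 2
print(entropy([1 / even_values] * even_values))   # 7.0 bits: X known to be even
```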

  14. Equivocation: the uncertainty of X given knowledge of Y, defined by H_Y(X) = Σ_Y p(Y) Σ_X p_Y(X) log2(1/p_Y(X)), where p_Y(X) is the probability of message X given that Y is known.

  15. Equivocation Example: • Let n = 4 and p(X) = 1/4 for each message X; • thus H(X) = log2 4 = 2. • Similarly, let m = 4 and p(Y) = 1/4 for each message Y. • Now, suppose each message Y narrows the choice of X to two of the four messages, where both messages are equally likely: • Y1: X1 or X2, Y2: X2 or X3 • Y3: X3 or X4, Y4: X4 or X1. • Then for each Y, p_Y(X) = 1/2 for two of the X's and p_Y(X) = 0 for the remaining two X's.

  16. The equivocation is then H_Y(X) = Σ_Y p(Y) Σ_X p_Y(X) log2(1/p_Y(X)) = 4 · (1/4) · [2 · (1/2) · log2 2] = 1 bit. Thus knowledge of Y reduces the uncertainty of X to one bit, corresponding to the two remaining choices for X.
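A sketch that recomputes the equivocation from the conditional distributions above (Python; the dictionary layout is our own):

```python
import math

# p_Y(X): for each Y, the two equally likely remaining choices of X.
cond = {
    "Y1": {"X1": 0.5, "X2": 0.5},
    "Y2": {"X2": 0.5, "X3": 0.5},
    "Y3": {"X3": 0.5, "X4": 0.5},
    "Y4": {"X4": 0.5, "X1": 0.5},
}
p_y = {y: 0.25 for y in cond}   # each Y equally likely

# H_Y(X) = sum over Y of p(Y) * sum over X of p_Y(X) * log2(1 / p_Y(X))
h_y_x = sum(p_y[y] * sum(p * math.log2(1 / p) for p in dist.values())
            for y, dist in cond.items())
print(h_y_x)   # 1.0 bit of remaining uncertainty about X
```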

  17. Perfect Secrecy (Shannon) • Let p_C(M) be the probability that message M was sent given that ciphertext C was received; thus C is the encryption of message M. Perfect secrecy is defined by the condition p_C(M) = p(M) for all M and C; • that is, intercepting the ciphertext gives a cryptanalyst no additional information. • Let p_M(C) be the probability of receiving ciphertext C given that M was sent. Then p_M(C) is the sum of the probabilities p(K) of the keys K that encipher M as C; usually there is at most one such key, but some ciphers can map several keys to the same transformation. • Thus, for perfect secrecy, p_M(C) = p(C) for every M and C.

  18. Example: Four messages, all equally likely, and four keys, also equally likely; thus p(M) = 1/4 and p_M(C) = p(C) = 1/4 for all M and C. A cryptanalyst intercepting one of the ciphertext messages C1, C2, C3, or C4 would have no way of determining which of the four keys was used and, therefore, whether the correct message is M1, M2, M3, or M4.
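A sketch of such a system (Python). The slide only states the probabilities; the particular cipher E_K(M) = (M + K) mod 4 is an assumption chosen to realize them:

```python
import random
from collections import Counter

def encrypt(m, k):
    """Toy cipher over message/key indices 0..3 (an illustrative assumption)."""
    return (m + k) % 4

random.seed(0)
trials = 100_000
counts = Counter()
for _ in range(trials):
    m = random.randrange(4)          # p(M) = 1/4
    k = random.randrange(4)          # p(K) = 1/4
    counts[encrypt(m, k)] += 1

print({c: round(n / trials, 3) for c, n in sorted(counts.items())})
# Every ciphertext occurs with probability about 1/4, independent of M,
# so intercepting C reveals nothing about which message was sent.
```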

  19. Perfect Secrecy • Perfect secrecy requires that the number of keys must be at least as great as the number of possible messages.

  20. M = THERE IS NO OTHER LANGUAGE BUT FRENCH. Only one of the keys (K = 18) produces a meaningful message.
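The slide does not show the cipher or the ciphertext; assuming a simple shift (Caesar) cipher with key K = 18, a brute-force version of this ciphertext-only attack might look like the sketch below (Python):

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shift(text, k):
    """Shift every letter by k positions, leaving other characters alone."""
    return "".join(ALPHA[(ALPHA.index(c) + k) % 26] if c in ALPHA else c
                   for c in text)

plaintext = "THERE IS NO OTHER LANGUAGE BUT FRENCH"
ciphertext = shift(plaintext, 18)        # encrypt with the assumed key K = 18

# Try every possible key and inspect the candidate plaintexts.
for k in range(26):
    print(k, shift(ciphertext, -k))
# Only K = 18 yields a meaningful English message; all other keys give gibberish.
```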
