Lecture 7: Lossy Source Coding




1. Lecture 7: Lossy Source Coding. I-Hsiang Wang, Department of Electrical Engineering, National Taiwan University, ihwang@ntu.edu.tw. December 2, 2015. Topics: Lossy Source Coding Theorem for Memoryless Sources; Proof of the Coding Theorem.

2. Recall: in Lecture 03, we investigated the fundamental limit of (almost) lossless block-to-block (or fixed-to-fixed) source coding.

[Block diagram: Source → Source Encoder: $s[1:N] \mapsto b[1:K]$ → Source Decoder: $b[1:K] \mapsto \hat{s}[1:N]$ → Destination.]

The recovery criterion is vanishing probability of error: $\lim_{N\to\infty} \Pr\{\hat{S}^N \ne S^N\} = 0$.

The minimum compression ratio to fulfill lossless reconstruction is the entropy rate of the source: $R^* = H(\{S_i\})$, for stationary and ergodic $\{S_i\}$.
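As a small numerical aside (not from the slides): for a memoryless source the entropy rate reduces to the single-letter entropy $H(S)$. The sketch below computes it for a Bernoulli(0.1) source; the pmf and the function name are illustrative choices.

```python
import math

def entropy(pmf):
    """Entropy H(S) in bits of a discrete pmf given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# For an i.i.d. (memoryless) source, the entropy rate equals H(S).
# Example: a Bernoulli(0.1) source compresses to about 0.469 bits/symbol.
print(entropy([0.1, 0.9]))  # ~0.4690
```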

3. In this lecture, we turn our focus to lossy block-to-block source coding, where the setting is the same as before, except:

[Block diagram: Source → Source Encoder: $s[1:N] \mapsto b[1:K]$ → Source Decoder: $b[1:K] \mapsto \hat{s}[1:N]$ → Destination.]

The recovery criterion is reconstruction to within a given distortion $D$: $\limsup_{N\to\infty} \mathrm{E}\big[d\big(S^N, \hat{S}^N\big)\big] \le D$.

The minimum compression ratio to fulfill reconstruction to within a given distortion $D$ is the rate-distortion function: $R(D) = \min_{p_{\hat{S}|S}:\, \mathrm{E}[d(S,\hat{S})] \le D} I\big(S; \hat{S}\big)$, for a DMS $\{S_i\}$.
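The minimization ranges over "test channels" $p_{\hat{S}|S}$ satisfying the distortion constraint. As an illustrative sketch (not part of the slides; the Bernoulli(1/2) source, the channel, and the function names are assumptions), the code below evaluates the objective $I(S; \hat{S})$ and the constraint $\mathrm{E}[d(S,\hat{S})]$ for one candidate test channel. For this symmetric case the chosen channel happens to be the well-known optimizer, so the printed mutual information equals $R(0.1) = 1 - H_b(0.1) \approx 0.531$.

```python
import math

def mutual_information(p_s, p_shat_given_s):
    """I(S; Shat) in bits for pmf p_s[s] and test channel p_shat_given_s[s][shat]."""
    n_s, n_t = len(p_s), len(p_shat_given_s[0])
    # Marginal distribution of Shat.
    p_shat = [sum(p_s[s] * p_shat_given_s[s][t] for s in range(n_s)) for t in range(n_t)]
    mi = 0.0
    for s in range(n_s):
        for t in range(n_t):
            joint = p_s[s] * p_shat_given_s[s][t]
            if joint > 0:
                mi += joint * math.log2(joint / (p_s[s] * p_shat[t]))
    return mi

def expected_distortion(p_s, p_shat_given_s, d):
    """E[d(S, Shat)] under the source pmf and the test channel."""
    return sum(p_s[s] * p_shat_given_s[s][t] * d[s][t]
               for s in range(len(p_s)) for t in range(len(p_shat_given_s[0])))

# Hypothetical test channel for a Bernoulli(1/2) source, Hamming distortion:
p_s = [0.5, 0.5]
channel = [[0.9, 0.1], [0.1, 0.9]]           # flips the symbol with prob 0.1
d = [[0, 1], [1, 0]]                          # Hamming distortion matrix
print(mutual_information(p_s, channel))       # objective value, ~0.531 bits
print(expected_distortion(p_s, channel, d))   # must be <= D to be feasible (here 0.1)
```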

4. Why lossy source coding?
- Sometimes it might be too expensive to reconstruct the source in a lossless way.
- Sometimes it is impossible to reconstruct the source losslessly. For example, if the source is continuous-valued, the entropy rate of the source is usually infinite!
- Lossy source coding has a wide range of applications, including quantization/digitization of continuous-valued signals, image/video/audio compression, etc.

In this lecture, we first focus on discrete memoryless sources (DMS). Then, we employ the discretization technique to extend the coding theorems from the discrete-source case to the continuous-source case. In particular, Gaussian sources will be our main focus.

5. Lossless vs. Lossy Source Coding

The general lossy source coding problem involves quantizing all possible source sequences $s^N \in \mathcal{S}^N$ into $2^K$ reconstruction sequences $\hat{s}^N \in \hat{\mathcal{S}}^N$, which can be represented by $K$ bits. The goal is to design the correspondence between $s^N$ and $\hat{s}^N$ so that the distortion (quantization error) is below a prescribed level $D$; a toy quantizer in this spirit is sketched below.

Lossy source coding has a couple of notable differences from lossless source coding:
- The source alphabet $\mathcal{S}$ and the reconstruction alphabet $\hat{\mathcal{S}}$ could be different in general.
- Performance is determined by the chosen distortion measure.
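A rough illustration of this quantization picture (a sketch under assumed choices, not the construction used later in the proof): map every binary source sequence to the nearest of $2^K$ randomly drawn reconstruction sequences and report the resulting per-symbol Hamming distortion. The block length, codebook size, and random codebook are arbitrary assumptions.

```python
import itertools
import random

random.seed(0)
N, K = 8, 3                       # block length N; K bits index 2^K codewords
codebook = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(2 ** K)]

def hamming(a, b):
    """Number of positions where two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Map every source sequence to its nearest reconstruction sequence and
# record the average and worst-case per-symbol distortion of this mapping.
total, worst = 0.0, 0.0
for s in itertools.product((0, 1), repeat=N):
    dist = min(hamming(s, c) for c in codebook) / N
    total += dist
    worst = max(worst, dist)
print(f"average distortion {total / 2**N:.3f}, worst case {worst:.3f}")
```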

6. Outline
1 Lossy Source Coding Theorem for Memoryless Sources
  - Lossy Source Coding Theorem
  - Rate Distortion Function
2 Proof of the Coding Theorem
  - Converse Proof
  - Achievability

7. Outline
1 Lossy Source Coding Theorem for Memoryless Sources
  - Lossy Source Coding Theorem
  - Rate Distortion Function
2 Proof of the Coding Theorem
  - Converse Proof
  - Achievability

8. Distortion Measures

We begin with the definition of the distortion measure per symbol.

Definition 1 (Distortion Measure): A per-symbol distortion measure is a mapping $d(s, \hat{s})$ from $\mathcal{S} \times \hat{\mathcal{S}}$ to $[0, \infty)$, understood as the cost of representing $s$ by $\hat{s}$. For two length-$N$ sequences $s^N$ and $\hat{s}^N$, the distortion between them is defined as the average of the per-symbol distortion: $d\big(s^N, \hat{s}^N\big) \triangleq \frac{1}{N} \sum_{i=1}^{N} d(s_i, \hat{s}_i)$.

Examples: below are two widely used distortion measures:
- Hamming distortion: $\mathcal{S} = \hat{\mathcal{S}}$, $d(s, \hat{s}) \triangleq 1\{s \ne \hat{s}\}$.
- Squared-error distortion: $\mathcal{S} = \hat{\mathcal{S}} = \mathbb{R}$, $d(s, \hat{s}) \triangleq (s - \hat{s})^2$.
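Definition 1 translates directly into code. The following sketch (function names are illustrative) implements the two example measures and the length-$N$ average:

```python
def hamming_distortion(s, s_hat):
    """Per-symbol Hamming distortion: 1{s != s_hat}."""
    return 1.0 if s != s_hat else 0.0

def squared_error_distortion(s, s_hat):
    """Per-symbol squared-error distortion: (s - s_hat)^2."""
    return (s - s_hat) ** 2

def sequence_distortion(s_seq, shat_seq, d):
    """d(s^N, shat^N) = (1/N) * sum_i d(s_i, shat_i)."""
    assert len(s_seq) == len(shat_seq)
    return sum(d(a, b) for a, b in zip(s_seq, shat_seq)) / len(s_seq)

print(sequence_distortion([0, 1, 1, 0], [0, 1, 0, 0], hamming_distortion))    # 0.25
print(sequence_distortion([0.0, 1.0], [0.1, 0.8], squared_error_distortion))  # 0.025
```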

9. Lossy Source Coding: Problem Setup

[Block diagram: Source → Source Encoder: $s[1:N] \mapsto b[1:K]$ → Source Decoder: $b[1:K] \mapsto \hat{s}[1:N]$ → Destination.]

1 A $(2^{NR}, N)$ source code consists of
  - an encoding function (encoder) $\mathrm{enc}_N : \mathcal{S}^N \to \{0,1\}^K$ that maps each source sequence $s^N$ to a bit sequence $b^K$, where $K \triangleq \lfloor NR \rfloor$;
  - a decoding function (decoder) $\mathrm{dec}_N : \{0,1\}^K \to \hat{\mathcal{S}}^N$ that maps each bit sequence $b^K$ to a reconstructed source sequence $\hat{s}^N$.
2 The expected distortion of the code is $D^{(N)} \triangleq \mathrm{E}\big[d\big(S^N, \hat{S}^N\big)\big]$.
3 A rate-distortion pair $(R, D)$ is said to be achievable if there exists a sequence of $(2^{NR}, N)$ codes such that $\limsup_{N\to\infty} D^{(N)} \le D$.

The optimal compression rate is $R(D) \triangleq \inf\{R \mid (R, D) \text{ achievable}\}$.
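A minimal, deliberately naive instance of such a code (an assumption for illustration, not an optimal construction): take $N = 2$ and $R = 1/2$, so $K = 1$; the encoder keeps the first symbol and the decoder repeats it. Enumerating all source sequences gives the exact expected Hamming distortion for a Bernoulli(1/2) source.

```python
import itertools

# A toy (2^{NR}, N) code with N = 2, R = 1/2, so K = floor(NR) = 1 bit.
def enc(s):            # S^2 -> {0,1}^1: keep the first symbol
    return (s[0],)

def dec(b):            # {0,1}^1 -> Shat^2: repeat the received bit
    return (b[0], b[0])

# Exact expected Hamming distortion for an i.i.d. Bernoulli(1/2) source:
D_N = 0.0
for s in itertools.product((0, 1), repeat=2):
    p = 0.25                                   # each length-2 sequence is equiprobable
    shat = dec(enc(s))
    D_N += p * sum(a != b for a, b in zip(s, shat)) / 2
print(D_N)  # 0.25: the pair (R, D) = (0.5, 0.25) is achievable with this family
```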

10. Rate Distortion Trade-off

$D_{\min} \triangleq \min_{\hat{s}(s)} \mathrm{E}[d(S, \hat{s}(S))]$. It denotes the minimum possible target distortion so that the rate is finite. Even if the decoder knows the entire $s^N$ and finds a best representative $\hat{s}^N(s^N)$, the expected distortion is still $D_{\min}$.

$D_{\max} \triangleq \min_{\hat{s}} \mathrm{E}[d(S, \hat{s})]$. Let $\hat{s}^* \triangleq \arg\min_{\hat{s}} \mathrm{E}[d(S, \hat{s})]$. Then for target distortion $D \ge D_{\max}$, we can use a single representative $\hat{s}^{*N} \triangleq (\hat{s}^*, \hat{s}^*, \ldots, \hat{s}^*)$ to reconstruct all $s^N \in \mathcal{S}^N$ (the rate is $0$!), and
$D^{(N)} = \mathrm{E}\big[d\big(S^N, \hat{s}^{*N}\big)\big] = \frac{1}{N} \sum_{i=1}^{N} \mathrm{E}[d(S_i, \hat{s}^*)] = D_{\max} \le D$.
Hence, $R(D) = 0$ for all $D \ge D_{\max}$.

[Figure: the rate-distortion curve $R(D)$, decreasing from $R(D_{\min}) \le H(S)$ at $D = D_{\min}$ to $0$ at $D = D_{\max}$; rate-distortion pairs above the curve are achievable, those below are not.]
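For a Bernoulli($p$) source under Hamming distortion, the best single representative is the more likely symbol, so $D_{\max} = \min(p, 1-p)$. A tiny sketch (illustrative only):

```python
def d_max_bernoulli(p):
    """D_max = min over a single reproduction symbol shat of E[d(S, shat)],
    for a Bernoulli(p) source under Hamming distortion."""
    # E[d(S, 0)] = p (we err whenever S = 1); E[d(S, 1)] = 1 - p.
    return min(p, 1 - p)

print(d_max_bernoulli(0.1))  # 0.1: reconstructing everything as 0 costs distortion 0.1
# Hence R(D) = 0 for every D >= 0.1 when p = 0.1.
```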

11. Lossy Source Coding Theorem

[Block diagram: Source → Source Encoder: $s[1:N] \mapsto b[1:K]$ → Source Decoder: $b[1:K] \mapsto \hat{s}[1:N]$ → Destination.]

Theorem 1 (A Lossy Source Coding Theorem for DMS): For a discrete memoryless source $\{S_i \mid i \in \mathbb{N}\}$,
$R(D) = \min_{p_{\hat{S}|S}:\, \mathrm{E}[d(S,\hat{S})] \le D} I\big(S; \hat{S}\big)$.

Interpretation: $I\big(S; \hat{S}\big) = H(S) - H\big(S \mid \hat{S}\big)$, i.e., (uncertainty of the source $S$) minus (uncertainty of $S$ after learning $\hat{S}$), which is the rate used in compressing $S$ to $\hat{S}$.
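Carrying out the minimization is nontrivial in general, but for a Bernoulli($p$) source under Hamming distortion the closed form is a standard result in information theory texts (not derived on this slide): $R(D) = H_b(p) - H_b(D)$ for $0 \le D < \min(p, 1-p)$, and $0$ otherwise. A sketch:

```python
import math

def h_b(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion_bernoulli(p, D):
    """Closed-form R(D) for a Bernoulli(p) source under Hamming distortion:
    R(D) = H_b(p) - H_b(D) for 0 <= D < min(p, 1-p), else 0."""
    if D >= min(p, 1 - p):
        return 0.0
    return h_b(p) - h_b(D)

# The lossless limit is recovered at D = 0, and the curve decreases to 0 at D_max:
print(rate_distortion_bernoulli(0.5, 0.0))   # 1.0 bit = H(S)
print(rate_distortion_bernoulli(0.5, 0.1))   # ~0.531
print(rate_distortion_bernoulli(0.5, 0.5))   # 0.0
```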

12. Outline
1 Lossy Source Coding Theorem for Memoryless Sources
  - Lossy Source Coding Theorem
  - Rate Distortion Function
2 Proof of the Coding Theorem
  - Converse Proof
  - Achievability

13. Properties of Rate Distortion Function

A rate distortion function $R(D)$ satisfies the following properties:
1 Nonnegative.
2 Non-increasing in $D$.
3 Convex in $D$.
4 Continuous in $D$.
5 $R(D_{\min}) \le H(S)$.
6 $R(D) = 0$ if $D \ge D_{\max}$.

These properties are all quite intuitive. Below we sketch the proof of these properties.

[Figure: the same rate-distortion curve as before, with the achievable region above $R(D)$ and the non-achievable region below.]
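As a quick numerical sanity check (illustrative, using the standard Bernoulli/Hamming closed form rather than anything derived on this slide), the snippet below verifies properties 1, 2, 5, and 6 on a grid of distortion values:

```python
import math

def h_b(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def R(D, p=0.3):
    """Bernoulli(p)/Hamming rate-distortion function (standard closed form)."""
    return max(0.0, h_b(p) - h_b(D)) if D < min(p, 1 - p) else 0.0

Ds = [i / 100 for i in range(0, 51)]
Rs = [R(D) for D in Ds]
assert all(r >= 0 for r in Rs)                        # property 1: nonnegative
assert all(a >= b for a, b in zip(Rs, Rs[1:]))        # property 2: non-increasing
assert R(0.0) <= math.log2(2)                         # property 5: R(D_min) <= H(S) <= log|S|
assert all(R(D) == 0 for D in Ds if D >= 0.3)         # property 6: zero beyond D_max = 0.3
print("all checks pass")
```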

14. Convexity

Monotonicity: clear from the definition.

Convexity: the goal is to prove that for $D_1, D_2 \ge D_{\min}$ and $\lambda \in (0, 1)$, $\bar{\lambda} \triangleq 1 - \lambda$,
$R\big(\lambda D_1 + \bar{\lambda} D_2\big) \le \lambda R(D_1) + \bar{\lambda} R(D_2)$.

Let $p_i(\hat{s} \mid s) \triangleq \arg\min_{p_{\hat{S}|S}:\, \mathrm{E}[d(S,\hat{S})] \le D_i} I\big(S; \hat{S}\big)$, the optimizing conditional distribution that achieves distortion $D_i$, for $i = 1, 2$. Let $p_\lambda \triangleq \lambda p_1 + \bar{\lambda} p_2$.

Under $p_\lambda(\hat{s} \mid s)$, the expected distortion between $S$ and $\hat{S}$ is at most $\lambda D_1 + \bar{\lambda} D_2$, since
$\mathrm{E}_{p(s)\, p_\lambda(\hat{s}|s)}\big[d\big(S, \hat{S}\big)\big] = \sum_{s} \sum_{\hat{s}} p(s) \big[\lambda p_1(\hat{s} \mid s) + \bar{\lambda} p_2(\hat{s} \mid s)\big] d(s, \hat{s}) \le \lambda D_1 + \bar{\lambda} D_2$,
so $p_\lambda$ is feasible for the target distortion $\lambda D_1 + \bar{\lambda} D_2$.

The proof is complete since $I\big(S; \hat{S}\big)$ is convex in $p_{\hat{S}|S}$ with a fixed $p_S$:
$R\big(\lambda D_1 + \bar{\lambda} D_2\big) \le I_{p_\lambda}\big(S; \hat{S}\big) \le \lambda I_{p_1}\big(S; \hat{S}\big) + \bar{\lambda} I_{p_2}\big(S; \hat{S}\big) = \lambda R(D_1) + \bar{\lambda} R(D_2)$.
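The key step, convexity of $I(S; \hat{S})$ in the test channel for fixed $p_S$, can be checked numerically. The sketch below (the two endpoint channels and the value of $\lambda$ are arbitrary assumptions) compares $I_{p_\lambda}$ against the convex combination of the endpoint values:

```python
import math

def mutual_information(p_s, ch):
    """I(S; Shat) in bits; ch[s][t] = p(shat = t | s)."""
    n_s, n_t = len(p_s), len(ch[0])
    p_t = [sum(p_s[s] * ch[s][t] for s in range(n_s)) for t in range(n_t)]
    return sum(p_s[s] * ch[s][t] * math.log2(ch[s][t] / p_t[t])
               for s in range(n_s) for t in range(n_t)
               if ch[s][t] > 0 and p_t[t] > 0)

p_s = [0.5, 0.5]
p1 = [[0.95, 0.05], [0.05, 0.95]]   # test channel achieving a smaller distortion D1
p2 = [[0.70, 0.30], [0.30, 0.70]]   # test channel achieving a larger distortion D2
lam = 0.4
p_lam = [[lam * a + (1 - lam) * b for a, b in zip(r1, r2)]
         for r1, r2 in zip(p1, p2)]  # the mixed channel p_lambda

lhs = mutual_information(p_s, p_lam)
rhs = lam * mutual_information(p_s, p1) + (1 - lam) * mutual_information(p_s, p2)
print(lhs <= rhs, lhs, rhs)  # True: I(S; Shat) is convex in the channel
```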
