On Source-Channel Communication in Networks - Michael Gastpar (presentation transcript)


  1. On Source-Channel Communication in Networks. Michael Gastpar, Department of EECS, University of California, Berkeley. gastpar@eecs.berkeley.edu. DIMACS, March 17, 2003.

  2. Outline. 1. Source-channel communication seen from the perspective of the separation theorem. 2. Source-channel communication seen from the perspective of measure-matching. Acknowledgments: Gerhard Kramer, Bixio Rimoldi, Emre Telatar, Martin Vetterli.

  3. Source-Channel Communication. Consider the transmission of a discrete-time memoryless source across a discrete-time memoryless channel: Source → encoder F → Channel → decoder G → Destination, with channel input $X^n = F(S^n)$ and reconstruction $\hat{S}^n = G(Y^n)$. The fundamental trade-off is cost versus distortion, where $\Delta = E\,d(S^n, \hat{S}^n)$ and $\Gamma = E\,\rho(X^n)$. What is the set of achievable trade-offs $(\Gamma, \Delta)$? What is the set of optimal trade-offs $(\Gamma, \Delta)$?
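
A toy numerical illustration of these definitions (a sketch, not from the talk): estimate $\Gamma$ and $\Delta$ empirically for the trivial uncoded code $F(s) = s$ over a unit-variance AWGN channel with an MMSE decoder. All parameters below are illustrative assumptions.

    import random

    # Toy uncoded scheme over AWGN (illustrative): F(s) = s, unit-variance
    # source and noise, MMSE decoder G(y) = 0.5*y for these parameters.
    n = 100_000
    S = [random.gauss(0.0, 1.0) for _ in range(n)]
    X = S                                              # uncoded encoder
    Y = [x + random.gauss(0.0, 1.0) for x in X]        # AWGN channel
    Shat = [0.5 * y for y in Y]                        # MMSE estimate of S from Y

    Gamma = sum(x * x for x in X) / n                  # empirical cost E rho(X)
    Delta = sum((s - sh) ** 2 for s, sh in zip(S, Shat)) / n  # empirical distortion
    print(Gamma, Delta)                                # approx. 1.0 and 0.5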

  4. The Separation Theorem. For a fixed source $(p_S, d)$ and a fixed channel $(p_{Y|X}, \rho)$: a cost-distortion pair $(\Gamma, \Delta)$ is achievable if and only if $R(\Delta) \le C(\Gamma)$, and it is optimal if and only if $R(\Delta) = C(\Gamma)$, subject to certain technicalities. Rate-matching: in an optimal communication system, the minimum source rate is matched (i.e., equal) to the maximum channel rate.
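
For concreteness, a minimal sketch of rate-matching in the standard Gaussian specialization (illustrative parameters, not from the talk): a unit-variance Gaussian source over an AWGN channel, one channel use per source sample.

    import math

    # Gaussian source over an AWGN channel, one channel use per sample.
    sigma_S2 = 1.0            # source variance
    P, N = 4.0, 1.0           # input power (cost Gamma) and noise variance

    C = 0.5 * math.log2(1 + P / N)        # capacity at cost P, bits per use
    # Rate matching R(Delta) = C with R(D) = 0.5*log2(sigma_S2/D) gives the
    # smallest achievable distortion:
    Delta_opt = sigma_S2 * 2 ** (-2 * C)  # = sigma_S2 / (1 + P/N) = 0.2
    print(C, Delta_opt)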

  5. Source-Channel Communication in Networks. A simple source-channel network: Src 1 → F_1 → channel → G_1 → Dest 1, which reconstructs $\hat{S}_{11}$; Src 2 → F_2 → channel → G_{12} → Dest 2, which reconstructs $\hat{S}_{12}$ and $\hat{S}_{22}$. The trade-off is between cost $(\Gamma_1, \Gamma_2)$ and distortion $(\Delta_{11}, \Delta_{12}, \Delta_{22})$. Which cost-distortion tuples are achievable? Which are optimal? For the sketched topology, the full answer is unknown.

  6. These Trade-offs Are Achievable. For a fixed network topology and fixed probability distributions and cost/distortion functions: if a cost-distortion tuple satisfies $\mathcal{R}(\Delta_1, \Delta_2, \ldots) \cap \mathcal{C}(\Gamma_1, \Gamma_2, \ldots) \neq \emptyset$, then it is achievable. [Figure: the regions $\mathcal{R}$ and $\mathcal{C}$ intersecting in the $(R_1, R_2)$-plane.] When is it optimal?

  7. Example: Multi-Access Source-Channel Communication. Src 1 and Src 2 encode through F_1 and F_2 into binary channel inputs $X_1, X_2 \in \{0, 1\}$ of the adder channel $Y = X_1 + X_2$, and a single decoder G_{12} at the destination reconstructs $(\hat{S}_1, \hat{S}_2)$. The capacity region of this channel is contained inside $R_1 + R_2 \le 1.5$. Goal: reconstruct $S_1$ and $S_2$ perfectly. $S_1$ and $S_2$ are correlated:

      p(s_1, s_2)    S_1 = 0    S_1 = 1
      S_2 = 0          1/3        1/3
      S_2 = 1           0         1/3

so separate encoding requires $R_1 + R_2 \ge H(S_1, S_2) = \log_2 3 \approx 1.585$: the regions $\mathcal{R}$ and $\mathcal{C}$ do not intersect. Yet uncoded transmission works. This example appears in T. M. Cover, A. El Gamal, and M. Salehi, "Multiple access channels with arbitrarily correlated sources," IEEE Trans. Inform. Theory, vol. IT-26, 1980.
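
A small sketch of the counting behind the example (standard Slepian-Wolf reasoning, not code from the talk):

    import math

    # Joint entropy of the correlated pair from the table: the outcomes
    # (0,0), (1,0), (1,1) each have probability 1/3; (0,1) has probability 0.
    probs = [1/3, 1/3, 1/3]
    H_joint = -sum(p * math.log2(p) for p in probs)   # = log2(3)

    # Slepian-Wolf: separate reliable encoding needs R1 + R2 >= H(S1,S2),
    # which exceeds the 1.5-bit sum capacity of the binary adder MAC.
    print(H_joint)   # 1.5849...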

  8. So What Is Capacity? The capacity region is computed assuming independent messages; in a source-channel context, the underlying sources may be dependent. MAC example: allowing arbitrary dependence of the channel inputs, the sum capacity becomes $\log_2 3 \approx 1.585$, "fixing" the example: $\mathcal{R} \cap \mathcal{C} \neq \emptyset$. Can we simply redefine capacity appropriately? Remark: multi-access with dependent messages is still an open problem. (T. M. Cover, A. El Gamal, M. Salehi, "Multiple access channels with arbitrarily correlated sources," IEEE Trans. Inform. Theory, vol. IT-26, 1980.)

  9. Separation Strategies for Networks. In order to retain a notion of capacity, discrete messages are transmitted reliably: each encoder is split into a source code $F'_k$ followed by a channel code $F''_k$, and each decoder into a channel decoder $G''$ followed by a source decoder $G'$. What is the best achievable performance for such a system? The general answer is unknown.

  10. Example: Broadcast. A single source S is encoded by F into X and broadcast; receiver i observes $Y_i = X + Z_i$ and forms $\hat{S}_i = G_i(Y_i)$. Here S, Z_1, Z_2 are i.i.d. Gaussian. Goal: minimize the mean-squared errors $\Delta_1$ and $\Delta_2$. Denote by $\Delta_1^*$ and $\Delta_2^*$ the single-user minima. $\Delta_1^*$ and $\Delta_2^*$ cannot be achieved simultaneously by sending messages reliably: the messages disturb one another. But uncoded transmission achieves $\Delta_1^*$ and $\Delta_2^*$ simultaneously. This cannot be fixed by altering the definitions of capacity and/or rate-distortion regions.
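
A minimal numeric sketch of the claim (illustrative parameters, not from the talk): for a unit-variance Gaussian source with one channel use per sample, the uncoded scheme $X = \sqrt{P/\sigma_S^2}\,S$ followed by per-receiver MMSE estimation attains each receiver's single-user optimum $\Delta_i^* = \sigma_S^2/(1 + P/N_i)$ at both receivers at once.

    # Uncoded transmission over the Gaussian broadcast channel (illustrative
    # parameters): X = sqrt(P/sigma_S2) * S, receiver i applies the MMSE
    # estimator to Y_i = X + Z_i.
    sigma_S2, P = 1.0, 10.0
    N1, N2 = 1.0, 4.0          # noise variances of the two receivers

    a2 = P / sigma_S2          # squared encoder gain, so E|X|^2 = P
    for N in (N1, N2):
        D_star = sigma_S2 / (1 + P / N)   # single-user minimum Delta_i^*
        # MMSE of S from Y = a*S + Z_i:
        D_uncoded = sigma_S2 - a2 * sigma_S2**2 / (a2 * sigma_S2 + N)
        print(D_star, D_uncoded)          # equal: both minima hit simultaneously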

  11. Alternative Approach. Subject to the input constraint $E\rho(X^n) \le \Gamma$, a code $(F, G)$ performs optimally if and only if it satisfies $R(\Delta) = C(\Gamma)$ (subject to certain technical conditions). Equivalently, a code $(F, G)$ performs optimally if and only if

      $\rho(x^n) = c_1 \, D(p_{Y^n|x^n} \| p_{Y^n}) + \rho_0$,
      $d(s^n, \hat{s}^n) = -c_2 \log_2 p(s^n \mid \hat{s}^n) + d_0(s^n)$,
      $I(S^n; \hat{S}^n) = I(X^n; Y^n)$.

We call these the measure-matching conditions.
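
As a sanity check on the first condition (a sketch under standard assumptions, not from the talk): for an AWGN channel with a Gaussian input of power $P$ and noise variance $N$, the quadratic cost $\rho(x) = x^2$ is indeed affine in the divergence $D(p_{Y|x} \| p_Y)$, with constants $c_1 = 2(P+N)$ (in nats) and a matching $\rho_0$.

    import math

    # First matching condition for AWGN with Gaussian input (illustrative).
    P, N = 4.0, 1.0            # input power and noise variance; Y ~ N(0, P+N)

    def div_awgn(x):
        # D( N(x, N) || N(0, P+N) ) in nats (standard Gaussian divergence)
        return 0.5 * (math.log((P + N) / N) + (N + x * x) / (P + N) - 1.0)

    c1 = 2.0 * (P + N)         # matching constants: rho(x) = c1*D(...) + rho0
    rho0 = -c1 * div_awgn(0.0)
    for x in (0.0, 0.7, 1.5, 3.0):
        print(x * x, c1 * div_awgn(x) + rho0)   # the two columns agree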

  12. Single-Source Broadcast. Src → F → Chan; receiver 1 forms $\hat{S}_1 = G_1(Y_1)$ at Dest 1, and receiver 2 forms $\hat{S}_2 = G_2(Y_2)$ at Dest 2. Measure-matching conditions for single-source broadcast: if the communication system satisfies

      $\rho(x) = c_1^{(1)} D(p_{Y_1|x} \| p_{Y_1}) + \rho_0^{(1)} = c_1^{(2)} D(p_{Y_2|x} \| p_{Y_2}) + \rho_0^{(2)}$,
      $d_1(s, \hat{s}_1) = -c_2^{(1)} \log_2 p(s \mid \hat{s}_1) + d_0^{(1)}(s)$,
      $d_2(s, \hat{s}_2) = -c_2^{(2)} \log_2 p(s \mid \hat{s}_2) + d_0^{(2)}(s)$,
      $I(X; Y_1) = I(S; \hat{S}_1)$, and $I(X; Y_2) = I(S; \hat{S}_2)$,

then it performs optimally.

  13. Sensor Network. M wireless sensors measure physical phenomena characterized by S: sensor k observes $U_k$ and transmits $X_k = F_k(U_k)$ over the channel, and a single destination forms the estimate $\hat{S} = G(Y)$ from the channel output Y.

  14. Gaussian Sensor Network. The observations $U_1, U_2, \ldots, U_M$ are noisy versions of S: $U_k = S + W_k$. The sensor outputs are subject to the sum power constraint $\sum_{k=1}^{M} E|X_k|^2 \le MP$, and the destination observes $Y = \sum_{k=1}^{M} X_k + Z$.

  15. Gaussian Sensor Network: Bits. Consider the following communication strategy: each sensor k compresses its observation into bits with a source code $F'_k$ and transmits them reliably with a channel code $F''_k$; the destination first channel-decodes the bits with $G''$ and then source-decodes with $G'$, under the same sum power constraint $\sum_{k=1}^{M} E|X_k|^2 \le MP$.

  16. Gaussian Sensor Network: Bits (1/2). Source coding part: this is the CEO problem; see Berger, Zhang, and Viswanathan (1996); Viswanathan and Berger (1997); Oohama (1998). Here $S \sim \mathcal{N}_c(0, \sigma_S^2)$ and, for $k = 1, \ldots, M$, $W_k \sim \mathcal{N}_c(0, \sigma_W^2)$. For large $R_{\mathrm{tot}}$, the behavior is $D_{\mathrm{CEO}}(R_{\mathrm{tot}}) = \sigma_W^2 / R_{\mathrm{tot}}$.

  17. Gaussian Sensor Network: Bits (2/2). Channel coding part: for the additive white Gaussian multi-access channel, $R_{\mathrm{sum}} \le \log_2\!\left(1 + \frac{MP}{\sigma_Z^2}\right)$. However, the codewords may be dependent; therefore, the sum rate may be up to $R_{\mathrm{sum}} \le \log_2\!\left(1 + \frac{M^2 P}{\sigma_Z^2}\right)$. Hence, the distortion of a system that satisfies the rate-matching condition is at least

      $D_{\mathrm{rm}}(M) \ge \dfrac{\sigma_W^2}{\log_2\!\left(1 + \frac{M^2 P}{\sigma_Z^2}\right)}$.

Is this optimal?
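
A quick numeric sketch of the two sum rates and the resulting rate-matching floor (illustrative parameters, not from the talk):

    import math

    # Illustrative: P is per-sensor power, sigma_Z2 the channel noise variance.
    P, sigma_Z2, sigma_W2 = 1.0, 1.0, 1.0

    for M in (10, 100, 1000):
        R_indep = math.log2(1 + M * P / sigma_Z2)       # independent codewords
        R_dep = math.log2(1 + M * M * P / sigma_Z2)     # fully dependent codewords
        D_rm = sigma_W2 / R_dep                         # distortion floor, ~1/log M
        print(M, R_indep, R_dep, D_rm)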

  18. Gaussian Sensor Network: Uncoded Transmission. Consider instead the following "coding" strategy: sensor k simply scales its observation, $X_k = \alpha_k U_k$, with $\alpha_k$ chosen to meet the sum power constraint $\sum_{k=1}^{M} E|X_k|^2 \le MP$.

  19. Gaussian Sensor Network: Uncoded Transmission. Strategy: the sensors transmit whatever they measure, scaled to their power constraint, without any coding at all. The destination then observes

      $Y[n] = \sqrt{\dfrac{P}{\sigma_S^2 + \sigma_W^2}} \left( M S[n] + \sum_{k=1}^{M} W_k[n] \right) + Z[n]$.

If the "decoder" is the minimum mean-squared error estimate of S based on Y, the following distortion is incurred. Proposition 1: uncoded transmission achieves

      $D_1(MP) = \dfrac{\sigma_S^2 \, \sigma_W^2}{\dfrac{M^2}{M + (\sigma_Z^2/\sigma_W^2)(\sigma_S^2 + \sigma_W^2)/P} \, \sigma_S^2 + \sigma_W^2}$,

which decays like $1/M$. This is better than separation ($D_{\mathrm{rm}} \propto 1/\log M$). In this sense, uncoded transmission beats capacity. Is it optimal?
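
Putting the two expressions side by side (same illustrative parameters as in the previous sketch) makes the scaling gap concrete: $D_1$ decays like $1/M$, while the separation floor decays only like $1/\log M$.

    import math

    # Illustrative parameters, as before.
    sigma_S2, sigma_W2, sigma_Z2, P = 1.0, 1.0, 1.0, 1.0

    def D_uncoded(M):
        # Proposition 1 (slide 19)
        inner = M + (sigma_Z2 / sigma_W2) * (sigma_S2 + sigma_W2) / P
        return sigma_S2 * sigma_W2 / ((M * M / inner) * sigma_S2 + sigma_W2)

    def D_rm(M):
        # rate-matching floor (slide 17)
        return sigma_W2 / math.log2(1 + M * M * P / sigma_Z2)

    for M in (10, 100, 1000):
        print(M, D_uncoded(M), D_rm(M))   # ~1/M beats ~1/log M; gap grows with M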
