Part II. Linear Equalization: Matched Filter, Zero Forcing, MMSE


  1. Part II. Linear Equalization: Matched Filter, Zero Forcing, MMSE Equalization

  2. Mitigate ISI with Linear Filters
     Block diagram: $\{V_m\}$ → LTI filter $\{g_\ell\}$ → $\{W_m\}$ → symbol-wise detection → $\{\hat u_m\}$
     • ISI is caused by a (discrete-time) LTI filter due to the frequency selectivity of the channel.
     • Why not use another discrete-time LTI filter $\{g_\ell\}$ at the receiver to mitigate ISI, and do symbol-wise detection at the filtered output?
     • Design of the filter requires some objective for optimization:
       ‣ Probability of error? Hard to analyze.
       ‣ Energy will be easier to handle.
     • Since the ISI is treated as noise in the symbol-wise detection, we should try to maximize the signal-to-interference-and-noise ratio (SINR) at the filtered output $\{W_m\}$ (a numerical illustration follows this slide).
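As a quick numerical illustration of why ISI must be mitigated, here is a minimal NumPy sketch. The two-tap channel $h = [1, 0.8]$, BPSK signaling, and the noise level are illustrative assumptions, not values from the slides: detecting symbols directly on the channel output gives a large error rate even though the noise alone is mild.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-tap ISI channel and BPSK symbols (illustrative values only)
h = np.array([1.0, 0.8])                      # channel taps h_0, h_1
n = 10_000
u = rng.choice([-1.0, 1.0], n)                # unit-energy BPSK symbols, E_s = 1
N0 = 0.1

# Received samples V_m = (h * u)_m + Z_m (real-valued noise of variance N0 for simplicity)
v = np.convolve(h, u)[:n] + np.sqrt(N0) * rng.standard_normal(n)

# Symbol-wise detection directly on V_m, ignoring the ISI term h_1 u_{m-1}
u_hat = np.sign(v)
print("symbol error rate without equalization:", np.mean(u_hat != u))
```

With these values the ISI term alone nearly closes the decision margin, so the error rate is on the order of 10%, far worse than what the noise level would suggest.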

  3. Linear Equalizers to be Introduced
     • Use the Z-transform to represent the discrete-time LTI filter: $\{g_\ell\} \leftrightarrow \check g(\zeta) \triangleq \sum_\ell g_\ell\, \zeta^{-\ell}$
       ‣ Recall its relation with the DTFT: $\breve g(f) = \check g(e^{j2\pi f})$
     • Three kinds of linear equalizers:
       ‣ Matched filter (MF): $\check g^{(\mathrm{MF})}(\zeta) = \check h^*(1/\zeta^*)$
       ‣ Zero forcing (ZF): $\check g^{(\mathrm{ZF})}(\zeta) = \big(\check h(\zeta)\big)^{-1}$
       ‣ Minimum mean squared error (MMSE), which maximizes the SINR: $\check g^{(\mathrm{MMSE})}(\zeta) = \dfrac{E_s\, \check h^*(1/\zeta^*)}{N_0 + E_s\, \check h^*(1/\zeta^*)\, \check h(\zeta)}$
     • Low SNR regime ($E_s \ll N_0$): $\check g^{(\mathrm{MMSE})}(\zeta) \approx \dfrac{E_s}{N_0}\, \check g^{(\mathrm{MF})}(\zeta)$
     • High SNR regime ($E_s \gg N_0$): $\check g^{(\mathrm{MMSE})}(\zeta) \approx \check g^{(\mathrm{ZF})}(\zeta)$
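The three frequency responses are easy to compare numerically on a grid. The sketch below assumes a hypothetical two-tap channel $\check h(\zeta) = 1 + 0.8\,\zeta^{-1}$ (not from the slides) and checks the two limiting regimes stated above.

```python
import numpy as np

# Hypothetical channel taps (illustrative only): h(zeta) = 1 + 0.8 zeta^{-1}
h = np.array([1.0, 0.8])
Nf = 1024
f = np.arange(Nf) / Nf                        # frequencies in [0, 1)
# DTFT samples: h(f) = sum_ell h_ell e^{-j 2 pi f ell}
H = np.sum(h[None, :] * np.exp(-2j * np.pi * f[:, None] * np.arange(len(h))), axis=1)

def equalizers(Es, N0):
    G_mf = np.conj(H)                          # matched filter: h*(f)
    G_zf = 1.0 / H                             # zero forcing: 1/h(f)
    G_mmse = Es * np.conj(H) / (N0 + Es * np.abs(H) ** 2)
    return G_mf, G_zf, G_mmse

# High SNR: MMSE approaches ZF; low SNR: MMSE approaches (Es/N0) * MF
for Es, N0 in [(1.0, 1e-4), (1e-4, 1.0)]:
    G_mf, G_zf, G_mmse = equalizers(Es, N0)
    print(f"Es={Es:g}, N0={N0:g}:",
          "max |MMSE - ZF| =", np.max(np.abs(G_mmse - G_zf)).round(4),
          "| max |MMSE - (Es/N0) MF| =", np.max(np.abs(G_mmse - Es / N0 * G_mf)).round(6))
```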

  4. Matrix Representation of ISI Channel
     $V_1 = h_0 u_1 + Z_1$
     $V_2 = h_0 u_2 + h_1 u_1 + Z_2$
     $\;\vdots$
     $V_L = h_0 u_L + h_1 u_{L-1} + \cdots + h_{L-1} u_1 + Z_L$
     $V_{L+1} = h_0 u_{L+1} + h_1 u_L + \cdots + h_{L-1} u_2 + Z_{L+1}$
     $\;\vdots$
     $V_n = h_0 u_n + h_1 u_{n-1} + \cdots + h_{L-1} u_{n-L+1} + Z_n$
     $V_{n+1} = h_1 u_n + h_2 u_{n-1} + \cdots + h_{L-1} u_{n-L+2} + Z_{n+1}$
     $\;\vdots$
     $V_{n+L-1} = h_{L-1} u_n + Z_{n+L-1}$

  5. Matrix Representation of ISI Channel
     $\mathbf V = \mathbf h\, \mathbf u + \mathbf Z = u_m [\mathbf h]_m + \sum_{i \neq m} u_i [\mathbf h]_i + \mathbf Z$
     Here $\mathbf h$ is the $(n+L-1) \times n$ banded (Toeplitz) channel matrix: its $m$-th column $[\mathbf h]_m$ has the entries $h_0, h_1, \ldots, h_{L-1}$ in positions $m$ through $m+L-1$ and zeros elsewhere (see the construction sketch after this slide).
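A minimal construction of the banded channel matrix, assuming a hypothetical 3-tap channel (illustrative values only); the check confirms that $\mathbf h\,\mathbf u$ reproduces the linear convolution of the taps with the symbols.

```python
import numpy as np

def channel_matrix(h, n):
    """Banded (n+L-1) x n convolution matrix: column m carries h_0..h_{L-1}
    in rows m..m+L-1 (0-based), matching V = h u + Z."""
    L = len(h)
    H = np.zeros((n + L - 1, n), dtype=complex)
    for m in range(n):
        H[m:m + L, m] = h
    return H

# Hypothetical 3-tap channel, n = 5 symbols (illustrative values only)
h = np.array([1.0, 0.5, 0.2])
H = channel_matrix(h, 5)
u = np.array([1, -1, 1, 1, -1], dtype=complex)

# Matrix-vector product agrees with linear convolution of the taps with the symbols
print(np.allclose(H @ u, np.convolve(h, u)))
```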

  6. Matrix Representation of Equalizer
     Block diagram: $\{V_m\}$ → LTI filter $\{g_\ell\}$ → $\{W_m\}$ → symbol-wise detection → $\{\hat u_m\}$
     $W_m = \langle \mathbf V, [\mathbf g]_m \rangle = [\mathbf g]_m^H \mathbf V = \underbrace{([\mathbf g]_m^H [\mathbf h]_m)\, u_m}_{\text{signal}} + \underbrace{\sum_{i \neq m} ([\mathbf g]_m^H [\mathbf h]_i)\, u_i}_{\text{ISI}} + \underbrace{\tilde Z_m}_{\text{noise}}, \qquad \tilde Z_m \triangleq [\mathbf g]_m^H \mathbf Z$
     Goal: maximize $\mathrm{SINR} = \dfrac{|\langle [\mathbf h]_m, [\mathbf g]_m \rangle|^2 E_s}{\sum_{i \neq m} |\langle [\mathbf h]_i, [\mathbf g]_m \rangle|^2 E_s + \|[\mathbf g]_m\|^2 N_0}$
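The SINR expression translates directly into code. The sketch below assumes the same hypothetical 3-tap channel as before and evaluates the SINR for the matched-filter choice $[\mathbf g]_m = [\mathbf h]_m$.

```python
import numpy as np

def sinr(H, g_m, m, Es, N0):
    """SINR at the m-th output for equalizer column g_m (slide-6 formula)."""
    signal = np.abs(np.vdot(g_m, H[:, m])) ** 2 * Es
    isi = sum(np.abs(np.vdot(g_m, H[:, i])) ** 2 * Es
              for i in range(H.shape[1]) if i != m)
    noise = np.linalg.norm(g_m) ** 2 * N0
    return signal / (isi + noise)

# Hypothetical 3-tap channel and a small banded channel matrix (illustrative only)
h = np.array([1.0, 0.5, 0.2])
n, m = 5, 2
H = np.zeros((n + len(h) - 1, n))
for k in range(n):
    H[k:k + len(h), k] = h

print(sinr(H, H[:, m], m, Es=1.0, N0=0.1))    # matched-filter choice: g_m = [h]_m
```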

  7. Low SNR Regime: $E_s \ll N_0 \Rightarrow$ Matched Filter
     $W_m = ([\mathbf g]_m^H [\mathbf h]_m)\, u_m + \sum_{i \neq m} ([\mathbf g]_m^H [\mathbf h]_i)\, u_i + \tilde Z_m$
     $\mathrm{SINR} = \dfrac{|\langle [\mathbf h]_m, [\mathbf g]_m \rangle|^2 E_s}{\sum_{i \neq m} |\langle [\mathbf h]_i, [\mathbf g]_m \rangle|^2 E_s + \|[\mathbf g]_m\|^2 N_0} \approx \left( \dfrac{|\langle [\mathbf h]_m, [\mathbf g]_m \rangle|}{\|[\mathbf g]_m\|} \right)^2 \dfrac{E_s}{N_0}$
     $\Rightarrow [\mathbf g^{(\mathrm{MF})}]_m = [\mathbf h]_m$ (the noise term dominates the denominator, and the ratio is maximized, by Cauchy-Schwarz, when $[\mathbf g]_m$ is aligned with $[\mathbf h]_m$)

  8. Matched Filter
     $W_m = h_0^* V_m + h_1^* V_{m+1} + \cdots + h_{L-1}^* V_{m+L-1} = \sum_{\ell=0}^{L-1} h_\ell^* V_{m+\ell} = \sum_{\ell=-(L-1)}^{0} h_{-\ell}^* V_{m-\ell} = \sum_{\ell=-(L-1)}^{0} g_\ell^{(\mathrm{MF})} V_{m-\ell}$, with $g_\ell^{(\mathrm{MF})} = h_{-\ell}^*$
     $\Rightarrow \check g^{(\mathrm{MF})}(\zeta) = \check h^*(1/\zeta^*)$, i.e., $\breve g^{(\mathrm{MF})}(f) = \breve h^*(f)$
     The matched filter projects the received signal onto the signal direction, so that the collected signal energy is maximized.
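A small sketch of the matched filter as convolution with the conjugate-reversed taps; the channel taps and noise level are illustrative assumptions, and the result is checked against the definition above at one index.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical taps and a noisy received sequence (illustrative values only)
h = np.array([1.0, 0.5, 0.2])
n = 8
u = rng.choice([-1.0, 1.0], n)
v = np.convolve(h, u) + 0.05 * rng.standard_normal(n + len(h) - 1)

# Matched filter g_ell^(MF) = conj(h_{-ell}): convolve with the conjugate-reversed taps.
# 'full' convolution followed by slicing picks out W_m = sum_ell conj(h_ell) V_{m+ell}.
w_full = np.convolve(np.conj(h[::-1]), v)
w = w_full[len(h) - 1 : len(h) - 1 + n]       # align so w[m] corresponds to symbol u_m

# Direct check against the slide-8 definition for one index m
m = 3
w_direct = sum(np.conj(h[l]) * v[m + l] for l in range(len(h)))
print(np.isclose(w[m], w_direct))
```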

  9. High SNR Regime: $E_s \gg N_0 \Rightarrow$ Zero Forcing
     $W_m = ([\mathbf g]_m^H [\mathbf h]_m)\, u_m + \sum_{i \neq m} ([\mathbf g]_m^H [\mathbf h]_i)\, u_i + \tilde Z_m$
     $\mathrm{SINR} = \dfrac{|\langle [\mathbf h]_m, [\mathbf g]_m \rangle|^2 E_s}{\sum_{i \neq m} |\langle [\mathbf h]_i, [\mathbf g]_m \rangle|^2 E_s + \|[\mathbf g]_m\|^2 N_0}$
     When the ISI term dominates the noise, the best strategy is to null it out: $[\mathbf g^{(\mathrm{ZF})}]_m \perp [\mathbf h]_i,\ \forall i \neq m$
     One choice: $[\mathbf g^{(\mathrm{ZF})}]_m = (\mathbf h^\dagger)^H \mathbf e_m = \mathbf h (\mathbf h^H \mathbf h)^{-1} \mathbf e_m$
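A sketch of the zero-forcing column via the pseudo-inverse, for the same hypothetical 3-tap channel; the printout confirms that $[\mathbf g^{(\mathrm{ZF})}]_m$ nulls every interference column and passes $[\mathbf h]_m$ with unit gain.

```python
import numpy as np

# Hypothetical channel matrix from the earlier sketch (illustrative values only)
h = np.array([1.0, 0.5, 0.2])
n, m = 5, 2
H = np.zeros((n + len(h) - 1, n))
for k in range(n):
    H[k:k + len(h), k] = h

# Zero-forcing column: [g_ZF]_m = (H^dagger)^H e_m = H (H^H H)^{-1} e_m
e_m = np.zeros(n)
e_m[m] = 1.0
g_zf = np.linalg.pinv(H).conj().T @ e_m       # equivalently H @ inv(H.conj().T @ H) @ e_m

# The ZF column is orthogonal to every interference column [h]_i (i != m)
# and has unit inner product with [h]_m
print(np.round(g_zf @ H, 6))                   # ~ e_m: 1 at position m, 0 elsewhere
```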

  10. Geometric Interpretation
      (Figure) The matched-filter direction $[\mathbf g^{(\mathrm{MF})}]_m \equiv [\mathbf h]_m$, and the zero-forcing direction $[\mathbf g^{(\mathrm{ZF})}]_m$, which is orthogonal to the interference subspace spanned by the other $n-1$ columns $\{[\mathbf h]_i\}_{i \neq m}$ of $\mathbf h$.

  11. Max. SINR ≡ Min. MSE
      Linear equalizer: $\{V_m\}$ → $\{g_k\}$ → $\{W_m\}$
      $W_m = \sum_k g_k V_{m-k} = \sum_k \sum_{\ell=0}^{L-1} g_k h_\ell\, u_{m-k-\ell} + \sum_k g_k Z_{m-k} = \Big( \sum_{\ell=0}^{L-1} g_{-\ell} h_\ell \Big) u_m + \tilde I_m + \tilde Z_m$
      The coefficient $\sum_{\ell=0}^{L-1} g_{-\ell} h_\ell$ in front of $u_m$ is the same for all $m$; WLOG assume it equals 1, so that $W_m = u_m + \tilde I_m + \tilde Z_m = u_m + \Xi_m$, where $\Xi_m \triangleq \tilde I_m + \tilde Z_m$ is a kind of estimation error.
      $\mathrm{SINR} = \dfrac{E\big[|U_m|^2\big]}{E\big[|\Xi_m|^2\big]} = \dfrac{E_s}{E\big[|\Xi_m|^2\big]}$, so maximizing the SINR $\equiv$ minimizing the mean squared error (MSE) $E\big[|\Xi_m|^2\big]$ (a numerical check follows this slide).
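The equivalence can be checked numerically: normalize an equalizer so the coefficient on $u_m$ equals 1, compute the analytic ISI-plus-noise power, and compare it with the empirical $E[|W_m - u_m|^2]$. The channel taps, the matched-filter choice of $\{g_k\}$, and the parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical channel, matched-filter equalizer, and parameters (illustrative only)
h = np.array([1.0, 0.5, 0.2])
g = np.conj(h[::-1])                  # MF taps g_ell = conj(h_{-ell}), supported on ell = -(L-1)..0
Es, N0 = 1.0, 0.1

# Combined response c = g * h; normalize so the coefficient in front of u_m is 1
c = np.convolve(g, h)
cursor = len(h) - 1                   # array index of the coefficient on u_m for this support convention
g = g / c[cursor]
c = c / c[cursor]

# Analytic MSE E[|Xi_m|^2] = ISI power + noise power, and the resulting SINR
mse = Es * np.sum(np.abs(np.delete(c, cursor)) ** 2) + N0 * np.sum(np.abs(g) ** 2)
print("SINR =", Es / mse)

# Monte Carlo check of E[|W_m - u_m|^2]
n = 200_000
u = rng.choice([-1.0, 1.0], n) * np.sqrt(Es)
v = np.convolve(h, u)[:n] + np.sqrt(N0) * rng.standard_normal(n)
w = np.convolve(g, v)[cursor:cursor + n]      # align so w[m] sits on symbol u_m
print("empirical MSE =", np.mean(np.abs(w[len(h):n - len(h)] - u[len(h):n - len(h)]) ** 2),
      "vs analytic", mse)
```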

  12. Minimum MSE Estimation
      • In general, one can consider the following estimation problem:
        ‣ Given a random observation, estimate a target such that the MSE is minimized.
      Setup: joint distribution $P_{X,Y}$; target $X$, observation $Y$, estimator $\hat X = g(Y)$ with $g(\cdot)$ in some family $\mathcal H$.
        ‣ For random processes $\{X_n\}, \{Y_n\}$: $\mathcal H$ = LTI filters (FIR/IIR, causal/non-causal)
        ‣ For random vectors $\mathbf X, \mathbf Y$: $\mathcal H$ = general functions / linear functions
      $\mathrm{MSE}(X, \hat X) \triangleq E\big[|X - \hat X|^2\big]$, and $g^{(\mathrm{MMSE})}(\cdot) = \operatorname*{argmin}_{g \in \mathcal H} \mathrm{MSE}(X, g(Y))$
        ‣ You might be familiar with the general (unconstrained) case: $g^{(\mathrm{MMSE})}(Y) = E[X \mid Y]$
      • Here, we focus on the random process case and linear estimators without any causality or finite-tap constraints.
        ‣ After deriving the optimal filter for MMSE estimation, we apply it back to the original problem.

  13. Recap: Discrete-Time Random Process
      First moment: $\mu_X[n] \triangleq E[X_n]$; for a WSS process, $\mu_X[n] \equiv \mu_X$.
      Second moment (auto-correlation): $R_X[n_1, n_2] \triangleq E[X_{n_1} X_{n_2}^*]$; for a WSS process, $R_X[n+k, n] \equiv R_X[k]$, with $R_X[-k] = R_X^*[k]$.
      Cross-correlation: $R_{XY}[n_1, n_2] \triangleq E[X_{n_1} Y_{n_2}^*]$; for jointly WSS processes, $R_{XY}[n+k, n] \equiv R_{XY}[k]$, with $R_{YX}[k] = R_{XY}^*[-k]$.
      PSD: $R_X[k] \leftrightarrow S_X(\zeta)$, $R_{XY}[k] \leftrightarrow S_{XY}(\zeta)$, and $S_{YX}(\zeta) = S_{XY}^*(1/\zeta^*)$.

  14. Recap: Filtering Random Processes
      $X_1[n] \to h_1[n] \to Y_1[n] = (X_1 * h_1)[n]$ and $X_2[n] \to h_2[n] \to Y_2[n] = (X_2 * h_2)[n]$, with $\{X_1[n]\}, \{X_2[n]\}$ jointly WSS; the outputs are then jointly WSS as well.
      Cross-correlation: $R_{Y_1, Y_2}[k] = \big(h_1 * R_{X_1, X_2} * h_{2,\mathrm{rv}}^*\big)[k]$, where $h_{2,\mathrm{rv}}^*[k] \triangleq h_2^*[-k]$ is the conjugate-reversed filter.
      Cross PSD: $S_{Y_1, Y_2}(\zeta) = \check h_1(\zeta)\, S_{X_1, X_2}(\zeta)\, \check h_2^*(1/\zeta^*)$

  15. Derivation of the Optimal Filter
      Estimation via a linear filter: jointly WSS $\{Y_n\}$ → $\{g_k\} \leftrightarrow \check g(\zeta)$ → $\{\hat X_n\} = \{(g * Y)_n\}$, estimating the target $\{X_n\}$.
      $\mathrm{MSE} \triangleq E\big[|X_n - \hat X_n|^2\big]$, with error $\Xi_n \triangleq X_n - \hat X_n$ (also WSS, so the MSE does not depend on $n$). Goal: $\{g_k^{(\mathrm{MMSE})}\} = \operatorname*{argmin}_{\{g_k\}} \mathrm{MSE}$.
      Note that $\mathrm{MSE} = E\big[(X_n - \hat X_n)(X_n - \hat X_n)^*\big] = E\Big[\Xi_n \big(X_n - \sum_k g_k Y_{n-k}\big)^*\Big]$. Setting the derivative to zero,
      $\forall k:\quad 0 = \dfrac{\partial}{\partial g_k^*}\, \mathrm{MSE} = -E[\Xi_n Y_{n-k}^*] = E[(g * Y)_n\, Y_{n-k}^*] - E[X_n Y_{n-k}^*]$
      $\Leftrightarrow\ \forall k:\ (g * R_Y)[k] = R_{XY}[k]\ \Leftrightarrow\ \check g(\zeta)\, S_Y(\zeta) = S_{XY}(\zeta)$
      Solution: $\check g^{(\mathrm{MMSE})}(\zeta) = (S_Y(\zeta))^{-1} S_{XY}(\zeta)$ (the non-causal IIR Wiener filter)
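A sketch of the non-causal Wiener filter applied in the DFT domain. The AR(1)-signal-in-white-noise model is an assumption made only for this example (it is not from the slides); with it the PSDs are known in closed form, and the empirical MSE comes out close to the minimum-MSE integral derived on slide 18.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed toy model (not from the slides): X is an AR(1) process observed in white noise
a, sig_v, sig_w = 0.9, 1.0, 1.0               # AR coefficient, driving- and observation-noise std
n = 2 ** 16
drive = sig_v * rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + drive[t]
y = x + sig_w * rng.standard_normal(n)

# Analytic PSDs on the DFT grid: S_X(f) = sig_v^2 / |1 - a e^{-j2pi f}|^2, S_XY = S_X, S_Y = S_X + sig_w^2
f = np.fft.fftfreq(n)
S_X = sig_v ** 2 / np.abs(1 - a * np.exp(-2j * np.pi * f)) ** 2
S_XY = S_X
S_Y = S_X + sig_w ** 2

# Non-causal Wiener filter G(f) = S_XY(f) / S_Y(f), applied in the DFT domain (circular approximation)
G = S_XY / S_Y
x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))

print("empirical MSE:", np.mean((x - x_hat) ** 2))
print("analytic minimum MSE:", np.mean(S_X - np.abs(S_XY) ** 2 / S_Y))   # Riemann sum of the slide-18 integral
```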

  16. Orthogonality Principle
      • A key equation in deriving the optimal estimator is
        $E[\Xi_n Y_{n-k}^*] = 0,\ \forall k \iff \langle \Xi_n, (f * Y)_n \rangle = 0$ for any filter $\{f_\ell\}$
        ‣ For two r.v.'s $(X, Y)$, we define the "inner product" as $\langle X, Y \rangle \triangleq E[XY^*]$
        ‣ (you can check the axioms of an inner product space ...)
      • A geometric interpretation: for an estimator that minimizes the MSE, its estimation error should be "orthogonal" to any estimator that one can choose (checked numerically in the sketch below).
        ‣ Caveat: the family of estimators (which are also r.v.'s) should form a "subspace" of the r.v. inner product space.
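The orthogonality principle is easy to check empirically: for the Wiener filter of the previous sketch, the sample cross-correlation $E[\Xi_n Y_{n-k}^*]$ is near zero at every lag, while an arbitrary suboptimal filter leaves it visibly nonzero. The AR(1) model and the constant "0.5" filter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Same assumed AR(1)-plus-noise model as the previous sketch (illustrative only)
a, n = 0.9, 2 ** 16
drive = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + drive[t]
y = x + rng.standard_normal(n)

f = np.fft.fftfreq(n)
S_X = 1.0 / np.abs(1 - a * np.exp(-2j * np.pi * f)) ** 2
G_opt = S_X / (S_X + 1.0)                      # Wiener filter
G_bad = 0.5 * np.ones(n)                       # an arbitrary suboptimal "filter" for contrast

def err_cross_corr(G, k):
    """Estimate E[Xi_n Y_{n-k}] (real signals, so conjugation is moot)."""
    xi = x - np.real(np.fft.ifft(G * np.fft.fft(y)))
    return np.mean(xi * np.roll(y, k))          # roll(y, k)[n] = y[n-k]

for k in [0, 1, 5]:
    print(f"k={k}: optimal {err_cross_corr(G_opt, k):+.4f}   suboptimal {err_cross_corr(G_bad, k):+.4f}")
```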

  17. (Figure) The target $X$, its estimate $\hat X^{(\mathrm{MMSE})}(Y)$ lying in the estimator subspace generated by the observation $Y$, and the estimation error $\Xi^{(\mathrm{MMSE})}$ orthogonal to that subspace.

  18. The Minimum MSE
      $\min \mathrm{MSE} = E[\Xi_n \Xi_n^*] = E[\Xi_n X_n^*]$ (the cross term $E[\Xi_n \hat X_n^*]$ vanishes by orthogonality)
      $= E[X_n X_n^*] - E\big[(g^{(\mathrm{MMSE})} * Y)_n X_n^*\big] = R_X[0] - \sum_k g_k^{(\mathrm{MMSE})} R_{YX}[-k] = R_X[0] - \big(g^{(\mathrm{MMSE})} * R_{YX}\big)[0]$
      $= \int_{-1/2}^{1/2} \Big( S_X(f) - \breve g^{(\mathrm{MMSE})}(f)\, S_{YX}(f) \Big)\, df = \int_{-1/2}^{1/2} \left( S_X(f) - \frac{|S_{XY}(f)|^2}{S_Y(f)} \right) df$
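Both forms of the minimum MSE, the time-domain $R_X[0] - (g^{(\mathrm{MMSE})} * R_{YX})[0]$ and the frequency-domain integral, can be evaluated on a DFT grid and compared. The sketch reuses the assumed AR(1)-plus-noise PSDs and treats the sequences as periodic, which is only an approximation of the true integrals and sums.

```python
import numpy as np

# Same assumed AR(1)-plus-noise PSDs as in the earlier Wiener-filter sketch (illustrative only)
a = 0.9
n = 2 ** 14
f = np.fft.fftfreq(n)
S_X = 1.0 / np.abs(1 - a * np.exp(-2j * np.pi * f)) ** 2
S_Y = S_X + 1.0                       # observation-noise PSD = 1
S_XY = S_X                            # X and the observation noise are independent
S_YX = np.conj(S_XY)                  # S_YX(f) = S_XY^*(f)
G = S_XY / S_Y                        # Wiener filter in the frequency domain

# Frequency-domain form: integral of S_X - |S_XY|^2 / S_Y (Riemann sum over the grid)
mmse_freq = np.mean(S_X - np.abs(S_XY) ** 2 / S_Y)

# Time-domain form: R_X[0] - sum_k g_k R_YX[-k], via inverse DFTs (periodic approximation)
R_X0 = np.mean(S_X)                   # R_X[0] = integral of S_X(f)
g = np.fft.ifft(G).real               # Wiener filter taps g_k
R_YX = np.fft.ifft(S_YX).real         # cross-correlation R_YX[k]
k = np.arange(n)
mmse_time = R_X0 - np.sum(g * R_YX[(-k) % n])

print(mmse_freq, mmse_time)           # the two forms agree
```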

  19. Other Kinds of Wiener Filter
      • FIR Wiener filter
      • IIR causal Wiener filter

  20. Optimal Linear Equalizer
      Back to our problem of linear equalization: $V_m = (h * U)_m + Z_m$, with $S_U(\zeta) = E_s$ and $S_Z(\zeta) = N_0$.
      Identify the target and observation of the Wiener problem as $\{X_n\} = \{U_m\}$ and $\{Y_n\} = \{V_m\}$:
      $S_V(\zeta) = \check h(\zeta)\, S_U(\zeta)\, \check h^*(1/\zeta^*) + S_Z(\zeta)$ and $S_{UV}(\zeta) = S_U(\zeta)\, \check h^*(1/\zeta^*)$
      Optimal linear equalizer: $\check g^{(\mathrm{MMSE})}(\zeta) = \dfrac{S_{UV}(\zeta)}{S_V(\zeta)} = \dfrac{E_s\, \check h^*(1/\zeta^*)}{E_s\, \check h^*(1/\zeta^*)\, \check h(\zeta) + N_0}$
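Putting the pieces together: the sketch below applies the MMSE equalizer of this slide in the DFT domain to the hypothetical two-tap channel used earlier, and compares symbol error rates with and without equalization. The channel, $E_s$, $N_0$, and BPSK signaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical two-tap channel and BPSK symbols, as in the first sketch (illustrative only)
h = np.array([1.0, 0.8])
Es, N0, n = 1.0, 0.1, 100_000
u = rng.choice([-1.0, 1.0], n) * np.sqrt(Es)
v = np.convolve(h, u)[:n] + np.sqrt(N0) * rng.standard_normal(n)

# MMSE equalizer in the DFT domain: G(f) = Es h*(f) / (Es |h(f)|^2 + N0)  (circular approximation)
H = np.fft.fft(h, n)                           # samples of h(f) on n uniform frequencies
G = Es * np.conj(H) / (Es * np.abs(H) ** 2 + N0)
w = np.real(np.fft.ifft(G * np.fft.fft(v)))

print("SER without equalization:", np.mean(np.sign(v) != np.sign(u)))
print("SER with MMSE equalization:", np.mean(np.sign(w) != np.sign(u)))
```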

  21. The Maximum SINR
      $\max \mathrm{SINR} = \dfrac{E_s}{\min \mathrm{MSE}} = \dfrac{1}{\displaystyle\int_{-1/2}^{1/2} \Big( \tfrac{E_s}{N_0}\, |\breve h(f)|^2 + 1 \Big)^{-1} df}$
      where
      $\min \mathrm{MSE} = \int_{-1/2}^{1/2} \left( S_U(f) - \frac{|S_{UV}(f)|^2}{S_V(f)} \right) df = \int_{-1/2}^{1/2} \left( E_s - \frac{E_s^2\, |\breve h(f)|^2}{E_s\, |\breve h(f)|^2 + N_0} \right) df = E_s \int_{-1/2}^{1/2} \frac{1}{\tfrac{E_s}{N_0}\, |\breve h(f)|^2 + 1}\, df$
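Finally, the closed-form minimum MSE above can be evaluated on a frequency grid and checked against simulation: the measured $E[|U_m - W_m|^2]$ at the MMSE equalizer output lands close to the formula. The channel and parameter values are again illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical two-tap channel and parameters, as in the previous sketches (illustrative only)
h = np.array([1.0, 0.8])
Es, N0, n = 1.0, 0.1, 200_000

# Closed-form minimum MSE and maximum SINR from slide 21, evaluated on a frequency grid
Hf = np.fft.fft(h, n)                          # samples of h(f) on n uniform frequencies
min_mse = Es * np.mean(1.0 / (Es / N0 * np.abs(Hf) ** 2 + 1.0))
print("min MSE (formula):", min_mse, " max SINR (formula):", Es / min_mse)

# Monte Carlo: MSE at the output of the MMSE equalizer
u = rng.choice([-1.0, 1.0], n) * np.sqrt(Es)
v = np.convolve(h, u)[:n] + np.sqrt(N0) * rng.standard_normal(n)
G = Es * np.conj(Hf) / (Es * np.abs(Hf) ** 2 + N0)
w = np.real(np.fft.ifft(G * np.fft.fft(v)))
print("min MSE (simulated):", np.mean((u - w) ** 2))
```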
