
Timing Recovery at Low SNR: Cramer-Rao bound, and outperforming the PLL - PowerPoint PPT Presentation



  1. Timing Recovery at Low SNR: Cramer-Rao bound, and outperforming the PLL
  Aravind R. Nayak, John R. Barry, Steven W. McLaughlin
  {nayak, barry, swm}@ece.gatech.edu
  Georgia Institute of Technology

  2. Communication system model
  [Block diagram: on the transmitter side, Source → ECC Encoder → Modulator → Channel; on the receiver side, Channel → Sampler → Equalizer → ECC Decoder; the discrete-time / continuous-time boundary lies at the modulator and the sampler.]

  3. Continuous-time / discrete-time interface
  [Block diagram: the discrete-time transmitter path (Source → ECC Encoder) meets the continuous-time channel at the Modulator; on the receiver side, the Sampler converts the continuous-time channel output back to discrete time for the Equalizer and ECC Decoder.]

  4. Sampling: Timing recovery
  [Figure: a time axis marked 0, T, 2T, 3T with data symbols a_0, a_1, a_2, a_3 and timing offsets τ_0, τ_1, τ_2, τ_3.]
  T – symbol duration; a_0, a_1, a_2, ... – data symbols; τ_0, τ_1, τ_2, ... – timing offsets.
  Timing recovery problem: estimate τ_0, τ_1, τ_2, ...

  5. Timing offset models
  • Constant offset: τ_k = τ
  • Frequency offset: τ_k = τ_{k−1} + ∆T = τ_0 + k∆T
  • Random walk: τ_{k+1} = τ_k + w_k = τ_0 + Σ_{i=0}^{k} w_i,
  where the w_i are i.i.d. zero-mean Gaussian random variables of variance σ_w². σ_w determines the severity of the random walk.
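
The following minimal NumPy sketch (not part of the original slides) generates example trajectories for the three offset models above; T, tau0, delta_T and sigma_w are hypothetical parameters chosen only for illustration.

```python
import numpy as np

def timing_offsets(N, T=1.0, tau0=0.1, delta_T=1e-3, sigma_w=0.005, seed=0):
    """Example trajectories for the three timing-offset models (sigma_w in units of T)."""
    rng = np.random.default_rng(seed)
    k = np.arange(N)

    constant = np.full(N, tau0)                     # constant offset: tau_k = tau
    frequency = tau0 + k * delta_T * T              # frequency offset: tau_k = tau_0 + k*DeltaT
    # random walk: tau_{k+1} = tau_k + w_k, w_k i.i.d. zero-mean Gaussian, variance sigma_w^2
    w = rng.normal(0.0, sigma_w * T, size=N - 1)
    random_walk = tau0 + np.concatenate(([0.0], np.cumsum(w)))

    return constant, frequency, random_walk

const, freq, walk = timing_offsets(N=5000)
```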

  6. Timing recovery in two stages
  Acquisition:
  • Estimate τ_0
  • Correlation techniques
  • Known preamble sequence at start of packet (Trained mode)
  • Parameter τ_0 spans a large range
  Tracking:
  • Keep track of τ_1, τ_2, τ_3, ...
  • Based on the phase-locked loop (PLL)
  • Data symbols unknown (Decision-directed mode)
  • Sufficient to track small signals τ_1 − τ_0, τ_2 − τ_1, τ_3 − τ_2, ...

  7. PLL: Motivation
  Consider the simple case of a time-invariant offset: τ_k = τ.
  Let τ̂_i be the current timing estimate. Timing error: ε_i = τ_i − τ̂_i = τ − τ̂_i.
  With a perfect timing error detector (TED), we get ε̂_i = ε_i.
  Update: τ̂_{i+1} = τ̂_i + ε̂_i = τ
  With imperfect TED: τ̂_{i+1} = τ̂_i + αε̂_i

  8. PLL-based timing recovery
  [Block diagram: the received signal r(t) passes through the receive filter f(t) to give y(t), which is sampled at kT + τ̂_k to produce r_k for further processing; the PLL (T.E.D. and UPDATE blocks) generates ε̂_k and the timing estimate τ̂_k.]
  First-order PLL: τ̂_{k+1} = τ̂_k + αε̂_k
  Second-order PLL: τ̂_{k+1} = τ̂_k + αε̂_k + β Σ_{i=0}^{k−1} ε̂_i
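
A small sketch (my own, not from the slides) of the first- and second-order update recursions above, assuming the TED outputs ε̂_k are already available as an array; alpha and beta are hypothetical loop gains.

```python
import numpy as np

def pll_updates(eps_hat, alpha=0.03, beta=1e-3, second_order=False):
    """Run the PLL timing-update recursion on a sequence of TED outputs eps_hat."""
    tau_hat = np.zeros(len(eps_hat) + 1)    # tau_hat[0] is the initial timing estimate
    accum = 0.0                             # running sum of past TED outputs (2nd-order term)
    for k, e in enumerate(eps_hat):
        tau_hat[k + 1] = tau_hat[k] + alpha * e
        if second_order:
            tau_hat[k + 1] += beta * accum  # beta * sum_{i=0}^{k-1} eps_hat[i]
        accum += e
    return tau_hat
```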

  9. Timing Error Detector (TED)
  [Block diagram: as on the previous slide, with the T.E.D. taking the samples r_k and decisions d̂_k and producing ε̂_k.]
  Mueller & Müller Timing Error Detector: ε̂_k = (3T/16)·(r_k d̂_{k−1} − r_{k−1} d̂_k)
  • TED is a decision-directed device
  • Usually, instantaneous hard quantization
  • Better decisions entail delay that destabilizes the loop
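
A sketch of a decision-directed Mueller & Müller TED driving the first-order update, assuming binary (±1) symbols and a hard-decision slicer; the 3T/16 scale follows the expression above as reconstructed, and for simplicity the re-sampling of y(t) at kT + τ̂_k is omitted (the TED runs over a fixed sample stream), so this is illustrative only.

```python
import numpy as np

def mm_ted(r_k, r_prev, d_k, d_prev, T=1.0):
    """Mueller & Muller timing error detector (as reconstructed from the slide)."""
    return (3.0 * T / 16.0) * (r_k * d_prev - r_prev * d_k)

def dd_first_order_pll(r, alpha=0.03, T=1.0):
    """Decision-directed first-order PLL driven by the M&M TED, binary symbols."""
    tau_hat = np.zeros(len(r) + 1)
    r_prev, d_prev = 0.0, 0.0
    for k, r_k in enumerate(r):
        d_k = 1.0 if r_k >= 0 else -1.0          # instantaneous hard decision
        eps_hat = mm_ted(r_k, r_prev, d_k, d_prev, T)
        tau_hat[k + 1] = tau_hat[k] + alpha * eps_hat
        r_prev, d_prev = r_k, d_k
    return tau_hat
```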

  10. Improving timing recovery
  • Improve the quality of decisions (Approach I)
  ⇒ Need to get around the delay induced by better decisions.
  ⇒ Feedback from the ECC decoder and equalizer to timing recovery.
  Dr. Barry’s presentation!
  • Improve the timing recovery architecture (Approach II)
  ⇒ Need to assume perfect decisions for tractability.
  ⇒ Methods based on gradient search and projection operation.
  ⇒ Use the Cramer-Rao bound to evaluate competing methods.
  This presentation!

  11. Overview: Approach II
  Questions:
  • How good is the PLL-based system?
  • Can it be improved upon?
  Method:
  • Derive fundamental performance limits.
  • Compare the PLL performance with these limits.
  • Develop methods that outperform the PLL.

  12. Problem statement
  We consider the following uncoded system:
  [Block diagram: uncoded i.i.d. symbols a_k pass through the channel h(t) with timing offsets τ; AWGN is added, the result is low-pass filtered and sampled uniformly at kT to give r_k, k = 0, ..., N−1.]
  The uniform samples are:
  r_k = Σ_{l=0}^{N−1} a_l h(kT − lT − τ_l) + n_k,
  where σ² is the noise variance and h(t) is the impulse response.
  Problem: given the samples {r_k} and knowledge of the channel model, estimate
  • the N uncoded i.i.d. data symbols {a_k}
  • the N timing offsets {τ_k}.
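
A sketch that generates uniform samples r_k from the model above, assuming (my assumptions, not stated on the slide) i.i.d. binary symbols, a sinc pulse h(t) = sinc(t/T), random-walk offsets, and an SNR definition based on unit symbol energy.

```python
import numpy as np

def generate_samples(N, snr_db=5.0, T=1.0, sigma_w=0.005, seed=0):
    """r_k = sum_l a_l * h(kT - lT - tau_l) + n_k, for k = 0..N-1."""
    rng = np.random.default_rng(seed)
    a = rng.choice([-1.0, 1.0], size=N)                  # uncoded i.i.d. binary symbols
    steps = rng.normal(0.0, sigma_w * T, size=N - 1)     # random-walk timing offsets
    tau = np.concatenate(([0.0], np.cumsum(steps)))

    h = lambda t: np.sinc(t / T)                         # assumed pulse shape
    k = np.arange(N)[:, None]                            # sample index
    l = np.arange(N)[None, :]                            # symbol index
    signal = (a[None, :] * h(k * T - l * T - tau[None, :])).sum(axis=1)

    sigma2 = 10.0 ** (-snr_db / 10.0)                    # noise variance (unit symbol energy)
    r = signal + rng.normal(0.0, np.sqrt(sigma2), size=N)
    return r, a, tau

r, a, tau = generate_samples(N=500)   # note: the N x N pulse matrix makes this O(N^2)
```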

  13. Cramer-Rao bound (CRB)
  The Cramer-Rao bound
  • answers the following question: “What is the best any estimator can do?”
  • is independent of the estimator itself.
  • is a lower bound on the error variance of any estimator.

  14. CRB, intuitively
  [Figure: two sketches of the likelihoods f(r; θ_1) and f(r; θ_2) versus r, one pair narrow and one pair broad.]
  θ → fixed, unknown parameter to be estimated; r → observations
  • Sensitivity of f(r; θ) to changes in θ determines the quality of estimation.
  • If f(r; θ) is narrow, then for a given r the probable θ's lie in a narrow range.
  ⇒ θ can be estimated better, i.e., with lesser error variance.
  • The CRB uses ∂/∂θ log f(r; θ) as a measure of narrowness.

  15. CRB for a random parameter
  If θ is random as opposed to being fixed and unknown,
  • θ is characterized by a p.d.f. f(θ), and
  • r, θ are characterized by the joint p.d.f. f(r, θ).
  The measure for narrowness in this case is ∂/∂θ log f(r, θ).

  16. CRB is the inverse of Fisher information
  For any unbiased estimator θ̂(r), the estimation error covariance matrix is lower bounded by
  E[(θ̂(r) − θ)(θ̂(r) − θ)^T] ≥ J⁻¹,
  where J is the information matrix given by
  J = E[(∂/∂θ log f(r, θ))(∂/∂θ log f(r, θ))^T].
  In particular, E[(θ̂_i(r) − θ_i)²] ≥ (J⁻¹)_{i,i}.
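
Not from the slides: a tiny numerical illustration of the scalar form of this bound, E[(θ̂ − θ)²] ≥ 1/J, for the textbook problem of estimating the mean of n Gaussian samples, where the sample mean is efficient and J = n/σ².

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, trials = 0.7, 2.0, 25, 200_000

# Sample-mean estimator applied to many independent batches of n observations.
r = rng.normal(theta, sigma, size=(trials, n))
theta_hat = r.mean(axis=1)

empirical_var = theta_hat.var()       # ~ sigma^2 / n
crb = sigma**2 / n                    # inverse Fisher information, J = n / sigma^2
print(empirical_var, crb)             # the two agree: the sample mean achieves the CRB
```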

  17. Efficient estimators
  • An estimator that achieves the CRB is called efficient.
  • Efficient estimators do not always exist.
  Fixed, unknown θ: ML is efficient, ∂/∂θ log f(r; θ) = 0
  Random θ: MAP is efficient, ∂/∂θ log f(r, θ) = 0

  18. CRB: lower bound on timing error variance
  Constant offset: σ_ε² ≥ σ² / (N E_{h′})
  Frequency offset: σ_ε² ≥ 6σ² / (N(N − 1)(2N − 1) E_{h′})
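
A sketch that simply evaluates the two closed-form bounds above; E_{h′} (the energy of the derivative of the pulse) and the other numbers are placeholder inputs, not values taken from the slides.

```python
def crb_constant_offset(sigma2, N, E_h_prime):
    """Constant offset: sigma_eps^2 >= sigma^2 / (N * E_h')."""
    return sigma2 / (N * E_h_prime)

def crb_frequency_offset(sigma2, N, E_h_prime):
    """Frequency offset: sigma_eps^2 >= 6*sigma^2 / (N*(N-1)*(2N-1)*E_h')."""
    return 6.0 * sigma2 / (N * (N - 1) * (2 * N - 1) * E_h_prime)

# Placeholder values, for illustration only:
print(crb_constant_offset(sigma2=0.3, N=5000, E_h_prime=3.0))
print(crb_frequency_offset(sigma2=0.3, N=5000, E_h_prime=3.0))
```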

  19. CRB for a random walk
  The Cramer-Rao bound on the error variance of any unbiased timing estimator:
  E[(τ̂_k − τ_k)²] ≥ h · f(k),
  where h = η σ_w² / (η² − 1) is the steady-state value,
  f(k) = tanh((N + 0.5) log η) − sinh((N + 0.5 − 2(k + 1)) log η) / sinh((N + 0.5) log η),
  η = λ/2 + √(λ²/4 − 1), and λ = 2 + σ_w² π² / (3 σ² T²).
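
The extracted equation above is ambiguous in places, so the following sketch should be read as evaluating the bound as reconstructed here, not as the authors' exact expression; the numerical inputs are placeholders loosely modeled on the parameters of the next slide.

```python
import numpy as np

def random_walk_crb(N, sigma2, sigma_w2, T=1.0):
    """Evaluate h*f(k) for k = 0..N-1 using the expressions as reconstructed above."""
    lam = 2.0 + sigma_w2 * np.pi**2 / (3.0 * sigma2 * T**2)
    eta = lam / 2.0 + np.sqrt(lam**2 / 4.0 - 1.0)
    h = eta * sigma_w2 / (eta**2 - 1.0)               # steady-state value
    t = np.log(eta)
    k = np.arange(N)
    f = np.tanh((N + 0.5) * t) - np.sinh((N + 0.5 - 2.0 * (k + 1)) * t) / np.sinh((N + 0.5) * t)
    return h * f, h

bound, h = random_walk_crb(N=5000, sigma2=10 ** (-0.5), sigma_w2=(5e-4) ** 2)
```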

  20. CRB: Steady-state value
  [Plot of the bound on σ_ε/T (%) versus time (0 to 5000), with the steady-state value h marked; vertical axis from 0 to 1.2%. Parameters: SNR_bit = 5 dB, σ_w/T = 0.05%, N = 5000.]
  Steady-state value becomes more representative as SNR and N increase.

  21. Trained PLL away from the CRB
  [Plot of estimation error jitter σ_ε/T (%) versus time (bit periods, 0 to 4100) for the trained PLL with α optimized, compared with the CRLB; vertical axis from 0 to 5%. Parameters: SNR = 5 dB, N = 4095, σ_w/T = 0.7%, α = 0.03, 10000 sectors.]
  Trained PLL does not achieve the steady-state CRB.

  22. Trained PLL vs. Steady-state CRB
  [Plot of estimation error jitter σ_ε/T (%) versus SNR (dB, 0 to 30) for the trained PLL with loop gains α = 0.01, 0.02, 0.03, 0.05, 0.1, 0.2, compared with the CRB; a 7 dB gap is annotated. Parameters: σ_w/T = 0.5%, N = 500, 1000 trials.]

  23. Outperforming the PLL: Block processing
  • As in the random walk case, the PLL does not achieve the CRB in the constant offset and frequency offset cases.
  • Using Kalman filtering analysis, we can show that the PLL is the optimal causal timing recovery scheme.
  ⇒ Eliminate the causality constraint to improve performance.
  ⇒ Block processing.

  24. Constant offset: Gradient search
  The trained maximum-likelihood (ML) estimator picks τ̂ to minimize
  J(τ̂; a) = Σ_{k=−∞}^{∞} ( r_k − Σ_l a_l h(kT − lT − τ̂) )²
  This minimization can be implemented using gradient descent:
  τ̂_{i+1} = τ̂_i − μ J′(τ̂_i; a)
  • Initialization using PLL.
  • Without training, use J(τ̂; â) instead of J(τ̂; a).
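
A sketch of the trained-ML gradient search above for a single constant offset, with a finite sum in place of the infinite one, an assumed sinc pulse, and a numerically differentiated cost; mu and iters are hypothetical tuning choices.

```python
import numpy as np

def ml_cost_and_grad(tau_hat, r, a, T=1.0):
    """J(tau_hat; a) = sum_k (r_k - sum_l a_l h(kT - lT - tau_hat))^2 and dJ/dtau_hat."""
    h = lambda t: np.sinc(t / T)                              # assumed pulse shape
    dh = lambda t, d=1e-6: (h(t + d) - h(t - d)) / (2 * d)    # derivative of h (numerical)
    k = np.arange(len(r))[:, None]
    l = np.arange(len(a))[None, :]
    arg = k * T - l * T - tau_hat
    err = r - (a[None, :] * h(arg)).sum(axis=1)
    grad = 2.0 * np.sum(err * (a[None, :] * dh(arg)).sum(axis=1))
    return np.sum(err ** 2), grad

def ml_timing_estimate(r, a, tau_init=0.0, mu=1e-4, iters=200):
    """Gradient descent tau_{i+1} = tau_i - mu*J'(tau_i; a); tau_init would come from a PLL."""
    tau_hat = tau_init
    for _ in range(iters):
        _, grad = ml_cost_and_grad(tau_hat, r, a)
        tau_hat -= mu * grad
    return tau_hat
```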

  25. Trained ML achieves CRB
  [Plot of RMS timing error σ_ε/T (0.2% to 20%) versus SNR (dB, −8 to 7), comparing decision-directed ML, decision-directed PLL, trained PLL, and trained ML; the trained ML curve lies on the CRB. Parameters: τ/T = π/20, α = 0.01, N = 5000.]
  Two ways to improve performance over the conventional PLL:
  * Better architecture – ML for example.
  * Better decisions – exploit error correction codes.
