Extending the square root method to account for model noise in the ensemble Kalman filter




  1. Extending the square root method to account for model noise in the ensemble Kalman filter
     Patrick Nima Raanes∗,1,2, Alberto Carrassi1, and Laurent Bertino1
     1 Nansen Environmental and Remote Sensing Center; 2 Mathematical Institute, University of Oxford
     Os, June 9, 2015
     ∗ email: patrick.n.raanes@gmail.com

  2. Paper
     MWR special issue on DA, 2015?
     Abstract: A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. Primarily intended to replace additive, pseudo-random noise simulation, the core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise and multiplicative inflation.

  3. Model noise – Problem statement
     Assume
       $x_{t+1} = f(x_t) + q_t, \qquad q_t \sim N(0, Q)$,   (1)
     with $f$ and $Q = \mathrm{Cov}(q)$ perfectly known. Then we want the forecast ensemble to satisfy
       $\bar{P}^f = \bar{P} + Q$,   (2)
     where
       $\bar{P} = \frac{1}{N-1} \sum_n (x_n - \bar{x})(x_n - \bar{x})^T$.   (3)
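     As an illustration of eqns. (2)–(3), here is a minimal numpy sketch that computes the anomaly matrix, the sample covariance $\bar{P}$, and the target covariance $\bar{P} + Q$; the dimensions, the ensemble, and Q are made up for illustration and are not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 3, 10                            # state dimension and ensemble size (illustrative)

E = rng.normal(size=(m, N))             # forecast ensemble before noise incorporation
Q = 0.1 * np.eye(m)                     # model-noise covariance, assumed known

x_bar = E.mean(axis=1, keepdims=True)
A = E - x_bar                           # anomaly matrix
P_bar = A @ A.T / (N - 1)               # eqn. (3): sample covariance

P_target = P_bar + Q                    # eqn. (2): what the noise-incorporated
                                        # ensemble covariance should equal
```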


  5. Outline: Sqrt-Core; Initial comparisons; Residual noise treatment; Further experiments


  7. Lessons learnt over the past 15 years
     Any square root update, $A \mapsto A T$, will
     ◮ deterministically match covariance relations
     ◮ preserve the ensemble subspace
     ◮ satisfy linear, homogeneous equality constraints ⋆
     Furthermore, the "symmetric choice", $A \mapsto A T_s$, will
     ◮ preserve the mean
     ◮ satisfy linear, inhomogeneous constraints ⋆
     ◮ satisfy the first-order approximation to non-linear constraints ⋆
     ◮ minimise ensemble displacement ⋆
     ◮ yield equally likely realisations ⋆
     ⋆: (plausibly) improves the "dynamical consistency" of the realisations.
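     To make the covariance, subspace and mean claims concrete, here is a minimal numpy sketch using an ETKF-style symmetric transform as the square root update; the observation operator H, the noise covariance R, and all dimensions are invented for illustration and are not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, N = 8, 4, 5                        # state dim, obs dim, ensemble size (made up)

E = rng.normal(size=(m, N))              # toy ensemble (columns are members)
A = E - E.mean(axis=1, keepdims=True)    # anomalies: A @ ones = 0

H = rng.normal(size=(p, m))              # toy linear observation operator
R = np.eye(p)                            # toy observation error covariance
Y = H @ A                                # observation-space anomalies, Y @ ones = 0

# Symmetric ("square root") ETKF transform: T_s = [I_N + Y^T R^{-1} Y / (N-1)]^{-1/2}
M = np.eye(N) + Y.T @ np.linalg.solve(R, Y) / (N - 1)
w, V = np.linalg.eigh(M)
T_s = V @ np.diag(w ** -0.5) @ V.T

A_new = A @ T_s
ones = np.ones(N)

print(np.allclose(A_new @ ones, 0))                        # mean preserved
print(np.linalg.matrix_rank(np.hstack([A, A_new]))
      == np.linalg.matrix_rank(A))                         # ensemble subspace preserved
print(np.allclose(A_new @ A_new.T,
                  A @ np.linalg.inv(M) @ A.T))             # covariance relation matched
```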


  9. Sqrt-Core
     Eqn. (2), $\bar{P}^f = \bar{P} + Q$, can be rewritten using $\bar{P} = \frac{1}{N-1} A A^T$, yielding:
       $A^f (A^f)^T = A A^T + (N-1) Q$.   (4)
     (Brutally) factorising out $A$ using the M-P pseudoinverse $A^+$,
       $A^f (A^f)^T = A \left[ I_N + (N-1)\, A^+ Q\, (A^+)^T \right] A^T$,   (5)
     we get Sqrt-Core:
       $A^f = A T_s$,   (6)
     where $T_s$ is the symmetric square root of the middle factor in eqn. (5).
     We also see that the problem of eqn. (4) is ill-posed...
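     The following is a minimal numpy sketch of eqns. (5)–(6), written here for illustration (it is not the authors' code; the test dimensions and Q are arbitrary). It also checks eqn. (7) from the next slide and mean preservation.

```python
import numpy as np
from numpy.linalg import pinv, eigh

def sqrt_core(A, Q):
    """Sqrt-Core (eqns. (5)-(6)): right-multiply the anomalies A (m x N) by the
    symmetric square root of the middle factor; Q (m x m) is the model-noise covariance."""
    N = A.shape[1]
    Aplus = pinv(A)                                   # Moore-Penrose pseudoinverse (N x m)
    M = np.eye(N) + (N - 1) * Aplus @ Q @ Aplus.T     # middle factor of eqn. (5)
    w, V = eigh((M + M.T) / 2)                        # symmetrise against round-off
    T_s = V @ np.diag(np.sqrt(np.maximum(w, 0))) @ V.T
    return A @ T_s                                    # eqn. (6)

# Illustrative check of eqn. (7) and of mean preservation (numbers are arbitrary):
rng = np.random.default_rng(1)
m, N = 6, 4
A = rng.normal(size=(m, N))
A -= A.mean(axis=1, keepdims=True)
Q = np.diag(rng.uniform(0.1, 1.0, size=m))

Af = sqrt_core(A, Q)
Pi_A = A @ pinv(A)                                    # orthogonal projector onto col(A)
Q_hat = Pi_A @ Q @ Pi_A
print(np.allclose(Af @ Af.T, A @ A.T + (N - 1) * Q_hat))    # eqn. (7)
print(np.allclose(Af.sum(axis=1), 0))                       # mean preserved
```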


  13. Sqrt-Core
     In fact, Sqrt-Core only satisfies
       $A^f (A^f)^T = A A^T + (N-1) \hat{Q}$,   (7)
     where $\hat{Q} = \Pi_A Q \Pi_A$, and $\Pi_A = A A^+$ is the orthogonal projector onto the column space of $A$.
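     A small illustrative computation (all numbers invented) of how much of the prescribed noise variance survives the projection in eqn. (7): with $Q = I$ and an ensemble of size $N$, only about $\mathrm{rank}(A) = N - 1$ of the $m$ units of variance can be added.

```python
import numpy as np
from numpy.linalg import pinv

rng = np.random.default_rng(2)
m, N = 6, 4
A = rng.normal(size=(m, N))
A -= A.mean(axis=1, keepdims=True)      # anomalies, rank N-1 (almost surely)
Q = np.eye(m)                           # prescribed model-noise covariance (toy choice)

Pi_A = A @ pinv(A)                      # orthogonal projector onto col(A)
Q_hat = Pi_A @ Q @ Pi_A                 # the part of Q that Sqrt-Core can represent
print(np.trace(Q_hat), np.trace(Q))     # ~3.0 versus 6.0: the rest lies outside col(A)
```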

  14. Outline: Sqrt-Core; Initial comparisons; Residual noise treatment; Further experiments

  15. Overview of alternatives
     Method      $A^f =$        where                                                                  thus satisfying
     Add-Q       $A + D$        $D = Q^{1/2}\,\Xi$, $\xi_n \sim N(0, I_m)$                             $E(\text{eqn. (4)})$
     Mult-1      $\lambda A$    $\lambda^2 = \mathrm{trace}(\bar{P} + Q)/\mathrm{trace}(\bar{P})$      $\mathrm{trace}(\text{eqn. (4)})$
     Mult-m      $\Lambda A$    $\Lambda^2 = \mathrm{diag}(\bar{P})^{-1}\,\mathrm{diag}(\bar{P} + Q)$  $\mathrm{diag}(\text{eqn. (4)})$
     Sqrt-Core   $A T_s$        $T_s = [\,I_N + (N-1)\, A^+ Q\, A^{+T}\,]^{1/2}$                       $\Pi_A\,(\text{eqn. (4)})\,\Pi_A$
     Also:
     ◮ Complete resampling
     ◮ 2nd-order exact sampling (Pham, 2001)
     ◮ A similar (but distinct) square root method (Nakano, 2013)
     ◮ Relaxation (Zhang et al., 2004)
     ◮ Forcing fields or boundary conditions (Shutts, 2005)
     ◮ SEIK, with forgetting factor (Pham, 2001)
     ◮ RRSQRT, with orthogonal ensemble (Heemink et al., 2001)
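     Hedged numpy sketches of the Add-Q, Mult-1 and Mult-m rows of the table (my own renderings of the formulas, not reference implementations; Sqrt-Core was sketched above, and the Cholesky factor is just one possible choice of $Q^{1/2}$):

```python
import numpy as np

def add_Q(A, Q, rng):
    """Add-Q: additive, simulated noise; satisfies eqn. (4) only in expectation."""
    m, N = A.shape
    D = np.linalg.cholesky(Q) @ rng.normal(size=(m, N))   # D = Q^{1/2} Xi
    return A + D

def mult_1(A, Q):
    """Mult-1: scalar multiplicative inflation; matches the trace of eqn. (4)."""
    N = A.shape[1]
    P_bar = A @ A.T / (N - 1)
    lam = np.sqrt(np.trace(P_bar + Q) / np.trace(P_bar))
    return lam * A

def mult_m(A, Q):
    """Mult-m: diagonal multiplicative inflation; matches the diagonal of eqn. (4)."""
    N = A.shape[1]
    P_bar = A @ A.T / (N - 1)
    Lam = np.sqrt((np.diag(P_bar) + np.diag(Q)) / np.diag(P_bar))
    return Lam[:, None] * A                                # row-wise scaling by Lambda
```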


  18. Snapshot comparison
     Figure: Snapshot of ensemble forecasts with the Lorenz-63 system after model noise incorporation (x and y axes). Methods shown: None, Add-Q, Mult-1, Mult-m, Sqrt-Core.

  19. Experimental setup
     ◮ Twin experiment: tracking a simulated "truth", $x^t$
     ◮ $\mathrm{RMSE} = \sqrt{\tfrac{1}{m}\,\lVert \bar{x} - x^t \rVert_2^2}$
     ◮ Analysis update:
       ◮ ETKF (using the symmetric square root)
       ◮ No localisation
       ◮ Inflation (for analysis update errors): tuned for Add-Q
     ◮ Baselines
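     A minimal sketch of the RMSE definition above (the helper name and the example numbers are made up):

```python
import numpy as np

def rmse(x_bar, x_truth):
    """RMSE of the ensemble mean against the simulated truth:
    sqrt( ||x_bar - x_truth||_2^2 / m )."""
    m = x_truth.size
    return np.sqrt(np.sum((x_bar - x_truth) ** 2) / m)

# e.g.
x_truth = np.array([1.0, 2.0, 3.0])
x_bar = np.array([1.1, 1.9, 3.2])
print(rmse(x_bar, x_truth))             # ~0.141
```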
