

  1. EnKF – FAQ (Ensemble Kalman filter – Frequently asked questions). Patrick N. Raanes, Geir Evensen, Andreas S. Stordal, Marc Bocquet, Alberto Carrassi. NERSC. EnKF workshop, Voss, June 4, 2019.

  2. Also answered these questions about the EnKF: Why do we use $(N-1)$ in $\frac{1}{N-1} \sum_n (x_n - \bar{x})^2$? About nonlinearity: Why does it create sampling error? Why does it cause divergence?

  3. Also answered these questions about the EnKF: Why do we prefer the Kalman gain “form”? About ensemble linearizations: What are they? Why is this rarely discussed? How does it relate to analytic derivatives?

  4. Ensemble linearizations

  5. Traditional EnKF presentation. Recall the KF gain:
     $K = C_x H^T (H C_x H^T + R)^{-1}$.  (1)
     1st idea: substitute $C_x \leftarrow \bar{C}_x = \frac{1}{N-1} X X^T$:
     $\Rightarrow \bar{K} = \bar{C}_x H^T (H \bar{C}_x H^T + R)^{-1}$  (2)
     $= X Y^T (Y Y^T + (N-1) R)^{-1}$,  (3)
     with $Y = H X$  (4)
     $= \mathcal{H}(X)$  (5)
     $= \mathcal{H}(E) - \text{mean}$.  (6)
     2nd idea: use eqn. (6) also in the nonlinear case (when $\nexists H$).
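  As a concrete illustration, here is a minimal NumPy sketch of the gain computation via eqns (3)–(6). The function name and the convention of ensemble members as columns are my own choices, not from the slides:

      import numpy as np

      def enkf_gain(E, obs_op, R):
          # Kbar = X Y^T (Y Y^T + (N-1) R)^{-1}, eqn (3).
          # E: (M, N) ensemble; obs_op: maps (M, N) -> (P, N); R: (P, P).
          N = E.shape[1]
          X = E - E.mean(axis=1, keepdims=True)    # state anomalies
          HE = obs_op(E)                           # apply H to each member
          Y = HE - HE.mean(axis=1, keepdims=True)  # obs. anomalies, eqn (6)
          return X @ Y.T @ np.linalg.inv(Y @ Y.T + (N - 1) * R)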

  6. What is the ensemble’s linearization? Recall $\bar{C}_x = \frac{1}{N-1} X X^T$, and suppose $\mathcal{H}$ is nonlinear. Question: Is there a matrix $\bar{H}$ such that $\frac{1}{N-1} X Y^T = \bar{C}_x \bar{H}^T$ and $\frac{1}{N-1} Y Y^T = \bar{H} \bar{C}_x \bar{H}^T$? Answer: Yes (mostly): $\bar{H} = Y X^+$. Follow-up questions: How come this is rarely discussed? Why $Y X^+$? Does it relate to the analytic derivative ($\mathcal{H}'$)?
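  A quick numerical check of the two identities (a sketch; the `tanh` observation operator and the dimensions are arbitrary assumptions of mine). The second identity holds exactly here because $(N-1) \le M$, which is the “mostly” caveat, revisited on the following slides:

      import numpy as np

      rng = np.random.default_rng(42)
      M, P, N = 5, 2, 4                        # chosen so that (N-1) <= M
      E = rng.normal(size=(M, N))
      X = E - E.mean(axis=1, keepdims=True)    # state anomalies
      HE = np.tanh(E[:P])                      # stand-in nonlinear obs. operator
      Y = HE - HE.mean(axis=1, keepdims=True)  # obs. anomalies

      Hbar = Y @ np.linalg.pinv(X)             # the ensemble linearization, Y X^+
      Cx = X @ X.T / (N - 1)

      print(np.allclose(X @ Y.T / (N - 1), Cx @ Hbar.T))         # True (always)
      print(np.allclose(Y @ Y.T / (N - 1), Hbar @ Cx @ Hbar.T))  # True since (N-1) <= M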

  7. Does $\bar{H} = Y X^+$ relate to $\mathcal{H}'$? Theorem: $\lim_{N\to\infty} \bar{H} = \mathbb{E}[\mathcal{H}'(x)]$. Indeed,
     $\lim_{N\to\infty} Y X^+ = \lim_{N\to\infty} Y X^T (X X^T)^{-1} = \lim_{N\to\infty} \bar{C}_{yx} \bar{C}_x^{-1} = C_{yx} C_x^{-1}$ (a.s., by Slutsky, subject to regularity) $= \mathbb{E}[\mathcal{H}'(x)]$ (by Stein/IBP).
     I.e. $\bar{H}$ may (indeed) be called the “average” derivative. Assumptions: the ensemble (behind $\bar{H}$) and $x$ share the same distribution, and this is Gaussian.
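  An illustration of the theorem in 1-D (a sketch; the choice $\mathcal{H}(x) = x^3$ is my own): for $x \sim N(0,1)$, Stein’s lemma gives $C_{yx} C_x^{-1} = \mathbb{E}[3x^2] = 3$, and $\bar{H}$ approaches this for large $N$:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 10**6
      E = rng.normal(size=(1, N))              # scalar Gaussian ensemble
      X = E - E.mean()
      HE = E**3                                # stand-in nonlinear H
      Y = HE - HE.mean()

      Hbar = (Y @ np.linalg.pinv(X)).item()    # Y X^+ = Y X^T (X X^T)^{-1} in 1-D
      print(Hbar)                              # approx. 3.0 = E[H'(x)]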

  8. How come $\bar{H} = Y X^+$ is rarely discussed? Substitute $H \leftarrow \bar{H}$ in $\bar{K}$:
     $\bar{K} = \bar{C}_x \bar{H}^T (\bar{H} \bar{C}_x \bar{H}^T + R)^{-1}$  (7)
     $= X Y^T (Y \Pi_{X^T} Y^T + (N-1) R)^{-1}$,  (8)
     where $\Pi_{X^T} = X^+ X$, which is scary... But $\Pi_{X^T}$: is just a projection; vanishes if $\mathcal{H}$ is linear, or $(N-1) \le M$; is present for any/all linearizations of $\mathcal{H}$.
     Why $\bar{H} = Y X^+$? $\bar{H}$ is: the linear least-squares (LLS) estimate of $H$ given $Y$ and $X$. BLUE? MVUE? $\bar{H}$ is LLS because $\bar{K}$ is LLS, and the chain rule applies.
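  A small check of two of the claims about $\Pi_{X^T}$ (a sketch, with dimensions chosen by me): it is idempotent, and for a linear $H$ its effect in eqn (8) vanishes even when $(N-1) > M$:

      import numpy as np

      rng = np.random.default_rng(2)
      M, P, N = 3, 2, 10                       # (N-1) > M: Pi is a proper projection
      E = rng.normal(size=(M, N))
      X = E - E.mean(axis=1, keepdims=True)
      Pi = np.linalg.pinv(X) @ X               # Pi_{X^T} = X^+ X

      print(np.allclose(Pi @ Pi, Pi))          # True: it is a projection

      H = rng.normal(size=(P, M))              # a linear obs. operator
      Y = H @ X                                # its anomalies lie in row(X)
      print(np.allclose(Y @ Pi, Y))            # True: eqn (8) reduces to eqn (3)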

  9. Why the “gain form”?

  10. Not equivalent when $(N-1) < M$:
     $\bar{P} = [I - \bar{K} H] \bar{B}$  (9)
     $\bar{P} = (\bar{B}^+ + H^T R^{-1} H)^{-1}$  (10)
     Why is the Kalman gain form (9) better? Note that eqn. (10) follows from
     $\text{prior} \propto \exp[-\tfrac{1}{2} (x - \bar{x})^T \bar{B}^+ (x - \bar{x})]$,  (11)
     which is “flat” in the directions outside of $\mathrm{col}(\bar{B})$.
     $\Rightarrow$ eqn. (10) yields the “opposite” of the correct update.
     Note: further complications in case $\bar{P}$ is not defined in eqn. (10).
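  A numerical contrast of eqns (9) and (10) in the rank-deficient case (a sketch; $H = I$ and $R = I$ are simplifying assumptions of mine). On $\mathrm{col}(\bar{B})$ the two agree; outside it, the gain form keeps zero variance while the precision form returns the full observation-noise variance:

      import numpy as np

      rng = np.random.default_rng(3)
      M, N = 3, 2                              # (N-1) = 1 < M = 3
      X = rng.normal(size=(M, N))
      X -= X.mean(axis=1, keepdims=True)
      B = X @ X.T / (N - 1)                    # rank-deficient prior covariance
      H, R = np.eye(M), np.eye(M)

      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
      P_gain = (np.eye(M) - K @ H) @ B                                        # eqn (9)
      P_prec = np.linalg.inv(np.linalg.pinv(B) + H.T @ np.linalg.inv(R) @ H)  # eqn (10)

      print(np.allclose(P_gain, P_prec))       # False: not equivalent
      # Directions with zero prior spread: P_gain assigns 0, P_prec assigns 1.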

  11. [Figure: 2-D example in $(x_1, x_2)$, showing the likelihood, the prior (ensemble and contour), the posterior using the K-gain, and the posterior using the pseudo-inverse.]

  12. Nonlinearity, sampling error, and divergence.

  13. Aim: study sampling error, due to nonlinearity, without worrying about non-Gaussianity.
     $\mathcal{M}_{\mathrm{Lin}}(x) = \sqrt{2}\, x$,
     $\mathcal{M}_{\mathrm{NonLin}}(x) = \sqrt{2}\, F^{-1}_{N(0,1)}\big(F_{\chi^2}(x^2)\big)$.
     [Figure: graphs of $\mathcal{M}_{\mathrm{Lin}}$ and $\mathcal{M}_{\mathrm{NonLin}}$.]
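  A Python sketch of the two models, under my reading of the (garbled) formula above: $\mathcal{M}_{\mathrm{NonLin}}$ quantile-maps $x^2 \sim \chi^2_1$ through the standard-normal quantile function, so that for $x \sim N(0,1)$ both models output an $N(0,2)$ variable, yet one is linear and the other is not:

      import numpy as np
      from scipy.stats import chi2, norm

      def M_lin(x):
          return np.sqrt(2) * x

      def M_nonlin(x):
          # For x ~ N(0,1): x**2 ~ chi2(1), its CDF value is Uniform(0,1),
          # and norm.ppf of that is N(0,1) again. Same output law as M_lin.
          return np.sqrt(2) * norm.ppf(chi2.cdf(x**2, df=1))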

  14. Motivational problem: prior $= N(x \mid 0, 2)$, likelihood $= N(0 \mid x, 2)$ $\Rightarrow$ posterior $= N(x \mid 0, 1)$. Dynamical model $= \mathcal{M}_{\mathrm{Lin}}(x) = \sqrt{2}\, x$ (on the slide, struck out and swapped for $\mathcal{M}_{\mathrm{NonLin}}$). $\Rightarrow$ The problem repeats identically! Now apply a (square-root) EnKF to it.
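  Below is a scalar square-root EnKF sketch of this cycle (ensemble size and seed are my own choices). In the linear case the forecast variance settles at the exact value of 2, illustrating the attenuation discussed on the next slides:

      import numpy as np

      rng = np.random.default_rng(4)
      N, R, y = 30, 2.0, 0.0
      E = rng.normal(0, np.sqrt(2), size=N)    # prior ensemble ~ N(0, 2)

      for k in range(1000):
          # Analysis (deterministic, square-root form):
          xb, Xb = E.mean(), E - E.mean()
          B = Xb @ Xb / (N - 1)                # sample variance
          K = B / (B + R)
          xa = xb + K * (y - xb)
          Xa = np.sqrt(1 - K) * Xb             # square-root anomaly update
          # Forecast; the slides' point: swapping in M_nonlin breaks the attenuation.
          E = np.sqrt(2) * (xa + Xa)

      print(E.var(ddof=1))                     # stays near the exact value of 2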

  15. Sampling error from nonlinearity – why? Consider the error in the $m$-th sample moment of the forecast (f) ensemble, propagated by a degree-$d$ model. It can be shown that
     $\text{Error}^{\mathrm{f}}_m = \sum_{i=1}^{md} C_{m,i}\, \text{Error}^{\mathrm{a}}_i$,  (12)
     i.e. the moments get coupled, which defeats moment-matching.

  16. Riccati recursion. Assume constant, linear dynamics ($\mathcal{M}$), $Q = 0$, $H = I$, and a deterministic EnKF. The ensemble covariance obeys:
     Forecast: $\bar{B}_k = M^2 \bar{P}_{k-1}$.  (13)
     Analysis: $\bar{P}_k = (I - \bar{K}_k) \bar{B}_k$  (14)
     $\iff \bar{P}_k^{-1} = \bar{B}_k^{-1} + R^{-1}$.  (15)
     $\Rightarrow$ The “Riccati recursion”: $\bar{P}_k^{-1} = (M^2 \bar{P}_{k-1})^{-1} + R^{-1}$.  (16)

  17. Attenuation. Stationary Riccati:
     $\bar{P}_\infty^{-1} = (M^2 \bar{P}_\infty)^{-1} + R^{-1}$  (17)
     $\iff \bar{P}_\infty = \bar{K}_\infty R$, where $\bar{K}_\infty = \begin{cases} I - M^{-2} & \text{if } M \ge 1, \\ 0 & \text{otherwise.} \end{cases}$  (18)
     Initial conditions (ICs) don’t appear $\Rightarrow$ ICs are “forgotten”. $\Rightarrow$ Sampling error is attenuated.
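  A quick numeric check of eqns (16)–(18) (a scalar sketch; $M^2 = 2$ and $R = 1$ are arbitrary choices of mine): widely different initial conditions all converge to $\bar{P}_\infty = (1 - M^{-2}) R = 0.5$:

      M2, R = 2.0, 1.0                         # M^2 and R, scalar for illustration
      for P0 in (0.01, 1.0, 100.0):            # widely different ICs
          P = P0
          for k in range(50):
              P = 1.0 / (1.0 / (M2 * P) + 1.0 / R)   # eqn (16)
          print(P)                             # each prints approx. 0.5 = (1 - 1/M2)*R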

  18. Why $(N-1)$? Suppose we re-define the EnKF algorithm to use a different normalization factor, i.e.
     $\tilde{P}_k = \frac{\alpha}{N-1} X_k X_k^T = \alpha \bar{P}_k$, so that $\tilde{P}_0 = \frac{\alpha}{N-1} X_0 X_0^T = \alpha \bar{P}_0$.  (19)
     But the ensemble forecast yields $\tilde{B}_k = M^2 \tilde{P}_{k-1}$,  (20)
     and the analysis using $\tilde{B}_k$ yields $\tilde{P}_k^{-1} = \tilde{B}_k^{-1} + R^{-1}$.  (21)
     $\Rightarrow$ The Riccati recursion: $\tilde{P}_k^{-1} = (M^2 \tilde{P}_{k-1})^{-1} + R^{-1}$.  (22)
     Note: $\alpha$ does not appear $\Rightarrow$ its impact is attenuated, just like the ICs’.
     $\Rightarrow \tilde{P}_k \to \bar{P}_k$ and $\tilde{x}_k \to \bar{x}_k$ as $k \to \infty$.

  19. Filter divergence. Recall Riccati:
     $\bar{P}_k = \underbrace{(I - \bar{K}_k)}_{\to\, M^{-2} \text{ as } k \to \infty} M^2 \bar{P}_{k-1}$.  (23)
     Now consider $\delta \bar{P}_k$. Its recursion is:
     $\delta \bar{P}_k \approx (I - \bar{K}_k)^2 \, [M^2 + M M''] \, \delta \bar{P}_{k-1}$,  (24)
     yielding $\delta \bar{P}_k \to 0$ (as $k \to \infty$) in the linear case ($M'' = 0$), as we found previously. By contrast, no such guarantee exists when $M'' \ne 0$ $\Rightarrow$ filter divergence. Also, $M''$ may grow worse with $k$ $\Rightarrow$ vicious circle.

  20. Revising EnRML

  21. EnRML issues. Gauss–Newton version (Reynolds et al., 2006; Gu and Oliver, 2007; Chen and Oliver, 2012): requires the “model sensitivity” matrix... which requires a pseudo-inverse of the “anomalies”. $\Rightarrow$ Levenberg–Marquardt version (Chen and Oliver, 2013): modification inside the Hessian, simplifying the likelihood increment; further complicates the prior increment, which is sometimes dropped! $\Rightarrow$ New version (Raanes, Evensen, Stordal, 2019): no explicit computation of the model sensitivity matrix; computing its product with the prior covariance is efficient; does not require any pseudo-inversions.

  22. Algorithm simplification. [Side-by-side listing of the update equations of Chen and Oliver (2013) vs. Raanes, Evensen, Stordal (2019).]

  23. Available from github.com/nansencenter/DAPPER

  24. [Figure: average filter RMS error vs. ensemble size $N$ (20–100) for Stoch. MDA, EnRML, Determ. MDA, and IEnKS; $t_{\mathrm{obs}} = 0.2$, $t_{\mathrm{DAW}} = 0.4$.]

  25. Summary. In the linear case, ICs are forgotten by Riccati. $\Rightarrow$ Sampling error attenuates. $\Rightarrow$ The covariance normalization factor is inconsequential. By contrast, nonlinearity: undoes the attenuation, causing filter divergence; creates sampling error by cascading higher-order error down through the moments. Gain form > precision-matrix form. The ensemble linearizations: are LLS regression estimates; converge to the average, analytic sensitivity.

  26. References
     Yan Chen and Dean S. Oliver. Ensemble randomized maximum likelihood method as an iterative ensemble smoother. Mathematical Geosciences, 44(1):1–26, 2012.
     Yan Chen and Dean S. Oliver. Levenberg–Marquardt forms of the iterative ensemble smoother for efficient history matching and uncertainty quantification. Computational Geosciences, 17(4):689–703, 2013.
     Yaqing Gu and Dean S. Oliver. An iterative ensemble Kalman filter for multiphase fluid flow data assimilation. SPE Journal, 12(04):438–446, 2007.
     Patrick N. Raanes, Marc Bocquet, and Alberto Carrassi. Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures. Quarterly Journal of the Royal Meteorological Society, 145(718):53–75, 2019a. doi: 10.1002/qj.3386.
     Patrick N. Raanes, Geir Evensen, and Andreas S. Stordal. Revising the stochastic iterative ensemble smoother. Nonlinear Processes in Geophysics, 2019b. doi: 10.5194/npg-2019-10. arXiv preprint arXiv:1901.06570.
     A. C. Reynolds, M. Zafari, and G. Li. Iterative forms of the ensemble Kalman filter. In 10th European Conference on the Mathematics of Oil Recovery, 2006.

  27. Appendix

  28. Sampling error from nonlinearity – why? Consider the $m$-th “true” and “sample” moments:
     $\mu_m = \mathbb{E}[x^m]$,  (25)
     $\hat{\mu}_m = N^{-1} \sum_{n=1}^{N} x_n^m$.  (26)
     Define: $\text{Error}_m = \hat{\mu}_m - \mu_m$. Define: $\mu^{\mathrm{f}}_m = \mathbb{E}\big[\big(\mathcal{M}(x)\big)^m\big]$. Assume a degree-$d$ Taylor expansion of $\mathcal{M}$ is accurate. Then
     $\mu^{\mathrm{f}}_m = \sum_{i=1}^{md} C_{m,i}\, \mu_i$.  (27)
     Hence, due to the coupling of moments,
     $\text{Error}^{\mathrm{f}}_m = \sum_{i=1}^{md} C_{m,i}\, \text{Error}^{\mathrm{a}}_i$,  (28)
     which defeats moment-matching.
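  To make the coupling explicit, here is a worked instance of eqns (27)–(28) for a degree-2 model $\mathcal{M}(x) = a x + b x^2$ (an illustrative choice of mine, not from the slides):

      \mu^{\mathrm{f}}_1 = \mathbb{E}[a x + b x^2] = a\,\mu_1 + b\,\mu_2 ,
      \qquad
      \mu^{\mathrm{f}}_2 = \mathbb{E}[(a x + b x^2)^2]
                         = a^2 \mu_2 + 2 a b\, \mu_3 + b^2 \mu_4 .

  Hence $\text{Error}^{\mathrm{f}}_1 = a\, \text{Error}^{\mathrm{a}}_1 + b\, \text{Error}^{\mathrm{a}}_2$: already the forecast mean inherits the sampling error of the second analysis moment, while the second forecast moment reaches up to the fourth ($md = 4$).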
