
What do we know about EnKF? David Kelly, Kody Law, Andrew Stuart (PowerPoint presentation)



  1. What do we know about EnKF? David Kelly, Kody Law, Andrew Stuart, Andrew Majda, Xin Tong. Courant Institute, New York University, New York, NY. CAOS seminar, Courant, April 10, 2015.

  2. Talk outline: (1) What is EnKF? (2) What is known about EnKF? (3) Can we use stochastic analysis to better understand EnKF?

  3. The data assimilation problem. We have a model $dv/dt = F(v)$ with $v_0 \sim \mu$ and flow $v(t) = \Psi_t(v_0)$. Think of this as very high dimensional, nonlinear and possibly stochastic. We want to estimate $v_n = v(nh)$ for some $h > 0$ and $n = 0, 1, 2, \ldots$, given the observations $y_n = H v_n + \xi_n$ with $\xi_n$ iid $N(0, \Gamma)$.
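
To make the set-up concrete, here is a minimal sketch (mine, not from the talk) that generates a signal and noisy observations for a toy scalar model; the map `Psi`, the noise level, and the initial condition are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def Psi(v, h=0.1):
    # One observation-interval step of the flow: forward Euler for the
    # illustrative scalar model dv/dt = F(v) with F(v) = v - v**3.
    return v + h * (v - v**3)

H = np.eye(1)             # observation operator (here: observe the full state)
Gamma = 0.1 * np.eye(1)   # observation noise covariance

v = np.array([0.5])       # v_0 (a point mass standing in for v_0 ~ mu)
signal, obs = [], []
for n in range(100):
    v = Psi(v)                                        # v_{n+1} = Psi_h(v_n)
    xi = rng.multivariate_normal(np.zeros(1), Gamma)  # xi_n ~ N(0, Gamma)
    signal.append(v.copy())
    obs.append(H @ v + xi)                            # y_n = H v_n + xi_n
```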

  4. To formulate a solution to this problem, we write down the conditional density using Bayes' formula.

  5. Bayes' formula filtering update. Let $Y_n = \{y_0, y_1, \ldots, y_n\}$. We want to compute the conditional density $P(v_{n+1} \mid Y_{n+1})$, using $P(v_n \mid Y_n)$ and $y_{n+1}$. By Bayes' formula,
     $$P(v_{n+1} \mid Y_{n+1}) = P(v_{n+1} \mid Y_n, y_{n+1}) \propto P(y_{n+1} \mid v_{n+1})\, P(v_{n+1} \mid Y_n).$$
     But we need to compute the integral
     $$P(v_{n+1} \mid Y_n) = \int P(v_{n+1} \mid Y_n, v_n)\, P(v_n \mid Y_n)\, dv_n.$$
     For high dimensional nonlinear systems, this is computationally infeasible.
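
To see why the recursion becomes infeasible, consider a brute-force filter that discretizes a one-dimensional state space on a grid of N points: each forecast step is an N x N matrix-vector product, and a d-dimensional state would need N^d grid points. A hedged sketch, with all model choices mine:

```python
import numpy as np

# Brute-force Bayesian filtering on a 1-D grid (all model choices illustrative).
grid = np.linspace(-3, 3, 400)
dv = grid[1] - grid[0]

def transition_density(v_next, v_prev, h=0.1, sigma=0.3):
    # P(v_{n+1} | v_n) for an Euler-Maruyama step of dv = (v - v^3) dt + sigma dW
    mean = v_prev + h * (v_prev - v_prev**3)
    var = sigma**2 * h
    return np.exp(-(v_next - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def likelihood(y, v, gamma2=0.1):
    # P(y | v) for y = v + N(0, gamma2)
    return np.exp(-(y - v)**2 / (2 * gamma2))

p = np.exp(-grid**2 / 2)
p /= p.sum() * dv                      # prior density P(v_0)
for y in [0.4, 0.7, 0.9]:              # some observations (made up)
    T = transition_density(grid[:, None], grid[None, :])
    p = (T @ p) * dv                   # forecast: the integral on the slide
    p *= likelihood(y, grid)           # analysis: Bayes' formula
    p /= p.sum() * dv                  # renormalize
# Cost: O(N^2) per step on N grid points, and N^d points in dimension d.
```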

  6. The Kalman filter. For linear models, this integral is Gaussian and can be computed explicitly. The conditional density is characterized by its mean and covariance:
     $$m_{n+1} = \widehat{m}_{n+1} - G_{n+1}(H \widehat{m}_{n+1} - y_{n+1}), \qquad C_{n+1} = (I - G_{n+1} H)\, \widehat{C}_{n+1},$$
     where $(\widehat{m}_{n+1}, \widehat{C}_{n+1})$ is the forecast mean and covariance, and $G_{n+1} = \widehat{C}_{n+1} H^T (H \widehat{C}_{n+1} H^T + \Gamma)^{-1}$ is the Kalman gain. The procedure of updating $(m_n, C_n) \mapsto (m_{n+1}, C_{n+1})$ is known as the Kalman filter. When applied to nonlinear models, it is called the extended Kalman filter.
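
A minimal numpy sketch of this analysis step (the function and variable names are mine); in a full filter it alternates with a forecast step that propagates $(m_n, C_n)$ through the linear model.

```python
import numpy as np

def kalman_update(m_hat, C_hat, y, H, Gamma):
    """One analysis step: forecast (m_hat, C_hat) and observation y -> (m, C)."""
    S = H @ C_hat @ H.T + Gamma               # innovation covariance
    G = C_hat @ H.T @ np.linalg.inv(S)        # Kalman gain G_{n+1}
    m = m_hat - G @ (H @ m_hat - y)           # mean update
    C = (np.eye(len(m_hat)) - G @ H) @ C_hat  # covariance update
    return m, C
```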

  7. For high dimensional nonlinear systems, these calculations are expensive. A better idea is to sample.

  8. The Ensemble Kalman Filter (EnKF) is a low dimensional sampling algorithm (Evensen '94). EnKF generates an ensemble of approximate samples from the posterior.

  9. For linear models, one can draw exact samples using the Randomized Maximum Likelihood (RML) method.

  10. RML method. Let $u \sim N(\widehat{m}, \widehat{C})$ and $\eta \sim N(0, \Gamma)$. We make an observation $y = Hu + \eta$ and want the conditional distribution of $u \mid y$. This is called an inverse problem. RML takes a sample $\{\widehat{u}^{(1)}, \ldots, \widehat{u}^{(K)}\} \sim N(\widehat{m}, \widehat{C})$ and turns it into a sample $\{u^{(1)}, \ldots, u^{(K)}\} \sim u \mid y$.

  11. RML method: how does it work? Along with the prior sample $\{\widehat{u}^{(1)}, \ldots, \widehat{u}^{(K)}\}$, we create artificial observations $\{y^{(1)}, \ldots, y^{(K)}\}$, where $y^{(k)} = y + \eta^{(k)}$ with $\eta^{(k)} \sim N(0, \Gamma)$ i.i.d. Then define $u^{(k)}$ using the Kalman mean update with $(\widehat{u}^{(k)}, y^{(k)})$:
     $$u^{(k)} = \widehat{u}^{(k)} - G(\widehat{u})\,(H \widehat{u}^{(k)} - y^{(k)}),$$
     where the Kalman gain $G(\widehat{u})$ is computed using the covariance of the prior $\widehat{u}$. The set $\{u^{(1)}, \ldots, u^{(K)}\}$ consists of exact samples from $u \mid y$.
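
A sketch of RML on a linear Gaussian inverse problem, with illustrative dimensions and parameters; it checks empirically that the perturbed-observation update reproduces the exact posterior mean.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 2, 100_000
m_hat = np.zeros(d)
C_hat = np.array([[1.0, 0.3], [0.3, 1.0]])
H = np.eye(d)
Gamma = 0.5 * np.eye(d)

u_true = rng.multivariate_normal(m_hat, C_hat)
y = H @ u_true + rng.multivariate_normal(np.zeros(d), Gamma)      # y = H u + eta

U_prior = rng.multivariate_normal(m_hat, C_hat, size=K)           # prior sample
Y_pert = y + rng.multivariate_normal(np.zeros(d), Gamma, size=K)  # y^(k) = y + eta^(k)

# Gain from the *exact* prior covariance (this is RML; EnKF will approximate it).
G = C_hat @ H.T @ np.linalg.inv(H @ C_hat @ H.T + Gamma)
U_post = U_prior - (H @ U_prior.T - Y_pert.T).T @ G.T             # Kalman mean update

# The empirical mean matches the analytic posterior mean up to sampling error.
m_post = m_hat - G @ (H @ m_hat - y)
print(np.allclose(U_post.mean(axis=0), m_post, atol=0.02))
```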

  12. EnKF uses the same method, but with an approximation of the covariance in the Kalman gain.

  13. The set-up for EnKF. Suppose we are given the ensemble $\{u_n^{(1)}, \ldots, u_n^{(K)}\}$. For each ensemble member, we create an artificial observation
     $$y_{n+1}^{(k)} = y_{n+1} + \xi_{n+1}^{(k)}, \qquad \xi_{n+1}^{(k)} \text{ iid } N(0, \Gamma).$$
     We update each particle using the Kalman update
     $$u_{n+1}^{(k)} = \Psi_h(u_n^{(k)}) - G(u_n)\left(H \Psi_h(u_n^{(k)}) - y_{n+1}^{(k)}\right),$$
     where $G(u_n)$ is the "Kalman gain" computed using the forecasted ensemble covariance
     $$\widehat{C}_{n+1} = \frac{1}{K} \sum_{k=1}^{K} \left(\Psi_h(u_n^{(k)}) - \overline{\Psi_h(u_n)}\right) \left(\Psi_h(u_n^{(k)}) - \overline{\Psi_h(u_n)}\right)^T.$$
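
A hedged sketch of one full EnKF cycle implementing the update above; the forecast map `Psi` is a placeholder argument, and the helper name `enkf_step` is mine.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_step(U, y, Psi, H, Gamma):
    """One EnKF cycle. U: (K, d) ensemble at time n -> ensemble at time n+1."""
    K_ens, d = U.shape
    V = np.array([Psi(u) for u in U])    # forecast ensemble Psi_h(u_n^(k))
    A = V - V.mean(axis=0)               # forecast anomalies
    C_hat = A.T @ A / K_ens              # ensemble covariance (1/K, as on the slide)
    G = C_hat @ H.T @ np.linalg.inv(H @ C_hat @ H.T + Gamma)   # "Kalman gain"
    Xi = rng.multivariate_normal(np.zeros(H.shape[0]), Gamma, size=K_ens)
    Y = y + Xi                           # artificial observations y_{n+1}^(k)
    return V - (H @ V.T - Y.T).T @ G.T   # Kalman update of each particle
```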

  14. What do we know about EnKF? Not much. Theorem: for linear forecast models, EnKF → KF as $K \to \infty$ (Le Gland et al. / Mandel et al. '09).

  15. Ideally, we would like results that hold for a finite ensemble size: (1) filter divergence, (2) filter stability, (3) continuous time scaling limit.

  16. 1 - Filter divergence. In certain situations, it has been observed (⋆) that the ensemble can blow up (i.e. reach machine infinity) in finite time, even when the model has nice bounded solutions. This is known as catastrophic filter divergence. Does this have a dynamical justification, or is it a numerical artefact? (⋆) Harlim & Majda (2010), Gottwald (2011), Gottwald & Majda (2013).

  17. Assumptions on the model. We make a dissipativity assumption on the model, namely that
     $$\frac{dv}{dt} + Av + B(v, v) = f$$
     with $A$ linear elliptic and $B$ bilinear, satisfying certain estimates and symmetries. This guarantees the absorbing ball property (the system has a Lyapunov function). E.g. 2d Navier-Stokes, Lorenz-63, Lorenz-96.
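
A quick numerical illustration (mine, not from the talk) of the absorbing ball property for Lorenz-63: trajectories started far apart all settle to norms of the same moderate size.

```python
import numpy as np

def lorenz63(v, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, v, h):
    k1 = f(v); k2 = f(v + 0.5*h*k1); k3 = f(v + 0.5*h*k2); k4 = f(v + h*k3)
    return v + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(3)
for trial in range(5):
    v = rng.uniform(-30, 30, size=3)     # start well away from the attractor
    for n in range(10_000):
        v = rk4_step(lorenz63, v, 0.002)
    print(np.linalg.norm(v))             # every run ends at a norm of O(10)
```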

  18. Discrete time results. Fix an observation frequency $h > 0$. Let $e_n^{(k)} = u_n^{(k)} - v_n$.
     Theorem (K, Law, Stuart '14). If $H = \mathrm{Id}$ and $\Gamma = \gamma^2 \mathrm{Id}$, then there exists a constant $\beta > 0$ such that
     $$E|e_n^{(k)}|^2 \le e^{2\beta nh}\, E|e_0^{(k)}|^2 + 2K\gamma^2\, \frac{e^{2\beta nh} - 1}{e^{2\beta h} - 1}.$$
     Rmk. Theorem (Tong, Majda, K '15): $\sup_{n \ge 1} E|u_n^{(k)}|^2 < \infty$, i.e. the ensemble inherits the absorbing ball property from the model.

  19. Discrete time results with variance inflation. Suppose we replace $\widehat{C}_{n+1} \mapsto \alpha^2 I + \widehat{C}_{n+1}$ at each update step. This is known as additive variance inflation.
     Theorem (K, Law, Stuart '14). If $H = \mathrm{Id}$ and $\Gamma = \gamma^2 \mathrm{Id}$, then there exists a constant $\beta > 0$ such that
     $$E|e_n^{(k)}|^2 \le \theta^n\, E|e_0^{(k)}|^2 + 2K\gamma^2\, \frac{1 - \theta^n}{1 - \theta}, \qquad \theta = \frac{\gamma^2 e^{2\beta h}}{\alpha^2 + \gamma^2}.$$
     In particular, if we pick $\alpha$ large enough (so that $\theta < 1$), then
     $$\lim_{n \to \infty} E|e_n^{(k)}|^2 \le \frac{2K\gamma^2}{1 - \theta}.$$
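
In code, additive inflation is a one-line change to the covariance before forming the gain; a minimal sketch, where the inflation strength `alpha` is a user-chosen tuning parameter (the theorem suggests taking it large enough that θ < 1).

```python
import numpy as np

def inflate(C_hat, alpha):
    """Additive variance inflation: C_hat -> alpha^2 I + C_hat."""
    return alpha**2 * np.eye(C_hat.shape[0]) + C_hat
```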

  20. 2 - Filter stability. Is the filter stable with respect to its initial data $(u_0^{(1)}, \ldots, u_0^{(K)})$? Will initialization errors dissipate or propagate over time? This can be answered by verifying ergodicity.

  21. Geometric ergodicity. In addition to the dissipativity assumption, assume the model is stochastic with a positive density everywhere.
     Theorem (Tong, Majda, K '15). If $H = \mathrm{Id}$, then the signal-ensemble process $(v_n, u_n^{(1)}, \ldots, u_n^{(K)})$ is geometrically ergodic. That is, there exist a unique stationary measure $\rho$ and $\theta \in (0, 1)$ such that, given any initialization $\mu$,
     $$|P^n \mu - \rho| \le C\, \theta^n,$$
     where $P^n \mu$ is the distribution of $(v_n, u_n^{(1)}, \ldots, u_n^{(K)})$ initialized with $\mu$.
     Rmk. The assumption $H = \mathrm{Id}$ is not really needed; it is sufficient to have a Lyapunov function for $(v_n, u_n^{(1)}, \ldots, u_n^{(K)})$.
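
A direct numerical test of geometric ergodicity is hard, but a crude check in its spirit (my construction, not from the talk) is to drive two very differently initialized ensembles with the same observations and perturbations and watch the gap between them contract.

```python
import numpy as np

rng = np.random.default_rng(4)
K, d = 20, 1
H, Gamma = np.eye(d), 0.1 * np.eye(d)
Psi = lambda u: 0.5 * u                      # contractive linear toy model

def analysis(V, Y):
    A = V - V.mean(axis=0)
    C_hat = A.T @ A / K                      # ensemble covariance
    G = C_hat @ H.T @ np.linalg.inv(H @ C_hat @ H.T + Gamma)
    return V - (H @ V.T - Y.T).T @ G.T

v = np.zeros(d)                              # signal
U1 = rng.normal(0.0, 1.0, (K, d))            # two filters, far-apart initializations
U2 = rng.normal(10.0, 1.0, (K, d))
for n in range(50):
    v = Psi(v)
    y = H @ v + rng.multivariate_normal(np.zeros(d), Gamma)
    Y = y + rng.multivariate_normal(np.zeros(d), Gamma, size=K)  # shared perturbed obs
    U1 = analysis(Psi(U1), Y)
    U2 = analysis(Psi(U2), Y)
print(np.abs(U1 - U2).max())                 # the gap between the filters contracts
```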

  22. 3 - Scaling limit. Can we learn anything from the $h \to 0$ scaling limit of the algorithm?

  23. The EnKF equations look like a discretization. Recall the ensemble update equation
     $$u_{n+1}^{(k)} = \Psi_h(u_n^{(k)}) + G(u_n)\left(y_{n+1}^{(k)} - H \Psi_h(u_n^{(k)})\right) = \Psi_h(u_n^{(k)}) + \widehat{C}_{n+1} H^T \left(H \widehat{C}_{n+1} H^T + \Gamma\right)^{-1} \left(y_{n+1}^{(k)} - H \Psi_h(u_n^{(k)})\right).$$
     Subtract $u_n^{(k)}$ from both sides and divide by $h$:
     $$\frac{u_{n+1}^{(k)} - u_n^{(k)}}{h} = \frac{\Psi_h(u_n^{(k)}) - u_n^{(k)}}{h} + \widehat{C}_{n+1} H^T \left(h H \widehat{C}_{n+1} H^T + h\Gamma\right)^{-1} \left(y_{n+1}^{(k)} - H \Psi_h(u_n^{(k)})\right).$$
     Clearly we need to rescale the noise (i.e. $\Gamma$).

  24. Continuous-time limit. If we set $\Gamma = h^{-1} \Gamma_0$ and substitute for $y_{n+1}^{(k)}$, we obtain
     $$\frac{u_{n+1}^{(k)} - u_n^{(k)}}{h} = \frac{\Psi_h(u_n^{(k)}) - u_n^{(k)}}{h} + \widehat{C}_{n+1} H^T \left(h H \widehat{C}_{n+1} H^T + \Gamma_0\right)^{-1} \left(H v_{n+1} + h^{-1/2} \Gamma_0^{1/2} \xi_{n+1} + h^{-1/2} \Gamma_0^{1/2} \xi_{n+1}^{(k)} - H \Psi_h(u_n^{(k)})\right).$$
     But we know that $\Psi_h(u_n^{(k)}) = u_n^{(k)} + O(h)$ and
     $$\widehat{C}_{n+1} = \frac{1}{K} \sum_{k=1}^{K} \left(\Psi_h(u_n^{(k)}) - \overline{\Psi_h(u_n)}\right)\left(\Psi_h(u_n^{(k)}) - \overline{\Psi_h(u_n)}\right)^T = \frac{1}{K} \sum_{k=1}^{K} (u_n^{(k)} - \overline{u}_n)(u_n^{(k)} - \overline{u}_n)^T + O(h) = C(u_n) + O(h).$$

  25. Continuous-time limit. We end up with
     $$\frac{u_{n+1}^{(k)} - u_n^{(k)}}{h} = \frac{\Psi_h(u_n^{(k)}) - u_n^{(k)}}{h} - C(u_n) H^T \Gamma_0^{-1} H (u_n^{(k)} - v_n) + C(u_n) H^T \Gamma_0^{-1/2} \left(h^{-1/2} \xi_{n+1} + h^{-1/2} \xi_{n+1}^{(k)}\right) + O(h).$$
     This looks like a numerical scheme for the Itô S(P)DE
     $$du^{(k)} = F(u^{(k)})\, dt - C(u) H^T \Gamma_0^{-1} H (u^{(k)} - v)\, dt + C(u) H^T \Gamma_0^{-1/2} \left(dB + dW^{(k)}\right). \qquad (\bullet)$$
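
A hedged Euler-Maruyama discretization of the limiting equation (•) for a scalar toy model, with all parameter choices mine; B is the Brownian motion coming from the observation noise and W^(k) from the k-th perturbation.

```python
import numpy as np

rng = np.random.default_rng(5)
K, T, h = 10, 10.0, 1e-3
F = lambda u: u - u**3               # illustrative drift F(u)
H, Gamma0 = 1.0, 0.1                 # scalar observation operator and noise level

v = 0.5                              # signal
u = rng.normal(0.0, 1.0, K)          # ensemble
for n in range(int(T / h)):
    C = u.var()                      # ensemble covariance C(u) (scalar, 1/K norm)
    dB = np.sqrt(h) * rng.normal()           # observation Brownian increment
    dW = np.sqrt(h) * rng.normal(size=K)     # per-member Brownian increments
    v = v + h * F(v)
    u = (u + h * F(u)
           - h * C * H / Gamma0 * H * (u - v)       # nudging toward the signal
           + C * H / np.sqrt(Gamma0) * (dB + dW))   # rescaled noise term
print(np.abs(u - v).mean())          # crude check that the ensemble tracks v
```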
