
On the efficiency and consistency of covariance localisation in the EnKF
Alban Farchi and Marc Bocquet
CEREA, laboratoire commun École des Ponts ParisTech and EDF R&D, Université Paris-Est, Champs-sur-Marne, France
Tuesday, 4 June 2019


  1. On the efficiency and consistency of covariance localisation in the EnKF
     Alban Farchi and Marc Bocquet
     CEREA, laboratoire commun École des Ponts ParisTech and EDF R&D, Université Paris-Est, Champs-sur-Marne, France
     International EnKF workshop, Tuesday, 4 June 2019

  2. Contents
     1 Rank deficiency of the EnKF
     2 The LEnSRF with augmented ensembles
     3 A new perturbation update scheme
     4 References

  3. Rank deficiency of the EnKF / The Kalman filter / The ensemble Kalman filter analysis
     ◮ We focus on the Kalman filter analysis step:
         K = B H^T (R + H B H^T)^{-1},    (1)
         x^a = x^b + K (y - H x^b),       (2)
         P^a = (I - K H) B.               (3)
     ◮ In the EnKF, the statistics are carried through by the ensemble { x_i, i = 1 ... N_e }:
         x^b = x̄,    (4)
         B = X X^T,   (5)
       where x̄ is the ensemble mean and X is the normalised anomaly matrix.
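To make equations (1)-(5) concrete, here is a minimal numpy sketch of the ensemble analysis step for the mean, assuming a linear observation operator given as a matrix H; the function name and layout are illustrative and not the authors' code.

```python
import numpy as np

def enkf_analysis_mean(E, y, H, R):
    """Sketch of the Kalman analysis step (1)-(5) with ensemble statistics.

    E : (Nx, Ne) ensemble matrix, columns are the members x_i
    y : (Ny,)    observation vector
    H : (Ny, Nx) linear observation operator
    R : (Ny, Ny) observation error covariance
    """
    Nx, Ne = E.shape
    x_bar = E.mean(axis=1)                         # ensemble mean, eq. (4)
    X = (E - x_bar[:, None]) / np.sqrt(Ne - 1)     # normalised anomalies
    B = X @ X.T                                    # sample covariance, eq. (5)
    K = B @ H.T @ np.linalg.inv(R + H @ B @ H.T)   # Kalman gain, eq. (1)
    x_a = x_bar + K @ (y - H @ x_bar)              # analysis mean, eq. (2)
    return x_a, X, K
```

For large N_x one would of course avoid forming B and K explicitly; the localised variants discussed in the following slides address this.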

  4. Rank deficiency of the EnKF / Rank deficiency and localisation / Rank deficiency of the EnKF
     ◮ The matrix X X^T has rank limited by N_e - 1, too small to accurately represent B in a high-dimensional system (N_x ≫ N_e).
     ◮ When N_e is too small, X X^T is characterised by large sampling errors, which often take the form of spurious correlations at long distance.
     ◮ A common solution is to use localisation: either make the analysis local (domain localisation) or use a localised B (covariance localisation).

     domain localisation                         | covariance localisation
     relies on a collection of local analyses    | relies on a localised background error B
     embarrassingly parallel                     | no obvious parallelisation of the perturbation update
     ad hoc treatment of non-local observations  | rigorous treatment of non-local observations

  5. Rank deficiency of the EnKF / Rank deficiency and localisation / Covariance localisation in the deterministic EnKF
     The (exact) LEnSRF
     Regularise X X^T with a localisation matrix ρ:
         B = ρ ◦ (X X^T).    (6)
     Update the perturbations as:
         T = I + B H^T R^{-1} H,    (7)
         X^a = T^{-1/2} X.          (8)
     ◮ First focus: efficient implementation (accuracy/speed) of the perturbation update.
     ◮ Second focus: consistency (how well does X^a (X^a)^T match P^a) of the perturbation update.
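For reference, a brute-force sketch of the exact update (6)-(8), with ρ given as a dense array and the inverse square root of T computed with scipy's fractional matrix power; this illustrates the cost issue (all operations are O(N_x^3)) rather than the implementation used in the talk.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def lensrf_update_direct(X, rho, H, R_inv):
    """Brute-force LEnSRF perturbation update, eqs. (6)-(8) (sketch).

    Forms Nx x Nx matrices explicitly, so it is only practical for small
    models; this motivates the augmented-ensemble implementations below.
    """
    Nx = X.shape[0]
    B = rho * (X @ X.T)                         # localised covariance, eq. (6)
    T = np.eye(Nx) + B @ H.T @ R_inv @ H        # eq. (7); T is not symmetric in general
    X_a = fractional_matrix_power(T, -0.5) @ X  # eq. (8)
    return np.real(X_a)                         # discard tiny imaginary residue
```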

  6. The LEnSRF with augmented ensembles / Contents
     1 Rank deficiency of the EnKF
     2 The LEnSRF with augmented ensembles
         The augmented ensemble
         Update the perturbations
         Numerical illustration: the Lorenz 1996 model
         Numerical illustration: the multi-layer Lorenz 1996 model
     3 A new perturbation update scheme
     4 References

  7. The LEnSRF with augmented ensembles / The augmented ensemble / Using an augmented ensemble to represent B
     ◮ In the analysis step, we choose to compute Ñ_e > N_e perturbations X̃ such that
         B = ρ ◦ (X X^T) ≈ X̃ X̃^T.
       There are two methods.
     The modulation method
     Suppose that there is a matrix W with N_m columns such that ρ ≈ W W^T. Let X̃ be the matrix with N_m N_e columns:
         [X̃]_n^{j N_e + i} = [W]_n^j [X]_n^i.    (9)
     The random svd method
     Compute the truncated svd B = ρ ◦ (X X^T) ≈ U Σ U^T with N_m columns, and form the matrix X̃ = U Σ^{1/2}.

  8. The LEnSRF with augmented ensembles / The augmented ensemble / The modulation method
     1: Compute the modulation product X̃ = W Δ X:
         [X̃]_n^{j N_e + i} = [W]_n^j [X]_n^i.    (10)
     ◮ This is a mix between a Schur product (for the state variable index n) and a tensor product (for the ensemble indices i and j). [Buehner, 2005]
     ◮ The modulation product is based on a factorisation property shown by [Lorenc, 2003] and is currently used for covariance localisation [e.g., Bishop et al., 2017], including in operational centers [e.g., Arbogast et al., 2017].
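A minimal numpy sketch of the modulation product (10); the helper that builds W from a truncated spectral decomposition of ρ is an assumption made for illustration (any factor with ρ ≈ W W^T would do), and the function names are not from the reference implementation.

```python
import numpy as np

def modulation_factor(rho, Nm):
    """W such that rho ≈ W W^T, from the leading Nm eigenpairs of the (dense,
    symmetric positive semi-definite) localisation matrix rho."""
    lam, V = np.linalg.eigh(rho)
    idx = np.argsort(lam)[::-1][:Nm]
    return V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

def modulation_product(W, X):
    """Modulation product X_tilde = W Δ X, eq. (10):
    [X_tilde]_n^{j*Ne+i} = [W]_n^j [X]_n^i, so that
    X_tilde X_tilde^T = (W W^T) ◦ (X X^T) ≈ rho ◦ (X X^T)."""
    Nx, Nm = W.shape
    _, Ne = X.shape
    return np.einsum('nj,ni->nji', W, X).reshape(Nx, Nm * Ne)
```

With these two helpers, `modulation_product(modulation_factor(rho, Nm), X)` gives the N_m N_e augmented perturbations of equation (9).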

  9. The LEnSRF with augmented ensembles / The augmented ensemble / The random svd method
     1: Compute the truncated svd B = ρ ◦ (X X^T) ≈ U Σ U^T with N_m columns.
     2: Form the matrix X̃ = U Σ^{1/2}.
     ◮ The matrix B is sparse, which means that efficient (and parallelisable) methods can be used to compute the truncated svd, e.g., the random svd algorithm of [Halko et al., 2011]. [Farchi and Bocquet, 2019]
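A possible sketch of this method using a randomised range finder in the spirit of [Halko et al., 2011]. Assumptions: ρ is stored as a scipy sparse matrix, and X X^T is formed densely here for simplicity (only its entries on the sparsity pattern of ρ are actually needed); none of this is the reference implementation.

```python
import numpy as np
import scipy.sparse as sp

def random_svd_perturbations(rho, X, Nm, oversample=10, rng=None):
    """Random svd method (sketch): leading Nm modes of B = rho ◦ (X X^T)."""
    rng = np.random.default_rng() if rng is None else rng
    B = rho.multiply(X @ X.T).tocsr()               # sparse localised covariance
    Omega = rng.standard_normal((B.shape[0], Nm + oversample))
    Q, _ = np.linalg.qr(B @ Omega)                  # approximate range of B
    lam, V = np.linalg.eigh(Q.T @ (B @ Q))          # small symmetric eigenproblem
    idx = np.argsort(lam)[::-1][:Nm]
    U = Q @ V[:, idx]
    return U * np.sqrt(np.maximum(lam[idx], 0.0))   # X_tilde = U Sigma^{1/2}
```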

  10. The LEnSRF with augmented ensembles / Update the perturbations / Compute the updated perturbations
     ◮ How to obtain N_e updated members using the Ñ_e analysis perturbations X̃^a of the augmented ensemble?
     ◮ A solution is to use the augmented ensemble to compute an approximate left-transform update of the (non-augmented) ensemble as follows:
         X^a = (I + B H^T R^{-1} H)^{-1/2} X,          (11)
         X^a = (I + X̃ X̃^T H^T R^{-1} H)^{-1/2} X,      (12)
         X^a = [ I - X̃ (I + Ỹ^T R^{-1} Ỹ)^{-1/2} ( I + (I + Ỹ^T R^{-1} Ỹ)^{1/2} )^{-1} Ỹ^T R^{-1} H ] X,    (13)
       where Ỹ = H X̃.
     ◮ The linear algebra is performed in the augmented ensemble space (i.e., using Ñ_e × Ñ_e matrices) using the formula derived by [Bocquet, 2016], later used by [Bishop et al., 2017] under the name gain form of the ETKF.
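A sketch of the left-transform update (13): the symmetric square roots of I + Ỹ^T R^{-1} Ỹ are obtained from an Ñ_e × Ñ_e eigendecomposition, and the signature (dense H and R^{-1}) is an assumption made for readability, not the authors' interface.

```python
import numpy as np

def left_transform_update(X, X_tilde, H, R_inv):
    """Approximate left-transform perturbation update, eqs. (11)-(13) (sketch).

    X       : (Nx, Ne)  non-augmented anomalies
    X_tilde : (Nx, Ne~) augmented perturbations
    All the heavy linear algebra involves Ne~ x Ne~ matrices only.
    """
    Y = H @ X_tilde                             # Y~ = H X~
    S = np.eye(Y.shape[1]) + Y.T @ (R_inv @ Y)  # I + Y~^T R^-1 Y~, symmetric positive definite
    lam, V = np.linalg.eigh(S)
    S_half = (V * np.sqrt(lam)) @ V.T           # S^{1/2}
    S_inv_half = (V / np.sqrt(lam)) @ V.T       # S^{-1/2}
    M = S_inv_half @ np.linalg.inv(np.eye(len(lam)) + S_half)
    return X - X_tilde @ (M @ (Y.T @ (R_inv @ (H @ X))))
```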

  11. The LEnSRF with augmented ensembles / Numerical illustration: the Lorenz 1996 model / The Lorenz 1996 model
     ◮ We use the L96 model with N_x = 400 variables:
         dx_n/dt = (x_{n+1} - x_{n-2}) x_{n-1} - x_n + 8,    n = 1 ... N_x.    (14)
     ◮ The observations are given every Δt = 0.05 by
         y = x + v,    v ∼ N(0, I).    (15)
     ◮ The localisation matrix is constructed using the Gaspari–Cohn function, assuming that the grid points are equally distributed in space.
     ◮ All algorithms use an ensemble of N_e = 10 members.
     ◮ The runs are 2 × 10^4 Δt long and our criterion is the time-average analysis RMSE.
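A compact sketch of the ingredients of this slide: the L96 tendency (14), a fourth-order Runge–Kutta step over Δt, and a Gaspari–Cohn localisation matrix for equally spaced points on a periodic domain. Numerical choices such as a single RK4 step per Δt and the name of the radius parameter are simplifying assumptions, not the exact setup of [Farchi and Bocquet, 2019].

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Lorenz 1996 tendency, eq. (14), with periodic boundary conditions."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    """One fourth-order Runge-Kutta step of length dt (the assimilation interval)."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order piecewise rational function of r = distance / radius."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    f[m1] = 1 - 5/3*r[m1]**2 + 5/8*r[m1]**3 + r[m1]**4/2 - r[m1]**5/4
    f[m2] = (4 - 5*r[m2] + 5/3*r[m2]**2 + 5/8*r[m2]**3
             - r[m2]**4/2 + r[m2]**5/12 - 2/(3*r[m2]))
    return f

def localisation_matrix(Nx, radius):
    """Localisation matrix rho for Nx equally spaced points on a circle."""
    idx = np.arange(Nx)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, Nx - dist)          # periodic grid distance
    return gaspari_cohn(dist / radius)
```

`localisation_matrix(400, radius)` then feeds either the direct update of slide 5 or the augmented-ensemble constructions of slides 7-9.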

  12. The LEnSRF with augmented ensembles / Numerical illustration: the Lorenz 1996 model / Results with the L96 model
     [Figure: left, RMSE as a function of the augmented ensemble size Ñ_e for the LETKF, the random svd method and the modulation method; right, wall-clock time (× 10^3 s) as a function of the RMSE for the random svd method (1 and 24 threads) and the modulation method.]
     ◮ Both methods can yield similar RMSE scores as the LETKF.
     ◮ The modulation method requires a larger augmented ensemble size to yield similar RMSE scores as the random svd method.
     ◮ For a given level of RMSE score, the random svd method is faster than the modulation method. [Farchi and Bocquet, 2019]

  13. The LEnSRF with augmented ensembles / Numerical illustration: the multi-layer Lorenz 1996 model / A multilayer extension of the L96 model
     ◮ We introduce the mL96 model, which consists of P_z = 32 coupled layers of the L96 model with P_h = 40 variables:
         dx_{z,h}/dt = (x_{z,h+1} - x_{z,h-2}) x_{z,h-1} - x_{z,h} + F_z
                       + δ_{z>1} (x_{z-1,h} - x_{z,h})
                       + δ_{z<P_z} (x_{z+1,h} - x_{z,h}).    (16)
     ◮ The forcing term linearly decreases from F_1 = 8 to F_32 = 4.
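A sketch of the mL96 tendency (16), assuming 0-based layer indexing and a forcing profile such as `np.linspace(8, 4, 32)`; the vectorised coupling terms are an illustrative reading of the equation, not the reference code.

```python
import numpy as np

def ml96_tendency(x, F):
    """Multi-layer L96 tendency, eq. (16) (sketch).

    x : (Pz, Ph) state, x[z, h]; periodic in h, coupled to the layers above
        and below in z (no coupling beyond the bottom and top layers).
    F : (Pz,) forcing per layer, decreasing linearly from 8 to 4 here.
    """
    adv = (np.roll(x, -1, axis=1) - np.roll(x, 2, axis=1)) * np.roll(x, 1, axis=1)
    dx = adv - x + F[:, None]
    dx[1:] += x[:-1] - x[1:]      # coupling with the layer below (delta_{z>1})
    dx[:-1] += x[1:] - x[:-1]     # coupling with the layer above (delta_{z<Pz})
    return dx
```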

  14. The LEnSRF with augmented ensembles / Numerical illustration: the multi-layer Lorenz 1996 model / Satellite observations for the mL96 model
     ◮ Each column is observed independently via:
         y_{c,h} = Σ_{z=1}^{P_z} [Ω]_{c,z} x_{z,h} + v_{c,h},    v_{c,h} ∼ N(0, 1),    (17)
       where Ω is a weighting matrix with N_c = 8 channels that is designed to mimic satellite radiances.
     ◮ The 8 × 40 observations are available every Δt = 0.05.
     ◮ The runs are 10^4 Δt long.
     ◮ All algorithms use an ensemble of N_e = 8 members.
     [Figure: the weights [Ω]_{c,z} of the N_c = 8 channels as a function of the vertical layer index z.]
     Covariance localisation (with augmented ensembles) is used only in the vertical direction. Domain localisation (LETKF-like) is used in the horizontal direction.
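A sketch of the observation operator (17), assuming the weighting matrix Ω is given (its exact construction, designed to mimic satellite radiances, is not specified on the slide).

```python
import numpy as np

def satellite_observe(x, Omega, rng):
    """Satellite-like observations of the mL96 state, eq. (17) (sketch).

    x     : (Pz, Ph) state
    Omega : (Nc, Pz) weighting matrix; its Nc = 8 rows play the role of
            idealised radiance weighting functions.
    Returns a (Nc, Ph) array: y[c, h] = sum_z Omega[c, z] * x[z, h] + noise.
    """
    return Omega @ x + rng.standard_normal((Omega.shape[0], x.shape[1]))
```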

  15. The LEnSRF with augmented ensembles / Numerical illustration: the multi-layer Lorenz 1996 model / Results with the mL96 model
     [Figure: RMSE (top) and wall-clock time in seconds (bottom, log scale) as functions of the horizontal localisation radius r_h, for the configurations labelled ad hoc, LETKF, trunc. svd (Ñ_e = 8, 32, 64) and modulation (Ñ_e = 8, 32, 128).]
     ◮ Using covariance localisation in the vertical direction yields better RMSE scores than the LETKF.
     ◮ The modulation method requires a larger augmented ensemble size to yield similar RMSE scores as the random svd method.
     ◮ Both methods benefit from the parallelisation of the local analyses, but the parallelisation potential of the random svd method is not fully exploited because of our limited computational platform. [Farchi and Bocquet, 2019]
