Using Model Reduction in Data Assimilation


  1. Using Model Reduction in Data Assimilation
     Nancy Nichols, Amos Lawless (The University of Reading)
     Caroline Boess, Angelika Bunse-Gerstner (Cerfacs & The University of Bremen)

     Outline
     • Incremental 4D variational assimilation
     • Model reduction in incremental 4D-Var
     • Oblique projection using balanced truncation
     • Numerical experiments
     • Conclusions

  2. 4D-Var Nonlinear Problem

     $$\min_{x_0} \; J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\top} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{n} \big(H_i[x_i] - y_i^o\big)^{\top} R_i^{-1} \big(H_i[x_i] - y_i^o\big)$$

     subject to $x_i = S(t_i, t_0, x_0)$, where

     • $x_b$: background state (prior)
     • $y_i^o$: observations
     • $H_i$: observation operator
     • $B$: background error covariance matrix
     • $R_i$: observation error covariance matrix

     Incremental 4D-Var
     [Schematic: temperature against time, showing the background $x_b$, the observations, and the analysis trajectory.]
     Solve by iteration a sequence of linear least squares problems that approximate the nonlinear problem.
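
A minimal numpy sketch of evaluating this cost function for a toy nonlinear model, purely for illustration: `model_step`, the dense covariance matrices and the list-of-matrices observation operators are assumptions of the sketch, not anything from the presentation.

```python
import numpy as np

def fourdvar_cost(x0, xb, B, obs, H_list, R_list, model_step):
    """Strong-constraint 4D-Var cost J(x0) for a toy problem.

    x0, xb     : initial state and background state (length-N arrays)
    B          : background error covariance (N x N)
    obs        : list of observation vectors y_i^o, i = 0..n
    H_list     : list of (linear) observation operator matrices H_i
    R_list     : list of observation error covariances R_i
    model_step : function advancing the state one step, x_{i+1} = S(t_{i+1}, t_i, x_i)
    """
    # Background term: 0.5 (x0 - xb)^T B^{-1} (x0 - xb)
    db = x0 - xb
    J = 0.5 * db @ np.linalg.solve(B, db)

    # Observation term accumulated along the nonlinear trajectory
    x = x0.copy()
    for i, (y, H, R) in enumerate(zip(obs, H_list, R_list)):
        d = H @ x - y
        J += 0.5 * d @ np.linalg.solve(R, d)
        if i < len(obs) - 1:
            x = model_step(x)      # advance to the next observation time
    return J
```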

  3. Incremental 4D-Var

     Set $x_0^{(0)}$ (usually equal to the background).
     For $k = 0, \dots, K$:
     find $x_i^{(k)} = S(t_i, t_0, x_0^{(k)})$ and solve the inner loop linear minimization problem

     $$\min_{\delta x_0^{(k)}} \; J^{(k)}[\delta x_0^{(k)}] = \tfrac{1}{2}\,\big(\delta x_0^{(k)} - \delta x_b^{(k)}\big)^{\top} B_0^{-1} \big(\delta x_0^{(k)} - \delta x_b^{(k)}\big)
       + \tfrac{1}{2}\sum_{i=0}^{n} \big(H_i \,\delta x_i^{(k)} - d_i^{(k)}\big)^{\top} R_i^{-1} \big(H_i \,\delta x_i^{(k)} - d_i^{(k)}\big)$$

     subject to $d_i^{(k)} = y_i^o - H_i[x_i^{(k)}]$.
     Update: $x_0^{(k+1)} = x_0^{(k)} + \delta x_0^{(k)}$.

     On each outer iteration the linear least squares problem is solved subject to the linearized dynamical system

     $$\delta x_{i+1} = M_i \,\delta x_i, \quad M_i \in \mathbb{R}^{N \times N}, \qquad d_i = H_i \,\delta x_i, \quad H_i \in \mathbb{R}^{p_i \times N}.$$

     In practice this problem is too computationally expensive to solve. Approximations to the inner minimization problem are therefore used.
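
To make the outer/inner structure concrete, here is a minimal sketch for a problem small enough that the tangent linear and observation operators fit in dense matrices and the inner quadratic can be minimized exactly via its normal equations; operational systems instead truncate an iterative inner solver. All names (`tlm_mats`, `model_step`, etc.) are illustrative assumptions, not the presentation's notation.

```python
import numpy as np

def incremental_4dvar(xb, B, obs, H_list, R_list, model_step, tlm_mats, n_outer=3):
    """Gauss-Newton style incremental 4D-Var for a toy problem.

    tlm_mats(traj) returns the tangent linear matrices M_0 .. M_{n-1}
    linearized about the trajectory traj = [x_0, ..., x_n].
    """
    x0 = xb.copy()
    N = x0.size
    Binv = np.linalg.inv(B)
    for k in range(n_outer):
        # Outer loop: nonlinear trajectory x_i^(k) = S(t_i, t_0, x_0^(k))
        traj = [x0.copy()]
        for _ in range(len(obs) - 1):
            traj.append(model_step(traj[-1]))
        M_list = tlm_mats(traj)

        # Inner loop: assemble the quadratic in dx0 and solve its normal equations
        A = Binv.copy()
        b = Binv @ (xb - x0)                 # background term, dx_b^(k) = x_b - x_0^(k)
        L = np.eye(N)                        # L = M_{i-1} ... M_0 maps dx0 to dx_i
        for i, (y, H, R) in enumerate(zip(obs, H_list, R_list)):
            d = y - H @ traj[i]              # innovation d_i^(k)
            G = H @ L
            Rinv = np.linalg.inv(R)
            A += G.T @ Rinv @ G
            b += G.T @ Rinv @ d
            if i < len(M_list):
                L = M_list[i] @ L
        dx0 = np.linalg.solve(A, b)          # exact minimizer of the inner problem
        x0 = x0 + dx0                        # outer update x_0^(k+1) = x_0^(k) + dx_0^(k)
    return x0
```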

  4. Previous Results
     • Incremental 4D-Var without approximations is equivalent to a Gauss-Newton iteration for nonlinear least squares problems.
     • In operational implementation the solution procedure is approximated:
       – Truncate the inner loop iterations
       – Use an approximate linear system model
     • Theoretical convergence results obtained by reference to the Gauss-Newton method (QJRMS, SIOPT).

     Low order incremental 4D-Var
     Aim: approximate the linearized system by a low order inner problem of size $r \ll N$.
     Define:
     • $U_i^{\top} \in \mathbb{R}^{r \times N}$: linear restriction operators
     • $\delta \hat{x}_i = U_i^{\top} \delta x_i$: low order variables
     • $V_i \in \mathbb{R}^{N \times r}$: prolongation operators
     where $U_i^{\top} V_i = I_r$ and $V_i U_i^{\top}$ is a projection operator.
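
A small numpy illustration of such a restriction/prolongation pair, built from nothing more than the two defining properties above: any full-rank $V$ and $W$ with $W^{\top} V$ invertible yield an oblique projector once $W$ is rescaled. Sizes and names are arbitrary.

```python
import numpy as np

N, r = 12, 4
rng = np.random.default_rng(0)

V = rng.standard_normal((N, r))           # prolongation operator V, N x r
W = rng.standard_normal((N, r))
Ut = np.linalg.solve(W.T @ V, W.T)        # restriction U^T, scaled so that U^T V = I_r

assert np.allclose(Ut @ V, np.eye(r))     # U^T V = I_r
P = V @ Ut                                # oblique projector onto range(V)
assert np.allclose(P @ P, P)              # P^2 = P, i.e. P is a projection

dx = rng.standard_normal(N)               # a full-space increment   delta x_i
dx_hat = Ut @ dx                          # low order variable       delta x_hat_i
dx_lift = V @ dx_hat                      # lifted back to the full space
```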

  5. A restricted version of the linear dynamical system is then given by

     $$\delta \hat{x}_{i+1} = \hat{M}_i \,\delta \hat{x}_i, \quad \hat{M}_i \in \mathbb{R}^{r \times r}, \qquad \hat{d}_i = \hat{H}_i \,\delta \hat{x}_i, \quad \hat{H}_i \in \mathbb{R}^{p_i \times r},$$

     where $\hat{M}_i = U_{i+1}^{\top} M_i V_i$ approximates $M_i$ and $\hat{H}_i = H_i V_i$ approximates $H_i$.
     A low order inner minimization is then solved subject to the low order linear system.

     Low Order Assimilation Problem
     Set $x_0^{(0)}$ (usually equal to the background).
     For $k = 0, \dots, K$:
     find $x_i^{(k)} = S(t_i, t_0, x_0^{(k)})$ and solve the low order inner loop minimization problem

     $$\min_{\delta \hat{x}_0^{(k)}} \; \hat{J}^{(k)}[\delta \hat{x}_0^{(k)}] = \tfrac{1}{2}\,\big(\delta \hat{x}_0^{(k)} - U_0^{\top}\delta x_b^{(k)}\big)^{\top} \hat{B}_0^{-1} \big(\delta \hat{x}_0^{(k)} - U_0^{\top}\delta x_b^{(k)}\big)
       + \tfrac{1}{2}\sum_{i=0}^{n} \big(\hat{H}_i \,\delta \hat{x}_i^{(k)} - d_i^{(k)}\big)^{\top} R_i^{-1} \big(\hat{H}_i \,\delta \hat{x}_i^{(k)} - d_i^{(k)}\big)$$

     with $d_i^{(k)} = y_i^o - H_i[x_i^{(k)}]$, the increments propagated by the low order linear system, and $\hat{B}_0$ the background error covariance expressed in the reduced variables.
     Update: $x_0^{(k+1)} = x_0^{(k)} + V_0 \,\delta \hat{x}_0^{(k)}$.
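
A sketch of how the reduced operators and the low order inner cost could be assembled for a toy problem, using a single oblique projector pair from the previous sketch rather than time-varying ones. The projected background covariance $\hat{B}_0 = U_0^{\top} B_0 U_0$ used here is one plausible choice, not necessarily the presentation's; all names are illustrative.

```python
import numpy as np

def reduced_inner_cost(dx0_hat, dxb, innovations, M_list, H_list, R_list, B, Ut, V):
    """Low order inner-loop cost evaluated at a reduced increment dx0_hat.

    Ut (r x N) and V (N x r) are the restriction and prolongation operators;
    M_list / H_list hold the full tangent linear and observation operator matrices.
    """
    # Reduced operators: M_hat_i = U^T M_i V,  H_hat_i = H_i V
    M_hats = [Ut @ M @ V for M in M_list]
    H_hats = [H @ V for H in H_list]
    B_hat = Ut @ B @ Ut.T                  # projected background covariance (assumed choice)

    # Background term in the reduced variables
    db = dx0_hat - Ut @ dxb
    J = 0.5 * db @ np.linalg.solve(B_hat, db)

    # Propagate the reduced increment and accumulate the observation term
    dx_hat = dx0_hat.copy()
    for i, (d, Hh, R) in enumerate(zip(innovations, H_hats, R_list)):
        res = Hh @ dx_hat - d
        J += 0.5 * res @ np.linalg.solve(R, res)
        if i < len(M_hats):
            dx_hat = M_hats[i] @ dx_hat    # dx_hat_{i+1} = M_hat_i dx_hat_i
    return J
```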

  6. How are the operators $U_i^{\top}$ and $V_i$ chosen so that the solution of the reduced problem is accurate? Two approaches:
     1. Standard operational technique: the restriction $U_i^{\top}$ is a low resolution spatial operator and the prolongation operator $V_i$ represents spatial interpolation (a minimal sketch follows below).
     2. New method: the projections are based on optimal model reduction techniques.

     Optimal Reduced Order Models
     Aim:
     • Find approximate linear system models using optimal reduced order modeling techniques from control theory, to improve the efficiency of the incremental 4D-Var method.
     • Test the feasibility of the approach, in comparison with low resolution models, using a simple shallow water flow model.
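
For approach 1, the operators can be as simple as grid subsampling for the restriction and piecewise-linear interpolation for the prolongation. A minimal 1-D sketch (operational systems use proper regridding; the sizes below simply mirror the experiments later in the talk):

```python
import numpy as np

def coarsening_operators(N, r):
    """Subsampling restriction U^T (r x N) and linear-interpolation
    prolongation V (N x r) on a uniform 1-D grid (illustrative only)."""
    coarse_idx = np.linspace(0, N - 1, r).round().astype(int)

    Ut = np.zeros((r, N))
    Ut[np.arange(r), coarse_idx] = 1.0      # restriction: sample the coarse points

    V = np.zeros((N, r))
    fine_x, coarse_x = np.arange(N), coarse_idx.astype(float)
    for j in range(r):                      # prolongation: piecewise-linear hat functions
        e = np.zeros(r)
        e[j] = 1.0
        V[:, j] = np.interp(fine_x, coarse_x, e)
    return Ut, V

Ut, V = coarsening_operators(N=400, r=200)
# U^T V = I_r because each coarse point interpolates exactly to itself.
assert np.allclose(Ut @ V, np.eye(200))
```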

  7. Model Reduction via Oblique Projections
     Given the full order linear system, find projections $U$, $V$ with $U^{\top} V = I_r$, $r \ll N$, such that the error in the output of the reduced order system is minimized (over all inputs with expected norm equal to a constant).

     Balanced truncation
     Balanced truncation removes states that are least affected by inputs and that have least effect on outputs (in a statistical sense). There are two steps:
     1. Balancing: transform the system to one in which these two sets of states are the same.
     2. Truncation: truncate the states related to the smallest singular values of the transformed covariance matrices (the Hankel singular values).
     The projected system exactly matches the largest Hankel singular values of the full system.
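
For reference, in the standard time-invariant, discrete-time setting $\delta x_{i+1} = M\,\delta x_i + G\,w_i$, $d_i = H\,\delta x_i$ (a textbook simplification of the time-varying system above, with $G$ an assumed input map), the two Gramians that balancing equalizes satisfy the discrete Lyapunov equations

$$P = M P M^{\top} + G G^{\top}, \qquad Q = M^{\top} Q M + H^{\top} H,$$

and the Hankel singular values are $\sigma_k = \sqrt{\lambda_k(PQ)}$, ordered $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_N \ge 0$; truncation keeps the states associated with $\sigma_1, \dots, \sigma_r$.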

  8. Balanced Truncation
     Find a balancing transformation of the system such that the transformed controllability and observability Gramians are equal and diagonal, with the Hankel singular values on the diagonal in decreasing order. Near-optimal projections $U^{\top}$ and $V$ are then given by the leading $r$ rows and columns of the balancing transformation and its inverse.

     Reduced Order Assimilation Problem
     The reduced order inner loop problem is to minimize the reduced cost function subject to the reduced linear system obtained from these projections (as on slide 5), and to set the update $x_0^{(k+1)} = x_0^{(k)} + V_0 \,\delta \hat{x}_0^{(k)}$.
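
A minimal sketch of the square-root method for computing balancing projections for a stable, time-invariant discrete system, using `scipy.linalg.solve_discrete_lyapunov` for the Gramians; the assimilation application uses the time-varying linearized model, so this is only the textbook building block, and it assumes both Gramians are positive definite.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(M, G, H, r):
    """Square-root balanced truncation for the stable discrete-time system
    x_{k+1} = M x_k + G w_k,  d_k = H x_k.  Returns (Ut, V) with Ut @ V = I_r."""
    # Gramians: P = M P M^T + G G^T,  Q = M^T Q M + H^T H
    P = solve_discrete_lyapunov(M, G @ G.T)
    Q = solve_discrete_lyapunov(M.T, H.T @ H)

    # Square roots P = S S^T, Q = L L^T (requires P, Q positive definite)
    S = cholesky(P, lower=True)
    L = cholesky(Q, lower=True)

    # SVD of L^T S gives the Hankel singular values and balancing directions
    U, sig, Wt = svd(L.T @ S)
    sr = np.sqrt(sig[:r])

    V = (S @ Wt[:r].T) / sr               # prolongation, N x r
    Ut = (U[:, :r] / sr).T @ L.T          # restriction U^T, r x N
    return Ut, V, sig                     # sig holds all Hankel singular values
```

The reduced operators are then $\hat{M} = U^{\top} M V$ and $\hat{H} = H V$, exactly as in the low order assimilation problem above.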

  9. Why might we expect a benefit?
     • The model reduction approach tries to match the input-output response of the whole system, allowing for the system dynamics, the observations and the error covariances.
     • The use of a low resolution model ignores some of this information.
     Does this help in the data assimilation problem?

     1D Shallow Water Model
     Nonlinear continuous equations:

     $$\frac{Du}{Dt} + \frac{\partial \phi}{\partial x} = -g\,\frac{\partial h}{\partial x}, \qquad
       \frac{D(\ln \phi)}{Dt} + \frac{\partial u}{\partial x} = 0, \qquad
       \text{with } \frac{D}{Dt} = \frac{\partial}{\partial t} + u\,\frac{\partial}{\partial x}.$$

     We discretize using a semi-implicit semi-Lagrangian scheme and linearize to obtain the linear model (TLM).
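
A crude centred-difference, forward-Euler illustration of these continuous equations on a periodic grid is sketched below purely to make the $(u, \ln\phi)$ form concrete; the experiments in the presentation use a semi-implicit semi-Lagrangian discretization instead, which this sketch does not attempt to reproduce.

```python
import numpy as np

def swe_rhs(u, lnphi, h, dx, g=9.81):
    """Right-hand sides of the 1D shallow water equations in (u, ln(phi)) form:
        Du/Dt + d(phi)/dx = -g dh/dx,   D(ln phi)/Dt + du/dx = 0,
    with D/Dt = d/dt + u d/dx, on a periodic grid (centred differences)."""
    def ddx(f):
        return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

    phi = np.exp(lnphi)
    du = -u * ddx(u) - ddx(phi) - g * ddx(h)
    dlnphi = -u * ddx(lnphi) - ddx(u)
    return du, dlnphi

def explicit_step(u, lnphi, h, dx, dt):
    """Forward-Euler step, illustrative only (not the scheme used in the talk)."""
    du, dlnphi = swe_rhs(u, lnphi, h, dx)
    return u + dt * du, lnphi + dt * dlnphi
```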

  10. Methodology
      • Define an initial random perturbation $\delta x_0$ drawn from a distribution with covariance $B_0$.
      • Calculate the 'true' solution by solving the full linear least squares problem.
      • Calculate 'observations' $d_i = H \,\delta x_i$ for 5 steps (t = 0 to t = 5).
      • Compare the solutions obtained with
        – the low resolution linear model;
        – the reduced order model.
      • The size of the full state dimension is 400.

      Numerical Experiments - Error Norms
      Observations are taken from the TLM at every other grid point, with a quite realistic covariance matrix.
      [Error-norm plots omitted from the extracted text.]
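
The twin-experiment logic in the list above can be written down in a few lines once the linear operators are available as dense matrices. Everything below (the random TLM stand-in, the covariance values) is an illustrative assumption except the state dimension of 400 and the observation of every other grid point, which come from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_steps = 400, 5

# Hypothetical stand-ins for the TLM, observation operator and covariances
M = np.eye(N) + 0.01 * rng.standard_normal((N, N))   # tangent linear model matrix
H = np.eye(N)[::2]                                   # observe every other point
B0 = 0.1 * np.eye(N)                                 # background error covariance
R = 0.01 * np.eye(H.shape[0])                        # observation error covariance

# 1. Draw the 'true' initial perturbation from the background distribution B0
dx0_true = rng.multivariate_normal(np.zeros(N), B0)

# 2. Generate observations d_i = H dx_i along the TLM trajectory, t = 0 .. 5
dx, obs = dx0_true.copy(), []
for _ in range(n_steps + 1):
    obs.append(H @ dx)
    dx = M @ dx

# 3. The 'exact' analysis solves the full linear least squares problem; the low
#    resolution and reduced order analyses replace M and H by their projected
#    counterparts (see the earlier sketches) and are compared against it.
```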

  11. Error between the exact and approximate analysis for the 1-D SWE model
      [Plots of log error against component of state: low resolution model of order 200 vs reduced model of order 200, and low resolution model of order 200 vs reduced model of order 80. Red (dotted) = low resolution model; green (dashed) = reduced rank model.]

      Comparison of Error Norms: low resolution vs reduced order models
      [Plot omitted from the extracted text.]

  12. Comparison of Error Norms: low resolution vs reduced order models
      [Two further plots omitted from the extracted text.]

  13. Comparison of Model Eigenvalues
      [Eigenvalues plotted on the complex plane for (a) the full resolution model; (b) the low resolution model of order 200; (c) the reduced rank model of order 200.]

      Importance of the B Matrix
      Errors where the covariance $B_0$ is not used in the model reduction:
      [Plot of log error against component of state, low resolution model of order 200 vs reduced model of order 200. Red (dotted) = low resolution model; green (dashed) = reduced rank model.]

  14. Conclusions
      • Reduced rank linear models obtained by optimal reduction techniques give more accurate analyses than the low resolution linear models that are currently used in practice.
      • Incorporating the background and observation error covariance information is necessary to achieve good results.
      • Reduced order systems capture the optimal growth behaviour of the model more accurately than low resolution models.
