

  1. Introduction to Data Assimilation. Maëlle Nodet, team.inria.fr/moise/maelle, Université de Grenoble, INRIA, LJK. GdT Couplage, Montpellier, 6 March 2012.

  2. Introduction: What is data assimilation?
Combine, as well as possible, different sources of information to estimate the state of a system:
- model equations
- observations, data
- background (a priori) information
- statistics
Maëlle Nodet (Grenoble), Introduction to Data Assimilation, 6 March 2012, 2 / 57

  3. Introduction: What is data assimilation? Data only: data vs. reference (1 observation every 25 gridpoints).

  4. Introduction: What is data assimilation? Model only: model vs. reference (after 4 months).

  5. Introduction: What is data assimilation? Model-data coupling: result after data assimilation vs. reference.

  6. Introduction: What is data assimilation for? Historically: meteorology. Later: oceanography. Today: many other fields (glaciology, seismology, nuclear fusion, medicine, agronomy, etc.).

  7. Introduction: What is data assimilation for? Historically: initial-state estimation for weather forecasting. Today, many other applications:
- initial conditions for predictions
- calibration and validation
- observing-system design
- monitoring and assessment
- reanalysis
- better understanding (model errors, data errors, physical process interactions, parameters, etc.)

  8. Introduction: Problem statement. Example from ocean forecasting.
Short-term ocean forecasting (a few weeks) of currents, temperature, and salinity, for fisheries, the coastal economy, sailing...
We aim at producing an estimate x^a of the true state x^t = (u, v, T, S) of the ocean at the initial time, to initialize forecasts. We are given:
- a background estimate x^b = (u^b, v^b, T^b, S^b), which is either a previous forecast or comes from climatology,
- partial observations y^o = H(x^t) + ε^o, distributed over a given time window, e.g. temperatures from buoys, sea-surface elevation from satellites, currents from moorings, ...
The observation operator H contains the dynamical model mapping the initial state of the ocean to the actual temperature, currents, and sea-surface height at given points in space and time.

  9. Introduction: Problem statement. Example from glaciology.
Short-term ice-dynamics forecasting (100 years), to estimate the contribution of Antarctica and Greenland to sea-level change.
We aim at producing an estimate x^a of the true input parameter x^t = β, the basal drag coefficient (a function of space, constant over time) at the bottom of the ice cap. We are given:
- a background estimate x^b = β^b, roughly inferred from surface velocities,
- partial observations y^o = H(x^t) + ε^o, e.g. surface velocities, ice-surface elevation, approximate bedrock topography, ...
The observation operator H contains the dynamical model mapping the basal drag of the ice cap to the surface variables at given points in space and time.

  10. Introduction: Problem statement. Outline
1. Stochastic data assimilation: best linear unbiased estimator (BLUE); Kalman filter algorithm
2. Variational data assimilation: principle of variational methods; gradient-based optimization; variational algorithms
3. Implementation issues: nonlinearities; high-dimensional problems; gradient computation (adjoint method)

  11. Stochastic data assimilation: BLUE. Error statistics
Mean: E(x) = <x> for a scalar; E(x) = (E(x_1), E(x_2), ..., E(x_n)) for a vector.
Variance and covariance (x, y scalar):
Var(x) = E((x - E(x))^2), Cov(x, y) = E((x - E(x))(y - E(y)))
We say that errors are:
- unbiased if E(ε) = 0;
- uncorrelated if E(ε_1 ε_2^T) = 0;
- non-trivial if Cov(ε) is positive-definite.

  12. Stochastic data assimilation: BLUE. Covariance matrix
Covariance matrix (x vector-valued): Cov(x) = E((x - E(x))(x - E(x))^T), with entries (Cov(x))_{i,j} = Cov(x_i, x_j) = E((x_i - E(x_i))(x_j - E(x_j))).
E.g. for x = (x_1, x_2, x_3):
Cov(x) = [ Var(x_1)        Cov(x_1, x_2)   Cov(x_1, x_3)
           Cov(x_1, x_2)   Var(x_2)        Cov(x_2, x_3)
           Cov(x_1, x_3)   Cov(x_2, x_3)   Var(x_3) ]
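These definitions can be checked empirically with NumPy. The following sketch (not from the original slides; the mean and covariance values are illustrative) draws samples of a 3-dimensional Gaussian vector and rebuilds the covariance matrix from its definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" covariance of x = (x1, x2, x3).
true_cov = np.array([[2.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.5]])
samples = rng.multivariate_normal(mean=[1.0, -2.0, 0.0],
                                  cov=true_cov, size=100_000)

# Empirical mean E(x) and covariance Cov(x) = E((x - E(x))(x - E(x))^T).
mean = samples.mean(axis=0)
centered = samples - mean
cov = centered.T @ centered / (len(samples) - 1)  # same as np.cov(samples.T)

# Diagonal entries are the variances Var(x_i);
# off-diagonal entries are the covariances Cov(x_i, x_j).
print(np.round(cov, 2))
```

With 100 000 samples the empirical matrix matches `true_cov` to about two decimal places, which illustrates why covariance matrices are estimated from ensembles in practice.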

  13. Stochastic data assimilation: BLUE. Notation
State:
- x: state vector or input parameters
- x^t: true state (unknown)
- x^b: background state (a priori information); background error ε^b = x^b - x^t, covariance matrix B
- x^a: analyzed state (result of the assimilation process); analysis error ε^a = x^a - x^t, covariance matrix A
Observations:
- y^o: observation vector
- H: observation operator, mapping state space to observation space: y^o = H(x^t) + ε^o
- ε^o: observation error, covariance matrix R

  14. Stochastic data assimilation: BLUE. Problem statement: what we have
We aim at producing an estimate x^a of the true input parameters x^t of the system. We are given:
- a background estimate x^b, whose error ε^b is assumed unbiased and non-trivial, with covariance matrix B given,
- partial observations y^o = H(x^t) + ε^o, where ε^o is unbiased and non-trivial, with covariance matrix R given.
The observation operator H maps the input parameters to the observed variables (it can contain complex laws, PDEs, nonlinear physics, ...). We also assume that:
- H is a linear operator (so it can be written as a matrix H),
- ε^o and ε^b are uncorrelated.

  15. Stochastic data assimilation: BLUE. Problem statement: what we look for
We aim at producing an estimate x^a of the true state x^t of the system. The best estimate is sought as a linear combination of the background estimate and the observations:
x^a = L x^b + K y^o
Optimality criterion: we look for an unbiased estimate x^a with minimal total variance tr(A). (Requiring x^a to be unbiased for every x^t forces L = I - KH, so only the gain K remains to be chosen.)

  16. Stochastic data assimilation: BLUE. Best linear unbiased estimator, or least-squares analysis
1. BLUE analysis:
x^a = (I - KH) x^b + K y^o = x^b + K (y^o - H x^b), with K = B H^T (H B H^T + R)^{-1}
K: gain (or weight) matrix; y^o - H x^b: innovation.
2. Analysis covariance matrix: A = (I - KH) B
3. Equivalent variational optimization problem (optimal least squares):
x^a = arg min J, with J(x) = (x - x^b)^T B^{-1} (x - x^b) + (y^o - H x)^T R^{-1} (y^o - H x)
J: cost function.
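The BLUE formulas can be evaluated directly with NumPy. This is a minimal sketch on a toy 2-dimensional state with one observation (all numbers are illustrative, not from the slides); it also verifies the equivalence with the variational form, whose minimizer solves the normal equations (B^{-1} + H^T R^{-1} H) x = B^{-1} x^b + H^T R^{-1} y^o:

```python
import numpy as np

# Toy setup: 2-dimensional state, one observation of the first component.
x_b = np.array([1.0, 2.0])                 # background estimate
B = np.array([[1.0, 0.2], [0.2, 0.5]])     # background error covariance
H = np.array([[1.0, 0.0]])                 # linear observation operator
R = np.array([[0.25]])                     # observation error covariance
y_o = np.array([1.8])                      # observation

# Gain matrix K = B H^T (H B H^T + R)^{-1}
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Analysis x^a = x^b + K (y^o - H x^b) and covariance A = (I - KH) B
x_a = x_b + K @ (y_o - H @ x_b)
A = (np.eye(2) - K @ H) @ B

# Equivalent variational route: minimize J(x) by solving its normal equations
# (B^{-1} + H^T R^{-1} H) x = B^{-1} x^b + H^T R^{-1} y^o.
lhs = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
rhs = np.linalg.inv(B) @ x_b + H.T @ np.linalg.inv(R) @ y_o
x_var = np.linalg.solve(lhs, rhs)

print(x_a, x_var)  # the two analyses coincide
```

Note how the unobserved second component is still corrected (through the off-diagonal term of B): this is exactly how background covariances spread information from observed to unobserved variables.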

  17. Stochastic data assimilation: BLUE. Data assimilation methods
Two types of methods:
1. Direct computation of the BLUE and the gain matrix K. Main algorithm: the Kalman filter -> stochastic data assimilation, this section.
2. Minimization of the cost function J using optimization and adjoint methods. Main algorithm: 4D-Var -> variational data assimilation, next section.
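To make the second route concrete, here is a minimal gradient-descent sketch on the cost function J (toy dimensions, illustrative numbers, and a fixed step size; real variational systems use adjoint models and far more sophisticated optimizers such as quasi-Newton methods):

```python
import numpy as np

# Same toy problem: 2-dimensional state, one observation of component 1.
x_b = np.array([1.0, 2.0])
B_inv = np.linalg.inv(np.array([[1.0, 0.2], [0.2, 0.5]]))
H = np.array([[1.0, 0.0]])
R_inv = np.linalg.inv(np.array([[0.25]]))
y_o = np.array([1.8])

def grad_J(x):
    # Gradient of J(x) = (x - x^b)^T B^{-1} (x - x^b)
    #                  + (y^o - Hx)^T R^{-1} (y^o - Hx):
    # grad J = 2 B^{-1} (x - x^b) - 2 H^T R^{-1} (y^o - H x)
    return 2 * B_inv @ (x - x_b) - 2 * H.T @ R_inv @ (y_o - H @ x)

x = x_b.copy()                  # start the descent from the background
for _ in range(500):
    x = x - 0.05 * grad_J(x)    # fixed-step gradient descent

print(x)  # converges to the BLUE analysis of this toy problem
```

Since J is a strictly convex quadratic, any sufficiently small fixed step converges to the unique minimizer, which coincides with the BLUE analysis: this is the equivalence stated on the previous slide.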

  18. Stochastic data assimilation: Kalman filter algorithm. Time-dependent problems: the Kalman filter sequence
[Figure: alternation of forecast states x^f and analyzed states x^a over times t_{k-2}, t_{k-1}, t_k, with an observation y^o assimilated at each analysis step.]

  19. Stochastic data assimilation: Kalman filter algorithm. Back to notation
Vectors (k: time index):
- x^f_k: forecast state (background), forecast error covariance matrix P^f_k
- x^a_k: analyzed state (result of the assimilation process), analysis error covariance matrix P^a_k
Operators:
- model operator: x^t_{k+1} = M_{k,k+1}(x^t_k) + η_{k,k+1}; model error η_{k,k+1}, covariance matrix Q_k
- observation operator: y^o_k = H_k(x^t_k) + ε^o_k; observation error ε^o_k, covariance matrix R_k
Kalman's hypotheses, schematically:
- the model and observation operators M_{k,k+1} and H_k are linear;
- errors are unbiased, Gaussian, independent, and white in time.
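With this notation, one forecast/analysis cycle of the linear Kalman filter can be sketched as follows (toy 2-dimensional system; the model matrix M, the covariances, and the observation values are illustrative choices, not part of the original slides):

```python
import numpy as np

# Toy linear system: 2-dimensional state, observing the first component.
M = np.array([[1.0, 0.1], [-0.1, 1.0]])   # model operator M_{k,k+1}
Q = 0.01 * np.eye(2)                      # model error covariance Q_k
H = np.array([[1.0, 0.0]])                # observation operator H_k
R = np.array([[0.25]])                    # observation error covariance R_k

def kalman_step(x_a, P_a, y_o):
    # Forecast step: x^f_{k+1} = M x^a_k, P^f_{k+1} = M P^a_k M^T + Q
    x_f = M @ x_a
    P_f = M @ P_a @ M.T + Q
    # Analysis step: BLUE with B replaced by the forecast covariance P^f
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a_new = x_f + K @ (y_o - H @ x_f)
    P_a_new = (np.eye(2) - K @ H) @ P_f
    return x_a_new, P_a_new

# Run three cycles from an uncertain initial analysis.
x_a, P_a = np.array([0.0, 1.0]), np.eye(2)
for y in [np.array([0.12]), np.array([0.19]), np.array([0.31])]:
    x_a, P_a = kalman_step(x_a, P_a, y)

print(x_a, np.trace(P_a))  # analysis state and total analysis variance
```

Each cycle is just the BLUE analysis of the previous section, applied sequentially: the forecast covariance P^f plays the role of B, grows through the model (M P M^T + Q), and shrinks at each analysis.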

  20. Stochastic data assimilation: Kalman filter algorithm. Kalman's hypotheses
Schematically:
- the model and observation operators are linear, denoted M_{k,k+1} and H_k;
- errors are unbiased, Gaussian, independent, and white in time.
