Mathematical Strategies for Filtering Turbulent Systems


  1. Mathematical Strategies for Filtering Turbulent Systems: Sparse Observations, Model Errors, and Stochastic Parameter Estimation. John Harlim, Courant Institute of Mathematical Sciences, New York University. July 1, 2009.

  2. What is filtering? 1. Forecast (Prediction): propagate the posterior u_{m|m} to the prior u_{m+1|m}. 2. Analysis (Correction): update the prior with the observation v_{m+1} to obtain the posterior u_{m+1|m+1}. [Figure: two panels over the interval [t_m, t_{m+1}] showing the true signal, the prior u_{m+1|m}, the posteriors u_{m|m} and u_{m+1|m+1}, and the observation v_{m+1}.] The correction step is an application of the Bayesian update

      p(u_{m+1|m+1}) \equiv p(u_{m+1|m} \mid v_{m+1}) \sim p(u_{m+1|m}) \, p(v_{m+1} \mid u_{m+1|m}).

  The Kalman filter formula produces the optimal unbiased posterior mean and covariance by assuming a linear model and Gaussian observation and forecast errors.
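
  To make the link between the Bayesian update and the Kalman formula concrete, here is the scalar Gaussian case, a worked special case not on the slides:

      u_{m+1|m} \sim \mathcal{N}(\bar{u}, r), \qquad v_{m+1} = u_{m+1} + \sigma^o, \quad \sigma^o \sim \mathcal{N}(0, r^o),

      \bar{u}_{m+1|m+1} = \bar{u} + K (v_{m+1} - \bar{u}), \qquad r_{m+1|m+1} = (1 - K)\, r, \qquad K = \frac{r}{r + r^o},

  so the posterior mean interpolates between the forecast mean and the observation with a weight set by the relative error variances.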

  3. The standard Kalman filter algorithm for solving the linear model with noisy observations

      u_{m+1} = F u_m + \bar{f}_m + \sigma_{m+1}, \qquad v_m = G u_m + \sigma^o_m.

  Forecast (Prediction):

      \bar{u}_{m+1|m} = F \bar{u}_{m|m} + \bar{f}_m, \qquad R_{m+1|m} = F R_{m|m} F^* + R.

  Analysis (Correction):

      \bar{u}_{m+1|m+1} = (I - K_{m+1} G)\, \bar{u}_{m+1|m} + K_{m+1} v_{m+1},
      R_{m+1|m+1} = (I - K_{m+1} G)\, R_{m+1|m},
      K_{m+1} = R_{m+1|m} G^T (G R_{m+1|m} G^T + R^o)^{-1}.
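
  A minimal NumPy sketch of one forecast-analysis cycle implementing these formulas; the function name, the toy dimensions, and all parameter values below are illustrative, not from the talk:

      import numpy as np

      def kalman_cycle(u, Rm, F, fbar, G, R_sys, R_obs, v):
          """One forecast-analysis cycle of the standard Kalman filter."""
          # Forecast (prediction): propagate mean and covariance
          u_prior = F @ u + fbar
          R_prior = F @ Rm @ F.T + R_sys          # F.T = F^* for real F
          # Analysis (correction): Kalman gain, then Bayesian update
          S = G @ R_prior @ G.T + R_obs           # innovation covariance
          K = R_prior @ G.T @ np.linalg.inv(S)    # Kalman gain
          u_post = u_prior + K @ (v - G @ u_prior)
          R_post = (np.eye(len(u)) - K @ G) @ R_prior
          return u_post, R_post

      # One cycle on a toy 2-D system with a scalar observation
      F = np.array([[0.9, 0.1], [0.0, 0.8]]); G = np.array([[1.0, 0.0]])
      u, R = np.zeros(2), np.eye(2)
      u, R = kalman_cycle(u, R, F, np.zeros(2), G,
                          0.1 * np.eye(2), np.array([[0.05]]), np.array([0.3]))

  Note that u_prior + K (v - G u_prior) is algebraically the same as the slide's form (I - K G) u_prior + K v.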

  4. An example application: predicting the path of a hurricane.

  5. Computational and Theoretical Issues:
  ◮ How to handle a large system? Perhaps N = 10^6 state variables (e.g., a 200 km resolved global weather model).
  ◮ Where is the computational burden? Propagating the covariance matrix of size N x N (6N minutes = 300,000 hours).
  ◮ Handling nonlinearity! Why not a particle filter? Convergence requires an ensemble size that grows exponentially with the ensemble spread relative to the observation errors, rather than with the state dimension per se (Bengtsson, Bickel, and Li 2008).
  ◮ Some successful strategies: ensemble Kalman filters (the ETKF of Bishop et al. 2001, the EAKF of Anderson 2001). Each involves computing a singular value decomposition (SVD); a sketch of the transform step follows this list.
  ◮ However, even these accurate filters are not immune to "catastrophic filter divergence" (divergence beyond machine infinity) when observations are sparse, even when the true signal is a dissipative system with the "absorbing ball property".
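
  For scale: storing a full 10^6 x 10^6 covariance in double precision takes 10^12 x 8 bytes = 8 TB, which is why ensemble methods carry only K << N members. Below is a minimal NumPy sketch of an ensemble-transform analysis step in the spirit of the ETKF (using a symmetric eigendecomposition where the cited papers use an SVD); it is not the full algorithm of Bishop et al. (2001), and all names are mine:

      import numpy as np

      def etkf_analysis(X, y, H, R_obs):
          """ETKF-style analysis. X: n x K forecast ensemble, y: m x 1 observation."""
          n, K = X.shape
          xbar = X.mean(axis=1, keepdims=True)
          Xp = X - xbar                             # forecast perturbations
          Yp = H @ Xp                               # perturbations in obs space
          Rinv = np.linalg.inv(R_obs)
          C = Yp.T @ Rinv @ Yp                      # K x K ensemble-space matrix
          # Symmetric eigendecomposition replaces the SVD of the transform
          lam, Q = np.linalg.eigh((K - 1) * np.eye(K) + C)
          Pa = Q @ np.diag(1.0 / lam) @ Q.T         # ensemble-space analysis cov
          wbar = Pa @ Yp.T @ Rinv @ (y - H @ xbar)  # mean-update weights
          W = Q @ np.diag(np.sqrt((K - 1) / lam)) @ Q.T  # symmetric sqrt transform
          return xbar + Xp @ (wbar + W)             # analysis ensemble, n x K

  All heavy linear algebra happens in the K x K ensemble space, so the N x N covariance is never formed.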

  6. Filtering in frequency space. [Diagram: real space vs. Fourier space. In real space, a classical ensemble Kalman filter acts on the full state. In Fourier space, the simplest turbulent model becomes independent constant-coefficient linear stochastic (Langevin) equations for the Fourier coefficients, each filtered by a scalar Kalman filter acting on the Fourier coefficients of the noisy observations.]

  7. Filtering the stochastically forced advection-diffusion equation

      \frac{\partial u(x,t)}{\partial t} = -\frac{\partial u(x,t)}{\partial x} + \mu \frac{\partial^2 u(x,t)}{\partial x^2} + F(x,t) + \sigma(x)\,\dot{W}(t),

  observed at regularly spaced grid points:

      v(\tilde{x}_j, t_m) = u(\tilde{x}_j, t_m) + \sigma^o_m, \qquad \tilde{x}_j = j\tilde{h}, \quad (2N+1)\tilde{h} = 2\pi, \quad \sigma^o_m \sim \mathcal{N}(0, r^o).

  In the Fourier domain, filtering this (2N+1)-dimensional problem reduces to filtering decoupled scalar stochastic Langevin equations:

      d\hat{u}_k(t) = [(-\mu k^2 - ik)\,\hat{u}_k(t) + \hat{F}_k(t)]\,dt + \sigma_k\,dW_k(t),
      \hat{v}_{k,m} = \hat{u}_{k,m} + \hat{\sigma}^o_{k,m}, \qquad \hat{\sigma}^o_{k,m} \sim \mathcal{N}(0, r^o/(2N+1)).
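
  A minimal sketch of filtering one Fourier mode: the Langevin equation with \lambda_k = -\mu k^2 - ik is discretized exactly over the observation interval (assuming zero deterministic forcing \hat{F}_k = 0 for simplicity), and a scalar Kalman filter assimilates the noisy coefficient. All parameter values are made up:

      import numpy as np

      mu, k, sigma, ro, dt = 0.1, 2, 0.5, 0.05, 0.25   # illustrative parameters
      lam = -mu * k**2 - 1j * k            # \lambda_k = -\mu k^2 - ik
      Fd = np.exp(lam * dt)                # exact one-step propagator
      a = lam.real                         # decay rate (a < 0)
      r_sys = sigma**2 * (1 - np.exp(2 * a * dt)) / (-2 * a)  # discrete noise var

      rng = np.random.default_rng(0)
      u_true, u_est, r_est = 1.0 + 0.0j, 0.0j, 1.0
      for m in range(100):
          # Truth: exact discretization of the complex Langevin equation
          w = np.sqrt(r_sys / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
          u_true = Fd * u_true + w
          v = u_true + np.sqrt(ro / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
          # Scalar Kalman filter: forecast, then Bayesian correction
          u_est, r_est = Fd * u_est, abs(Fd)**2 * r_est + r_sys
          K = r_est / (r_est + ro)
          u_est, r_est = u_est + K * (v - u_est), (1 - K) * r_est

      print(abs(u_true - u_est))           # posterior error for this realization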

  8. How to deal with sparse, regularly spaced observations? ALIASING! [Figure: with N = 5 (an 11-point grid), sin(3x) and sin(25x) coincide at every grid point, sin(3x_j) = sin(25x_j), since 25 = 3 + 2(2N+1).]

  9. Recall the aliasing formula:
  ◮ Fine mesh: f(x_j) = \sum_{|k| \le N} \hat{f}_{fine}(k)\, e^{ikx_j}, where x_j = jh and (2N+1)h = 2\pi.
  ◮ Coarse mesh: f(\tilde{x}_j) = \sum_{|\ell| \le M} \hat{f}_{coarse}(\ell)\, e^{i\ell\tilde{x}_j}, where \tilde{x}_j = j\tilde{h} and (2M+1)\tilde{h} = 2\pi.
  ◮ Suppose the coarse grid points \tilde{x}_j coincide with the fine mesh grid points x_j at every P = (2N+1)/(2M+1) fine grid points.
  ◮ Since e^{ik\tilde{x}_j} = e^{i(\ell + q(2M+1))\tilde{x}_j} = e^{i\ell\tilde{x}_j},
  ◮ we deduce (see the numerical check after this list)

      \hat{f}_{coarse}(\ell) = \sum_{k_j \in \mathcal{A}(\ell)} \hat{f}_{fine}(k_j), \quad |\ell| \le M, \qquad \mathcal{A}(\ell) = \{k : |k| \le N, \; k = \ell + q(2M+1), \; q \in \mathbb{Z}\}.
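
  A quick NumPy check of the aliasing formula under assumed sizes N = 7, M = 2 (so P = 3); the variable names are mine:

      import numpy as np

      N, M = 7, 2
      P = (2 * N + 1) // (2 * M + 1)       # coarse-to-fine grid spacing, here 3
      rng = np.random.default_rng(1)
      fhat = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)
      ks = np.arange(-N, N + 1)

      x = 2 * np.pi * np.arange(2 * N + 1) / (2 * N + 1)   # fine grid x_j
      f = (fhat[None, :] * np.exp(1j * ks[None, :] * x[:, None])).sum(axis=1)

      xt = x[::P]                          # coarse grid, 2M+1 points
      ls = np.arange(-M, M + 1)
      # Coarse-mesh coefficients from the coarse-grid samples f(x~_j)
      fhat_c = (f[::P][None, :] * np.exp(-1j * ls[:, None] * xt[None, :])).sum(axis=1) / (2 * M + 1)

      # Compare with the sum of fine coefficients over the aliasing set A(l)
      for i, l in enumerate(ls):
          A = [k for k in ks if (k - l) % (2 * M + 1) == 0]
          assert np.allclose(fhat_c[i], sum(fhat[k + N] for k in A))
      print("aliasing formula verified")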

  10. Consider the following sparse observations: 123 grid points (61 modes) but only 41 observations (20 modes) available, so P = 3. [Figure: sparse observations in physical space for P = 3; in Fourier space, the aliasing sets are \mathcal{A}(1) = \{1, -40, 42\} and \mathcal{A}(11) = \{11, -30, 52\} for P = 3 and M = 20.]
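
  These aliasing sets follow directly from the definition of \mathcal{A}(\ell); a small helper (mine, not from the talk) reproduces the slide's values:

      def aliasing_set(l, N, M):
          """A(l) = {k : |k| <= N, k = l + q(2M+1), q integer}."""
          step = 2 * M + 1
          return sorted(k for k in range(-N, N + 1) if (k - l) % step == 0)

      print(aliasing_set(1, 61, 20))    # [-40, 1, 42], matching A(1) above
      print(aliasing_set(11, 61, 20))   # [-30, 11, 52], matching A(11)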

  11. Aliasing formula: the observation at time t_m becomes

      \hat{v}_{\ell,m} = \sum_{k_j \in \mathcal{A}(\ell)} \hat{u}_{k_j,m} + \hat{\sigma}^o_{\ell,m} = G\,\vec{u}_{\ell,m} + \hat{\sigma}^o_{\ell,m},

  where G = [1, 1, \ldots, 1] and \hat{\sigma}^o_{\ell,m} \sim \mathcal{N}(0, r^o/(2M+1)).

  Reduced Filters
  ◮ With the aliasing formula above, we reduce filtering a (2N+1)-dimensional system with (2M+1) observations, where M < N, to decoupled P = (2N+1)/(2M+1)-dimensional problems with scalar observations (FDKF); a sketch of one such block update follows this list.
  ◮ When the energy spectrum of the system decays as a function of wavenumber, we can ignore the high wavenumbers (e.g., RFDKF, SDAF).
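
  A minimal sketch of the analysis step for one P-dimensional aliased block of the Fourier domain Kalman filter (FDKF): the state stacks the P coefficients in one aliasing set, G = [1, ..., 1] sums them, and a single scalar observation updates the block. The function name and parameter values are illustrative:

      import numpy as np

      def fdkf_block_analysis(u, R, v, ro):
          """Scalar-observation update for one aliased block: v = sum(u) + noise.
          u: length-P complex state, R: P x P Hermitian covariance."""
          P = len(u)
          G = np.ones((1, P), dtype=complex)      # aliasing observation operator
          S = (G @ R @ G.conj().T).real + ro      # scalar innovation variance
          K = (R @ G.conj().T) / S                # P x 1 Kalman gain
          u_post = u + (K * (v - G @ u)).ravel()
          R_post = (np.eye(P) - K @ G) @ R
          return u_post, R_post

      # Example: a P = 3 block with identity prior covariance
      u, R = np.zeros(3, complex), np.eye(3)
      u, R = fdkf_block_analysis(u, R, v=1.0 + 0.5j, ro=0.1)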
