Nonlinear filtering with local couplings
Alessio Spantini Ricardo Baptista Youssef Marzouk
Massachusetts Institute of Technology Department of Aeronautics & Astronautics ISDA 2019
January 22nd, 2019
1 / 16
◮ π_X is the forecast distribution on R^n
◮ π_{Y|X} is the likelihood of the observations Y ∈ R^d
◮ π_{X|Y=y*} is the filtering distribution for a realization y* of the data
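In symbols, the assimilation step is just Bayes' rule applied to these three densities:

```latex
\pi_{X \mid Y = y^*}(x)
  \;=\;
  \frac{\pi_{Y \mid X}(y^* \mid x)\,\pi_X(x)}
       {\int_{\mathbb{R}^n} \pi_{Y \mid X}(y^* \mid x')\,\pi_X(x')\,\mathrm{d}x'}
```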
◮ Seek a map T that pushes forward the prior to the posterior [Moselhy, 2012]
◮ The map induces a coupling between the prior and posterior measures
2 / 16
◮ Construct a coupling T between the joint π_{Y,X} and the posterior
◮ T can be computed via convex optimization given samples from π_{Y,X}
◮ Sample π_{Y,X} using the forecast ensemble and the likelihood
◮ Intuition: generalization of the perturbed-observation EnKF
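As a sanity check, a minimal sketch (a hypothetical scalar linear-Gaussian model, not from the slides) shows the perturbed-observation EnKF as exactly such a coupling: a map applied to joint samples (y, x) ~ π_{Y,X}:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50_000

# Hypothetical scalar model: X ~ N(0, 1), Y | X=x ~ N(x, 0.5^2)
x = rng.normal(0.0, 1.0, M)           # forecast ensemble ~ pi_X
y = x + rng.normal(0.0, 0.5, M)       # joint samples (y, x) ~ pi_{Y,X}
y_star = 1.0                          # observed realization of the data

# EnKF gain estimated from the joint samples
K = np.cov(x, y)[0, 1] / np.var(y)

# Linear analysis map T(y, x) = x + K (y* - y): pushes pi_{Y,X} to pi_{X|Y=y*}
xa = x + K * (y_star - y)

# Exact Gaussian posterior for comparison
post_var = 1.0 / (1.0 / 1.0 + 1.0 / 0.25)
post_mean = post_var * (y_star / 0.25)
print(xa.mean(), post_mean)   # should be close
print(xa.var(), post_var)     # should be close
```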
3 / 16
T : R^{d+n} → R^n
S : R^{d+n} → R^{d+n}
◮ In a few slides: we will show how to derive T from S
4 / 16
◮ Definition: for any pair of densities π, η on R^n, there exists a unique lower-triangular map S such that S_♯ π = η (the Knothe–Rosenblatt rearrangement)
◮ Triangular function (nonlinear generalization of a triangular matrix)
◮ Existence stems from general factorization properties of a density, e.g., the chain rule π(x) = π(x_1) π(x_2 | x_1) ⋯ π(x_n | x_{1:n−1})
5 / 16
◮ Each component S^k links marginal conditionals of π and η
◮ For instance, if η = N(0, I), then for all x_1, …, x_{k−1} ∈ R^{k−1}, the 1-D map ξ ↦ S^k(x_1, …, x_{k−1}, ξ) pushes forward π_{X_k | X_{1:k−1}} to N(0, 1)
◮ Simulate the conditional π_{X_k | X_{1:k−1}} by inverting a 1-D map
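A toy sketch of this mechanism (a bivariate Gaussian of my own choosing, not from the slides): the second KR component standardizes the conditional, so inverting it at reference samples simulates π_{X_2 | X_1 = x_1}:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.7   # correlation of a standard bivariate Gaussian target pi

# KR map S = (S1, S2) pushing pi forward to N(0, I_2); lower triangular:
# S1 sees x1 only, S2(x1, .) standardizes the conditional pi_{X2|X1=x1}
def S1(x1):
    return x1                                      # X1 is already N(0,1)

def S2(x1, x2):
    return (x2 - rho * x1) / np.sqrt(1 - rho**2)   # conditional standardization

# Simulate pi_{X2 | X1 = x1} by inverting the 1-D map x2 -> S2(x1, x2)
x1 = 0.5
z = rng.normal(size=100_000)                       # reference samples ~ N(0,1)
x2 = rho * x1 + np.sqrt(1 - rho**2) * z            # closed-form inverse of S2(x1, .)

print(x2.mean())   # ~ rho * x1 = 0.35
print(x2.var())    # ~ 1 - rho^2 = 0.51
```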
6 / 16
◮ We are interested in the KR map S that pushes forward π_{Y,X} to N(0, I_{d+n})
◮ The rearrangement has a typical block structure: S(y, x) = (S^Y(y), S^X(y, x)), where S^Y depends only on y
◮ The analysis map that pushes forward π_{Y,X} to π_{X|Y=y*} is then given by T(y, x) = S^X(y*, ·)^{−1} ∘ S^X(y, x)
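In the linear-Gaussian case this composition reduces to the familiar EnKF update; a small numerical sketch (hypothetical scalar model, my own example) of T = S^X(y*, ·)^{−1} ∘ S^X:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 50_000

# Hypothetical scalar model: X ~ N(0,1), Y = X + N(0, 0.5^2)
x = rng.normal(0.0, 1.0, M)
y = x + rng.normal(0.0, 0.5, M)
y_star = 1.0

# Lower block S_X(y, .) of the KR map standardizes pi_{X|Y=y}
c = np.cov(x, y)[0, 1]
K = c / np.var(y)                   # E[X|Y=y] = K*y (zero prior means)
sig = np.sqrt(np.var(x) - K * c)    # conditional std of X given Y

def S_X(y, x):
    return (x - K * y) / sig

def S_X_inv(y, z):                  # inverse of S_X(y, .) in its last argument
    return K * y + sig * z

# Analysis map: T(y, x) = S_X(y*, .)^{-1}( S_X(y, x) )
xa = S_X_inv(y_star, S_X(y, x))     # equals x + K*(y* - y), the EnKF update

print(xa.mean(), xa.var())          # ~ exact posterior N(0.8, 0.2)
```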
7 / 16
◮ Recall the form of the analysis map
◮ We propose the following estimator: replace the exact KR map S by an estimate Ŝ computed from samples of π_{Y,X}
◮ The MLE of S can be computed via convex optimization [Parno, 2014]
9 / 16
◮ Constrained MLE for S:

   min_{S ∈ H}  −(1/M) Σ_{i=1}^{M} log (S^♯ η)(y^i, x^i)

  where S^♯ η is the pullback of the reference density η through S
◮ Each component can be estimated independently: for η = N(0, I),

   min_{S^k ∈ H^k}  (1/M) Σ_{i=1}^{M} [ ½ S^k(z^i)² − log ∂_k S^k(z^i) ],   with z^i = (y^i, x^i)

10 / 16
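For a single monotone linear component S(x) = a + b·x and reference η = N(0, 1), the constrained MLE objective reduces to mean(½ S(x)²) − log b, which is convex; a quick sketch (hypothetical 1-D target, my example) recovering the standardizing map from its first-order conditions:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(2.0, 3.0, 10_000)   # samples from a hypothetical 1-D target pi

# Convex MLE objective for a monotone linear component S(x) = a + b*x,
# reference eta = N(0,1):  J(a, b) = mean(0.5 * (a + b*x)^2) - log(b)
# Setting the gradient to zero gives the closed-form minimizer
#   a = -mean(x)/std(x),  b = 1/std(x)   -> S standardizes the samples
m, s = x.mean(), x.std()
a, b = -m / s, 1.0 / s

# Verify optimality: the gradient of J vanishes at (a, b)
grad_a = np.mean(a + b * x)                   # dJ/da
grad_b = np.mean((a + b * x) * x) - 1.0 / b   # dJ/db
print(a, b)            # ~ (-2/3, 1/3)
print(grad_a, grad_b)  # ~ 0
```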
◮ In general, need convex optimization (e.g., Newton's method)
◮ Connection to EnKF: a linear parameterization of each component of S recovers the stochastic EnKF update
◮ Choice of the approximation space allows us to control the bias–variance trade-off of the estimator
◮ Richer parameterizations yield less bias, but potentially higher variance
11 / 16
◮ Regularize the estimator by imposing sparsity on S
◮ The sparsity of the kth component of S depends on the conditional independence (Markov) structure of π
◮ Quick heuristic: let each component of S depend only on a local neighborhood of variables
◮ Explicit link between the sparsity of a nonlinear S and the conditional independence structure of π
[Inference via low-dimensional couplings, Spantini et al., 2018]
12 / 16
◮ A hard test-case configuration [Bengtsson et al., 2003]:
◮ F = 8 (chaotic regime) and E_j ∼ N(0, 0.5) (small noise, a hard regime for particle filters)
◮ Time between observations: Δ_obs = 0.4 (large)
◮ Results computed over 2000 assimilation cycles
◮ The nonlinear filter is ∼25% more accurate in RMSE than the EnKF
13 / 16
◮ Observations were assimilated one at a time
◮ Impose sparsity of the map with a 5-way interaction model (figure)
◮ Fully separable and nonlinear parameterization of each component, e.g., with radial basis functions: Σ_{i>1} a_i exp(−(x − c_i)²/σ)
◮ Much more general parameterizations are of course possible!
14 / 16
[Figure: RMSE comparison of the EnKF against nonlinear filters with Order 1, Order 2, and Order 3 parameterizations]
15 / 16
◮ Nonlinear generalization of the EnKF: move the ensemble members towards the posterior with a nonlinear analysis map
◮ Learn non-Gaussian features via convex optimization
◮ Easy to localize in high dimensions (sparse updates)
◮ There exists a square-root version of the filter
◮ Generalization to nonlinear smoothing
◮ How much nonlinearity in the update vs. ensemble size & structure?
◮ How to regularize the estimation? (e.g., LASSO)
◮ Accuracy and stability: is it possible to establish consistency of the filter?
16 / 16