Stochastic model reduction: from nonlinear Galerkin to parametric inference
Fei Lu
Department of Mathematics, Johns Hopkins University. Joint work with: Alexandre J. Chorin (UC Berkeley) and Kevin K. Lin (U. of Arizona)
May 22, 2019 SIAM DS19, Snowbird
Setting: a high-dimensional full system, observed through discrete partial data {x(t_n)}, n = 1, . . . , N; goal: prediction.
◮ can only afford to resolve a reduced system x′ = f(x) online
◮ y: unresolved variables (subgrid scales)
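As a concrete toy setting (an illustrative assumption, not necessarily the system used in the talk), consider a two-scale Lorenz 96 model: slow variables x are resolved, while the fast subgrid variables y are what the truncated model x′ = f(x) drops.

```python
import numpy as np

# Illustrative two-scale Lorenz 96 system (assumed toy example).
# Slow variables x (length K) are coupled to fast subgrid variables y
# (length K*J); the truncated model keeps only x and drops the coupling.

def l96_rhs(x, y, J=4, F=10.0, h=1.0, c=10.0, b=10.0):
    """Right-hand side of the full two-scale Lorenz 96 system."""
    K = len(x)
    ysum = y.reshape(K, J).sum(axis=1)           # subgrid feedback on each x_k
    dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F \
         - (h * c / b) * ysum
    dy = -c * b * np.roll(y, -1) * (np.roll(y, -2) - np.roll(y, 1)) \
         - c * y + (h * c / b) * np.repeat(x, J)
    return dx, dy

def truncated_rhs(x, F=10.0):
    """Reduced model x' = f(x): same slow dynamics, coupling term dropped."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
```

The reduced model is cheap to integrate but biased; the talk's question is how to account statistically for the missing subgrid term.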
◮ nonlinear / Petrov–Galerkin: y(t) = F(x(t))
◮ Mori–Zwanzig formalism (memory)
◮ relaxation approximations
◮ linear response / filtering / feedback control
◮ hypoelliptic SDEs, GLEs and SDDEs
◮ discrete-time (time series) models
◮ data-driven: POD, DMD, Koopman operator
◮ nonparametric inference
◮ machine learning (NNs), . . .
1. Brockwell, Sørensen, Pokern, Wiberg, Samson, . . .
2. Milstein, Tretyakov, Talay, Mattingly, Stuart, Higham, . . .
Nonlinear Galerkin¹ (approximate inertial manifold): set the time derivative of the unresolved modes w to zero,

dw/dt ≈ 0 ⇒ w ≈ A⁻¹QB(u + w) ⇒ w ≈ ψ(u)

¹Foias, Constantin, Temam, Sell, Jolly, Kevrekidis, Titi et al. (1988–94)
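The slaving relation w ≈ A⁻¹QB(u + w) is implicit in w, so in practice ψ(u) can be computed by fixed-point iteration. A minimal sketch, with purely illustrative stand-ins for A⁻¹ and QB (the real operators come from the spectrally truncated PDE):

```python
import numpy as np

# Minimal sketch: evaluate the approximate-inertial-manifold map w = psi(u)
# by fixed-point iteration on  w = A^{-1} Q B(u + w),  w_0 = 0.
# A_inv and B below are illustrative stand-ins, not the talk's operators.

def psi(u, A_inv, B, n_iter=50):
    """u: resolved modes (K,); A_inv: diagonal of A^{-1} on unresolved
    modes (M,); B: callable mapping the full state (K+M,) to the
    nonlinear term on the unresolved modes (M,) -- it plays the role of QB."""
    w = np.zeros_like(A_inv)
    for _ in range(n_iter):
        w = A_inv * B(np.concatenate([u, w]))
    return w
```

The iteration converges when the map is a contraction (small A⁻¹, i.e. strongly damped high modes), which is exactly the regime in which the nonlinear Galerkin closure is justified.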
NARMA-type discrete-time parametric model: for each mode k,

u^n_k = R_δ(u^{n−1})_k + Φ^n_k,

where R_δ is a one-step scheme for the truncated system and the parametrization term Φ^n_k := Φ^n_k(u^{n−p:n−1}, f^{n−p:n−1}) is in the form

Φ^n_k = Σ_{j=1}^{p} a_{k,j} u^{n−j}_k + Σ_{j=1}^{p} b_{k,j} R_δ(u^{n−j})_k + Σ_{j=1}^{p} c_{k,j} Σ_l u^{n−j}_l u^{n−j}_{k−l} + noise (moving-average) terms.
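The coefficients of the parametrization are estimated from data. A minimal scalar-mode sketch of the least-squares step (the one-step map R and the feature set below are illustrative assumptions, simpler than the full NARMA parametrization):

```python
import numpy as np

# Minimal sketch (single scalar mode, autoregressive terms only):
# fit a_j, b_j in  u_n = R(u_{n-1}) + sum_j a_j u_{n-j}
#                        + sum_j b_j R(u_{n-j}) + noise
# by ordinary least squares on an observed trajectory u_0, ..., u_N.
# R is a stand-in one-step map of the truncated model (assumption).

def fit_nar(u, R, p):
    """Return (coeffs, residuals); coeffs = [a_1..a_p, b_1..b_p]."""
    N = len(u)
    rows, targets = [], []
    for n in range(p, N):
        feats = [u[n - j] for j in range(1, p + 1)]
        feats += [R(u[n - j]) for j in range(1, p + 1)]
        rows.append(feats)
        targets.append(u[n] - R(u[n - 1]))   # one-step model error
    X, y = np.array(rows), np.array(targets)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, y - X @ coeffs
```

The residuals of the regression then provide the sample of the noise sequence whose moving-average structure the full NARMA model fits.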
[Figure: empirical pdf and autocorrelation function of Re v_4 — data vs. truncated system vs. NARMA]

[Figure: sample trajectories of v_4 and RMSE vs. lead time — truncated system vs. NARMA]
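The autocorrelation functions compared in these figures are the standard empirical estimator; a short sketch of the computation used for such diagnostics:

```python
import numpy as np

# Minimal sketch: empirical autocorrelation function of a stationary
# series, the diagnostic used to compare data, truncated system, and NARMA.

def acf(x, max_lag):
    """Sample autocorrelation for lags 0..max_lag (acf[0] == 1)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - h], x[h:]) / denom
                     for h in range(max_lag + 1)])
```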
[Figure: energy spectrum vs. wavenumber (k = 1–8) — true vs. truncated vs. NAR]
[Figure: ACF of cov(|u_2|², |u_k|²) for modes k = 1, . . . , 8 vs. time lag — true vs. truncated vs. NAR]
[Figure: |u_k| trajectories for modes k = 1, . . . , 8 vs. time — true vs. truncated vs. NAR]
◮ distance between the two stochastic processes?
◮ Chorin, Lu: Discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics. PNAS (2015).
◮ Lu, Lin, Chorin: Comparison of continuous and discrete-time data-based modeling for hypoelliptic systems. Commun. Appl. Math. Comput. Sci. (2016).
◮ Lu, Lin, Chorin: Data-based stochastic model reduction for the Kuramoto–Sivashinsky equation. Physica D (2017).
◮ Lin, Lu: Data-driven model reduction, Wiener projections, and the Koopman–Mori–Zwanzig formalism.
◮ Lu, Tu, Chorin: Accounting for model error from unresolved scales in ensemble Kalman filters by stochastic parameterization. Mon. Wea. Rev. (2017).