Mathematical Strategies for Filtering Turbulent Systems: Sparse Observations, Model Errors, and Stochastic Parameter Estimation

John Harlim
Courant Institute of Mathematical Sciences, New York University
July 1, 2009
What is filtering?

1. Forecast (Prediction): starting from the posterior estimate u_{m|m} of the true signal at time t_m, the model is integrated forward to produce the prior u_{m+1|m} at time t_{m+1}, when the observation v_{m+1} arrives.

2. Analysis (Correction): the prior u_{m+1|m} is corrected with the observation v_{m+1} to produce the posterior u_{m+1|m+1}.
The correction step is an application of a Bayesian update,

p(u_{m+1|m+1}) ≡ p(u_{m+1|m} | v_{m+1}) ∼ p(u_{m+1|m}) p(v_{m+1} | u_{m+1|m}).

The Kalman filter formula produces the optimal unbiased posterior mean and covariance by assuming a linear model and Gaussian observation and forecast errors.
The standard Kalman filter algorithm solves

u_{m+1} = F u_m + f̄_m + σ_{m+1},    v_m = G u_m + σ^o_m.

Forecast (Prediction):
A) ū_{m+1|m} = F ū_{m|m} + f̄_m,
B) R_{m+1|m} = F R_{m|m} F* + R.

Analysis (Correction):
D) ū_{m+1|m+1} = (I − K_{m+1} G) ū_{m+1|m} + K_{m+1} v_{m+1},
E) R_{m+1|m+1} = (I − K_{m+1} G) R_{m+1|m},
F) K_{m+1} = R_{m+1|m} G^T (G R_{m+1|m} G^T + R^o)^{−1}.
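The forecast/analysis formulas above translate directly into a few lines of linear algebra. A minimal sketch of one filter cycle, with all matrices illustrative:

```python
# Minimal sketch of one Kalman filter cycle for the linear model
#   u_{m+1} = F u_m + f_m + sigma_{m+1},   v_m = G u_m + sigma^o_m.
import numpy as np

def kalman_step(u, R, F, f, Q, G, Ro, v):
    """One forecast/analysis cycle.
    u, R : posterior mean and covariance at time t_m
    F, f : model dynamics and deterministic forcing
    Q    : model noise covariance
    G    : observation operator, Ro : observation noise covariance
    v    : observation at t_{m+1}
    """
    # Forecast (prediction), steps (A)-(B)
    u_prior = F @ u + f
    R_prior = F @ R @ F.T + Q
    # Analysis (correction), steps (D)-(F)
    K = R_prior @ G.T @ np.linalg.inv(G @ R_prior @ G.T + Ro)
    u_post = (np.eye(len(u)) - K @ G) @ u_prior + K @ v
    R_post = (np.eye(len(u)) - K @ G) @ R_prior
    return u_post, R_post
```

The analysis step always shrinks the covariance: the posterior mean is a convex-like blend of prior and observation weighted by the gain K.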
Example of application: predicting path of hurricane
Computational and Theoretical Issues:

◮ How to handle a large system? Perhaps N = 10^6 state variables (e.g., a 200 km resolved global weather model).
◮ Where is the computational burden? Propagating the covariance matrix of size N × N (6N minutes = 300,000 hours).
◮ Handling nonlinearity! Why not a particle filter? Convergence requires an ensemble size that grows exponentially with respect to the ensemble spread relative to observation errors, rather than to the state dimension per se (Bengtsson, Bickel, and Li 2008).
◮ Some successful strategies: ensemble Kalman filters (ETKF of Bishop et al. 2001, EAKF of Anderson 2001). Each involves computing a singular value decomposition (SVD).
◮ However, these accurate filters are not immune to "catastrophic filter divergence" (divergence beyond machine infinity) when observations are sparse, even when the true signal is a dissipative system with the "absorbing ball property".
Filtering in Frequency Space

Simplest turbulent model: a constant-coefficient linear stochastic PDE with noisy observations. Under the Fourier transform it decouples into independent Fourier coefficients, each a Langevin equation, observed through the Fourier coefficients of the noisy observations and filtered by the classical scalar Kalman filter. The innovative strategy is the Fourier domain Kalman filter (FDKF), in contrast to the ensemble Kalman filter applied in real space.
Filtering the stochastically forced advection-diffusion equation

∂u(x,t)/∂t = −∂u(x,t)/∂x + F̄(x,t) + μ ∂²u(x,t)/∂x² + σ(x) Ẇ(t),
v(x̃_j, t_m) = u(x̃_j, t_m) + σ^o_m,    x̃_j = j h̃,  (2N+1) h̃ = 2π,

where σ^o_m ∼ N(0, r^o).

In the Fourier domain, we reduce this (2N+1)-dimensional filtering problem to filtering decoupled scalar stochastic Langevin equations:

dû_k(t) = [(−μk² − ik) û_k(t) + F̂_k(t)] dt + σ_k dW_k(t),
v̂_{k,m} = û_{k,m} + σ̂^o_{k,m},

where σ̂^o_{k,m} ∼ N(0, r^o/(2N+1)).
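Each Fourier mode above is a complex scalar Langevin equation, so its discrete-time filter statistics are available in closed form. A sketch, assuming zero deterministic forcing F̂_k = 0 (the forced case adds a known mean term):

```python
# Sketch: exact discrete-time statistics of one Fourier mode of the
# stochastically forced advection-diffusion equation (assuming F_hat = 0),
#   d u_k = lambda_k u_k dt + sigma_k dW_k,   lambda_k = -mu k^2 - i k,
# which feeds the scalar complex Kalman filter (FDKF) mode by mode.
import numpy as np

def discrete_coeffs(mu, k, sigma, dt):
    lam = -mu * k**2 - 1j * k          # stable: Re(lam) < 0 for mu > 0
    Fk = np.exp(lam * dt)              # discrete propagator over one cycle
    # variance of the integrated noise over one observation interval
    r = sigma**2 / (-2 * lam.real) * (1 - np.exp(2 * lam.real * dt))
    return Fk, r

def scalar_kf_step(u, R, Fk, r, ro, v):
    # forecast
    u_prior, R_prior = Fk * u, abs(Fk)**2 * R + r
    # analysis with trivial scalar observation operator g = 1
    K = R_prior / (R_prior + ro)
    return (1 - K) * u_prior + K * v, (1 - K) * R_prior
```

Because each mode is filtered independently, the cost is linear in the number of modes rather than quadratic in the state dimension.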
How do we deal with sparse, regularly spaced observations? ALIASING!

[Figure: aliasing example with m = 25, ℓ = 3, k = 2, N = 5: sin(25x) and sin(3x) coincide on the coarse grid, sin(25 x_j) = sin(3 x_j).]
Recall the Aliasing Formula:

◮ Fine mesh: f(x_j) = Σ_{|k|≤N} f̂_fine(k) e^{ikx_j}, where x_j = jh and (2N+1)h = 2π.
◮ Coarse mesh: f(x̃_j) = Σ_{|ℓ|≤M} f̂_coarse(ℓ) e^{iℓx̃_j}, where x̃_j = j h̃ and (2M+1) h̃ = 2π.
◮ Suppose the coarse grid points x̃_j coincide with the fine mesh grid points x_j at every P = (2N+1)/(2M+1) fine grid points.
◮ Since e^{ik x̃_j} = e^{i(ℓ + q(2M+1)) x̃_j} = e^{iℓ x̃_j},
◮ we deduce

f̂_coarse(ℓ) = Σ_{k_j ∈ A(ℓ)} f̂_fine(k_j),    |ℓ| ≤ M,

where A(ℓ) = {k : |k| ≤ N, k = ℓ + q(2M+1), q ∈ Z}.
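The aliasing formula is easy to verify numerically with the FFT: subsampling the fine grid by P folds every fine mode onto its residue class modulo 2M+1. A small check, using the slide's dimensions N = 61, M = 20:

```python
# Numerical check of the aliasing formula: subsampling a (2N+1)-point grid
# to (2M+1) points folds mode k onto ell with k = ell + q(2M+1).
import numpy as np

N, M = 61, 20
nf, nc = 2 * N + 1, 2 * M + 1   # 123 fine and 41 coarse grid points
P = nf // nc                    # = 3, coarse spacing in fine-grid units
rng = np.random.default_rng(0)
f = rng.standard_normal(nf)     # arbitrary field on the fine grid

fhat = np.fft.fft(f) / nf       # fine-mesh Fourier coefficients
ghat = np.fft.fft(f[::P]) / nc  # coarse-mesh coefficients from subsampling

# fold fhat over the aliasing sets A(ell) = {ell + q*(2M+1)}: since
# (2M+1) divides (2N+1), FFT index i and mode k agree modulo 2M+1
folded = fhat.reshape(P, nc).sum(axis=0)
assert np.allclose(folded, ghat)
```

For instance, the coarse coefficient at ℓ = 1 is the sum of fine modes {1, −40, 42}, exactly the aliasing set A(1) on the next slide.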
Consider the following sparse observations: 123 grid points (61 modes) but only 41 observations (20 modes) available, i.e., P = 3.

[Figures: sparse observations for P = 3 in physical and Fourier space; aliasing sets A(1) = {1, −40, 42} and A(11) = {11, −30, 52} for P = 3 and M = 20.]
Aliasing Formula: the observation at time t_m becomes

v̂_{ℓ,m} = Σ_{k_j ∈ A(ℓ)} û_{k_j,m} + σ̂^o_{ℓ,m} = G û_{ℓ,m} + σ̂^o_{ℓ,m},

where G = [1, 1, …, 1], û_{ℓ,m} stacks the modes in A(ℓ), and σ̂^o_{ℓ,m} ∼ N(0, r^o/(2M+1)).

Reduced Filters

◮ With the aliasing formula above, we reduce filtering a (2N+1)-dimensional system with (2M+1) observations, where M < N, to decoupled P = (2N+1)/(2M+1)-dimensional problems with scalar observations (FDKF).
◮ When the energy spectrum of the system decays as a function of wavenumber, we can ignore the high wavenumbers (e.g., RFDKF, SDAF).
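The FDKF analysis step for a single aliasing set can be sketched in a few lines: the P modes in A(ℓ) form a P-dimensional state observed through the scalar operator G = [1, …, 1]. This is a minimal sketch, shown with real arithmetic for brevity (the actual modes are complex):

```python
# Hedged sketch of the FDKF analysis step for one aliasing set A(ell):
# the P Fourier modes that alias onto wavenumber ell are a P-dimensional
# state observed through the scalar operator G = [1, ..., 1].
import numpy as np

def fdkf_analysis(u_prior, R_prior, v, ro):
    """u_prior: (P,) prior mean of the modes in A(ell);
    R_prior: (P, P) prior covariance; v: scalar aliased observation;
    ro: observation noise variance r^o/(2M+1)."""
    G = np.ones((1, len(u_prior)))
    K = R_prior @ G.T / (G @ R_prior @ G.T + ro)    # (P, 1) Kalman gain
    u_post = u_prior + (K * (v - G @ u_prior)).ravel()
    R_post = (np.eye(len(u_prior)) - K @ G) @ R_prior
    return u_post, R_post
```

Since each aliasing set is filtered separately, the largest matrix ever inverted is the 1×1 innovation variance, regardless of the full state dimension.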
Decorrelation time vs. observation time:

[Figure: decorrelation time T_corr = (μk²)^{−1} versus wavenumber, compared with the observation time Δt = 0.1.]
The ensemble Kalman filter diverges even with ensemble size 150 > N = 123. Extreme event, Δt = 0.1, E_k = k^{−5/3}, P = 3, r^o = 2.05.

[Figures: ETKF RMS error versus time (⟨corr⟩ = 0.867); true, unfiltered, and filtered solutions with observations at T = 100Δt (corr = 0.789), 500Δt (corr = 0.885), and 1000Δt (corr = 0.925).]
The reduced filter produces high skill. Spontaneous development of an extreme event for Δt = 0.1 and E_k = k^{−5/3}, P = 3, r^o = 2.05.

[Figures: SDAF RMS error versus time (⟨corr⟩ = 0.985); true, unfiltered, and filtered solutions with observations at T = 100Δt (corr = 0.958), 500Δt (corr = 0.993), and 1000Δt (corr = 0.997).]
Summary of Part I:

◮ In our assessment, we find that filtering a sparsely observed linear problem with an ensemble Kalman filter, even with ensemble size larger than the model dimension, does not guarantee a convergent solution.
◮ FDKF suggests that ignoring the cross covariance between different aliasing sets is not only computationally advantageous but also produces more accurate solutions.
◮ Intuitively, this works because the reduced filter avoids spurious correlations between different wavenumbers.
Nonlinearity

A stochastically forced linear PDE transforms (FT) into uncoupled Langevin equations; nonlinear chaotic dynamical systems transform (FT) into ODEs coupled through the nonlinear terms. The radical filtering strategy for nonlinear systems: replace the nonlinear terms with an Ornstein-Uhlenbeck process.
Filtering turbulent nonlinear dynamical systems: the L-96 model (Lorenz 1996), 40-dimensional, with the "absorbing ball property":

du_j/dt = (u_{j+1} − u_{j−2}) u_{j−1} − u_j + F,    j = 0, …, J − 1.

Regime             F    λ1      N+    KS      T_corr
Weakly chaotic     6    1.02    12    5.547   8.23
Strongly chaotic   8    1.74    13    10.94   6.704
Fully turbulent    16   3.945   16    27.94   5.594

[Figures: space-time plots of the L-96 solution for F = 6, 8, and 16.]
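The L-96 model above is simple to integrate; a minimal sketch with a standard RK4 step and an illustrative spin-up (step size and spin-up length are our choices, not from the slides):

```python
# Sketch: integrating the L-96 model du_j/dt = (u_{j+1} - u_{j-2}) u_{j-1}
# - u_j + F with periodic indexing, J = 40 as in the slide.
import numpy as np

def l96_rhs(u, F):
    # periodic neighbors via np.roll: roll(-1) -> u_{j+1}, roll(1) -> u_{j-1}
    return (np.roll(u, -1) - np.roll(u, 2)) * np.roll(u, 1) - u + F

def rk4_step(u, F, dt):
    k1 = l96_rhs(u, F)
    k2 = l96_rhs(u + 0.5 * dt * k1, F)
    k3 = l96_rhs(u + 0.5 * dt * k2, F)
    k4 = l96_rhs(u + dt * k3, F)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

J, F, dt = 40, 8.0, 0.01              # strongly chaotic regime
u = F + 1e-3 * np.random.default_rng(0).standard_normal(J)
for _ in range(5000):                 # spin up onto the attractor
    u = rk4_step(u, F, dt)
```

The trajectory stays inside a bounded region (the "absorbing ball"), which is why catastrophic filter divergence beyond machine infinity is so striking.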
The "poor man's" Climatological Stochastic Model (CSM):

◮ Fourier coefficients of the normalized L-96 [MAG05]:
dû_k(t) = [(−d_k + iω_k) û_k(t) + E_p^{−1}(F − ū) δ_{k,0}] dt + nonlinear terms.
◮ Replace the nonlinearity with an Ornstein-Uhlenbeck process:
dû_k(t) = [(−γ_k + iω_k) û_k(t) + E_p^{−1}(F − ū) δ_{k,0}] dt + σ_k dW_k(t).
◮ Fit the damping coefficient γ_k and stochastic noise strength σ_k to the equilibrium variance and correlation time.
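The fit in the last bullet has a closed form for an OU process: with equilibrium variance E_k and correlation time T_k, the stationary statistics give γ_k = 1/T_k and σ_k² = 2 E_k γ_k. A hedged sketch (conventions for complex noise normalization vary; this assumes the standard real-OU relations per component):

```python
# Hedged sketch of the CSM calibration: match the OU mode
#   d u_k = (-gamma_k + i omega_k) u_k dt + sigma_k dW_k
# to a measured equilibrium variance E_k and correlation time T_k,
# using the stationary OU relations E_k = sigma_k^2 / (2 gamma_k),
# T_k = 1 / gamma_k.
import numpy as np

def fit_ou(E_k, T_k):
    gamma_k = 1.0 / T_k
    sigma_k = np.sqrt(2.0 * E_k * gamma_k)
    return gamma_k, sigma_k
```

This is the sense in which the CSM is "climatological": only two equilibrium statistics per mode are needed, with no tuning against the filter itself.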
Equilibrium Variance and Correlation Time

[Figures: rescaled variance spectrum and rescaled correlation time versus wavenumber for F = 0 (no damping), F = 6, F = 8, and F = 16.]
Regularly spaced sparse observations, weakly chaotic regime: F = 6, P = 2, r^o = 1.96, Δt = 0.234. This is a regime where EAKF with the true model is superior.

Perfect model    RMS    corr.
EAKF true        0.82   0.95
ETKF true        ∞      -
No filter        2.8    -

Model error      RMS    corr.
EAKF CSM         2.20   0.64
ETKF CSM         2.50   0.55
FDKF CSM         2.07   0.69

[Figures: filtered solutions in physical space for EAKF true, EAKF CSM, ETKF CSM, and FDKF CSM.]
Regularly spaced sparse observations, fully turbulent regime: F = 16, P = 2, r^o = 0.81, Δt = 0.078. This is a regime where FDKF is superior.

Perfect model    RMS    corr.
EAKF true        ∞      -
ETKF true        ∞      -
No filter        6.3    -

Model error      RMS    corr.
EAKF CSM         5.15   0.61
ETKF CSM         5.80   0.54
FDKF CSM         4.80   0.66

[Figures: filtered solutions in physical space for FDKF CSM, EAKF true, EAKF CSM, and ETKF CSM.]
Summary of Part II:

◮ We demonstrate that in the fully turbulent regime a perfect model is not necessarily needed for filtering. In our example, the perfect-model ensemble filter suffers an ensemble collapse that yields filter divergence beyond machine infinity.
◮ In the presence of model errors through the CSM, our reduced filtering strategy produces better solutions.
◮ Practically, our radical strategy is independent of tunable parameters, SVD, and ensemble size.
Online Model Error Estimation Strategy

The simplest contemporary strategy to cope with model errors when filtering a nonlinear dynamical system that depends on parameters λ,

du/dt = F(u, λ),

is to augment the state variable u by the parameters λ and adjoin an approximate dynamical equation for the parameters,

dλ/dt = g(λ).
Climatological Stochastic Model:

du(t) = [(−γ̄ + iω) u(t) + F(t)] dt + σ dW(t).

Nonlinear Extended Kalman Filter (NEKF):

du(t) = [(−γ(t) + iω) u(t) + F(t) + b(t)] dt + σ dW(t),
db(t) = (−γ_b + iω_b) b(t) dt + σ_b dW_b(t),
dγ(t) = −d_γ (γ(t) − γ̂) dt + σ_γ dW_γ(t).

We find stochastic parameters {γ_b, ω_b, σ_b, d_γ, σ_γ} that are robust, with high filter skill beyond the CSM and on many occasions comparable to the perfect model.
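The augmented system above can be simulated directly with Euler-Maruyama to see how the bias b(t) and damping γ(t) evolve alongside u(t). A hedged sketch: all parameter values below are illustrative, not the ones tuned in the slides, and noise increments are taken real for brevity:

```python
# Sketch: Euler-Maruyama simulation of the augmented (u, b, gamma) system
# underlying NEKF. Parameter values are illustrative placeholders.
import numpy as np

def nekf_model_step(u, b, gam, p, dt, rng):
    dWu, dWb, dWg = rng.standard_normal(3) * np.sqrt(dt)
    u_new = u + ((-gam + 1j * p["omega"]) * u + p["F"] + b) * dt + p["sigma"] * dWu
    b_new = b + (-p["gamma_b"] + 1j * p["omega_b"]) * b * dt + p["sigma_b"] * dWb
    gam_new = gam - p["d_gamma"] * (gam - p["gamma_hat"]) * dt + p["sigma_gamma"] * dWg
    return u_new, b_new, gam_new

p = {"omega": 1.0, "F": 0.1, "sigma": 0.1, "gamma_b": 0.5, "omega_b": 1.0,
     "sigma_b": 0.1, "d_gamma": 1.0, "gamma_hat": 1.0, "sigma_gamma": 0.1}
rng = np.random.default_rng(1)
u, b, gam = 0.0 + 0.0j, 0.0 + 0.0j, p["gamma_hat"]
for _ in range(2000):                 # dt = 0.01, total time T = 20
    u, b, gam = nekf_model_step(u, b, gam, p, 0.01, rng)
```

The mean reversion of γ(t) toward γ̂ keeps the damping mostly positive, so the augmented model remains stable while still allowing intermittent excursions.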
Nature Signals for the Unforced and Forced Cases

[Figures: Re(u(t)) and γ(t) for the unforced and forced systems.]
One-mode demonstration of the filtered solution

[Figures: true signal, observations, and posterior mean for Δt = 0.25, r^o = E = 0.008: perfect model, RMS = 0.042; NEKF-C with d_γ = 0.01 d̄, σ_γ = 5σ, γ_b = 0.1 d̄, σ_b = 5σ, RMS = 0.052; MSMD, RMS = 0.14.]
One-mode demonstration of the filtered solution (parameter estimates)

[Figures: NEKF-C with d_γ = 0.01 d̄, σ_γ = 5σ, γ_b = 0.1 d̄, σ_b = 5σ: estimated Re(b(t)) and γ(t), true signal versus posterior mean, RMS γ = 0.7.]
A turbulent system: the externally forced barotropic Rossby wave equation, with instability through intermittently negative damping.

[Figures: the nature run u(x, t) and the fluctuating damping coefficient for modes 3-5, over days 240-270.]
Incorrectly specified forcing, with only 15 observations of 105 grid points.

[Figures: (a) RMS error and (b) energy spectrum versus wavenumber k for the nature run, NEKF, perfect model, and CSM.]
References:

1. Castronovo, Harlim, and Majda, "Mathematical test criteria for filtering complex systems: Plentiful observations", J. Comp. Phys., 227(7), 3678-3714, 2008.
2. Harlim and Majda, "Mathematical test criteria for filtering complex systems: Regularly sparse observations", J. Comp. Phys., 227(10), 5304-5341, 2008.
3. Harlim and Majda, "Filtering nonlinear dynamical systems with linear stochastic models", Nonlinearity, 21(6), 1281-1306, 2008.
4. Harlim and Majda, "Catastrophic filter divergence in filtering nonlinear dissipative systems", to appear in Comm. Math. Sci., 2009.
5. Gershgorin, Harlim, and Majda, "Test models for improving filtering with model errors through stochastic parameter estimation", submitted to J. Comp. Phys., 2009.
6. Gershgorin, Harlim, and Majda, "Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation", submitted to J. Comp. Phys., 2009.
7. Majda and Harlim, "Systematic strategies for real time filtering of turbulent signals in complex systems", in preparation, 2010.
Latest results on a two-layer QG model (observing 36 uniformly distributed grid points)

Harlim and Majda, "Filtering Turbulent Sparsely Observed Geophysical Flows", submitted to Monthly Weather Review, 2009.

[Figures: RMS errors versus time for NEKF, MSM1, MSM2, and LLS-EAKF with 36 observations (F = 4, T_obs = 0.25, r^o = 0.17113, K = 48, r = 0.2, L = 14); energy spectra per mode for NEKF, MSM1, MSM2, LLS-EAKF, and the truth; snapshots at T = 2500 of NEKF, CSM, the truth, and LLS-EAKF.]