Adaptive covariance inflation in the EnKF by Gaussian scale mixtures
Patrick N. Raanes, Marc Bocquet, Alberto Carrassi
patrick.n.raanes@gmail.com
NERSC
EnKF Workshop, Bergen, May 2018
Questions answered in paper:
- Nonlinear models cause sampling error. Why and how? Can it be dissociated from non-Gaussianity? Does the inherent bias $\mathbb{E}[\mathrm{tr}(\bar{P}^a)] < \mathrm{tr}(P^a)$ cause collapse? Divergence?
- Other reasons for inflating in nonlinear contexts.
- Linear models attenuate sampling error. How? Is the covariance factor $1/(N-1)$ optimal?
- How does localization affect inflation?
- How should inflation be defined as a parameter, rather than just a target statistic?
- How does the feedback of the EnKF-N compare to "unbiased" updates?
- What is the bias of the estimator $\hat{\beta}_R$? Why is it better than $\hat{\beta}_I$ or $\hat{\beta}_{\mathrm{ML}}$?
Outline:
1. Idealistic contexts (with sampling error): revisiting the EnKF assumptions $\Rightarrow$ Gaussian scale mixture (EnKF-N).
2. With model error: survey of inflation estimation (ETKF-adaptive, EAKF-adaptive); the EnKF-N hybrid; benchmarks.
Assume $\mathcal{M}$, $H$, $Q$, $R$ are perfectly known, and $p(x)$ and $p(y|x)$ are always Gaussian.

Denote $y_{\text{prior}}$ all prior information on the "true" state, $x \in \mathbb{R}^M$, and suppose that, with known mean ($b$) and covariance ($B$),
$$ p(x \mid y_{\text{prior}}) = \mathcal{N}(x \mid b, B) \,. \qquad (1) $$
Computational costs induce the ensemble approximation
$$ p(x \mid y_{\text{prior}}) \approx p(x \mid E) \,, \quad E = [x_1, \ldots, x_n, \ldots, x_N] \,, $$
$\Rightarrow$ the "true" moments, $b$ and $B$, are unknowns, to be estimated from $E$.
But
$$ p(x \mid E) = \iint \mathcal{N}(x \mid b, B)\, p(b, B \mid E)\, \mathrm{d}b\, \mathrm{d}B \,. \qquad (2) $$
Recover the standard EnKF by assuming $N = \infty$, so that $p(b, B \mid E) = \delta(b - \bar{x})\, \delta(B - \bar{B})$, where
$$ \bar{x} = \frac{1}{N} \sum_{n=1}^{N} x_n \,, \qquad \bar{B} = \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \bar{x})(x_n - \bar{x})^{\mathsf{T}} \,. \qquad (3) $$
The EnKF-N does not make this approximation.
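As a concrete anchor for this notation, here is a minimal numpy sketch (my own illustration, not from the paper) of the sample moments in eq. (3), for an ensemble matrix whose columns are the members:

```python
import numpy as np

def ensemble_moments(E):
    """Sample mean and covariance, eq. (3), of an M-by-N ensemble
    matrix E whose columns are the members x_n."""
    N = E.shape[1]
    x_bar = E.mean(axis=1)            # mean over members
    A = E - x_bar[:, None]            # anomalies x_n - x_bar
    B_bar = A @ A.T / (N - 1)         # unbiased sample covariance
    return x_bar, B_bar

# Example: M = 4 state variables, N = 20 members.
rng = np.random.default_rng(42)
x_bar, B_bar = ensemble_moments(rng.standard_normal((4, 20)))
```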
Prior (with $\varepsilon_N = 1 + 1/N$):
$$ p(x \mid E) = \iint \mathcal{N}(x \mid b, B)\, p(b, B \mid E)\, \mathrm{d}b\, \mathrm{d}B \qquad (4) $$
$$ \propto \int \mathcal{N}(x - \bar{x} \mid 0, \alpha\, \varepsilon_N \bar{B})\, p(\alpha \mid E)\, \mathrm{d}\alpha \qquad (5) $$
$$ \propto \mathcal{N}(x \mid \bar{x}, \alpha(x)\, \varepsilon_N \bar{B})\, \tilde{p}(\alpha(x) \mid E) \qquad (6) $$
$$ \propto \left[ 1 + \tfrac{1}{N-1} \lVert x - \bar{x} \rVert^2_{\varepsilon_N \bar{B}} \right]^{-N/2} . \qquad (7) $$
Posterior:
$$ p(x \mid E, y) \propto p(x \mid E)\, \mathcal{N}(y \mid Hx, R) \,. \qquad (8) $$
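To make (7) concrete: a hedged numpy sketch (my own illustration, assuming the Mahalanobis convention $\lVert v \rVert^2_C = v^{\mathsf{T}} C^{-1} v$ and $\varepsilon_N = 1 + 1/N$) that evaluates the unnormalized prior at a point:

```python
import numpy as np

def enkf_n_prior(x, x_bar, B_bar, N):
    """Unnormalized EnKF-N prior, eq. (7):
    [1 + ||x - x_bar||^2_{eps_N * B_bar} / (N-1)]^(-N/2).
    Assumes B_bar is invertible (e.g. N > M)."""
    eps_N = 1 + 1/N                                  # assumed convention
    v = x - x_bar
    maha = v @ np.linalg.solve(eps_N * B_bar, v)     # Mahalanobis norm
    return (1 + maha / (N - 1)) ** (-N / 2)
```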
[Figure: prior, likelihood, and posterior pdfs over the scale variable $\lambda$.]

Prior: $p(\alpha \mid E) = \chi^{-2}(\alpha \mid 1, N-1)$.
Likelihood: $p(x_\star, y \mid \alpha, E) \propto \exp\!\left( -\tfrac{1}{2} \lVert y - H\bar{x} \rVert^2_{\alpha \varepsilon_N H \bar{B} H^{\mathsf{T}} + R} \right)$.
$\Rightarrow$ Posterior: $p(x_\star, \alpha \mid y, E) \propto \exp\!\left( -\tfrac{1}{2} D(\alpha) \right)$.
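For reference, the prior $\chi^{-2}(\alpha \mid 1, N-1)$ is easy to evaluate with scipy, assuming $\chi^{-2}(\cdot \mid \tau^2, \nu)$ denotes the scaled inverse chi-squared distribution with scale $\tau^2$ and $\nu$ degrees of freedom (equivalently, `invgamma(a=nu/2, scale=nu*tau2/2)`):

```python
import numpy as np
from scipy import stats

N = 20
nu = N - 1                                  # degrees of freedom
# Scale-Inv-chi2(nu, tau2=1) equals invgamma(a=nu/2, scale=nu/2):
prior = stats.invgamma(a=nu/2, scale=nu/2)

alpha = np.linspace(0.5, 2.0, 100)
pdf = prior.pdf(alpha)                      # p(alpha|E) = chi^-2(alpha|1, N-1)
print(prior.mean())                         # = nu/(nu-2), about 1.12 for N = 20
```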
Even with a perfect model, Gaussian forecasts, and a deterministic EnKF, "sampling error" arises for $N < \infty$ due to nonlinearity, and inflation is necessary. Not assuming $\bar{B} = B$ as in the EnKF leads to a Gaussian scale mixture. This leads to an adaptive inflation scheme, nullifying the need to tune the inflation factor, and yielding very strong benchmarks in idealistic settings. It is also an excellent training ground for EnKF theory, especially general-purpose inflation estimation.
Because all models are wrong.
Suppose $x_n \sim \mathcal{N}(b, B/\beta)$, and $N = \infty$. Then there is no mixture, but simply
$$ p(x \mid \beta, E) = \mathcal{N}(x \mid \bar{x}, \beta \bar{B}) \,. \qquad (9) $$
Recall $p(y \mid x) = \mathcal{N}(y \mid Hx, R)$. Then
$$ p(y \mid \beta) = \mathcal{N}(y \mid H\bar{x}, \bar{C}(\beta)) = \mathcal{N}(\bar{\delta} \mid 0, \bar{C}(\beta)) \,, $$
where $\bar{C}(\beta) = \beta H \bar{B} H^{\mathsf{T}} + R$ and $\bar{\delta} = y - H\bar{x}$. $\qquad (10)$
Again,
$$ p(y \mid \beta) = \mathcal{N}(\bar{\delta} \mid 0, \bar{C}(\beta)) \,, \qquad (11) $$
where
$$ \bar{C}(\beta) = \beta H \bar{B} H^{\mathsf{T}} + R \approx \bar{\delta} \bar{\delta}^{\mathsf{T}} \,, \qquad (12) $$
"yielding" (Wang and Bishop, 2003)
$$ \hat{\beta}_R = \frac{\lVert \bar{\delta} \rVert^2_R / P - 1}{\bar{\sigma}^2} \,, $$
where $P = \mathrm{length}(y)$ and $\bar{\sigma}^2 = \mathrm{tr}(H \bar{B} H^{\mathsf{T}} R^{-1}) / P$.
Also considered: $\hat{\beta}_I$, $\hat{\beta}_{H\bar{B}H^{\mathsf{T}}}$, $\hat{\beta}_{\bar{C}(1)}$, ML, VB (EM).
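A minimal sketch of this estimator (my own illustration, taking $\lVert \bar{\delta} \rVert^2_R$ to mean $\bar{\delta}^{\mathsf{T}} R^{-1} \bar{\delta}$):

```python
import numpy as np

def beta_R(delta, H, B_bar, R):
    """Innovation-based inflation estimate (cf. Wang and Bishop, 2003):
    beta_R = (||delta||^2_R / P - 1) / sigma2,
    with sigma2 = tr(H B_bar H^T R^-1) / P and delta = y - H x_bar."""
    P = len(delta)
    R_inv = np.linalg.inv(R)
    maha = delta @ R_inv @ delta                      # squared R-norm of innovation
    sigma2 = np.trace(H @ B_bar @ H.T @ R_inv) / P
    return (maha / P - 1) / sigma2
```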
Assume $H \bar{B} H^{\mathsf{T}} \propto R$. The likelihood $p(y \mid \beta) = \mathcal{N}(\bar{\delta} \mid 0, \bar{C}(\beta))$ becomes
$$ p(y \mid \beta) \propto \chi^{+2}\!\left( \lVert \bar{\delta} \rVert^2_R / P \,\middle|\, 1 + \bar{\sigma}^2 \beta,\, P \right) . \qquad (13) $$
Surprise: $\operatorname{argmax}_\beta\, p(y \mid \beta) = \hat{\beta}_R$.
A further approximation is fitted:
$$ p(y \mid \beta) \approx \chi^{+2}(\hat{\beta}_R \mid \beta, \hat{\nu}) \,. \qquad (14) $$
Likelihood (14) fits the mode of (13). Fitting the curvature $\Rightarrow \hat{\nu} \Rightarrow$ the same variance as in Miyoshi (2011)!
Likelihood (14) is conjugate to $p(\beta) = \chi^{-2}(\beta \mid \beta^f, \nu^f)$, yielding
$$ \nu^a = \nu^f + \hat{\nu} \,, \qquad (15) $$
$$ \beta^a = (\nu^f \beta^f + \hat{\nu} \hat{\beta}_R) / \nu^a \,, \qquad (16) $$
again as in Miyoshi (2011).
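The conjugate update (15)-(16) is just a weighted pooling; a one-function sketch (my own illustration):

```python
def update_inflation(beta_f, nu_f, beta_R_hat, nu_hat):
    """Conjugate chi^-2 update of the inflation factor, eqs. (15)-(16),
    as in Miyoshi (2011): pool the prior (beta_f, nu_f) with the
    likelihood fit (beta_R_hat, nu_hat) in proportion to their weights."""
    nu_a = nu_f + nu_hat                                    # (15)
    beta_a = (nu_f * beta_f + nu_hat * beta_R_hat) / nu_a   # (16)
    return beta_a, nu_a

# Example: a confident prior beta_f = 1.0 (nu_f = 100) meets an estimate
# beta_R_hat = 1.3 carrying weight nu_hat = 10, giving beta_a ~ 1.03.
beta_a, nu_a = update_inflation(1.0, 100, 1.3, 10)
```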
Anderson (2007) assigns a Gaussian prior:
$$ p(\beta) = \mathcal{N}(\beta \mid \beta^f, V^f) \,, \qquad (17) $$
and fits the posterior by a "Gaussian":
$$ p(\beta \mid y_i) \approx \mathcal{N}(\beta \mid \hat{\beta}_{\mathrm{MAP}}, V^a) \,, \qquad (18) $$
where $\hat{\beta}_{\mathrm{MAP}}$ and $V^a$ are fitted using the exact posterior ("easy" by virtue of the serial update). Gharamti (2017) improves on this via $\chi^{-2}$ and $\chi^{+2}$ (Gamma) distributions.
Use two inflation factors, $\alpha$ and $\beta$, dedicated to sampling error and model error, respectively. For $\beta$, pick the simplest (and approximately the best) scheme: $\hat{\beta}_R$.

Algorithm (sketched in code below):
1. Find $\beta$ (via $\hat{\beta}_R$).
2. Find $\alpha$ given $\beta$ (via the EnKF-N).

Potential improvements:
- Determining $(\alpha, \beta)$ jointly (simultaneously).
- Fitting posterior parameters rather than likelihood parameters (similarly to the EAKF approach).
- Matching moments via quadrature.
- Non-parametric (grid- or MC-based) estimation.
- De-biasing $\hat{\beta}_R$.

Testing these "improvements" did not yield significant gains.
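A schematic of one cycle of the hybrid, as pseudo-Python (my own sketch, reusing `beta_R` and `update_inflation` from the snippets above; `enkf_n_analysis` is a hypothetical placeholder for an EnKF-N analysis routine, not the paper's code):

```python
import numpy as np

def hybrid_cycle(E, y, H, R, beta_f, nu_f, nu_hat=10):
    """One analysis cycle of the hybrid scheme (schematic). Step 1
    estimates the model-error inflation beta via beta_R and (15)-(16);
    step 2 hands the beta-inflated ensemble to the EnKF-N, which
    handles the sampling-error inflation alpha internally."""
    x_bar = E.mean(axis=1)
    A = E - x_bar[:, None]
    B_bar = A @ A.T / (E.shape[1] - 1)

    delta = y - H @ x_bar
    beta_a, nu_a = update_inflation(beta_f, nu_f,
                                    beta_R(delta, H, B_bar, R), nu_hat)

    E_inflated = x_bar[:, None] + np.sqrt(beta_a) * A  # apply beta to anomalies
    E_analysis = enkf_n_analysis(E_inflated, y, H, R)  # hypothetical routine
    return E_analysis, beta_a, nu_a
```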
Evolution:
$$ \frac{\mathrm{d}x_i}{\mathrm{d}t} = \psi^+_i(x) + F - \frac{hc}{b} \sum_{j=1}^{10} z_{j+10(i-1)} \,, \quad i = 1, \ldots, 36, $$
$$ \frac{\mathrm{d}z_j}{\mathrm{d}t} = \frac{c}{b} \psi^-_j(bz) + 0 + \frac{hc}{b} x_{1+\lfloor (j-1)/10 \rfloor} \,, \quad j = 1, \ldots, 360, $$
where $\psi_i$ is the single-layer Lorenz-96 dynamics.

[Figure: example snapshots of the slow layer (sites 1 to 36) and the fast layer (sites 1 to 360).]

$$ \mathrm{RMSE} = \frac{1}{T} \sum_{t=1}^{T} \left( \frac{1}{M} \lVert \bar{x}_t - x_t \rVert_2^2 \right)^{1/2} . $$

$N = 20$, no localization.
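A minimal numpy sketch of the coupled tendencies (my own illustration; the cyclic forms of $\psi^\pm$ and the parameter values $h$, $c$, $b$ follow the common two-scale Lorenz-96 convention and are assumptions here):

```python
import numpy as np

nx, nz, F, h, c, b = 36, 360, 10.0, 1.0, 10.0, 10.0

def psi_plus(v):
    """Single-layer Lorenz-96 tendency: (v_{i+1} - v_{i-2}) v_{i-1} - v_i."""
    return (np.roll(v, -1) - np.roll(v, 2)) * np.roll(v, 1) - v

def psi_minus(v):
    """Mirror-image advection for the fast layer:
    (v_{j-1} - v_{j+2}) v_{j+1} - v_j."""
    return (np.roll(v, 1) - np.roll(v, -2)) * np.roll(v, -1) - v

def dxdt_dzdt(x, z):
    """Tendencies of the coupled system: each slow site x_i is coupled
    to the 10 fast sites z_{10(i-1)+1}, ..., z_{10 i} beneath it."""
    coupling = z.reshape(nx, 10).sum(axis=1)
    dx = psi_plus(x) + F - (h * c / b) * coupling
    dz = (c / b) * psi_minus(b * z) + (h * c / b) * np.repeat(x, 10)
    return dx, dz
```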
[Figure: inflation factor, RMS error, and RMS spread over DA cycles $k$ = 2500 to 3000, shown for ETKF tuned, EAKF adaptive, ETKF adaptive, and EnKF-N hybrid.]
[Figure: RMSE versus forcing $F$ (5 to 30, both for truth and DA), comparing ETKF tuned, ETKF excessive, EAKF adaptive, ETKF adaptive, and EnKF-N hybrid.]
Paper highlights:
- A cataloguing of reasons to inflate.
- An inflation-centric re-derivation of the dual EnKF-N.
- A formal survey of adaptive inflation methods.
- A simple hybrid of the EnKF-N and $\hat{\beta}_R$, which is shown to systematically (but moderately) improve filter accuracy (no re-tuning!).
https://github.com/nansencenter/DAPPER

Related work:
- 2011: Marc Bocquet. Ensemble Kalman filtering without the intrinsic need for inflation. Nonlinear Processes in Geophysics.
- 2012: Marc Bocquet and Pavel Sakov. Combining inflation-free and iterative ensemble Kalman filters for strongly nonlinear systems. Nonlinear Processes in Geophysics.
- 2015: Marc Bocquet, Patrick N. Raanes, and Alexis Hannart. Expanding the validity of the ensemble Kalman filter without the intrinsic need for inflation. Nonlinear Processes in Geophysics.
- 2018: Patrick N. Raanes, Marc Bocquet, and Alberto Carrassi. Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures. QJRMS (minor revision), arxiv.org/abs/1801.08474.

References:
- Jeffrey L. Anderson. An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus A, 59(2):210–224, 2007.
- Mohamad El Gharamti. Enhanced adaptive inflation algorithm for ensemble filters. Monthly Weather Review, in review, 2017.
- Takemasa Miyoshi. The Gaussian approach to adaptive covariance inflation and its implementation with the local ensemble transform Kalman filter. Monthly Weather Review, 139(5):1519–1535, 2011.
- Xuguang Wang and Craig H. Bishop. A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. Journal of the Atmospheric Sciences, 60(9):1140–1158, 2003.
Instead, we assign the Jeffreys (hyper)prior:
$$ p(b, B) \propto p(B) \propto |B|^{-(M+1)/2} \,, \qquad (19) $$
and recall the likelihood:
$$ p(E \mid b, B) \propto \prod_{n=1}^{N} \mathcal{N}(x_n \mid b, B) \,, \qquad (20) $$
yielding
$$ p(b, B \mid E) = \mathcal{N}(b \mid \bar{x}, B/N)\; \mathcal{W}^{-1}(B \mid \bar{B}, N-1) \,, \qquad (21) $$
where $\mathcal{W}^{-1}$ is the inverse-Wishart distribution (cf. Table 2).
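As an illustration (not the paper's code), one can sample $(b, B)$ from (21) hierarchically. This sketch assumes the convention that $\mathcal{W}^{-1}(B \mid \bar{B}, \nu)$ has $\nu$ degrees of freedom and scale matrix $\nu \bar{B}$, which maps onto scipy's `invwishart(df=nu, scale=nu * B_bar)`; scipy also requires $\nu \geq M$:

```python
import numpy as np
from scipy.stats import invwishart

def sample_b_B(x_bar, B_bar, N, rng):
    """Draw (b, B) from the posterior (21):
    B ~ W^-1(B_bar, N-1), then b | B ~ N(x_bar, B/N).
    Assumes W^-1(B_bar, nu) maps to invwishart(df=nu, scale=nu*B_bar),
    and that N - 1 >= M so the density is proper."""
    nu = N - 1
    B = invwishart.rvs(df=nu, scale=nu * B_bar, random_state=rng)
    b = rng.multivariate_normal(x_bar, B / N)
    return b, B

rng = np.random.default_rng(0)
b, B = sample_b_B(np.zeros(3), np.eye(3), N=20, rng=rng)
```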
[Figure: RMSE versus speed-scale ratio $c$ ($10^0$ to $10^1$, both for truth and DA), comparing ETKF tuned, ETKF excessive, EAKF adaptive, ETKF adaptive, and EnKF-N hybrid.]
[Figure: RMSE versus true forcing $F$ (7 to 9) while the DA assumes $F = 8$, comparing the same five filters.]
[Figure: RMSE versus magnitude $Q_{ii}/\Delta$ of the noise on the truth ($10^{-4}$ to $10^1$), comparing the same five filters.]