Adaptive covariance inflation in the EnKF by Gaussian scale mixtures (PowerPoint presentation transcript)


SLIDE 1

Adaptive covariance inflation in the EnKF by Gaussian scale mixtures

[Figure: a pdf plotted against x]

Patrick N. Raanes, Marc Bocquet, Alberto Carrassi

patrick.n.raanes@gmail.com

NERSC

EnKF Workshop, Bergen, May 2018

SLIDE 2

Questions ∼answered in the paper

• Nonlinear models cause sampling error. Why and how? Can it be dissociated from non-Gaussianity?
• Does the inherent bias $\mathbb{E}[\operatorname{tr}(\bar{P}^a)] < \operatorname{tr}(P^a)$ cause collapse? Divergence?
• Other reasons for inflating in nonlinear contexts.
• Linear models attenuate sampling error. How? Is the covariance factor $\frac{1}{N-1}$ optimal?
• How does localization affect inflation?
• How should inflation be defined as a parameter, rather than just a target statistic?
• How does the feedback of the EnKF-N compare to "unbiased" updates?
• What is the bias of the estimator $\hat{\beta}_R$? Why is it better than $\hat{\beta}_I$ or $\hat{\beta}_{\mathrm{ML}}$?


SLIDE 17

Outline

Idealistic contexts (with sampling error):
• Revisiting the EnKF assumptions ⇒ Gaussian scale mixture (EnKF-N)

With model error:
• Survey of inflation estimation (ETKF-adaptive, EAKF-adaptive)
• EnKF-N hybrid
• Benchmarks


SLIDE 23

Idealistic contexts (EnKF-N)

Assume M, H, Q, R are perfectly known, and p(x) and p(y|x) are always Gaussian.

SLIDE 24

EnKF

SLIDE 25

Revisiting EnKF assumptions

Denote $y^{\mathrm{prior}}$ all prior information on the "true" state, $x \in \mathbb{R}^M$, and suppose that, with known mean ($b$) and covariance ($B$),
$$p(x \mid y^{\mathrm{prior}}) = \mathcal{N}(x \mid b, B)\,. \quad (1)$$

Computational costs induce the approximation
$$p(x \mid y^{\mathrm{prior}}) \approx p(x \mid E) = \int \mathcal{N}(x \mid b, B)\, p(b, B \mid E)\, \mathrm{d}b\, \mathrm{d}B\,,$$
⇒ the "true" moments, $b$ and $B$, are unknowns, to be estimated from the ensemble $E = \{x_1, \ldots, x_n, \ldots, x_N\}$, which is also drawn from (1), iid.
SLIDE 34

EnKF prior

But
$$p(x \mid E) = \int_{\mathbb{B}} \int_{\mathbb{R}^M} \mathcal{N}(x \mid b, B)\, p(b, B \mid E)\, \mathrm{d}b\, \mathrm{d}B\,. \quad (2)$$

The standard EnKF is recovered by assuming $N = \infty$, so that $p(b, B \mid E) = \delta(b - \bar{x})\,\delta(B - \bar{B})$, where
$$\bar{x} = \frac{1}{N} \sum_{n=1}^{N} x_n\,, \qquad \bar{B} = \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \bar{x})(x_n - \bar{x})^{\mathsf{T}}\,. \quad (3)$$

The EnKF-N does not make this approximation.
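As a concrete aside (not from the talk), the ensemble moments of equation (3) amount to a few lines of NumPy; the (N, M) array layout is an assumption:

```python
import numpy as np

def ensemble_moments(E):
    """Sample mean and covariance (eq. 3) of an ensemble E of shape (N, M):
    N members, each a state vector of dimension M."""
    N = E.shape[0]
    x_bar = E.mean(axis=0)         # ensemble mean, shape (M,)
    A = E - x_bar                  # anomalies, shape (N, M)
    B_bar = A.T @ A / (N - 1)      # sample covariance, shape (M, M)
    return x_bar, B_bar

# Example: a 20-member ensemble in a 36-dimensional state space
E = np.random.default_rng(0).standard_normal((20, 36))
x_bar, B_bar = ensemble_moments(E)
```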


SLIDE 37

EnKF-N via scale mixture

Prior:
$$p(x \mid E) = \int \mathcal{N}(x \mid b, B)\, p(b, B \mid E)\, \mathrm{d}b\, \mathrm{d}B \quad (4)$$
$$\propto \int_{\alpha>0} \mathcal{N}\!\big(x - \bar{x} \mid 0,\ \alpha\, \varepsilon_N \bar{B}\big)\, p(\alpha \mid E)\, \mathrm{d}\alpha \quad (5)$$
$$\propto \mathcal{N}\!\big(x \mid \bar{x},\ \alpha(x)\, \bar{B}\big)\ \tilde{p}\big(\alpha(x) \mid E\big) \quad (6)$$
$$\propto \Big(1 + \tfrac{1}{N-1}\, \|x - \bar{x}\|^2_{\varepsilon_N \bar{B}}\Big)^{-N/2} \quad (7)$$

Posterior:
$$p(x \mid E, y) \propto p(x \mid E)\ \mathcal{N}(y \mid Hx, R)\,. \quad (8)$$
SLIDE 42

Mixing distributions – $p(\alpha \mid \ldots)$

[Figure: prior, likelihood, and posterior pdfs plotted against λ]

Prior: $p(\alpha \mid E) = \chi^{-2}(\alpha \mid 1, N-1)$.
Likelihood: $p(x^\star, y \mid \alpha, E) \propto \exp\!\big({-\tfrac{1}{2}}\, \|y - H\bar{x}\|^2_{\alpha \varepsilon_N H\bar{B}H^{\mathsf{T}} + R}\big)$.
⇒ Posterior: $p(x^\star, \alpha \mid y, E) \propto \exp\!\big({-\tfrac{1}{2}}\, D(\alpha)\big)$.
slide-43
SLIDE 43

Summary – Perfect model scenario

Even with a perfect model, Gaussian forecasts, and a deterministic EnKF, “sampling error” arises for N < ∞ due to nonlinearity, and inflation is necessary. Not assuming ¯ B = B as in the EnKF leads to a Gaussian scale mixture. This leads to an adaptive inflation scheme, nullifying the need to tune the inflation factor, and yielding very strong benchmarks in idealistic settings. Excellent training for EnKF theory. Especially general-purpose inflation estimation.


SLIDE 49

With model error

Because all models are wrong.

SLIDE 50

Fundamentals

Suppose $x_n \sim \mathcal{N}(b, B/\beta)$, and $N = \infty$. Then there is no mixture, but simply
$$p(x \mid \beta, E) = \mathcal{N}(x \mid \bar{x}, \beta \bar{B})\,. \quad (9)$$

Recall $p(y \mid x) = \mathcal{N}(y \mid Hx, R)$. Then
$$p(y \mid \beta) = \mathcal{N}\big(y \mid H\bar{x}, \bar{C}(\beta)\big) = \mathcal{N}\big(\bar{\delta} \mid 0, \bar{C}(\beta)\big)\,,$$
where $\bar{C}(\beta) = \beta H\bar{B}H^{\mathsf{T}} + R$ and $\bar{\delta} = y - H\bar{x}$. (10)
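As a sketch (not from the talk), the marginal likelihood (10) of β can be evaluated directly; the toy arrays below are stand-ins:

```python
import numpy as np

def beta_loglik(beta, delta, HBHt, R):
    """Log of N(δ̄ | 0, C̄(β)) with C̄(β) = β·H B̄ Hᵀ + R (eq. 10),
    up to an additive constant."""
    C = beta * HBHt + R
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + delta @ np.linalg.solve(C, delta))

# Toy example with P = 3 observations (all arrays are stand-ins)
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
HBHt, R = A @ A.T, np.eye(3)          # H B̄ Hᵀ (symmetric PSD) and R
delta = rng.standard_normal(3)        # innovation δ̄ = y − H x̄
betas = np.linspace(0.5, 3.0, 50)
ll = [beta_loglik(b, delta, HBHt, R) for b in betas]
beta_ml = betas[int(np.argmax(ll))]   # grid-based ML estimate of β
```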

SLIDE 58

ETKF adaptive inflation

Again,
$$p(y \mid \beta) = \mathcal{N}\big(\bar{\delta} \mid 0, \bar{C}(\beta)\big)\,, \quad (11)$$
where
$$\bar{C}(\beta) = \beta H\bar{B}H^{\mathsf{T}} + R \approx \bar{\delta}\bar{\delta}^{\mathsf{T}}\,. \quad (12)$$

"Yielding" (Wang and Bishop, 2003):
$$\hat{\beta}_R = \frac{\|\bar{\delta}\|^2_R / P - 1}{\bar{\sigma}^2}\,,$$
where $P = \operatorname{length}(y)$ and $\bar{\sigma}^2 = \operatorname{tr}(H\bar{B}H^{\mathsf{T}}R^{-1})/P$.

Also considered: $\hat{\beta}_I$, $\hat{\beta}_{H\bar{B}H^{\mathsf{T}}}$, $\hat{\beta}_{\bar{C}(1)}$, ML, VB (EM).
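A minimal sketch of the $\hat{\beta}_R$ estimate; reading the norm as $\|\bar{\delta}\|^2_R = \bar{\delta}^{\mathsf{T}} R^{-1} \bar{\delta}$ is an assumption, consistent with the weighted norms above:

```python
import numpy as np

def beta_hat_R(delta, HBHt, R):
    """Wang & Bishop (2003)-style estimate:
    β̂_R = (‖δ̄‖²_R / P − 1) / σ̄², with ‖δ̄‖²_R = δ̄ᵀ R⁻¹ δ̄ (assumed
    reading of the norm) and σ̄² = tr(H B̄ Hᵀ R⁻¹) / P."""
    P = len(delta)
    norm2 = delta @ np.linalg.solve(R, delta)         # ‖δ̄‖²_R
    sigma2 = np.trace(np.linalg.solve(R, HBHt)) / P   # σ̄²
    return (norm2 / P - 1.0) / sigma2
```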

SLIDE 65

Renouncing Gaussianity

Assume $H\bar{B}H^{\mathsf{T}} \propto R$. The likelihood $p(y \mid \beta) = \mathcal{N}\big(\bar{\delta} \mid 0, \bar{C}(\beta)\big)$ becomes
$$p(y \mid \beta) \propto \chi^{+2}\big(\|\bar{\delta}\|^2_R/P \,\big|\, (1 + \bar{\sigma}^2\beta),\, P\big)\,. \quad (13)$$

Surprise!!!: $\operatorname{argmax}_\beta\, p(y \mid \beta) = \hat{\beta}_R$.

A further approximation is fitted:
$$p(y \mid \beta) \approx \chi^{+2}(\hat{\beta}_R \mid \beta, \hat{\nu})\,. \quad (14)$$

Likelihood (14) fits the mode of (13). Fitting the curvature ⇒ $\hat{\nu}$ ⇒ same variance as in Miyoshi (2011)!!!

Likelihood (14) is conjugate to $p(\beta) = \chi^{-2}(\beta \mid \beta^f, \nu^f)$, yielding
$$\nu^a = \nu^f + \hat{\nu}\,, \quad (15)$$
$$\beta^a = (\nu^f \beta^f + \hat{\nu}\, \hat{\beta}_R)/\nu^a\,, \quad (16)$$
again, as in Miyoshi (2011).
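A sketch of the conjugate update (15)-(16); the numerical values in the usage line are illustrative only:

```python
def miyoshi_update(beta_f, nu_f, beta_hat, nu_hat):
    """Conjugate update, eqs. (15)-(16), as in Miyoshi (2011):
    pool the prior (β_f, ν_f) with the fitted likelihood (β̂_R, ν̂)."""
    nu_a = nu_f + nu_hat
    beta_a = (nu_f * beta_f + nu_hat * beta_hat) / nu_a
    return beta_a, nu_a

# Example: a confident prior near 1.0 nudged toward a noisy estimate of 1.3
beta_a, nu_a = miyoshi_update(beta_f=1.0, nu_f=50.0, beta_hat=1.3, nu_hat=5.0)
```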

SLIDE 76

EAKF adaptive inflation

Anderson (2007) assigns a Gaussian prior:
$$p(\beta) = \mathcal{N}(\beta \mid \beta^f, V^f)\,, \quad (17)$$
and fits the posterior by a "Gaussian":
$$p(\beta \mid y_i) \approx \mathcal{N}(\beta \mid \hat{\beta}_{\mathrm{MAP}}, V^a)\,, \quad (18)$$
where $\hat{\beta}_{\mathrm{MAP}}$ and $V^a$ are fitted using the exact posterior ("easy" by virtue of the serial update).

Gharamti (2017) improves on this via $\chi^{-2}$ and $\chi^{+2}$ (Gamma) distributions.

SLIDE 81

EnKF-N hybrid

Use two inflation factors, $\alpha$ and $\beta$, dedicated to sampling error and model error, respectively. For $\beta$, pick the simplest (and ∼best) scheme: $\hat{\beta}_R$.

Algorithm (sketched in code below):
1. Find $\beta$ (via $\hat{\beta}_R$).
2. Find $\alpha$ given $\beta$ (via the EnKF-N).

Potential improvements:
• Determining $(\alpha, \beta)$ jointly (simultaneously).
• Rather than fitting the likelihood parameters, fit posterior parameters (similarly to the EAKF).
• Matching moments via quadrature.
• Non-parametric (grid- or MC-based).
• De-biasing $\hat{\beta}_R$.

Testing these "improvements" did not yield significant gains.
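A schematic sketch of the two-step assignment; `enkf_n_alpha` is a hypothetical callback standing in for the EnKF-N inflation solve, not a function from the paper or DAPPER:

```python
import numpy as np

def hybrid_inflation(delta, HBHt, R, enkf_n_alpha):
    """Two-step hybrid: β from β̂_R (model error), then α given β via the
    EnKF-N (sampling error). The floor at 0 is a numerical guard, an
    assumption rather than something stated on the slide."""
    P = len(delta)
    norm2 = delta @ np.linalg.solve(R, delta)         # ‖δ̄‖²_R
    sigma2 = np.trace(np.linalg.solve(R, HBHt)) / P   # σ̄²
    beta = max((norm2 / P - 1.0) / sigma2, 0.0)       # β̂_R
    alpha = enkf_n_alpha(beta)                        # EnKF-N solve given β
    return alpha, beta
```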

SLIDE 94

Two-layer Lorenz-96

Evolution:
$$\frac{\mathrm{d}x_i}{\mathrm{d}t} = \psi^+_i(x) + F - \frac{hc}{b} \sum_{j=1}^{10} z_{j+10(i-1)}\,, \quad i = 1, \ldots, 36,$$
$$\frac{\mathrm{d}z_j}{\mathrm{d}t} = \frac{c}{b}\, \psi^-_j(bz) + 0 + \frac{hc}{b}\, x_{1+\lfloor (j-1)/10 \rfloor}\,, \quad j = 1, \ldots, 360,$$
where $\psi_i$ is the single-layer Lorenz-96 dynamics.

[Figure: example snapshots of the slow layer x (36 variables) and the fast layer z (360 variables)]

$$\mathrm{RMSE} = \frac{1}{T} \sum_{t=1}^{T} \Big( \frac{1}{M}\, \|\bar{x}_t - x_t\|_2^2 \Big)^{1/2}.$$

N = 20, no localization.
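A sketch of the tendencies above, under assumed parameter values (the h, b, c, F defaults are typical literature choices, not read off the slide) and the usual Lorenz-96 advection stencil for ψ, with a mirrored stencil assumed for ψ⁻:

```python
import numpy as np

def psi(v, reverse=False):
    """Lorenz-96 stencil ψ_i(v) = (v_{i+1} − v_{i−2})·v_{i−1} − v_i;
    the mirrored shifts for ψ⁻ (fast layer) are an assumption."""
    s = np.roll
    if not reverse:   # ψ⁺
        return (s(v, -1) - s(v, 2)) * s(v, 1) - v
    else:             # ψ⁻ (assumed mirror convention)
        return (s(v, 1) - s(v, -2)) * s(v, -1) - v

def two_layer_l96(x, z, F=10.0, h=1.0, b=10.0, c=10.0):
    """Tendencies of the two-layer system on the slide: 36 slow
    variables x, 360 fast variables z (10 per slow variable)."""
    dx = psi(x) + F - (h * c / b) * z.reshape(36, 10).sum(axis=1)
    dz = (c / b) * psi(b * z, reverse=True) + (h * c / b) * np.repeat(x, 10)
    return dx, dz

# One Euler step as a smoke test
rng = np.random.default_rng(3)
x, z = rng.standard_normal(36), 0.1 * rng.standard_normal(360)
dx, dz = two_layer_l96(x, z)
x, z = x + 0.005 * dx, z + 0.005 * dz
```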


SLIDE 96

Illustration of time series

[Figure: inflation, RMS error, and RMS spread over DA cycles 2500 to 3000, in four panels: ETKF tuned, EAKF adaptive, ETKF adaptive, EnKF-N hybrid]

SLIDE 97

Benchmarks

[Figure: RMSE (0.0 to 1.0) vs. forcing F (5 to 30), both for truth and DA; curves: ETKF tuned, ETKF excessive, EAKF adaptive, ETKF adaptive, EnKF-N hybrid]

SLIDE 103

Summary

Paper highlights:
• Cataloguing of reasons to inflate.
• Inflation-centric re-derivation of the dual EnKF-N.
• Formal survey of adaptive inflation methods.
• A simple hybrid of the EnKF-N and $\hat{\beta}_R$, which is shown to systematically (but moderately) improve filter accuracy (no re-tuning!).


SLIDE 109

References 1

https://github.com/nansencenter/DAPPER

• Marc Bocquet. Ensemble Kalman filtering without the intrinsic need for inflation. Nonlinear Processes in Geophysics, 2011.
• Marc Bocquet and Pavel Sakov. Combining inflation-free and iterative ensemble Kalman filters for strongly nonlinear systems. Nonlinear Processes in Geophysics, 2012.
• Marc Bocquet, Patrick N. Raanes, and Alexis Hannart. Expanding the validity of the ensemble Kalman filter without the intrinsic need for inflation. Nonlinear Processes in Geophysics, 2015.
• Patrick N. Raanes, Marc Bocquet, and Alberto Carrassi. Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures. QJRMS (minor revision), 2018. arxiv.org/abs/1801.08474

SLIDE 110

References 2

• Jeffrey L. Anderson. An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus A, 59(2):210–224, 2007.
• M. E. Gharamti. Enhanced adaptive inflation algorithm for ensemble filters. Monthly Weather Review, in review, 2017.
• Takemasa Miyoshi. The Gaussian approach to adaptive covariance inflation and its implementation with the local ensemble transform Kalman filter. Monthly Weather Review, 139(5):1519–1535, 2011.
• Xuguang Wang and Craig H. Bishop. A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. Journal of the Atmospheric Sciences, 60(9):1140–1158, 2003.

SLIDE 111

UKF reinventing localization

SLIDE 112

Appendix

SLIDE 113

Parametric distributions – Table

SLIDE 114

Parametric distributions – Properties

SLIDE 115

EnKF-N mixing distribution

Instead, we assign the Jeffreys (hyper)prior:
$$p(b, B) \propto p(B) \propto |B|^{-(M+1)/2}\,, \quad (19)$$
and recall the likelihood:
$$p(E \mid b, B) \propto \prod_{n=1}^{N} \mathcal{N}(x_n \mid b, B)\,, \quad (20)$$
yielding
$$p(b, B \mid E) = \underbrace{\mathcal{N}(b \mid \bar{x}, B/N)}_{p(b \mid B, E)}\; \underbrace{\mathcal{W}^{-1}(B \mid \bar{B}, N-1)}_{p(B \mid E)}\,, \quad (21)$$
where $\mathcal{W}^{-1}$ is the inverse-Wishart distribution (c.f. Table 2).
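As an illustrative sketch (not from the talk), one can draw from (21) with SciPy; mapping the talk's $\mathcal{W}^{-1}(\bar{B}, N-1)$ onto SciPy's (df, scale) parameterization via the standard Jeffreys-posterior scatter matrix $(N-1)\bar{B}$ is an assumption about the talk's Table 2 conventions:

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal

def sample_b_B(E, rng=None):
    """One draw of (b, B) from posterior (21): first B | E (inverse-Wishart),
    then b | B, E ~ N(x̄, B/N). Scale = (N−1)·B̄ is an assumed mapping."""
    rng = rng or np.random.default_rng()
    N, M = E.shape                     # requires N − 1 ≥ M for the draw
    x_bar = E.mean(axis=0)
    A = E - x_bar
    S = A.T @ A                        # scatter matrix, (N−1)·B̄
    B = invwishart(df=N - 1, scale=S).rvs(random_state=rng)
    b = multivariate_normal(mean=x_bar, cov=B / N).rvs(random_state=rng)
    return b, B

# Toy usage: M = 3 state dimensions, N = 20 members
E = np.random.default_rng(2).standard_normal((20, 3))
b, B = sample_b_B(E)
```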


SLIDE 118

Benchmarks

[Figure: RMSE (0.30 to 0.44) vs. speed-scale ratio c (10⁰ to 10¹), both for truth and DA; curves: ETKF tuned, ETKF excessive, EAKF adaptive, ETKF adaptive, EnKF-N hybrid]

SLIDE 124

Benchmarks (single-layer)

[Figure: RMSE (0.35 to 0.70) vs. forcing F (7.0 to 9.0) for the truth, while DA assumes F = 8; curves: ETKF tuned, ETKF excessive, EAKF adaptive, ETKF adaptive, EnKF-N hybrid]

SLIDE 130

Benchmarks (Lorenz-63) – N = 3

[Figure: RMSE (0.4 to 0.9) vs. magnitude (Q_ii/Δ, 10⁻⁴ to 10¹) of noise on the truth; curves: ETKF tuned, ETKF excessive, EAKF adaptive, ETKF adaptive, EnKF-N hybrid]