

slide-1
SLIDE 1

Extending the square root method to account for model noise in the ensemble Kalman filter

Patrick Nima Raanes∗,1,2, Alberto Carrassi1, and Laurent Bertino1

1Nansen Environmental and Remote Sensing Center 2Mathematical Institute, University of Oxford

Os, June 9, 2015

NERSC

∗email: patrick.n.raanes@gmail.com

1 / 34

slide-2
SLIDE 2

Paper

MWR special issue on DA, 2015 ?

Abstract: A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. Primarily intended to replace additive, pseudo-random noise simulation, the core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise and multiplicative inflation.

2 / 34

slide-3
SLIDE 3

Model noise – Problem statement

Assume

x_{t+1} = f(x_t) + q_t , where q_t ∼ N(0, Q) , (1)

with f and Q = Cov(q) perfectly known. Then we want the forecast ensemble to satisfy

P̄^f = P̄ + Q , (2)

where

P̄ = 1/(N−1) Σ_n (x_n − x̄)(x_n − x̄)^T . (3)

3 / 34
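The target relation of eqns (2)–(3) is easy to state in code. A minimal numpy sketch (the toy dimensions and the matrix Q below are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 3, 5                           # state dimension and ensemble size (toy values)
E = rng.normal(size=(m, N))           # ensemble, one member per column
x_bar = E.mean(axis=1, keepdims=True)
A = E - x_bar                         # anomaly matrix
P_bar = A @ A.T / (N - 1)             # eqn (3): sample covariance
Q = np.diag([1.0, 0.5, 0.2])          # illustrative model-noise covariance
P_f_target = P_bar + Q                # eqn (2): covariance the forecast ensemble should attain
```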


slide-5
SLIDE 5

Outline

◮ Sqrt-Core
◮ Initial comparisons
◮ Residual noise treatment
◮ Further experiments

4 / 34


slide-7
SLIDE 7

Lessons learnt over the past 15 years

Any square root update, A → A T, will

◮ deterministically match covariance relations
◮ preserve the ensemble subspace
◮ satisfy linear, homogeneous, equality constraints ⋆

Furthermore, the "symmetric choice", A → A T_s, will

◮ preserve the mean
◮ satisfy linear, inhomogeneous constraints ⋆
◮ satisfy the first-order approximation to non-linear constraints ⋆
◮ minimise ensemble displacement ⋆
◮ yield equally likely realisations ⋆

⋆: (plausibly) improves "dynamical consistency" of realisations.

6 / 34


slide-9
SLIDE 9

Sqrt-Core

P̄^f = P̄ + Q can be rewritten using P̄ = 1/(N−1) A A^T, yielding:

A^f (A^f)^T = A A^T + (N−1) Q . (4)

(Brutally) factorising out A using the M-P pseudoinverse, A⁺:

A^f (A^f)^T = A [ I_N + (N−1) A⁺ Q (A^T)⁺ ] A^T , (5)

we get Sqrt-Core:

A^f = A T^f_s , (6)

where T^f_s is the symmetric square root of the middle factor in eqn. (5).

We also see that the problem of eqn. (4) is ill-posed. . .

7 / 34
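Eqns (5)–(6) transcribe directly into numpy. A sketch, with the symmetric square root computed by eigendecomposition and numpy's `pinv` playing the role of the Moore-Penrose pseudoinverse A⁺:

```python
import numpy as np

def sym_sqrtm(M):
    """Symmetric (principal) square root of a symmetric pos. semi-def. matrix."""
    d, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(d, 0.0, None))) @ U.T

def sqrt_core(A, Q):
    """Sqrt-Core, eqns (5)-(6): A_f = A @ T_s, where T_s is the symmetric
    square root of I_N + (N-1) A^+ Q (A^+)^T."""
    N = A.shape[1]
    Ap = np.linalg.pinv(A)
    T_s = sym_sqrtm(np.eye(N) + (N - 1) * Ap @ Q @ Ap.T)
    return A @ T_s
```

When the anomalies span the full state space (N − 1 ≥ m), this reproduces eqn (4) exactly; otherwise only the projected relation of eqn (7) holds.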


slide-13
SLIDE 13

Sqrt-Core

In fact, Sqrt-Core only satisfies

A^f (A^f)^T = A A^T + (N−1) Q̂ , (7)

where Q̂ = Π_A Q Π_A, and Π_A = A A⁺ is the orthogonal projector onto the column space of A.

8 / 34
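The projector and Q̂ can be checked numerically. With N − 1 < m, Q̂ necessarily misses the part of Q lying outside the ensemble subspace (toy sizes for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 6, 4                               # ensemble subspace has rank <= N-1 = 3 < m
E = rng.normal(size=(m, N))
A = E - E.mean(axis=1, keepdims=True)
Pi_A = A @ np.linalg.pinv(A)              # orthogonal projector onto col(A)
Q = np.eye(m)
Q_hat = Pi_A @ Q @ Pi_A                   # eqn (7): the part of Q that Sqrt-Core can represent
```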

slide-14
SLIDE 14

Outline Sqrt-Core Initial comparisons Residual noise treatment Further experiments

9 / 34

slide-15
SLIDE 15

Overview of alternatives

| Method | A^f = | where | thus satisfying |
| --- | --- | --- | --- |
| Add-Q | A + D | D = Q^{1/2} Ξ, ξ_n ∼ N(0, I_m) | E_D(eqn. (4)) |
| Mult-1 | λ A | λ² = trace(P̄ + Q) / trace(P̄) | trace(eqn. (4)) |
| Mult-m | Λ A | Λ² = diag(P̄)⁻¹ diag(P̄ + Q) | diag(eqn. (4)) |
| Sqrt-Core | A T | T = [ I_N + (N−1) A⁺ Q A⁺ᵀ ]^{1/2}_s | Π_A(eqn. (4))Π_A |

Also:

◮ Complete resampling
◮ 2nd-order exact sampling (Pham, 2001)
◮ A similar (but distinct) square root method (Nakano, 2013)
◮ Relaxation (Zhang et al., 2004)
◮ Forcing fields or boundary conditions (Shutts, 2005)
◮ SEIK, with forgetting factor (Pham, 2001)
◮ RRSQRT, with orthogonal ensemble (Heemink et al., 2001)

10 / 34
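The Add-Q and the two multiplicative-inflation rows of the table can be sketched as follows (a sketch, assuming Q is positive definite so that Cholesky applies, and that P̄ has a nonzero diagonal):

```python
import numpy as np

def add_q(A, Q, rng):
    """Add-Q: additive simulated noise; matches eqn (4) only in expectation."""
    m, N = Q.shape[0], A.shape[1]
    return A + np.linalg.cholesky(Q) @ rng.normal(size=(m, N))

def mult_1(A, Q):
    """Mult-1: scalar inflation matching the trace of eqn (4)."""
    N = A.shape[1]
    P = A @ A.T / (N - 1)
    return np.sqrt(np.trace(P + Q) / np.trace(P)) * A

def mult_m(A, Q):
    """Mult-m: componentwise inflation matching the diagonal of eqn (4)."""
    N = A.shape[1]
    P = A @ A.T / (N - 1)
    Lam = np.sqrt((np.diag(P) + np.diag(Q)) / np.diag(P))
    return Lam[:, None] * A
```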


slide-18
SLIDE 18

Snapshot comparison

[Figure: Snapshot of ensemble forecasts with the Lorenz-63 system after model noise incorporation, shown in the (x, y) plane, comparing None, Add-Q, Mult-1, Mult-m, and Sqrt-Core.]

11 / 34

slide-19
SLIDE 19

Experimental setup

◮ Twin experiment: tracking a simulated "truth", xᵗ
◮ RMSE = √( (1/m) ‖x̄ − xᵗ‖²₂ )
◮ Analysis update:
  ◮ ETKF (using the symmetric square root)
  ◮ No localisation
  ◮ Inflation (for analysis update errors): tuned for Add-Q

◮ Baselines

12 / 34
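The RMSE of the ensemble mean against the truth, averaged over the m state components, is in a minimal sketch:

```python
import numpy as np

def rmse(x_mean, x_truth):
    """Root-mean-square error over the state components."""
    return np.linalg.norm(x_mean - x_truth) / np.sqrt(x_truth.size)
```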

slide-20
SLIDE 20

Lorenz-63 – system

Integrated with RK4, with r = 28, σ = 10, and b = 8/3:

ẋ = σ(y − x) ,
ẏ = rx − y − xz ,
ż = xy − bz .

Direct observations of the entire state, with R = 2 I₃, and

Q = (1/10) [[10, −2, 3], [−2, 5, 3], [3, 3, 5]] .

[Figure: example trajectories of (x(t), y(t), z(t)).]

13 / 34
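A self-contained sketch of the model used in these experiments: the Lorenz-63 tendency with the stated parameters, plus a generic RK4 step.

```python
import numpy as np

def lorenz63(v, sigma=10.0, r=28.0, b=8.0/3.0):
    """Lorenz-63 tendency for v = (x, y, z)."""
    x, y, z = v
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4_step(f, v, dt):
    """One classical 4th-order Runge-Kutta step of dv/dt = f(v)."""
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```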

slide-21
SLIDE 21

Lorenz-63 – vs N, with ∆tobs = 0.05

[Figure: RMSE vs ensemble size N for ExtKF, PartFilt, Add-Q, Mult-m, and Sqrt-Core, where the particle filter uses N = 10⁴.]

14 / 34

slide-22
SLIDE 22

Lorenz-63 – vs N, with ∆tobs = 0.25

[Figure: RMSE vs ensemble size N for Add-Q, Mult-m, and Sqrt-Core.]

Particle filter RMSE: 0.57. Extended Kalman filter RMSE: 1.4.

15 / 34

slide-23
SLIDE 23

Lorenz-63 – vs ∆tobs , with N = 12

[Figure: RMSE vs time between observations (∆tobs) for ExtKF, PartFilt, Add-Q, Mult-m, and Sqrt-Core.]

16 / 34

slide-24
SLIDE 24

Lorenz-63 – vs Q multiplier, with N = 12, ∆tobs = 21

[Figure: RMSE vs noise strength (multiplier to Q, log scale from 10⁻¹ to 10²) for 3D-Var, ExtKF, PartFilt, Add-Q, Mult-m, and Sqrt-Core.]

17 / 34

slide-25
SLIDE 25

Outline

◮ Sqrt-Core
◮ Initial comparisons
◮ Residual noise treatment
◮ Further experiments

18 / 34

slide-26
SLIDE 26

Improving Sqrt-Core: Residual noise treatment

After Sqrt-Core there is still [Q − Q̂] unaccounted for.

⇒ Residual noise problem:

A^f (A^f)^T = A A^T + (N−1)[Q − Q̂] . (8)

Note: notation recycled from original problem.

19 / 34

slide-27
SLIDE 27

A first approach – Sqrt-Add-Z

1. Define Z = (I_m − Π_A) Q^{1/2}.
2. Add q̃_n = Z ξ̃_n to realisation n, with ξ̃_n ∼ N(0, I_m).

But due to cross-terms, Z is not a square root of [Q − Q̂], and therefore Sqrt-Add-Z is biased:

E_ξ̃ [ A^f (A^f)^T ] = A A^T + (N−1) ( [Q − Q̂] − [ Q̂^{1/2} Z^T + Z Q̂^{T/2} ] ) .

Compare to eqn. (8).

20 / 34
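A sketch of the Sqrt-Add-Z residual step (assuming Q positive definite; any square root of Q works for forming Z, Cholesky is used here):

```python
import numpy as np

def sqrt_add_z(A, Q, rng):
    """Sqrt-Add-Z: add simulated noise only outside the ensemble subspace,
    via Z = (I_m - Pi_A) Q^{1/2}."""
    m, N = Q.shape[0], A.shape[1]
    Pi_A = A @ np.linalg.pinv(A)
    Z = (np.eye(m) - Pi_A) @ np.linalg.cholesky(Q)
    return A + Z @ rng.normal(size=(m, N))
```

By construction the added perturbations are orthogonal to col(A); the bias above comes from the cross-terms between Q̂^{1/2} and Z.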


slide-29
SLIDE 29

The underlying problem: replacing one draw with two

As an analogy to the "core+residual" problem, define

q = Q̂^{1/2} ξ + Z ξ , (9)
q⊥ = Q̂^{1/2} ξ̂ + Z ξ̃ , (10)

where ξ, ξ̂, ξ̃ ∼ N(0, I_m) are all independent. Note that

Cov(q) = Q = [ Q̂ + Z Z^T ] + [ Q̂^{1/2} Z^T + Z Q̂^{T/2} ] , (11)

where the first bracket equals Cov(q⊥).

21 / 34

slide-30
SLIDE 30

Reintroducing dependence – Sqrt-Dep

Let Π be any orthogonal projection matrix, and define

ξ⊥ = Π ξ̂ + (I_m − Π) ξ̃ , (12)

where, as before, ξ̂, ξ̃ ∼ N(0, I_m) are independent. But ξ⊥ ∼ N(0, I_m) too (no cross terms)!

Choose Π so that Z Π = 0. Rather than eqn. (9), redefine q:

q = Q^{1/2} ξ⊥ . (13)

Then,

Cov(q) = Q . (14)

22 / 34

slide-31
SLIDE 31

The solution: reintroducing dependence – Sqrt-Dep

But also:

q = ( Q̂^{1/2} + Z ) [ Π ξ̂ + (I_m − Π) ξ̃ ] (15)
  = Q̂^{1/2} ξ̂ + Z [ Π ξ̂ + (I_m − Π) ξ̃ ] . (16)

Hence, while maintaining Cov(q) = Q, the influence of ξ̃ has been confined to span(Z) = span(A)⊥.

Algorithm: for each realisation:

1. Compute ξ̂_n corresponding to Sqrt-Core
2. Draw ξ̃_n
3. Total (core+residual) update: eqn. (16)

23 / 34
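The key construction in Sqrt-Dep is the projector Π with Z Π = 0. One concrete choice (an assumption of this sketch; the slides only require Z Π = 0) is the orthogonal projector onto the null space of Z:

```python
import numpy as np

def null_projector(Z):
    """Orthogonal projector Pi onto the null space of Z, so that Z @ Pi = 0."""
    n = Z.shape[1]
    return np.eye(n) - np.linalg.pinv(Z) @ Z

def dependent_draw(Q_half, Pi, rng):
    """Eqns (12)-(13): xi_perp = Pi xi_hat + (I - Pi) xi_tilde; q = Q^{1/2} xi_perp.
    Here xi_hat stands in for the draw implied by Sqrt-Core."""
    m = Pi.shape[0]
    xi_hat = rng.normal(size=m)
    xi_tilde = rng.normal(size=m)
    xi_perp = Pi @ xi_hat + (np.eye(m) - Pi) @ xi_tilde
    return Q_half @ xi_perp
```

Because Π is a symmetric idempotent and ξ̂, ξ̃ are independent, Cov(ξ⊥) = Π + (I − Π) = I, as claimed on the previous slide.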


slide-33
SLIDE 33

Outline

◮ Sqrt-Core
◮ Initial comparisons
◮ Residual noise treatment
◮ Further experiments

24 / 34

slide-34
SLIDE 34

Overview of alternatives

| Method | A^f = | where |
| --- | --- | --- |
| Add-Q | A + D | D = Q^{1/2} Ξ, each column of Ξ drawn from N(0, I_m) |
| Mult-1 | λ A | λ² = trace(P̄ + Q) / trace(P̄) |
| Mult-m | Λ A | Λ² = diag(P̄)⁻¹ diag(P̄ + Q) |
| Sqrt-Core | A T | T = [ I_N + (N−1) A⁺ Q A⁺ᵀ ]^{1/2}_s |

Also:

◮ Sqrt-Add-Z
◮ Sqrt-Dep

25 / 34

slide-35
SLIDE 35

Linear advection – system

For t = 0, 1, . . . and i = 1, . . . , m, and with periodic BCs,

x^{t+1}_i = 0.98 x^t_{i−1} . (17)

Direct observation of the truth at p = 40 equidistant locations with R = 0.01 I_p, every seventh time step: ∆tobs = 7∆t = 4.9.

[Figure: example snapshots of the state (amplitude vs state component index, m = 1000).]

Q is such that although m = 1000, the system subspace only has 50 dimensions.

26 / 34
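Eqn (17) is a one-line model; a sketch using numpy's periodic shift:

```python
import numpy as np

def advect(x, c=0.98):
    """One step of eqn (17): x_i <- c * x_{i-1}, with periodic boundary conditions."""
    return c * np.roll(x, 1)
```

Since c = 0.98 < 1, the dynamics contract, so ensemble spread decays unless model noise is injected at each forecast step.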

slide-36
SLIDE 36

Linear advection – results

[Figure: RMSE vs ensemble size N for Climatology, 3D-Var, ExtKF, Add-Q, Mult-m, Mult-1, Sqrt-Core, Sqrt-Add-Z, and Sqrt-Dep.]

27 / 34

slide-37
SLIDE 37

Lorenz-96 – system

Integrated with RK4,

dx_i/dt = (x_{i+1} − x_{i−2}) x_{i−1} − x_i + F , (18)

with periodic BCs, i = 1, . . . , m, m = 40, and

Q_{i,j} = exp( −|i − j|² / 30 ) + 0.1 δ_{i,j} . (19)

[Figure: example snapshots of the state (amplitude vs state component index).]

28 / 34
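The Lorenz-96 tendency (18) and the correlated noise covariance (19) in numpy. The |i − j| below is the plain index distance rather than the periodic one, and F = 8 is the customary default; both are assumptions of this sketch:

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Eqn (18), with periodic indexing handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def make_Q(m=40):
    """Eqn (19): Q_ij = exp(-|i-j|^2 / 30) + 0.1 * delta_ij."""
    i = np.arange(m)
    d = i[:, None] - i[None, :]
    return np.exp(-d**2 / 30.0) + 0.1 * np.eye(m)
```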

slide-38
SLIDE 38

Lorenz-96 – vs N, with ∆tobs = 0.05

[Figure: RMSE vs ensemble size N for 3D-Var, ExtKF, Add-Q, Mult-m, Mult-1, Sqrt-Core, Sqrt-Add-Z, and Sqrt-Dep.]

29 / 34

slide-39
SLIDE 39

Lorenz-96 – vs N, with ∆tobs = 0.15

[Figure: RMSE vs ensemble size N for 3D-Var, ExtKF, Add-Q, Mult-m, Mult-1, Sqrt-Core, Sqrt-Add-Z, and Sqrt-Dep.]

30 / 34

slide-40
SLIDE 40

Lorenz-96 – vs ∆tobs , with N = 30

[Figure: RMSE vs time between observations (∆tobs) for 3D-Var, ExtKF, Add-Q, Mult-m, Mult-1, Sqrt-Core, Sqrt-Add-Z, and Sqrt-Dep.]

31 / 34

slide-41
SLIDE 41

Lorenz-96 – vs Q Q Q multiplier, with N = 25, ∆tobs = 0.05

[Figure: RMSE vs noise strength (multiplier to Q, log scale from 10⁻³ to 10¹) for 3D-Var, ExtKF, Add-Q, Mult-m, Mult-1, Sqrt-Core, Sqrt-Add-Z, and Sqrt-Dep.]

32 / 34

slide-42
SLIDE 42

Lorenz-96 – vs F, with N = 25, ∆tobs = 0.05

[Figure: RMSE vs forcing F for 3D-Var, ExtKF, Add-Q, Mult-m, Mult-1, Sqrt-Core, Sqrt-Add-Z, and Sqrt-Dep.]

33 / 34

slide-43
SLIDE 43

Summary

◮ Extending the square root method to the forecast step
  ◮ Main aim: eliminate sampling errors
  ◮ Secondary benefit: dynamical consistency
◮ Sqrt-Core is deficient when [Q − Q̂] is significant
◮ Sqrt-Add-Z is simple and efficient, but biased
◮ Sqrt-Dep is costly, but more satisfactory
◮ Both methods perform robustly better than Mult-m and Add-Q
◮ Future directions
  ◮ Experiments on larger models and more realistic model error
  ◮ Improvements to Sqrt-Add-Z and Sqrt-Dep
  ◮ Investigate perspectives from Nakano (2013) and M. Bocquet

34 / 34


slide-46
SLIDE 46

References

Heemink, A. W., M. Verlaan, and A. J. Segers, 2001: Variance reduced ensemble Kalman filtering. Monthly Weather Review, 129, 1718–1728.

Nakano, S., 2013: A prediction algorithm with a limited number of particles for state estimation of high-dimensional systems. Information Fusion (FUSION), 2013 16th International Conference on, IEEE, 1356–1363.

Pham, D. T., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems. Monthly Weather Review, 129, 1194–1207.

Shutts, G., 2005: A kinetic energy backscatter algorithm for use in ensemble prediction systems. Quarterly Journal of the Royal Meteorological Society, 131, 3079–3102.

Whitaker, J. S. and T. M. Hamill, 2012: Evaluating methods to account for system errors in ensemble data assimilation. Monthly Weather Review, 140, 3078–3089.

Zhang, F., C. Snyder, and J. Sun, 2004: Impacts of initial estimate and observation availability on convective-scale data assimilation with an ensemble Kalman filter. Monthly Weather Review, 132, 1238–1253.