SLIDE 1

Aspects of sequential and simultaneous assimilation

May 29, 2013

SLIDE 2

Motivation

◮ There exist several algorithms for conditioning a model to data, such as EnKF, RML, ES, EnRML, MDA, ...
◮ All methods generate approximate samples from the same distribution
◮ The methods sample correctly for linear problems
◮ The methods give different results for non-linear problems
◮ Differences between methods are defined by some key characteristics
◮ Focus on: sequential vs. simultaneous assimilation of data for updating static parameters

SLIDES 3–6

Motivation

◮ Formal Bayesian expression
  ◮ Seq. data assimilation = sim. data assimilation
◮ Approximate methods
  ◮ Linear forward models: seq. data assimilation = sim. data assimilation
  ◮ Non-linear forward models: seq. data assimilation ≠ sim. data assimilation

Figure: Posterior fields from the sequential scheme and the simultaneous scheme (plots omitted)
SLIDES 7–14

Analytical strategy

Goal: Understand the importance of seq. and sim. assimilation when combining data with different degrees of non-linearity

◮ Note: An analytical result exists for seq. vs. sim. RML with linear data
◮ Strategy:
  ◮ Define comparable variants of seq./sim. RML and EnKF/ES
  ◮ Analyze differences between the methods
  ◮ Extend the linear RML result to the new RML variants for combinations of linear and non-linear data
  ◮ Extend the linear RML result to the variants of EnKF/ES
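The linear building block of this strategy (seq. = sim. for linear data) can be checked directly: for a Gaussian prior and linear data groups with independent noise, two sequential Kalman/linear-RML updates reproduce one simultaneous update with the stacked data. A minimal numerical sketch, with all matrices randomly generated for illustration (not from the talk):

```python
import numpy as np

def kalman_update(m, C, G, d, Cd):
    """Gaussian (Kalman/linear-RML) update of mean m and covariance C."""
    K = C @ G.T @ np.linalg.inv(G @ C @ G.T + Cd)
    return m + K @ (d - G @ m), C - K @ G @ C

rng = np.random.default_rng(0)
Nm = 4
m0, C0 = np.zeros(Nm), np.eye(Nm)

# Two linear data groups with independent observation noise
G1, G2 = rng.normal(size=(3, Nm)), rng.normal(size=(2, Nm))
d1, d2 = rng.normal(size=3), rng.normal(size=2)
Cd1, Cd2 = 0.1 * np.eye(3), 0.2 * np.eye(2)

# Sequential: assimilate d1, then d2
m_a, C_a = kalman_update(m0, C0, G1, d1, Cd1)
m_seq, C_seq = kalman_update(m_a, C_a, G2, d2, Cd2)

# Simultaneous: one update with the stacked data
G = np.vstack([G1, G2])
d = np.concatenate([d1, d2])
Cd = np.block([[Cd1, np.zeros((3, 2))], [np.zeros((2, 3)), Cd2]])
m_sim, C_sim = kalman_update(m0, C0, G, d, Cd)

assert np.allclose(m_seq, m_sim) and np.allclose(C_seq, C_sim)
```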

SLIDE 15

Characteristics & Algorithms

◮ Define variants of EnKF and the RML method
◮ Remove the impact of characteristics other than seq./sim. by ensuring that the methods
  1. update based on an ensemble,
  2. perform one complete run,
  3. focus on static parameters.
◮ Choose versions of RML and EnKF honoring 1–3

SLIDES 16–21

Characteristics & Algorithms

◮ EnKF honors
  ◮ Point 1: Updates based on an ensemble
  ◮ Point 2: Performs one complete run
◮ EnKF does not honor
  ◮ Point 3: Focus on static parameters
◮ Solution
  ◮ Restart from the initial time after each assimilation
  ◮ EnKF → Half-iterative EnKS (Hi-EnKS)
  ◮ If data are assimilated simultaneously: Hi-EnKS → ES
  ◮ Hi-EnKS: sequential scheme honoring 1–3
  ◮ ES: simultaneous scheme honoring 1–3
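The restart construction can be sketched as a loop over data groups in which the static-parameter ensemble is always rerun from the initial time, so no dynamic state carries between updates. A schematic stand-in with a perturbed-observation update; the forward models and numbers below are hypothetical, not the talk's:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_parameter_update(M, d_obs, Cd, g):
    """Perturbed-observation ensemble update of a static-parameter ensemble M (Nm x Ne)."""
    Ne = M.shape[1]
    # Restart: rerun the forward model from the initial time for every member
    D = np.column_stack([g(M[:, j]) for j in range(Ne)])
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmg = dM @ dD.T / (Ne - 1)
    Cgg = dD @ dD.T / (Ne - 1)
    E = rng.multivariate_normal(np.zeros(len(d_obs)), Cd, Ne).T  # obs perturbations
    return M + Cmg @ np.linalg.inv(Cgg + Cd) @ (d_obs[:, None] + E - D)

def hi_enks(M, data_groups, forward_models, noise_covs):
    """Hi-EnKS: assimilate the data groups sequentially, restarting each time."""
    for d_obs, g, Cd in zip(data_groups, forward_models, noise_covs):
        M = enkf_parameter_update(M, d_obs, Cd, g)
    return M

# Tiny usage (hypothetical): two linear data groups, one per parameter
def g1(m): return m[:1]
def g2(m): return m[1:]

M0 = rng.normal(size=(2, 200))
Ma = hi_enks(M0,
             [np.array([1.0]), np.array([-1.0])],
             [g1, g2],
             [0.1 * np.eye(1), 0.1 * np.eye(1)])
```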

SLIDES 22–27

Characteristics & Algorithms

◮ RML honors
  ◮ Point 3: Focus on static parameters
◮ RML does not honor
  ◮ Point 1: Updates based on an ensemble
  ◮ Point 2: Performs one complete run
◮ Solution
  ◮ Update using an ensemble approximation to the gradient: RML → EnRML
  ◮ Minimize utilizing one full Gauss-Newton step: EnRML → GN-EnRML
  ◮ Sim. GN-EnRML: simultaneous scheme honoring 1–3
  ◮ Seq. GN-EnRML: sequential scheme honoring 1–3

SLIDES 29–30

Comparing Hi-EnKS & GN-EnRML

◮ GN-EnRML update:

    m_j^a = m_j^f + C_m G̃^T (G̃ C_m G̃^T + C_d)^{-1} (d_j − g(m_j^f))

◮ Hi-EnKS parameter update:

    m_j^a = m_j^f + C̃_mg (C̃_gg + C_d)^{-1} (d_j − g(m_j^f))

◮ The two methods are equal if
  ◮ C_m G̃^T = C̃_mg
  ◮ G̃ C_m G̃^T = C̃_gg
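The two parameter updates can be compared numerically. In the sketch below the perturbation matrices Δm and Δd are random stand-ins (not from a real model), C_m is taken to be the ensemble covariance, and N_e ≤ N_m, the regime in which the two increments coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
Nm, Nd, Ne = 8, 4, 6                     # Ne <= Nm
dM = rng.normal(size=(Nm, Ne))           # Δm: parameter perturbations (stand-ins)
dD = rng.normal(size=(Nd, Ne))           # Δd: predicted-data perturbations
Cd = 0.1 * np.eye(Nd)
innov = rng.normal(size=Nd)              # d_j - g(m_j^f) for one member

G = dD @ np.linalg.pinv(dM)              # ensemble gradient G~ = Δd Δm†
Cm = dM @ dM.T / (Ne - 1)                # ensemble covariance C~_m
Cmg = dM @ dD.T / (Ne - 1)               # cross-covariance C~_mg
Cgg = dD @ dD.T / (Ne - 1)               # predicted-data covariance C~_gg

dm_gn = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd) @ innov   # GN-EnRML increment
dm_ks = Cmg @ np.linalg.inv(Cgg + Cd) @ innov                 # Hi-EnKS increment

assert np.allclose(dm_gn, dm_ks)
```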

SLIDES 31–32

Comparing Hi-EnKS & GN-EnRML

[ C̃_m G̃^T = C̃_mg ]

◮ Ensemble gradient given by the pseudo-inverse:

    G̃ = Δd Δm^†

◮ Ensemble covariance:

    C̃_m = (1/(N_e − 1)) Δm Δm^T

◮ Rewriting:

    C̃_m G̃^T = (1/(N_e − 1)) Δm Δm^T (Δm^†)^T Δd^T
    ⇒ C̃_m G̃^T = (1/(N_e − 1)) Δm Δd^T = C̃_mg
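This identity can be verified with random stand-in perturbation matrices; note that it holds for any ensemble size, because Δm Δm^T (Δm^†)^T = Δm (Δm^† Δm)^T = Δm. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
Nm, Nd, Ne = 5, 3, 8                 # here even Ne > Nm works for this identity
dM = rng.normal(size=(Nm, Ne))       # Δm (illustrative stand-in)
dD = rng.normal(size=(Nd, Ne))       # Δd (illustrative stand-in)

G = dD @ np.linalg.pinv(dM)          # G~ = Δd Δm†
Cm = dM @ dM.T / (Ne - 1)            # C~_m
Cmg = dM @ dD.T / (Ne - 1)           # C~_mg

# Δm Δm^T (Δm†)^T = Δm, so C~_m G~^T = C~_mg
assert np.allclose(Cm @ G.T, Cmg)
```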

SLIDES 33–36

Comparing Hi-EnKS & GN-EnRML

[ G̃ C̃_m G̃^T = C̃_gg ]

◮ Inserting for G̃ and C̃_m:

    G̃ C̃_m G̃^T = (1/(N_e − 1)) Δd V_p V_p^T Δd^T

◮ N_e ≤ N_m ⇒ V_p V_p^T = I ⇒ G̃ C̃_m G̃^T = (1/(N_e − 1)) Δd Δd^T = C̃_gg

[ N_e ≤ N_m ∧ C̃_m = (1/(N_e − 1)) Δm Δm^T ⇒ Hi-EnKS = GN-EnRML ]

◮ Same result when comparing ES & sim. GN-EnRML
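Both regimes can be checked numerically with random full-rank perturbation matrices (illustrative stand-ins, not model output): equality when N_e ≤ N_m, and a residual gap when N_e > N_m:

```python
import numpy as np

def gap(Nm, Ne, Nd=3, seed=0):
    """Max |G~ C~_m G~^T - C~_gg| for random full-rank perturbation matrices."""
    rng = np.random.default_rng(seed)
    dM = rng.normal(size=(Nm, Ne))   # Δm
    dD = rng.normal(size=(Nd, Ne))   # Δd
    G = dD @ np.linalg.pinv(dM)      # G~ = Δd Δm†
    Cm = dM @ dM.T / (Ne - 1)        # C~_m
    Cgg = dD @ dD.T / (Ne - 1)       # C~_gg
    return np.max(np.abs(G @ Cm @ G.T - Cgg))

# Ne <= Nm: Δm† Δm = V_p V_p^T = I, so the two matrices coincide
assert gap(Nm=10, Ne=6) < 1e-8
# Ne > Nm: V_p V_p^T is a proper projection, and a gap remains
assert gap(Nm=4, Ne=12) > 1e-3
```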

SLIDES 37–38

Comparing Hi-EnKS & GN-EnRML

◮ For N_e > N_m the difference between GN-EnRML and Hi-EnKS is determined by

    C̃_gg = (1/(N_e − 1)) Δd_t Δd_t^T
    G̃ C̃_m G̃^T = (1/(N_e − 1)) Δd_p Δd_p^T

◮ Δd_t: true predicted-data perturbation
◮ Δd_p: predicted linearized-data perturbation

    Δd_p = G̃ Δm = Δd_t V_p V_p^T

◮ The difference depends on the non-linearity of the data:

    Δd_t − Δd_p = Δe (I − V_p V_p^T)

◮ Δe = e_j − ē
◮ e_j: truncation error
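The identity Δd_t − Δd_p = Δe (I − V_p V_p^T) can be reproduced with a toy non-linear forward model; the linear-plus-quadratic map below and the use of mean-removed perturbations are illustrative assumptions, not the talk's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
Nm, Nd, Ne = 3, 4, 10                  # Ne > Nm, the regime of this slide
M = rng.normal(size=(Nm, Ne))          # parameter ensemble (stand-in)
Glin = rng.normal(size=(Nd, Nm))       # linear part of the forward model

def g(m):
    """Toy non-linear forward model: linear term plus a quadratic truncation error."""
    return Glin @ m + 0.3 * (Glin @ m) ** 2

def center(X):
    return X - X.mean(axis=1, keepdims=True)

D = np.column_stack([g(M[:, j]) for j in range(Ne)])
E = D - Glin @ M                       # truncation errors e_j

dM, dDt, dE = center(M), center(D), center(E)   # Δm, Δd_t, Δe

Gtil = dDt @ np.linalg.pinv(dM)        # G~ = Δd_t Δm†
dDp = Gtil @ dM                        # Δd_p = G~ Δm = Δd_t V_p V_p^T
P = np.linalg.pinv(dM) @ dM            # V_p V_p^T

assert np.allclose(dDt - dDp, dE @ (np.eye(Ne) - P))
```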

SLIDES 40–41

Sequential & Simultaneous assimilation (GN-EnRML)

◮ Utilizing seq./sim. GN-EnRML, extend the linear RML result to combinations of linear and non-linear data
◮ Choice of covariance update for seq. GN-EnRML:

    1. C_m^a = C_m^f − C_m^f G^T (G C_m^f G^T + C_d)^{-1} G C_m^f
    2. C̃_m^a = (1/(N_e − 1)) Δm Δm^T

◮ We choose 1 for seq. GN-EnRML

SLIDES 42–45

Sequential & Simultaneous assimilation (GN-EnRML)

◮ Compare sim. assimilation with seq. assimilation of two data groups

Case 1: (d_1, d_2) = (g(m_j), G m_j). After assimilation: m_j^seq = m_j^sim

Case 2: (d_1, d_2) = (G m_j, g(m_j)). After assimilation: m_j^seq ≠ m_j^sim

◮ GN-EnRML: seq. = sim. for non-linear → linear
◮ GN-EnRML: seq. ≠ sim. for linear → non-linear

SLIDES 47–50

Analytical results

◮ Remember:

    [ N_e ≤ N_m ∧ C̃_m = (1/(N_e − 1)) Δm Δm^T ⇒ Hi-EnKS = GN-EnRML ]

◮ Seq. GN-EnRML updates the covariance as:

    C_m^a = C_m^f − C_m^f G^T (G C_m^f G^T + C_d)^{-1} G C_m^f

◮ Need N_e → ∞ for C_m^a = C̃_m
  ◮ For finite N_e: GN-EnRML ≠ Hi-EnKS
◮ The difference between the methods depends on the non-linearity
◮ Perform numerical studies for Hi-EnKS with weakly non-linear data
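The N_e → ∞ requirement can be illustrated for a linear forward model: the sample covariance of a perturbed-observation ensemble update matches the exact updated covariance C_m^a only in the large-ensemble limit. A sketch with illustrative matrices (not from the talk):

```python
import numpy as np

rng0 = np.random.default_rng(0)
Nm, Nd = 3, 2
Cm = np.eye(Nm)
G = rng0.normal(size=(Nd, Nm))
Cd = 0.5 * np.eye(Nd)

# Exact Gaussian covariance update (option 1 above)
K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
Ca_exact = Cm - K @ G @ Cm

def ensemble_cov_error(Ne, seed=1):
    """Error of the updated-ensemble sample covariance relative to C_m^a."""
    rng = np.random.default_rng(seed)
    M = rng.multivariate_normal(np.zeros(Nm), Cm, Ne).T
    E = rng.multivariate_normal(np.zeros(Nd), Cd, Ne).T   # obs perturbations
    d = np.zeros(Nd)                                      # illustrative data
    Ma = M + K @ (d[:, None] + E - G @ M)
    dMa = Ma - Ma.mean(axis=1, keepdims=True)
    Ca_ens = dMa @ dMa.T / (Ne - 1)
    return np.max(np.abs(Ca_ens - Ca_exact))

# Large ensemble: close to the exact update; small ensemble: visibly off
assert ensemble_cov_error(50_000) < 0.05
assert ensemble_cov_error(50_000) < ensemble_cov_error(50)
```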

SLIDES 51–53

Numerical studies

◮ The difference between Hi-EnKS and GN-EnRML varies with Δe
◮ Numerical study to investigate:
  ◮ The optimal assimilation strategy
  ◮ Whether the GN-EnRML result is valid for Hi-EnKS
◮ Numerical experiments
  ◮ Univariate:
    ◮ Simple forward model
    ◮ One linear data group
    ◮ One non-linear data group
  ◮ Multivariate:
    ◮ 1D reservoir
    ◮ One weakly non-linear data group
    ◮ One data group with stronger non-linearity
◮ Assess the quality of Hi-EnKS/ES by the Kullback–Leibler divergence (KLD) to McMC samples
  ◮ Nearest-neighbor kernel density estimator
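One way to realize such an estimate is a nearest-neighbor KLD estimator computed directly from the two sample sets. The 1-NN form below is a standard construction used here as an illustrative stand-in; the talk does not specify its exact estimator:

```python
import numpy as np

def knn_kld(X, Y):
    """1-NN estimate of D_KL(p || q) from samples X ~ p (n, d) and Y ~ q (m, d)."""
    n, d = X.shape
    m = Y.shape[0]
    # Nearest-neighbor distance within X (excluding the point itself)
    dxx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)
    rho = dxx.min(axis=1)
    # Nearest-neighbor distance from each x_i to the samples of q
    dxy = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    nu = dxy.min(axis=1)
    return d / n * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(500, 1))
q_same = rng.normal(0.0, 1.0, size=(500, 1))
q_far = rng.normal(4.0, 1.0, size=(500, 1))

# Samples from a shifted distribution give a much larger estimate
assert knn_kld(p, q_far) > knn_kld(p, q_same) + 1.0
```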

SLIDE 54

Univariate example

◮ Simple forward model:

    d_i = m^{r_i}

◮ Assimilate d_1 → d_2 and d_2 → d_1 with Hi-EnKS, and assimilate simultaneously with ES

    m_ref          3     r_1           1      Ensemble size         1 × 10^5
    Prior mean     8     r_2           2      McMC iterations       1 × 10^5
    Prior variance 1     σ²_{d_{1/2}}  0.1    McMC acceptance rate  0.2267

Table: Numerical details
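The experiment can be sketched end to end at reduced size: sample the prior N(8, 1), generate d_1 = m (r_1 = 1) and d_2 = m² (r_2 = 2) from m_ref = 3, and compare the two Hi-EnKS orderings. This is an illustrative perturbed-observation sketch, not the original code: the ensemble is smaller than the talk's 10^5 and the synthetic observations are noise-free:

```python
import numpy as np

rng = np.random.default_rng(0)
Ne = 5000
m_ref, prior_mean, prior_var, sig2 = 3.0, 8.0, 1.0, 0.1

def update(m, g_of_m, d_obs):
    """Scalar perturbed-observation ensemble update of the parameter ensemble m."""
    dm = m - m.mean()
    dg = g_of_m - g_of_m.mean()
    K = (dm @ dg) / (dg @ dg + (Ne - 1) * sig2)   # C_mg / (C_gg + sigma_d^2)
    return m + K * (d_obs + rng.normal(0.0, np.sqrt(sig2), Ne) - g_of_m)

# Synthetic observations generated from m_ref (noise-free for simplicity)
d1_obs, d2_obs = m_ref ** 1, m_ref ** 2

m0 = prior_mean + np.sqrt(prior_var) * rng.normal(size=Ne)

# Hi-EnKS d1 -> d2: linear data first, forward model rerun before each update
m = update(m0, m0 ** 1, d1_obs)
m12 = update(m, m ** 2, d2_obs)

# Hi-EnKS d2 -> d1: non-linear data first
m = update(m0, m0 ** 2, d2_obs)
m21 = update(m, m ** 1, d1_obs)

# Assimilating the weakly non-linear data first recovers m_ref much better
assert abs(m12.mean() - m_ref) < 0.5
assert abs(m12.mean() - m_ref) < abs(m21.mean() - m_ref)
```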

SLIDES 55–56

Univariate example

Figure: Posterior distributions from McMC, ES (1), Hi-EnKS d1 → d2 (2), and Hi-EnKS d2 → d1 (3) (plots omitted)

    KLD(1)   11.96
    KLD(2)   0.079
    KLD(3)   11.96

Table: Univariate results

SLIDE 57

Multivariate example

◮ 1D reservoir consisting of 31 unknown parameters
◮ Two data groups
  ◮ d_1: 6 measurements of the log(perm) field, made at the wells marked "Hard obs"

      d_1 = log(perm)^{1.2}

  ◮ d_2: 6 pressure observations, made at a single well marked "Pres obs"

Figure: Grid blocks & well placement (31 grid blocks; injector in block 1, producer in block 31; hard observations in blocks 1, 7, 13, 19, 25, and 31; pressure observations in block 16)

SLIDE 58

Multivariate example

    Ensemble size          5 × 10^4
    McMC proposals         5 × 10^5
    McMC acceptance rate   0.238

Table: Numerical details

Figure: Mean values for McMC, Hi-EnKS d1 → d2, Hi-EnKS d2 → d1, and ES (plot omitted)

SLIDE 59

Multivariate example

Figure: KLD for Hi-EnKS d1 → d2, Hi-EnKS d2 → d1, and ES (plot omitted)

    KLD_{d1→d2}   0.48
    KLD_{d2→d1}   1.58
    KLD_{ES}      1.66

Table: Multivariate results

SLIDES 60–61

Conclusions

Analysis shows:

◮ GN-EnRML: seq. = sim. for non-linear data before linear data
◮ For N_e ≤ N_m ∧ C̃_m = (1/(N_e − 1)) Δm Δm^T: GN-EnRML = Hi-EnKS
◮ For N_e > N_m: (GN-EnRML − Hi-EnKS) ∝ Δe

Numerical experiments show:

◮ Hi-EnKS: seq. = sim. for non-linear data before linear data
◮ Data with the weakest non-linearity first gives:
  ◮ The best univariate and multivariate mean
  ◮ The best univariate and multivariate KLD

SLIDE 62

Thank you