SLIDE 1

Implementation of IES in ERT and validation

Geir Evensen, Patrick Raanes, and Andreas Stordal

NORCE – Norwegian Research Center
Nansen Environmental and Remote Sensing Center

EnKF Workshop Voss, June 3–6, 2019

https://www.nonlin-processes-geophys-discuss.net/npg-2019-10/

Geir Evensen Slide 1

SLIDE 2

Some definitions

Prior ensemble and perturbed measurements:
\[ X = \left[ x^f_1, x^f_2, \ldots, x^f_N \right], \qquad D = \left[ d_1, d_2, \ldots, d_N \right]. \]

Ensemble means:
\[ \bar{x} = \frac{1}{N} \sum_{j=1}^{N} x_j \quad \text{and} \quad \bar{d} = \frac{1}{N} \sum_{j=1}^{N} d_j. \]

Ensemble anomaly matrices and covariances:
\[ A = X \left( I_N - \tfrac{1}{N} 1 1^T \right) \big/ \sqrt{N-1} \;\rightarrow\; C_{xx} = A A^T, \]
\[ E = D \left( I_N - \tfrac{1}{N} 1 1^T \right) \big/ \sqrt{N-1} \;\rightarrow\; C_{dd} = E E^T. \]
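The anomaly and covariance definitions above translate directly into NumPy; a minimal sketch with a randomly drawn toy ensemble (the dimensions and the `anomalies` helper are illustrative, not from the slides):

```python
import numpy as np

def anomalies(M):
    """Scaled anomaly matrix of an ensemble M (one member per column)."""
    N = M.shape[1]
    Pi = np.eye(N) - np.ones((N, N)) / N   # centering projection I_N - (1/N) 1 1^T
    return M @ Pi / np.sqrt(N - 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 100))   # prior ensemble: state dimension 3, N = 100
D = rng.normal(size=(2, 100))   # perturbed measurements: m = 2

A = anomalies(X)
E = anomalies(D)
Cxx = A @ A.T                    # ensemble covariance estimate
Cdd = E @ E.T
```

By construction each row of `A` sums to zero, and `Cxx` coincides with the usual sample covariance with the 1/(N−1) normalization.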

SLIDE 3

IES: Ensemble subspace version

Original cost functions:
\[ \mathcal{J}(x_j) = \left( x_j - x^f_j \right)^T C_{xx}^{-1} \left( x_j - x^f_j \right) + \left( g(x_j) - d_j \right)^T C_{dd}^{-1} \left( g(x_j) - d_j \right). \]

The solution is contained in the ensemble subspace, thus
\[ x^a_j = x^f_j + A w_j, \]
and
\[ \mathcal{J}(w_j) = w_j^T w_j + \left( g(x^f_j + A w_j) - d_j \right)^T C_{dd}^{-1} \left( g(x^f_j + A w_j) - d_j \right). \]

This reduces the dimension of the problem from the state size to the ensemble size. The iteration is
\[ w^{i+1}_j = w^i_j - \gamma \, \nabla \mathcal{J}^i_j. \]

SLIDE 4

Gradient and Hessian of cost function

Gradient:
\[ \nabla \mathcal{J}(w_j) = 2 w_j + 2 \left( G_j A \right)^T C_{dd}^{-1} \left( g(x^f_j + A w_j) - d_j \right). \]

Hessian (approximate):
\[ \nabla \nabla \mathcal{J}(w_j) \approx 2 I + 2 \left( G_j A \right)^T C_{dd}^{-1} \left( G_j A \right). \]

SLIDE 5

Gauss-Newton iterations

\[ w^{i+1}_j = w^i_j - \gamma \, \Delta w^i_j, \]
\[ \Delta w^i_j = w^i_j - \left( G^i_j A \right)^T \left[ \left( G^i_j A \right) \left( G^i_j A \right)^T + C_{dd} \right]^{-1} \left[ \left( G^i_j A \right) w^i_j + d_j - g(x^f_j + A w^i_j) \right], \]
with
\[ G^i_j = \left( \nabla g \big|_{x^f_j + A w^i_j} \right)^T. \]

SLIDE 6

Approximating \( G^i_j A \)

Define the linear regression
\[ C^i_{yx} = G^i C^i_{xx} \quad \Rightarrow \quad G^i = C^i_{yx} \left( C^i_{xx} \right)^{-1}, \]
and write, approximating the member sensitivities \( G^i_j \) by the average sensitivity \( G^i \):
\[ G^i_j A \approx G^i A = C^i_{yx} \left( C^i_{xx} \right)^{-1} A \qquad \text{(average sensitivity)} \]
\[ \approx Y_i A_i^T \left( A_i A_i^T \right)^{+} A \qquad \text{(ensemble representation)} \]
\[ = Y_i A_i^{+} A \]
\[ = Y_i A_i^{+} A_i \, \Omega_i^{-1} \qquad (A_i = A \Omega_i). \]

When \( n \ge N-1 \), the projection \( A_i^{+} A_i = \Pi_{A^T} = I - \tfrac{1}{N} 1 1^T \) leaves the centered anomalies \( Y_i \) unchanged, so the projection can be dropped:
\[ S_i = \begin{cases} Y_i \, \Omega_i^{-1} & \text{linear case,} \\ Y_i \, \Omega_i^{-1} & n \ge N-1, \\ Y_i A_i^{+} A_i \, \Omega_i^{-1} & n < N-1. \end{cases} \]

SLIDE 16

Equation for W

Matrix form, with \( S_i = Y_i \, \Omega_i^{-1} \):
\[ W_{i+1} = W_i - \gamma \left[ W_i - S_i^T \left( S_i S_i^T + C_{dd} \right)^{-1} \left( S_i W_i + D - g(X_i) \right) \right]. \]

SLIDE 17

IES ensemble subspace algorithm

1:  Inputs: X, D (and C_dd)
2:  W_1 = 0
3:  for i = 1 until convergence do
4:      Y_i = g(X_i) (I − (1/N) 1 1^T) / √(N−1)
5:      Ω_i = I + W_i (I − (1/N) 1 1^T) / √(N−1)
6:      Solve Ω_i^T S_i^T = Y_i^T for S_i                          O(mN^2)
7:      H_i = S_i W_i + D − g(X_i)                                 O(mN^2)
8:      W_{i+1} = W_i − γ [ W_i − S_i^T (S_i S_i^T + C_dd)^{−1} H_i ]   O(mN^2)
9:      X_{i+1} = X (I + W_{i+1} / √(N−1))                         O(nN^2)
10: end for

◮ Cost is of order O(mN^2) and O(nN^2).
◮ No pseudo-inversions of large matrices.

SLIDE 18

Example nonlinear model

[Figure: marginal PDFs of x for PRIOR, BAYES, IES_7_0.0, IES_4_0.0, and SIES_4_0.0.]

SLIDE 19

Iterations nonlinear model

[Figure: marginal PDFs of x for the Prior and the subspace-IES iterations s-IES_1 through s-IES_7.]

SLIDE 20

Equation for W

Standard form (cost \( O(m^3) \)):
\[ W_{i+1} = W_i - \gamma \left[ W_i - S_i^T \left( S_i S_i^T + C_{dd} \right)^{-1} H_i \right]. \]

From Woodbury, rewrite as
\[ W_{i+1} = W_i - \gamma \left[ W_i - \left( S_i^T C_{dd}^{-1} S_i + I_N \right)^{-1} S_i^T C_{dd}^{-1} H_i \right]. \]

For \( C_{dd} = I_m \) we have (cost \( O(m N^2) \)):
\[ W_{i+1} = W_i - \gamma \left[ W_i - \left( S_i^T S_i + I_N \right)^{-1} S_i^T H_i \right]. \]
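The Woodbury rewrite can be checked numerically: both forms give the same gain operator, but one inverts an m × m matrix and the other an N × N matrix. A small sketch with random matrices (the sizes and the SPD construction of C_dd are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 50, 10
S = rng.normal(size=(m, N))
B = rng.normal(size=(m, m))
Cdd = B @ B.T + m * np.eye(m)           # a generic SPD error covariance

# standard form: inverts an m x m matrix, O(m^3)
K_std = S.T @ np.linalg.inv(S @ S.T + Cdd)

# Woodbury form: inverts an N x N matrix instead
Cinv = np.linalg.inv(Cdd)
K_wood = np.linalg.inv(S.T @ Cinv @ S + np.eye(N)) @ S.T @ Cinv
```

`K_std` and `K_wood` agree to machine precision, which is the push-through identity \( (S^T C^{-1} S + I)^{-1} S^T C^{-1} = S^T (S S^T + C)^{-1} \) behind the rewrite.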

SLIDE 21

Subspace inversion (Evensen, 2004)

◮ Why invert an m-dimensional matrix when solving for N coefficients?

\[ S S^T + C_{dd} = U \Sigma \Sigma^T U^T + C_{dd} \approx U \Sigma \left( I_N + \Sigma^{+} U^T C_{dd} U (\Sigma^{+})^T \right) \Sigma^T U^T = S S^T + (S S^{+}) C_{dd} (S S^{+})^T \]
\[ = U \Sigma \left( I_N + Z \Lambda Z^T \right) \Sigma^T U^T = U \Sigma Z \left( I_N + \Lambda \right) Z^T \Sigma^T U^T \]
\[ \left( S S^T + C_{dd} \right)^{-1} \approx U (\Sigma^{+})^T Z \left( I_N + \Lambda \right)^{-1} \left( U (\Sigma^{+})^T Z \right)^T \]

◮ Cost is \( O(m^2 N) \).

SLIDE 22

Subspace inversion with \( C_{dd} \approx E E^T \)

◮ Do not form \( C_{dd} \), but work directly with \( E \).

\[ S S^T + E E^T = U \Sigma \Sigma^T U^T + E E^T \approx U \Sigma \left( I_N + \Sigma^{+} U^T E E^T U (\Sigma^{+})^T \right) \Sigma^T U^T = S S^T + (S S^{+}) E E^T (S S^{+})^T \]
\[ = U \Sigma \left( I_N + Z \Lambda Z^T \right) \Sigma^T U^T = U \Sigma Z \left( I_N + \Lambda \right) Z^T \Sigma^T U^T \]
\[ \left( S S^T + E E^T \right)^{-1} \approx U (\Sigma^{+})^T Z \left( I_N + \Lambda \right)^{-1} \left( U (\Sigma^{+})^T Z \right)^T \]

◮ Cost is \( O(m N^2) \).
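The subspace inversion with E can be sketched in a few lines of NumPy; the helper name `subspace_inv` and the truncation threshold are assumptions for illustration:

```python
import numpy as np

def subspace_inv(S, E):
    """Approximate (S S^T + E E^T)^{-1} in the ensemble subspace of S,
    working directly with E instead of forming C_dd = E E^T."""
    U, sig, _ = np.linalg.svd(S, full_matrices=False)
    keep = sig > 1e-10 * sig[0]            # truncate tiny singular values
    U, sig = U[:, keep], sig[keep]
    X0 = (U / sig).T @ E                   # Sigma^+ U^T E
    lam, Z = np.linalg.eigh(X0 @ X0.T)     # eigendecomposition Z Lambda Z^T
    T = (U / sig) @ Z                      # U (Sigma^+)^T Z
    return T @ np.diag(1.0 / (1.0 + lam)) @ T.T
```

When S has full row rank (m ≤ N − 1), the range of S spans the whole measurement space and the projected expression is exact, so the result matches a direct inversion of \( S S^T + E E^T \).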

SLIDE 23

IES cost functions: Linear case

[Figure: cost function value versus x for the ensemble, linear case.]

SLIDE 24

IES cost functions: Nonlinear case

[Figure: cost function value versus x for the ensemble, nonlinear case.]

SLIDE 25

Steplength scheme

\[ \gamma_i = b + (a - b) \, 2^{-(i-1)/(c-1)} \]

[Figure: steplength versus iteration for (a=0.6, b=0.6, c=0.0), (a=1.0, b=0.3, c=1.1), (a=0.6, b=0.3, c=2.0), and (a=0.6, b=0.2, c=3.0).]
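The steplength formula above is a geometric decay from a at the first iteration toward b; a minimal sketch, with the default parameters taken from one of the curves in the figure:

```python
def steplength(i, a=0.6, b=0.3, c=2.0):
    """gamma_i = b + (a - b) * 2**(-(i - 1)/(c - 1)):
    equals a at i = 1 and decays geometrically toward b."""
    return b + (a - b) * 2.0 ** (-(i - 1) / (c - 1))

# the schedule for the first ten iterations, (a, b, c) = (0.6, 0.3, 2.0)
gammas = [steplength(i) for i in range(1, 11)]
```

Larger c slows the decay, and a = b gives a constant steplength, which covers the (0.6, 0.6, 0.0) curve in the figure.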

SLIDE 26

ERT: https://github.com/equinor/ert

SLIDE 27

ERT: https://github.com/equinor/ert

SLIDE 28

Poly case

Several simple tests are run using a model that is linear in its coefficients,
\[ y(x) = a x^2 + b x + c. \tag{1} \]

◮ The coefficients a, b, and c are random Gaussian variables.
◮ Measurements (d_1, ..., d_5) are taken at x = (0, 2, 4, 6, 8).
◮ The task is polynomial curve fitting to the 5 data points.
◮ This Gauss-linear problem is solved exactly by the ES.
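The poly case can be reproduced with a single ES update, which is exact here since the problem is Gauss-linear. A minimal sketch; the true coefficients, noise level, prior spread, and ensemble size are assumed illustration values, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
xs = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # measurement locations

# assumed truth and noise level for illustration
coef_true = np.array([0.5, 1.0, 3.0])       # (a, b, c)
obs_std = 1.0
d = np.polyval(coef_true, xs) + rng.normal(0.0, obs_std, size=xs.size)

# prior ensemble of Gaussian coefficients and perturbed measurements
X = rng.normal(0.0, 2.0, size=(3, N))
D = d[:, None] + rng.normal(0.0, obs_std, size=(xs.size, N))
Cdd = obs_std**2 * np.eye(xs.size)

# forward model: evaluate each member's polynomial at the 5 locations
gX = np.stack([np.polyval(X[:, j], xs) for j in range(N)], axis=1)

Pi = np.eye(N) - np.ones((N, N)) / N
A = X @ Pi / np.sqrt(N - 1)
Y = gX @ Pi / np.sqrt(N - 1)

# single ES update with perturbed observations
Xa = X + A @ Y.T @ np.linalg.solve(Y @ Y.T + Cdd, D - gX)
```

The posterior ensemble mean moves toward the true coefficients and the posterior spread shrinks relative to the prior, as expected for a well-posed Gauss-linear fit.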

SLIDE 29

Subspace IES verification

[Figure: Cases A, B, and C, comparing ES_0 (STD_ENKF), ES_1, and the IES iterations.]

SLIDE 30

Subspace IES verification

[Figure: cross-plots A vs. B, A vs. C, and B vs. C, comparing STD_ENKF, ES_1, and IES_12.]

SLIDE 31

Subspace IES vs. ESMDA

[Figure: cross-plots A vs. B, A vs. C, and B vs. C, comparing ES_1 and ESMDA_5.]

SLIDE 32

Subspace IES vs. EnRML implementation

[Figure: cross-plots A vs. B, A vs. C, and B vs. C, comparing ES_1 and RML_7.]

SLIDE 33

Reek case: Ensemble of cost functions

[Figure: cost function value versus iteration number for the ensemble.]

SLIDE 34

Reek case: Averaged cost function

[Figure: averaged cost function and delta-W versus iteration for IES0, IES1_999, and IES3_999.]

SLIDE 35

Reek case: Fault multiplier

[Figure: distributions of MULTFLT:F2, MULTFLT:F5, and MULTFLT:F7 values per case.]

SLIDE 36

Reek case: Oil production

[Figure: WOPR:OP_3 and WOPR:OP_5 (m^3/day) versus time (days).]

SLIDE 37

Summary

◮ Robust implementation of a robust IES formulation in ERT.
◮ IES algorithm formulated for big data and big models.
◮ Convergence properties meet requirements for operational use.
◮ Pointed out the value of test-based code development.
◮ ERT is a flexible tool for reservoir history matching.

SLIDE 38

References

http://digires.no
http://digires.no/research-/publications

Chen, Y., and D. S. Oliver, Ensemble randomized maximum likelihood method as an iterative ensemble smoother, Math. Geosci., 44, 1–26, 2012.

Chen, Y., and D. S. Oliver, Levenberg-Marquardt forms of the iterative ensemble smoother for efficient history matching and uncertainty quantification, Computat. Geosci., 17, 689–703, 2013.

Evensen, G., Sampling strategies and square root analysis schemes for the EnKF, Ocean Dynamics, 54, 539–560, 2004.

Evensen, G., Analysis of iterative ensemble smoothers for solving inverse problems, Computat. Geosci., 22, 885–908, 2018, https://doi.org/10.1007/s10596-018-9731-y.

Evensen, G., Accounting for model errors in iterative ensemble smoothers, Computat. Geosci., 2019, https://doi.org/10.1007/s10596-019-9819-z.

Raanes, P. N., A. S. Stordal, and G. Evensen, Revising the stochastic iterative ensemble smoother, Nonlinear Processes in Geophysics Discussions, 1–22, 2019.