  1. Implementation of IES in ERT and validation
     Geir Evensen, Patrick Raanes, and Andreas Stordal
     NORCE – Norwegian Research Centre
     Nansen Environmental and Remote Sensing Center
     EnKF Workshop, Voss, June 3–6, 2019
     https://www.nonlin-processes-geophys-discuss.net/npg-2019-10/

  2. Some definitions
     Prior ensemble and perturbed measurements
     $$ X = \bigl[\, x^f_1, x^f_2, \ldots, x^f_N \,\bigr], \qquad
        D = \bigl[\, d_1, d_2, \ldots, d_N \,\bigr] $$
     Ensemble means
     $$ \bar{x} = \frac{1}{N}\sum_{j=1}^{N} x_j, \qquad
        \bar{d} = \frac{1}{N}\sum_{j=1}^{N} d_j $$
     Ensemble anomaly matrices and covariances
     $$ A = X\bigl(I_N - \tfrac{1}{N}\mathbf{1}\mathbf{1}^T\bigr)\big/\sqrt{N-1}
        \;\rightarrow\; C_{xx} = A A^T $$
     $$ E = D\bigl(I_N - \tfrac{1}{N}\mathbf{1}\mathbf{1}^T\bigr)\big/\sqrt{N-1}
        \;\rightarrow\; C_{dd} = E E^T $$
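
     These definitions translate directly into a few lines of numpy. A minimal sketch, assuming the ensemble is stored one member per column; the sizes and the random prior below are purely illustrative:

     ```python
     import numpy as np

     rng = np.random.default_rng(0)
     n, m, N = 10, 5, 100            # state size, number of observations, ensemble size (illustrative)

     X = rng.normal(size=(n, N))     # prior ensemble, one member x_j^f per column
     D = rng.normal(size=(m, N))     # perturbed measurements, one realization d_j per column

     # Centering operator I_N - (1/N) 1 1^T used to form anomalies
     Pi = np.eye(N) - np.ones((N, N)) / N

     A = X @ Pi / np.sqrt(N - 1)     # state anomalies,             C_xx = A A^T
     E = D @ Pi / np.sqrt(N - 1)     # measurement-error anomalies, C_dd = E E^T
     C_xx = A @ A.T
     C_dd = E @ E.T
     ```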

  3. IES: Ensemble subspace version
     Original cost functions
     $$ \mathcal{J}(x_j) = \bigl(x_j - x^f_j\bigr)^T C_{xx}^{-1} \bigl(x_j - x^f_j\bigr)
        + \bigl(g(x_j) - d_j\bigr)^T C_{dd}^{-1} \bigl(g(x_j) - d_j\bigr). $$
     The solution is contained in the ensemble subspace, thus $x^a_j = x^f_j + A w_j$, and
     $$ \mathcal{J}(w_j) = w_j^T w_j
        + \bigl(g(x^f_j + A w_j) - d_j\bigr)^T C_{dd}^{-1} \bigl(g(x^f_j + A w_j) - d_j\bigr). $$
     This reduces the dimension of the problem from the state size to the ensemble size.
     $$ w^{i+1}_j = w^i_j - \gamma\, \nabla \mathcal{J}^i_j $$
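
     As a reference point, the per-member subspace cost function can be evaluated directly. A minimal sketch, assuming `g` maps a single state vector to measurement space (all names are illustrative):

     ```python
     import numpy as np

     def subspace_cost(w, x_f, A, g, d, C_dd):
         """Ensemble-subspace cost function J(w_j) for a single member (sketch)."""
         r = g(x_f + A @ w) - d                        # data mismatch g(x_j) - d_j
         return w @ w + r @ np.linalg.solve(C_dd, r)   # prior term + data-mismatch term
     ```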

  4. Gradient and Hessian of cost function
     Gradient
     $$ \nabla \mathcal{J}(w_j) = 2 w_j
        + 2 \bigl(G_j A\bigr)^T C_{dd}^{-1} \bigl(g(x^f_j + A w_j) - d_j\bigr) $$
     Hessian (approximate)
     $$ \nabla\nabla \mathcal{J}(w_j) \approx 2 I
        + 2 \bigl(G_j A\bigr)^T C_{dd}^{-1} \bigl(G_j A\bigr) $$
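
     The gradient and approximate Hessian follow the same pattern. A sketch, assuming the tangent-linear product `GA` ($G_j A$) is supplied; in the ensemble method it is later replaced by the averaged sensitivity $S_i$:

     ```python
     import numpy as np

     def grad_and_hess(w, x_f, A, GA, g, d, C_dd):
         """Gradient and approximate Hessian of J(w_j) for one member (sketch)."""
         r = g(x_f + A @ w) - d
         grad = 2.0 * w + 2.0 * GA.T @ np.linalg.solve(C_dd, r)
         hess = 2.0 * np.eye(w.size) + 2.0 * GA.T @ np.linalg.solve(C_dd, GA)
         return grad, hess
     ```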

  5. Gauss-Newton iterations
     $$ w^{i+1}_j = w^i_j - \gamma\, \Delta w^i_j $$
     $$ \Delta w^i_j = w^i_j - \bigl(G^i_j A\bigr)^T
        \Bigl( G^i_j A \bigl(G^i_j A\bigr)^T + C_{dd} \Bigr)^{-1}
        \Bigl( G^i_j A\, w^i_j + d_j - g\bigl(x^f_j + A w^i_j\bigr) \Bigr) $$
     with
     $$ G^i_j = \nabla g^T \big|_{x^f_j + A w^i_j}. $$
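
     A sketch of one damped Gauss-Newton step for a single member, again assuming `GA` ($G^i_j A$) and the current model prediction are supplied (names are illustrative):

     ```python
     import numpy as np

     def gauss_newton_step(w, GA, g_val, d, C_dd, gamma):
         """One update w_j^{i+1} = w_j^i - gamma * dw_j^i (sketch).

         g_val is the current prediction g(x_j^f + A w_j^i).
         """
         innovation = GA @ w + d - g_val                 # (G_j^i A) w_j^i + d_j - g(...)
         dw = w - GA.T @ np.linalg.solve(GA @ GA.T + C_dd, innovation)
         return w - gamma * dw
     ```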

  6. $G^i_j A$
     Define the linear regression
     $$ C^i_{yx} = G^i C^i_{xx} \quad\text{or}\quad
        G^i = C^i_{yx}\bigl(C^i_{xx}\bigr)^{-1} $$
     and write
     $$ \begin{aligned}
        G^i_j A \approx G^i A
          &= C^i_{yx}\bigl(C^i_{xx}\bigr)^{-1} A && \text{(average sensitivity)} \\
          &\approx Y_i A_i^T \bigl(A_i A_i^T\bigr)^{+} A && \text{(ensemble representation)} \\
          &= Y_i A_i^{+} A && \bigl(A_i = A\,\Omega_i\bigr) \\
          &= Y_i A_i^{+} A_i\, \Omega_i^{-1}
             && \bigl(A_i^{+} A_i = \Pi_{A_i^T} = I - \tfrac{1}{N}\mathbf{1}\mathbf{1}^T\bigr)
        \end{aligned} $$
     $$ S_i =
        \begin{cases}
          Y_i\, \Omega_i^{-1} & \text{linear case} \\
          Y_i\, \Omega_i^{-1} & n \ge N-1 \\
          Y_i A_i^{+} A_i\, \Omega_i^{-1} & n < N-1
        \end{cases} $$
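
     A sketch of how the averaged sensitivity $S_i$ might be formed in code; the low-rank branch ($n < N-1$) applies the extra projection, and the function name and interface are illustrative:

     ```python
     import numpy as np

     def average_sensitivity(Y, Omega, A=None):
         """Ensemble estimate S_i of G^i A (sketch).

         Y     : predicted-measurement anomalies Y_i, shape (m, N)
         Omega : the N x N matrix Omega_i
         A     : prior state anomalies; pass it only when n < N - 1, where the
                 projection A_i^+ A_i differs from I - (1/N) 1 1^T
         """
         if A is not None:                        # n < N - 1 branch
             Ai = A @ Omega                       # A_i = A Omega_i
             Y = Y @ (np.linalg.pinv(Ai) @ Ai)    # Y_i A_i^+ A_i
         return np.linalg.solve(Omega.T, Y.T).T   # right-multiply by Omega_i^{-1}
     ```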

  7. Equation for W
     Matrix form, with $S_i = Y_i \Omega_i^{-1}$:
     $$ W_{i+1} = W_i - \gamma \Bigl( W_i
        - S_i^T \bigl(S_i S_i^T + C_{dd}\bigr)^{-1} \bigl(S_i W_i + D - g(X_i)\bigr) \Bigr) $$

  8. IES ensemble subspace algorithm
     1:  Inputs: $X$, $D$ (and $C_{dd}$)
     2:  $W_1 = 0$
     3:  for $i = 1, \ldots,$ convergence do
     4:      $Y_i = g(X_i)\bigl(I - \tfrac{1}{N}\mathbf{1}\mathbf{1}^T\bigr)/\sqrt{N-1}$
     5:      $\Omega_i = I + W_i \bigl(I - \tfrac{1}{N}\mathbf{1}\mathbf{1}^T\bigr)/\sqrt{N-1}$
     6:      Solve $\Omega_i^T S_i^T = Y_i^T$ for $S_i$                              $O(mN^2)$
     7:      $H_i = S_i W_i + D - g(X_i)$                                            $O(mN^2)$
     8:      $W_{i+1} = W_i - \gamma\bigl(W_i - S_i^T (S_i S_i^T + C_{dd})^{-1} H_i\bigr)$   $O(mN^2)$
     9:      $X_{i+1} = X\bigl(I + W_{i+1}/\sqrt{N-1}\bigr)$                         $O(nN^2)$
     10: end for
     ◮ Overall cost is $O(mN^2)$ and $O(nN^2)$.
     ◮ No pseudo-inversions of large matrices.
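
     A compact numpy sketch of the loop above, assuming the ensemble is stored one member per column and `g` evaluates the forward model for a whole ensemble; the convergence test is replaced by a fixed iteration count, and step 8 uses a direct solve rather than the subspace inversion of slides 12–13:

     ```python
     import numpy as np

     def ies_subspace(X, D, C_dd, g, gamma=0.6, n_iter=10):
         """Ensemble-subspace IES iteration (sketch of the algorithm above)."""
         n, N = X.shape
         Pi = np.eye(N) - np.ones((N, N)) / N               # I - (1/N) 1 1^T
         W = np.zeros((N, N))                               # step 2: W_1 = 0

         for _ in range(n_iter):                            # step 3 (fixed count here)
             Xi = X @ (np.eye(N) + W / np.sqrt(N - 1))      # current ensemble X_i
             gXi = g(Xi)                                    # forward-model predictions
             Y = gXi @ Pi / np.sqrt(N - 1)                  # step 4
             Omega = np.eye(N) + W @ Pi / np.sqrt(N - 1)    # step 5
             S = np.linalg.solve(Omega.T, Y.T).T            # step 6: Omega^T S^T = Y^T
             H = S @ W + D - gXi                            # step 7
             W = W - gamma * (W - S.T @ np.linalg.solve(S @ S.T + C_dd, H))  # step 8

         return X @ (np.eye(N) + W / np.sqrt(N - 1))        # step 9: posterior ensemble
     ```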

  9. Example nonlinear model
     [Figure: marginal PDF of x for the nonlinear example, comparing PRIOR, BAYES,
     IES_7_0.0, IES_4_0.0, and SIES_4_0.0.]

  10. Iterations nonlinear model
      [Figure: marginal PDF of x after each iteration, s-IES_1 through s-IES_7,
      together with the prior.]

  11. Equation for W
      Standard form ($O(m^3)$):
      $$ W_{i+1} = W_i - \gamma \Bigl( W_i
         - S_i^T \bigl(S_i S_i^T + C_{dd}\bigr)^{-1} H_i \Bigr) $$
      From Woodbury, rewrite as
      $$ W_{i+1} = W_i - \gamma \Bigl( W_i
         - \bigl(S_i^T C_{dd}^{-1} S_i + I_N\bigr)^{-1} S_i^T C_{dd}^{-1} H_i \Bigr) $$
      For $C_{dd} = I_m$ we have ($O(mN^2)$):
      $$ W_{i+1} = W_i - \gamma \Bigl( W_i
         - \bigl(S_i^T S_i + I_N\bigr)^{-1} S_i^T H_i \Bigr) $$
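
      A sketch of the $C_{dd} = I_m$ form, which only inverts an $N \times N$ matrix; it assumes the data have already been scaled so that $C_{dd} = I_m$, and the function name is illustrative:

      ```python
      import numpy as np

      def ies_w_update_identity_cdd(W, S, H, gamma):
          """W update with C_dd = I_m, via the Woodbury-rearranged form (sketch)."""
          N = W.shape[0]
          return W - gamma * (W - np.linalg.solve(S.T @ S + np.eye(N), S.T @ H))
      ```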

  12. Subspace inversion (Evensen, 2004)
      ◮ Why invert an $m$-dimensional matrix when solving for only $N$ coefficients?
      $$ \begin{aligned}
         S S^T + C_{dd} &= U \Sigma \Sigma^T U^T + C_{dd} \\
         &\approx U \Sigma \bigl( I_N + \Sigma^{+} U^T C_{dd}\, U (\Sigma^{+})^T \bigr) \Sigma^T U^T
            \;=\; S S^T + (S S^{+})\, C_{dd}\, (S S^{+})^T \\
         &= U \Sigma \bigl( I_N + Z \Lambda Z^T \bigr) \Sigma^T U^T \\
         &= U \Sigma Z \bigl( I_N + \Lambda \bigr) Z^T \Sigma^T U^T
         \end{aligned} $$
      with $Z \Lambda Z^T$ the eigendecomposition of $\Sigma^{+} U^T C_{dd}\, U (\Sigma^{+})^T$, giving
      $$ \bigl( S S^T + C_{dd} \bigr)^{-1} \approx
         \bigl( U (\Sigma^{+})^T Z \bigr)\,\bigl( I_N + \Lambda \bigr)^{-1}\,
         \bigl( U (\Sigma^{+})^T Z \bigr)^T. $$
      ◮ Cost is $O(m^2 N)$.
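
      A sketch of the subspace inversion with a full $C_{dd}$; the truncation criterion and tolerance are illustrative choices, not taken from the slides:

      ```python
      import numpy as np

      def subspace_inverse(S, C_dd, truncation=0.999):
          """Approximate (S S^T + C_dd)^{-1} using the ensemble subspace of S (sketch)."""
          U, sig, _ = np.linalg.svd(S, full_matrices=False)
          keep = np.cumsum(sig**2) / np.sum(sig**2) <= truncation   # retain leading singular values
          keep[0] = True
          U, sig = U[:, keep], sig[keep]
          sig_inv = 1.0 / sig

          X0 = (sig_inv[:, None] * (U.T @ C_dd @ U)) * sig_inv[None, :]  # Sigma^+ U^T C_dd U (Sigma^+)^T
          lam, Z = np.linalg.eigh(X0)                                    # Z Lambda Z^T
          X1 = (U * sig_inv) @ Z                                         # U (Sigma^+)^T Z
          return (X1 / (1.0 + lam)) @ X1.T                               # X1 (I + Lambda)^{-1} X1^T
      ```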

  13. Subspace inversion with $C_{dd} \approx E E^T$
      ◮ Do not form $C_{dd}$, but work directly with $E$.
      $$ \begin{aligned}
         S S^T + E E^T &= U \Sigma \Sigma^T U^T + E E^T \\
         &\approx U \Sigma \bigl( I_N + \Sigma^{+} U^T E E^T U (\Sigma^{+})^T \bigr) \Sigma^T U^T
            \;=\; S S^T + (S S^{+})\, E E^T (S S^{+})^T \\
         &= U \Sigma \bigl( I_N + Z \Lambda Z^T \bigr) \Sigma^T U^T \\
         &= U \Sigma Z \bigl( I_N + \Lambda \bigr) Z^T \Sigma^T U^T
         \end{aligned} $$
      $$ \bigl( S S^T + E E^T \bigr)^{-1} \approx
         \bigl( U (\Sigma^{+})^T Z \bigr)\,\bigl( I_N + \Lambda \bigr)^{-1}\,
         \bigl( U (\Sigma^{+})^T Z \bigr)^T $$
      ◮ Cost is $O(m N^2)$.
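
      The same factorization when the error covariance is only available as anomalies $E$; only the $N \times N$ matrix $(\Sigma^{+} U^T E)(\Sigma^{+} U^T E)^T$ is ever formed. A sketch under the same illustrative assumptions as above:

      ```python
      import numpy as np

      def subspace_inverse_from_E(S, E):
          """Approximate (S S^T + E E^T)^{-1} without forming C_dd (sketch)."""
          U, sig, _ = np.linalg.svd(S, full_matrices=False)
          sig_inv = np.where(sig > 1e-8 * sig[0], 1.0 / sig, 0.0)   # guard tiny singular values

          X0 = sig_inv[:, None] * (U.T @ E)                          # Sigma^+ U^T E
          lam, Z = np.linalg.eigh(X0 @ X0.T)                         # Z Lambda Z^T
          X1 = (U * sig_inv) @ Z                                     # U (Sigma^+)^T Z
          return (X1 / (1.0 + lam)) @ X1.T                           # X1 (I + Lambda)^{-1} X1^T
      ```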

  14. IES cost functions: Linear case
      [Figure: cost function value versus x for the linear case.]

  15. IES cost functions: Nonlinear case
      [Figure: cost function value versus x for the nonlinear case.]

  16. Steplength scheme
      [Figure: step length versus iteration for the parameter sets
      (a=0.6, b=0.6, c=0.0), (a=1.0, b=0.3, c=1.1), (a=0.6, b=0.3, c=2.0),
      and (a=0.6, b=0.2, c=3.0).]
      $$ \gamma_i = b + (a - b)\, 2^{-(i-1)/(c-1)} $$
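
      The step-length formula as a one-liner, using the (a=0.6, b=0.3, c=2.0) parameter set from the figure as illustrative defaults:

      ```python
      def steplength(i, a=0.6, b=0.3, c=2.0):
          """gamma_i = b + (a - b) * 2^{-(i-1)/(c-1)}: decays from a toward the floor b."""
          return b + (a - b) * 2.0 ** (-(i - 1) / (c - 1))

      # First few values for (a=0.6, b=0.3, c=2.0): 0.6, 0.45, 0.375, 0.3375, ...
      print([steplength(i) for i in range(1, 5)])
      ```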

  17. ERT: https://github.com/equinor/ert

  18. ERT: https://github.com/equinor/ert

  19. Poly case
      Several simple tests are run using the model
      $$ y(x) = a x^2 + b x + c \qquad (1) $$
      ◮ The coefficients a, b, and c are random Gaussian variables, so the model is
        linear in the unknown parameters even though it is quadratic in x.
      ◮ Measurements $(d_1, \ldots, d_5)$ at $x = (0, 2, 4, 6, 8)$.
      ◮ Polynomial curve fitting to the 5 data points.
      ◮ This Gauss-linear problem is solved exactly by the ES; a setup sketch follows below.
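
      A sketch of the poly-case setup; the prior moments, observation error, and synthetic truth below are illustrative assumptions (the slide only fixes the model form and the measurement locations). The resulting arrays can be fed to the `ies_subspace` sketch from slide 8:

      ```python
      import numpy as np

      rng = np.random.default_rng(42)
      N = 100                                        # ensemble size (illustrative)
      x_obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0])    # measurement locations from the slide

      # Prior ensemble of the coefficients (a, b, c); the moments are illustrative
      X = rng.normal(loc=1.0, scale=1.0, size=(3, N))

      def g(X):
          """Forward model: y = a x^2 + b x + c evaluated at the five measurement points."""
          a, b, c = X
          return np.outer(x_obs**2, a) + np.outer(x_obs, b) + c

      # Synthetic truth and perturbed measurements (illustrative)
      truth = np.array([[0.5], [1.0], [3.0]])
      obs_err = 1.0
      d = g(truth).ravel() + rng.normal(0.0, obs_err, x_obs.size)
      D = d[:, None] + rng.normal(0.0, obs_err, (x_obs.size, N))
      C_dd = obs_err**2 * np.eye(x_obs.size)

      # X_post = ies_subspace(X, D, C_dd, g)   # posterior ensemble, using the sketch from slide 8
      ```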

  20. Subspace IES verification
      [Figure: estimated coefficients A, B, and C across test cases for ES_0,
      STD_ENKF, ES_1, and the IES iterations.]

  21. Subspace IES verification
      [Figure: cross plots of the estimated coefficients A, B, and C, comparing
      STD_ENKF, ES_1, and IES_12.]

  22. Subspace IES vs. ESMDA
      [Figure: cross plots of the estimated coefficients A, B, and C, comparing
      ES_1 and ESMDA_5.]

  23. Subspace IES vs. EnRML implementation
      [Figure: cross plots of the estimated coefficients A, B, and C, comparing
      ES_1 and RML_7.]
