

  1. An ensemble-based kernel learning framework to handle data assimilation problems with imperfect forward simulators. Xiaodong Luo, NORCE

  2. Outline • Background and motivation • An ensemble-based kernel algorithm for supervised learning • From supervised learning to data assimilation with model errors • Synthetic examples and a real field application • Discussion and conclusion

  3. Seismic survey for hydrocarbon reservoir monitoring and management. More advanced techniques are available, e.g., Ocean Bottom Cable (OBC) or even Permanent Reservoir Monitoring (PRM) systems. Source: https://oilnow.gy/

  4. Seismic history matching (SHM)
  SHM involves using (3D or 4D) seismic data to estimate properties of reservoir formations
  [Diagram: petrophysical parameters (input) → forward seismic simulator → seismic data (output), with a history matching algorithm closing the loop]

  5. Forward seismic simulation and inversion
  [Diagram: petrophysical parameters → reservoir simulation → saturation and pressure → rock physics model → forward simulation of impedance; inversion runs in the reverse direction. The rock physics model is prone to model errors]

  6. Motivation: develop a workflow to account for model errors in rock physics models (RPMs)

  7. Outline • Background and motivation • An ensemble-based kernel algorithm for supervised learning • From supervised learning to data assimilation with model errors • Synthetic examples and a real field application • Discussion and conclusion

  8. Supervised learning (1/3)
  • We have a set of inputs X ≡ {x_j}, j = 1, …, N_d, and a corresponding set of noisy outputs Y ≡ {y_j}, j = 1, …, N_d
  • We want to learn a function h so that h(x_j) matches y_j to a good extent, for j = 1, 2, …, N_d
  [Figure: inputs x_1, x_2, x_3, …, x_{N_d} plotted against outputs y_1, y_2, y_3, …, y_{N_d}]

  9. Supervised learning (2/3)
  To this end, we solve a functional optimization problem (known as empirical risk minimization, ERM) to find the optimal h*:
  h* = argmin_h (1/N_d) Σ_j ||y_j − h(x_j)||² + γ R(||h||)
  • γ: regularization parameter
  • R: regularization functional to avoid overfitting, e.g., R(z) = z²
  • ||h||: functional norm in a certain function space

  10. Supervised learning (3/3)
  To solve the ERM problem in practice, one strategy is to adopt a parametric model h(θ; x_j) ≈ h(x_j) that approximates the function. The ERM problem is then converted into a parameter estimation problem:
  θ* = argmin_θ (1/N_d) Σ_j ||y_j − h(θ; x_j)||² + γ R(θ)
  Examples of parametric models for function approximation:
  • generalized linear models
  • support vector machines (SVM)
  • (shallow or deep) neural networks
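To make the parametric ERM concrete, here is a minimal sketch (not from the slides) for the simplest case on the list above: a generalized linear model with ridge regularization R(θ) = ||θ||². The function and variable names are hypothetical; the closed-form solution is the standard regularized least-squares result.

```python
import numpy as np

def erm_linear_fit(phi, y, gamma):
    """Parametric ERM for a generalized linear model h(theta; x) = Phi(x) @ theta.

    Minimizes (1/N_d) * sum_j (y_j - phi_j @ theta)**2 + gamma * ||theta||**2,
    whose stationarity condition gives the linear system solved below.

    phi   : (N_d, p) design matrix, row j holds the features Phi(x_j)
    y     : (N_d,)   noisy outputs
    gamma : regularization parameter
    """
    n_d, p = phi.shape
    a = phi.T @ phi / n_d + gamma * np.eye(p)  # Phi^T Phi / N_d + gamma * I
    b = phi.T @ y / n_d                        # Phi^T y / N_d
    return np.linalg.solve(a, b)               # theta*
```

For nonlinear parametric models (SVMs, neural networks) no such closed form exists, which is one motivation for the derivative-free ensemble treatment on the following slides.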

  11. Ensemble-based supervised learning (1/2)
  Vectorizing the data turns the parameter estimation problem
  θ* = argmin_θ (1/N_d) Σ_j ||y_j − h(θ; x_j)||² + γ R(θ)
  into
  θ* = argmin_θ ||Y − H(θ; X)||² + γ R(θ),
  which is similar to a variational data assimilation (Var) problem.
  Naturally, in light of the developments of ensemble-based data assimilation methods, instead of estimating a single set θ of parameters, we can estimate an ensemble Θ ≡ {θ_k}, k = 1, …, N_e, of such parameters

  12. Ensemble-based supervised learning (2/2)
  Ensemblizing the vectorized problem gives
  Θ* = argmin_Θ (1/N_e) Σ_k { ||Y − H(θ_k; X)||² + γ R(θ_k) }, with Θ = {θ_k}, k = 1, …, N_e
  We obtain all the benefits of using ensemble-based methods:
  • Adjoint-free
  • Uncertainty quantification
  • Fast implementation
  Iterative ensemble smoothers, e.g., Luo et al. 2015*, can be used to solve the ensemble-based (supervised) learning problem
  *Luo, X., Stordal, A. S., Lorentzen, R. J., & Naevdal, G. (2015). Iterative ensemble smoother as an approximate solution to a regularized minimum-average-cost problem: theory and applications. SPE Journal, 20, 962-982.
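The slides use the iterative ensemble smoother of Luo et al. (2015); as a hedged sketch of the adjoint-free idea only, the following shows a single stochastic ensemble-smoother update step, not that full iterative scheme. All names are hypothetical.

```python
import numpy as np

def ensemble_smoother_update(theta, d, y_obs, cd, rng):
    """One stochastic ensemble-smoother update of a parameter ensemble.

    theta : (Ne, Ntheta) parameter ensemble
    d     : (Ne, Nd) simulated outputs H(theta_k; X) for each member
    y_obs : (Nd,) observed outputs
    cd    : (Nd, Nd) observation-error covariance
    """
    ne = theta.shape[0]
    dt = theta - theta.mean(axis=0)        # parameter anomalies
    dd = d - d.mean(axis=0)                # simulated-output anomalies
    c_td = dt.T @ dd / (ne - 1)            # cross-covariance C_{theta,d}
    c_dd = dd.T @ dd / (ne - 1)            # output covariance C_{dd}
    gain = c_td @ np.linalg.inv(c_dd + cd) # Kalman-type gain (no adjoint needed)
    # perturb the observations so the updated ensemble retains spread
    obs = y_obs + rng.multivariate_normal(np.zeros(y_obs.shape[0]), cd, size=ne)
    return theta + (obs - d) @ gain.T
```

The gain is built purely from ensemble statistics, which is what makes the scheme adjoint-free and cheap to implement around any black-box simulator.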

  13. Kernel method for function approximation
  h(x; θ) = Σ_{l=1}^{N_c} w_l K(x − x_l; σ_l), with θ = [w_1, w_2, …, w_{N_c}; σ_1, σ_2, …, σ_{N_c}]^T,
  where
  • x_l (l = 1, 2, …, N_c) is a set of “center points”
  • w_l and σ_l are parameters associated with the l-th center point
  • K is a certain kernel function. Here we use the Gaussian kernel
  K(x − x_l; σ_l) = exp(−σ_l ||x − x_l||²)
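A minimal sketch of evaluating the Gaussian-kernel expansion above, assuming 1-D inputs for simplicity; the function and variable names are hypothetical.

```python
import numpy as np

def kernel_predict(x, centers, w, sigma):
    """Evaluate h(x; theta) = sum_l w_l * exp(-sigma_l * (x - x_l)**2)
    at 1-D query points; theta stacks the weights w_l and widths sigma_l.

    x       : (m,)   query points
    centers : (Nc,)  center points x_l
    w       : (Nc,)  weights
    sigma   : (Nc,)  positive width parameters
    """
    d2 = (x[:, None] - centers[None, :]) ** 2  # (m, Nc) squared distances
    return np.exp(-sigma[None, :] * d2) @ w    # weighted sum of kernels
```

In the ensemble setting, each member k carries its own θ_k = [w; σ], so the learned function itself comes with an uncertainty estimate.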

  14. Outline • Background and motivation • An ensemble-based kernel algorithm for supervised learning • From supervised learning to data assimilation with model errors • Synthetic examples and a real field application • Discussion and conclusion

  15. Problem statement (1/3)
  • Problem in consideration: y_o = g(x_tr) + ε, where
  • y_o: observed output (observation)
  • x_tr: underlying true model variables that generate y_o through the true forward simulator g
  • g: true (but unknown) forward simulator
  • ε: observation noise, ε ~ N(0, C_d)
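As a small illustrative sketch of this data-generating statement (assuming a toy true simulator g and Gaussian noise; all names are hypothetical), a synthetic observation can be drawn as:

```python
import numpy as np

def generate_observation(g, x_true, c_d, rng):
    """Draw y_o = g(x_true) + eps, with eps ~ N(0, C_d).

    g      : callable, true forward simulator returning a 1-D array
    x_true : true model variables
    c_d    : (Nd, Nd) observation-error covariance
    """
    mean = g(x_true)
    eps = rng.multivariate_normal(np.zeros(mean.shape[0]), c_d)
    return mean + eps
```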

  16. Problem statement (2/3)
  • In history matching (data assimilation), we may use the following forward simulation system: y_sim = h(x), where
  • y_sim: simulated observation
  • x: model variables to be estimated
  • h: imperfect forward simulator

  17. Problem statement (3/3)
  y_o = h(x) + (y_o − h(x)) ≈ h(x) + δ(x, θ)
  Kernel methods (or other machine learning models) can be used to reparametrize/approximate the residual term*, so instead of trying to find an optimal functional form for δ, we optimize/estimate a set θ of parameters (as well as x) instead.
  *X. Luo, 2019. Ensemble-based kernel learning for a class of data assimilation problems with imperfect forward simulators. Available from arXiv:1901.10758

  18. Ensemble-based data assimilation with kernel approximation to the residual term
  Θ* = argmin_Θ (1/N_e) Σ_k { [y_o − h(x_k) − δ(x_k, θ_k)]^T C_d^{−1} [y_o − h(x_k) − δ(x_k, θ_k)] + γ R([x_k; θ_k]) }, with Θ = {[x_k; θ_k]}, k = 1, …, N_e
  • This optimization problem can still be solved through an iterative ensemble smoother
  • We need to jointly estimate/update x_k and θ_k
  • In implementation, this just means that we augment x_k and θ_k into model variable vectors that will be updated
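The implementation remark in the last bullet can be sketched as follows (hypothetical names): each ensemble member's model variables x_k and kernel parameters θ_k are stacked into one joint vector, so that a single ensemble smoother update acts on both at once.

```python
import numpy as np

def augment(x_ens, theta_ens):
    """Stack model variables x_k and kernel parameters theta_k into one
    joint state vector per ensemble member; an ensemble smoother applied
    to the joint ensemble then jointly updates both.

    x_ens     : (Ne, nx)     ensemble of model variables
    theta_ens : (Ne, ntheta) ensemble of kernel parameters
    """
    return np.concatenate([x_ens, theta_ens], axis=1)

def split(joint_ens, nx):
    """Recover (x_ens, theta_ens) after the joint update."""
    return joint_ens[:, :nx], joint_ens[:, nx:]
```

No change to the smoother itself is needed; only the state vector grows, which is why existing history-matching codes can accommodate the correction term.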

  19. Outline • Background and motivation • An ensemble-based kernel algorithm for supervised learning • From supervised learning to data assimilation with model errors • Synthetic examples and a real field application • Discussion and conclusion

  20. Synthetic example 1: supervised learning
  Blue: ensemble of predicted functions; cyan (solid): ensemble mean; red (dashed): reference function; green (dashed): biased function
  [Figure: initial ensemble (left) and final ensemble (right)]

  21. Synthetic example 2: data assimilation
  [Figure panels: truth; mean of initial ensemble; mean of final ensemble (with model error correction); mean of final ensemble (no model error correction)]

  22. More information and results for both synthetic examples (supervised learning and data assimilation) can be found in the preprint: X. Luo, 2019. Ensemble-based kernel learning for a class of data assimilation problems with imperfect forward simulators. Available from arXiv:1901.10758

  23. Real field application: accounting for rock-physics-model imperfection in history matching seismic data from the Norne field. In collaboration with my colleagues Rolf Lorentzen and Tuhin Bhakta

  24. Experimental settings
  • Reservoir model: size 46 x 112 x 22
  • Seismic data (four surveys): acoustic impedance on each active gridblock; total number 453,376, reduced to 24,232 through wavelet-based sparse representation*
  • Production data (1997 - 2006): WOPRH, WGPRH, WWPRH; total number 5,038
  • Model variables to estimate: PERM, PORO, NTG, etc.; total number 148,183
  • History matching algorithm: iterative ES (Luo et al. 2015) + correlation-based adaptive localization (Luo et al. 2018, 2019)
  *X. Luo, T. Bhakta, M. Jakobsen, G. Nævdal, 2017. An ensemble 4D-seismic history-matching framework with sparse representation based on wavelet multiresolution analysis. SPE Journal, 22, 985-1010

  25. Experimental settings
  The setting without model error correction (MEC)
  [Diagram as on slide 5: petrophysical parameters → reservoir simulation → saturation and pressure → rock physics model → impedance, with inversion in the reverse direction]
