An ensemble-based kernel learning framework to handle data assimilation problems with imperfect forward simulators
Xiaodong Luo, NORCE
Outline
- Background and motivation
- An ensemble-based kernel algorithm for supervised learning
- …
Source: https://oilnow.gy/. More advanced techniques are available, e.g., Ocean Bottom Cable (OBC) or even Permanent Reservoir Monitoring (PRM) systems.
[Diagram] History matching of seismic data: petrophysical parameters (input) → forward seismic simulator → seismic data (output) → history matching algorithm.
Seismic history matching (SHM) involves using (3D or 4D) seismic data to estimate properties of reservoir formations.
[Diagram] Forward simulation: petrophysical parameters → reservoir simulation → saturation, pressure → rock physics model → impedance (Vp, Vs, ρ). Inversion runs in the opposite direction.
Suppose we have a set of N_t input samples y_1, y_2, …, y_{N_t}, and a corresponding set of N_t noisy outputs z_1, z_2, …, z_{N_t}.
Goal: find a function h such that h(y_i) matches z_i to a good extent, for i = 1, 2, …, N_t.
To this end, we solve a functional optimization problem (known as empirical risk minimization, ERM) to find the optimal h*:

h* = argmin_h (1/N_t) Σ_i ||z_i − h(y_i)||² + λ R(||h||),

where λ is a regularization parameter, R is a regularization functional to avoid overfitting (e.g., R(y) = y²), and ||·|| is a functional norm in a certain function space.
To solve the ERM problem in practice, one strategy is to adopt a parametric model h(θ; ·) that approximates the functional h. The ERM problem is then converted to a parameter estimation problem, i.e.,

h* = argmin_h (1/N_t) Σ_i ||z_i − h(y_i)||² + λ R(||h||)

becomes

θ* = argmin_θ (1/N_t) Σ_i ||z_i − h(θ; y_i)||² + λ R(θ).
Examples of parametric models for functional approximation, with h(θ; y_i) approximating h(y_i): generalized linear models; support vector machines (SVM); (shallow or deep) neural networks.
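As a minimal sketch of the parametric ERM problem above, the example below fits a (hypothetical) generalized linear model h(θ; y) = Φ(y) θ with polynomial features by solving the regularized least-squares problem in closed form. The feature map, degree, and regularization weight are illustrative assumptions, not part of the original method.

```python
import numpy as np

def features(y, degree=3):
    """Polynomial feature map Phi(y) for scalar inputs y (illustrative choice)."""
    return np.vander(y, degree + 1, increasing=True)  # columns: 1, y, y^2, ...

def fit_erm(y, z, degree=3, lam=1e-3):
    """Closed-form ridge solution of (1/N_t)||z - Phi theta||^2 + lam ||theta||^2."""
    Phi = features(y, degree)
    n_t = len(y)
    A = Phi.T @ Phi / n_t + lam * np.eye(Phi.shape[1])  # normal equations
    b = Phi.T @ z / n_t
    return np.linalg.solve(A, b)

# Usage: noisy samples z_i = sin(y_i) + noise
rng = np.random.default_rng(0)
y = np.linspace(-1.0, 1.0, 50)
z = np.sin(y) + 0.05 * rng.standard_normal(50)
theta = fit_erm(y, z)
z_pred = features(y) @ theta
print(np.mean((z - z_pred) ** 2))  # small residual mismatch
```

With a richer parametric model (e.g., the kernel expansion or a neural network), the same objective is minimized numerically instead of in closed form.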
Vectorize:

θ* = argmin_θ (1/N_t) Σ_i ||z_i − h(θ; y_i)||² + λ R(θ)  ⟶  θ* = argmin_θ ||z − g(θ; y)||² + λ R(θ),

where z stacks all the outputs z_i and g(θ; y) stacks all the predictions h(θ; y_i).
Naturally, in light of the developments of ensemble-based data assimilation methods, instead of estimating a single set of parameters θ, we can estimate an ensemble Θ ≡ {θ_j}_{j=1}^{N_e}.
This is similar to a variational data assimilation (Var) problem:

θ* = argmin_θ ||z − g(θ; y)||² + λ R(θ)

Ensemblize:

Θ* = argmin_{Θ={θ_j}_{j=1}^{N_e}} (1/N_e) Σ_j { ||z − g(θ_j; y)||² + λ R(θ_j) }

We thereby obtain all the benefits of using ensemble-based methods.
Iterative ensemble smoothers, e.g., Luo et al. 2015*, can be used to solve the ensemble-based (supervised) learning problem
*Luo, X., Stordal, A. S., Lorentzen, R. J., & Naevdal, G. (2015). Iterative Ensemble Smoother as an Approximate Solution to a Regularized Minimum-Average-Cost Problem: Theory and Applications. SPE Journal, 20, 962-982.
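To illustrate the "ensemblize" idea, the sketch below performs one stochastic ensemble-smoother update of a parameter ensemble {θ_j}. This is a simplified single-step update with perturbed observations, not the full iterative scheme of Luo et al. (2015); the toy linear model g and all sizes are illustrative assumptions.

```python
import numpy as np

def ensemble_smoother_update(thetas, g, z_obs, obs_std, rng):
    """One stochastic ensemble-smoother update of the parameter ensemble."""
    preds = np.array([g(t) for t in thetas])   # (N_e, N_obs) predicted data
    A = thetas - thetas.mean(axis=0)           # parameter anomalies
    G = preds - preds.mean(axis=0)             # predicted-data anomalies
    n_e = len(thetas)
    C_tg = A.T @ G / (n_e - 1)                 # parameter-data cross-covariance
    C_gg = G.T @ G / (n_e - 1)                 # predicted-data covariance
    C_d = obs_std ** 2 * np.eye(len(z_obs))    # observation error covariance
    K = C_tg @ np.linalg.inv(C_gg + C_d)       # Kalman-type gain
    # Perturb observations so the updated ensemble keeps proper spread
    z_pert = z_obs + obs_std * rng.standard_normal((n_e, len(z_obs)))
    return thetas + (z_pert - preds) @ K.T

# Usage: linear toy model g(theta) = H theta with true theta = [1, -2]
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 2))
theta_true = np.array([1.0, -2.0])
z_obs = H @ theta_true + 0.01 * rng.standard_normal(8)
thetas = 2.0 * rng.standard_normal((100, 2))   # initial ensemble
thetas = ensemble_smoother_update(thetas, lambda t: H @ t, z_obs, 0.01, rng)
print(thetas.mean(axis=0))  # close to [1, -2]
```

An iterative ensemble smoother repeats updates of this kind with inflated data covariances to handle nonlinear g.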
As the parametric model, we use a kernel expansion

h(y; θ) = Σ_{j=1}^{N_tp} w_j K(y − y_j^c; γ_j),  θ = [w_1, w_2, …, w_{N_tp}; γ_1, γ_2, …, γ_{N_tp}]^T,

for a set of "center points" y_j^c (j = 1, 2, …, N_tp), where

K(y − y_j^c; γ_j) = e^{−γ_j ||y − y_j^c||²}.
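A minimal sketch of evaluating this kernel (RBF) expansion, assuming fixed center points; the center locations, weights, and bandwidths below are illustrative numbers only.

```python
import numpy as np

def rbf_predict(y, centers, weights, gammas):
    """Evaluate h(y; theta) = sum_j w_j * exp(-gamma_j * ||y - y_j^c||^2).

    y: (n, d) inputs; centers: (N_tp, d); weights, gammas: (N_tp,).
    """
    # Squared distances between every input and every center point
    d2 = ((y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (n, N_tp)
    return np.exp(-gammas[None, :] * d2) @ weights                  # (n,)

# Usage: three center points in 1-D
centers = np.array([[-1.0], [0.0], [1.0]])
weights = np.array([0.5, 1.0, -0.5])
gammas = np.array([1.0, 2.0, 1.0])
y = np.array([[0.0]])
print(rbf_predict(y, centers, weights, gammas))  # → [1.]
```

In the learning problem, both the weights w_j and the bandwidths γ_j are collected into θ and estimated.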
d^o = g^tr(m^tr) + ε ≈ g^tr(m) ≈ g(m) + δ(m, γ),

where
m^tr: underlying true model variables that generate d^o through the true forward simulator;
g^tr: true (but unknown) forward simulator;
g^tr(m): simulated observation;
m: model variables to be estimated;
g: imperfect forward simulator.

Kernel methods (or other machine learning models) can be used to reparametrize/approximate the residual term δ(m, γ).
So, instead of trying to find an optimal functional form for δ, we estimate an ensemble of augmented vectors [m_j; γ_j]:

Θ* = argmin_{Θ={[m_j; γ_j]}_{j=1}^{N_e}} Σ_j (d^o − g(m_j) − δ(m_j, γ_j))^T C_d^{−1} (d^o − g(m_j) − δ(m_j, γ_j)) + λ R([m_j; γ_j]),

where [m_j; γ_j] are the vectors that will be updated and C_d is the observation error covariance matrix.
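A minimal sketch of the augmented data-mismatch term above, showing how the kernel-based residual δ can absorb a systematic bias of the imperfect simulator g. The toy simulator, residual model, and numbers are illustrative assumptions.

```python
import numpy as np

def augmented_mismatch(d_obs, g, delta, m, gamma, C_d):
    """(d_o - g(m) - delta(m, gamma))^T C_d^{-1} (d_o - g(m) - delta(m, gamma))."""
    r = d_obs - g(m) - delta(m, gamma)   # innovation with model-error correction
    return r @ np.linalg.solve(C_d, r)   # r^T C_d^{-1} r

# Usage: toy 1-parameter problem where g misses a constant bias of 0.5
g = lambda m: np.array([m[0], 2.0 * m[0]])    # imperfect simulator
delta = lambda m, gamma: gamma * np.ones(2)   # residual model: constant bias
d_obs = np.array([1.5, 2.5])                  # observations
C_d = 0.01 * np.eye(2)
m = np.array([1.0])
print(augmented_mismatch(d_obs, g, delta, m, 0.5, C_d))  # → 0.0 (bias explained)
print(augmented_mismatch(d_obs, g, delta, m, 0.0, C_d))  # large without correction
```

Minimizing this misfit over the ensemble of [m_j; γ_j] updates the model variables and the residual-model parameters jointly.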
[Figure] Initial vs. final ensemble. Blue: ensemble of predicted functions; cyan (solid): ensemble mean; red (dashed): reference function; green (dashed): biased function.
[Figure] Truth, mean of initial ensemble, mean of final ensemble without model error correction, and mean of final ensemble with model error correction.
More information and results for both synthetic examples (supervised learning and data assimilation) can be found in the preprint on data assimilation with imperfect forward simulators, available from arXiv:1901.10758.
Real field application: accounting for model imperfection in history matching seismic data from the Norne field. In collaboration with my colleagues Rolf Lorentzen and Tuhin Bhakta.
Types of settings and values/info:
- Reservoir model size: 46 x 112 x 22
- Seismic data (four surveys): acoustic impedance on each active gridblock; total number: 453,376, reduced to 24,232 through wavelet-based sparse representation*
- Production data (1997-2006): WOPRH, WGPRH, WWPRH; total number: 5,038
- Model variables to estimate: PERM, PORO, NTG, etc.; total number: 148,183
- History matching algorithm: iterative ES (Luo et al. 2015) + correlation-based adaptive localization (Luo et al. 2018, 2019)
*X Luo, T Bhakta, M Jakobsen, G Nævdal, 2017. An ensemble 4D-seismic history-matching framework with sparse representation based on wavelet multiresolution analysis. SPE Journal, 22, 985 - 1,010
[Diagram] The setting without model error correction (MEC): petrophysical parameters → reservoir simulation → saturation, pressure → rock physics model → impedance (Vp, Vs, ρ). Forward simulation runs left to right; inversion in the opposite direction.
[Diagram] The setting with model error correction (MEC): petrophysical parameters → reservoir simulation → saturation, pressure → rock physics model + kernel-based residual model → impedance (Vp, Vs, ρ). Forward simulation runs left to right; inversion in the opposite direction.
Kernel-based residual model (inputs/output) at each active gridblock:
- Input 1: PORO
- Input 2: NTG
- Input 3: pressure
- Input 4: gas saturation
- Input 5: water saturation
- Output: impedance
Total number of kernel parameters: 120,000 (with 20,000 center points).
[Figures] Results without vs. with MEC: seismic data mismatch (history matching) and production data mismatch (cross validation); reductions of average production data mismatch with respect to the initial ensemble; predicted water production rates (WPR) at well D-1H for the initial ensemble and the final ensembles without and with MEC.
- Supervised learning bears a strong similarity to (variational) data assimilation; as such, it becomes natural to develop an ensemble-based framework for supervised learning problems.
- The framework can be extended to handle data assimilation problems in the presence of model errors.
- Accounting for model errors proved useful for improving DA performance in both the synthetic and real-world problems presented here.