Predicting Vertical Resistivity By Machine Learning - Presentation




slide-1
SLIDE 1

See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/333812114

Predicting Vertical Resistivity By Machine Learning - Presentation

Presentation · June 2019

CITATIONS: 0 | READS: 36

3 authors, including:

Alexander Vereshagin (M Vest Energy AS): 46 publications, 156 citations
Torolf Wedberg (smartFeatures.ai): 24 publications, 227 citations

Some of the authors of this publication are also working on these related projects:
  • Continuous follow up of the performance of the CSEM technology as a de-risking from the exploration toolbox
  • Machine Learning in Geoscience

All content following this page was uploaded by Alexander Vereshagin on 17 June 2019.

slide-2
SLIDE 2

Predicting Vertical Resistivity By Machine Learning

1

slide-3
SLIDE 3

Th_R11_05 (Predicting Vertical Resistivity By Machine Learning)

Alexander Vereshagin*, Torolf Wedberg, Aristofanis Stefatos M Vest Energy

* alexandre@mvestenergy.no

2

slide-4
SLIDE 4

Background

3

slide-5
SLIDE 5

Motivation

Why measure anisotropy (Rv):

  • More precise estimation of SHC
  • Important for EM/CSEM:
  • (Anomaly in Rv, and/or anomaly in Rv/Rh) ≈ DHI
  • Dipping anisotropy challenge: high anisotropy → strong effect
  • It helps to cross-check with wells!

(Figure: emgs.com)

4

slide-6
SLIDE 6

Motivation

  • Rh: normally available in wells (deep resistivity); Rv: scarce (triaxial logging tools)
  • 2017: analyzed all publicly available triaxial wells on the Norwegian continental shelf (NCS) – 18 wells by that time (AAPG/SEG 2017, Wedberg et al.)
  • No machine learning, just data
  • Results: subsurface is mostly anisotropic
  • Median(Rv/Rh) ≈ 2.5 for the available data (upscaled to formation level)

5

slide-7
SLIDE 7

Limited availability of triaxial logs: 25 wells on the NCS vs. 6000+ wells in the DISKOS database!

6

slide-8
SLIDE 8

Machine Learning

  • Goal: predict anisotropy for wells where triaxial data is not available
  • What to predict? Anisotropy (ani = Rv/Rh) vs. Rv: ani has less value spread and is less correlated to Rh
  • Type of model – key factors:
  • Input: basic composite logs
  • Not many wells, with missing intervals and bad points → recurrent/convolutional models would require heavy pre-processing
  • Most of the advanced algorithms work, but performance differs
  • “Sweet” functionalities (error bars, feature importance, …)

7
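The target choice above (ani = Rv/Rh, later scored in log10 space, as slide 29 notes) can be sketched in a few lines; the sample values below are hypothetical:

```python
import math

def anisotropy_target(rv, rh):
    """Training target: log10 of the anisotropy ratio Rv/Rh per sample.

    Predicting the ratio instead of Rv itself narrows the value spread
    and decouples the target from Rh, as argued on the slide.
    """
    return [math.log10(v / h) for v, h in zip(rv, rh)]

# Hypothetical resistivity samples (ohm*m)
rv = [4.0, 6.3, 2.0]
rh = [2.0, 2.5, 2.0]
print(anisotropy_target(rv, rh))
```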

slide-9
SLIDE 9

Features: correlations

  • Rv has a strong correlation to RDEP, contrary to anisotropy
  • We do not want dominating features, so let's predict anisotropy!

8

slide-10
SLIDE 10

Feature selection

  • Composite log: GR, RDEP, AC, ACS, NEU, DEN, PE, …
  • More features → better scores. But: missing data → larger error, worse scores
  • What else?
  • Vertical depth (compaction!)
  • Water depth
  • Combined features
  • Geographic coordinates (coverage)
  • Geological formation (availability → area of application?)

(Figure: training table with depth z, feature columns and the anisotropy target to predict)
9

slide-11
SLIDE 11

Model preparation

(Training table: z | Feature 1 | Feature 2 | … | Anisotropy; the last column is what we predict at depth z)

  • Resampling
  • Some cleaning
  • Filtering: good for scores, BUT more filters → less diversity of feature values → less prediction stability!
  • Model: gradient boosting / XGBoost; others can be plugged in
10
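A minimal sketch of the model setup named above, using scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost (the slide says other models can be plugged in); the feature matrix and hyperparameters below are synthetic placeholders, not the authors' actual configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the resampled training table:
# columns could be depth z, GR, RDEP, DEN; target is log10-anisotropy.
X = rng.normal(size=(500, 4))
y = 0.3 * X[:, 0] - 0.1 * X[:, 2] + 0.05 * rng.normal(size=500)

model = GradientBoostingRegressor(n_estimators=200, max_depth=2,
                                  learning_rate=0.05)
model.fit(X, y)
print("training R2:", round(model.score(X, y), 3))
```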

slide-12
SLIDE 12

Boosted trees (e.g., XGBoost)

http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html

(Figure: example boosted trees of depth 1 and depth 2, with splits such as x1 < p and x2 < q.)

11
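The additive-trees idea the figure illustrates can be shown in pure Python: gradient boosting for squared loss repeatedly fits a depth-1 stump to the current residuals. Data and hyperparameters here are hypothetical, for illustration only:

```python
def fit_stump(x, r):
    """Best depth-1 regression tree (single threshold) for residuals r."""
    best = None
    for p in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi < p]
        right = [ri for xi, ri in zip(x, r) if xi >= p]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, p, lm, rm)
    _, p, lm, rm = best
    return lambda xi: lm if xi < p else rm

def boost(x, y, n_trees=50, lr=0.3):
    """Each new stump fits the residuals of the current ensemble."""
    trees, pred = [], [0.0] * len(y)
    for _ in range(n_trees):
        r = [yi - pi for yi, pi in zip(y, pred)]
        t = fit_stump(x, r)
        trees.append(t)
        pred = [pi + lr * t(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * t(xi) for t in trees)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]   # a step the stumps must discover
f = boost(x, y)
print([round(f(xi), 2) for xi in x])
```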

slide-13
SLIDE 13

Training: cross-validation

K-fold cross-validation: estimate how the model will perform once fully trained.

  • Multiple (grid) runs: vary the algorithm's (hyper)parameters, feature combinations and train-test splitting
  • Compute fit scores on the test sets for each train-test data split (try multiple metrics!)
  • Select the best (hyper)parameters, FIX THEM
  • Re-train on the entire database with these fixed parameters. The fit should not change dramatically
  • Save the model (and the models for error bars)

(≈10^5 samples; 25 wells = 25 “groups”; area as “class”. scikit-learn.org)

12
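Treating each well as a "group" so that no well straddles a train-test split can be done with scikit-learn's GroupKFold (the slide cites scikit-learn.org); the toy data below use 10 wells rather than the paper's 25:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# Toy stand-in: 200 samples from 10 "wells" (the paper groups by 25 wells).
X = rng.normal(size=(200, 4))
y = rng.normal(size=200)
groups = np.repeat(np.arange(10), 20)        # well id for every sample

gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups):
    # A well never appears on both sides of a split.
    assert not set(groups[train_idx]) & set(groups[test_idx])
print("folds:", gkf.get_n_splits())
```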

slide-14
SLIDE 14

4 parameters with 7, 5, 3, 7 values, respectively: 7·5·3·7 = 735 parameter combinations (“sets”)

(Plot axis: parameter set nr., 1 … 735)

13
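The 7·5·3·7 = 735 parameter sets can be enumerated with itertools.product; the parameter names and values below are hypothetical, only the grid shape matches the slide:

```python
from itertools import product

# Hypothetical hyperparameter grid with the slide's 7*5*3*7 shape.
grid = {
    "n_estimators": [50, 100, 150, 200, 300, 400, 500],    # 7 values
    "learning_rate": [0.01, 0.03, 0.05, 0.1, 0.2],         # 5 values
    "max_depth": [2, 3, 4],                                # 3 values
    "subsample": [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0],     # 7 values
}
param_sets = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(param_sets))  # 735
```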

slide-15
SLIDE 15

Run for every set and plot the fit score…

14

slide-16
SLIDE 16

… reorder the parameter sets

15

slide-17
SLIDE 17

… reorder the parameter sets

16

slide-18
SLIDE 18

… another score

17

slide-19
SLIDE 19

… and yet another score

18

slide-20
SLIDE 20

… and run for all train-test splits.

19

slide-21
SLIDE 21

…which sets look good?

20

slide-22
SLIDE 22

…which sets look good?

21

slide-23
SLIDE 23

Error bar (quantile regression)

(Log panels: anisotropy; Rv; Rh, RDEP)

22
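Error bars via quantile regression can be sketched with scikit-learn's GradientBoostingRegressor and loss="quantile": one booster per quantile, so the 10 % and 90 % predictions bracket the estimate. The data and quantile choices here are synthetic assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(400, 1))
y = X[:, 0] + 0.1 * rng.normal(size=400)     # synthetic noisy target

# One booster per quantile; lo/hi predictions form the error bar.
params = dict(n_estimators=100, max_depth=2, learning_rate=0.1)
lo = GradientBoostingRegressor(loss="quantile", alpha=0.1, **params).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9, **params).fit(X, y)

band = hi.predict(X) - lo.predict(X)
print("mean 10-90% band width:", round(band.mean(), 3))
```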

slide-24
SLIDE 24

Feature importance (“sweet” functionality)

Scikit-learn Gradient Boosting feature importance metrics

(Ranked features: “combined” features, depth, depth below mudline, NEU-DEN separation, water depth, RDEP, DEN)

Dictated by Physics and data availability

23
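The feature-importance readout is one of the "sweet" functionalities mentioned earlier; a sketch using scikit-learn's feature_importances_, where the feature names echo the slide but the data, and which feature dominates, are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
# Feature names echo the slide; data are synthetic, with the
# "combined" feature deliberately made dominant.
names = ["combined", "depth", "depth below mudline",
         "NEU-DEN separation", "water depth", "RDEP"]
X = rng.normal(size=(300, len(names)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

model = GradientBoostingRegressor(n_estimators=100, max_depth=2).fit(X, y)
ranked = sorted(zip(names, model.feature_importances_), key=lambda p: -p[1])
for name, imp in ranked:
    print(f"{name:20s} {imp:.3f}")
```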

slide-25
SLIDE 25

Testing: before including in training set

(Log panels: anisotropy; Rv; Rh, RDEP. Outlier annotation: saturation?)

24

slide-26
SLIDE 26

Testing: after including in training set

(Log panels: anisotropy; Rv; Rh, RDEP. Outlier annotation: saturation?)

25

slide-27
SLIDE 27

Testing: before including in training set

(Log panels: anisotropy; Rv; Rh, RDEP)

26

slide-28
SLIDE 28

Testing: after including in training set

(Log panels: anisotropy; Rv; Rh, RDEP)

27

slide-29
SLIDE 29

Testing: final scores

Anisotropy fit: (measured – predicted), 𝞽 ≅ 1

Metric    Testing    Final
MedAE     ≈ 0.08     ≈ 0.06
RMSE      ≈ 0.14     ≈ 0.12
R2        ≈ 0.5      ≈ 0.6

Scores are for log10(ani). Why the low R2? Effectively, the anisotropy ratio error is around 10 %.

28

(Plot annotations: R2 = 0.57; ≈ 20 m)
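The three scores in the table (MedAE, RMSE, R2) are straightforward to compute; a pure-Python sketch with hypothetical log10(ani) values (a MedAE of d in log10 space corresponds to a multiplicative ratio error of 10**d):

```python
import math
from statistics import median

def scores(measured, predicted):
    """MedAE, RMSE and R2 of a fit (pure-Python sketch of the table's metrics)."""
    err = [m - p for m, p in zip(measured, predicted)]
    medae = median(abs(e) for e in err)
    rmse = math.sqrt(sum(e * e for e in err) / len(err))
    mean = sum(measured) / len(measured)
    ss_res = sum(e * e for e in err)
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return medae, rmse, 1.0 - ss_res / ss_tot

# Hypothetical log10(ani) values
m = [0.30, 0.40, 0.10, 0.55, 0.25]
p = [0.35, 0.34, 0.12, 0.50, 0.30]
print(scores(m, p))
```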

slide-30
SLIDE 30

Upscaling to formation level: RV upscaled / RH upscaled

Outliers: saturation, enhanced by up-scaling (physics). R2 (outliers removed) ≈ 0.8–0.95
29
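Upscaling to formation level can be sketched as averaging Rv and Rh per formation before taking their ratio; whether the authors average exactly this way is an assumption, and the formation names and values below are hypothetical:

```python
from collections import defaultdict

def upscale(formation, rv, rh):
    """Average Rv and Rh per formation, then take the ratio of the averages.

    This is a sketch of formation-level up-scaling; the exact averaging
    scheme used by the authors is an assumption.
    """
    acc = defaultdict(lambda: [0.0, 0.0, 0])
    for f, v, h in zip(formation, rv, rh):
        acc[f][0] += v
        acc[f][1] += h
        acc[f][2] += 1
    return {f: (sv / n) / (sh / n) for f, (sv, sh, n) in acc.items()}

fm = ["A", "A", "A", "B", "B"]      # hypothetical formation labels
rv = [4.0, 5.0, 6.0, 2.0, 2.2]
rh = [2.0, 2.0, 2.0, 2.0, 2.0]
print(upscale(fm, rv, rh))
```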

slide-31
SLIDE 31

Expanding predictions

1. Predicting ani (Rv) for the entire interval of the training wells (beyond the training data interval).
2. CSEM resolves Rv → check with CSEM! Adding 15 wells without triax but with CSEM. Unfortunately, we currently do not have access to modern CSEM cubes for any of our triax wells (such cubes exist).

7324/7-2 7324/9-1 7324/2-1 7325/1-1 7324/8-1 7324/8-2 7220/4-1 7220/5-2 7220/5-1 7220/8-1 7324/7-1 7220/8-U-1 6608/8-1 6608/11-7 S 6608/11-4 6608/11-5 6608/11-2 6608/11-6 6608/11-3 6608/10-6 6608/11-1 6608/11-8 6705/7-1 6706/11-1 6706/11-2 6707/10-3 S 7122/2-1

Stordal, Vema, Ivory, Phoenix, Gymir, Hammerfest basin, Johan Castberg area, Wisting area, Atlantis, Apollo

34/11-A-16 16/1-21 A 16/1-21 S 16/1-22 S 16/2-13 A 16/2-16 AT2 2/4-K-4 A 34/3-3 S 15/3-A-12 T2 15/3-A-5 6407/1-5 S 6506/12-P-1 AH 6305/8-2

Ivar Aasen, Johan Sverdrup, Gudrun, Ekofisk, Kvitebjørn, Knarr, Ormen Lange, Åsgard, Maria

(Map legend: training wells; M Vest recent CSEM inversions)

30

slide-32
SLIDE 32

Comparing to CSEM inversion (Example)

Triaxial logs are not available. CSEM inversion has low vertical resolution, but resolves the transverse resistivity and upscaled Rv and Rh.

(Inversion curves: guided ρV, unconstrained ρV, constrained-top ρV)

Target interval

Missing features

(Log panels: Rv; Rh, RDEP)

31

slide-33
SLIDE 33

Transverse resistivity over 1.5 km depth interval

Inversions of different age. Lacking features in the shallow section; lacking CSEM sensitivity in the deeper section.

(Log panel annotations: missing features; Rh, RDEP)

32

slide-34
SLIDE 34

Statistics for anisotropy @ different scales

Sampled to measurement scale: median ≈ 2.65, mean ≈ 3.6
Sampled to geological formation scale (upscaling): median ≈ 2.2, mean ≈ 2.6

33

slide-35
SLIDE 35

Expanded anisotropy statistics (NCS)

Wedberg et.al. 2017 Updated triax database Conservative prediction Handling missing features

  • Nr. Fm intervals

87 159 365 450

  • Nr. unique Fms

32 43 80 80

  • Nr. wells

18 25 40 40

No missing input features Handling missing features Prediction median: ≈ 2.5 Triax database median: ≈ 2.65

34

slide-36
SLIDE 36

Anisotropy vs HC

Not all HC-bearing formations show strong anisotropy, and there are anisotropic formations which are not HC-bearing. But among the formations with high anisotropy, there are more HC-bearing ones.

35

slide-37
SLIDE 37

Potential applications

Is there a well? Is a triaxial (3ax) log present?

If a well exists but lacks a triaxial log:
  • Create an Rv log
  • Look for missed pay zones
  • Improve resource estimates
  • Make a better background for future CSEM

If a triaxial log is present: QC the log, add it to the database, re-train.

  • Analyze surrounding wells
  • Get Rv for the CSEM background
  • Raise an alarm if there is a chance of a false positive

Does a CSEM inversion exist? QC the inversion, re-invert if needed. Feasibility and sensitivity analysis.

The approach will work for other measurements.

36

slide-38
SLIDE 38

Conclusions

  • Predicting ani/Rv from basic composite logs with minimal feature selection
  • Optimization is important
  • Prediction is very good when upscaled
  • Matches CSEM results
  • At which scale can we comfortably predict? Make a new score?
  • Useful for CSEM analysis, missed pays, log QC, etc.
  • Interesting anisotropy statistics over the NCS; the numbers stay stable
  • Future:
  • Mask before training?
  • Add the latest triax wells
  • More CSEM results

37

slide-39
SLIDE 39

References

38

slide-40
SLIDE 40

Acknowledgements / Thank You / Questions

Alexander Goncearenco (The National Institutes of Health, US)
Lars Lorenz (Geonautika)
Daniel Shantsev (EMGS, Norway)

Alexander Vereshagin*, Torolf Wedberg, Aristofanis Stefatos (M Vest Energy)

* alexandre@mvestenergy.no

39
