SLIDE 1

HONGGEUN JO JAVIER E. SANTOS MICHAEL J. PYRCZ

THE AAPG 2019 ANNUAL CONVENTION & EXHIBITION

CONDITIONING STRATIGRAPHIC, RULE- BASED MODELS WITH GENERATIVE ADVERSARIAL NETWORKS:

A DEEPWATER LOBE, DEEP LEARNING EXAMPLE

SLIDE 2

Agenda

  • Rule‐based Model: Deep‐water depositional setting
  • DCGAN and Image Inpainting
  • Proposed Method for Data Conditioning
  • Results
  • Conclusion

Presenter’s notes: I will start with the basic idea of rule-based models, including a literature review and the motivation of this study. Then two deep learning algorithms will be covered, DCGAN and semantic image inpainting, followed by the proposed method of this study for data conditioning in rule-based models.

After presenting the results, I will conclude with the implementation and key points of this study.

SLIDE 3

Rule-based model

  • Simulate sediment dynamics to generate a numerical description of reservoir architecture that captures geological-process-informed features.
  • Enable the geomodeller to integrate geological concepts directly.
  • Preserve the consequent geologic heterogeneity and continuity, which are not readily achievable with conventional geostatistical methods.
  • Referred to as:
    – event-based (Pyrcz and Strebelle, 2006)
    – hybrid (Michael et al., 2010)
    – surface-based (Pyrcz et al., 2005; Bertoncello et al., 2013)
    – process-oriented (Wen, 2005)
    – rule-based modeling (Pyrcz et al., 2015; Jo et al., 2019)

Presenter’s notes: With rule-based models we can 1) integrate geological concepts directly and 2) preserve realistic geological heterogeneity and continuity. In recent work, rule-based modeling is referenced by a variety of names. Despite the different names, these methods have a common point: they 1) apply depositional rules in temporal sequence and 2) update the topographic surface accordingly.

SLIDE 4

Rule-based model

  • Comparison between rule-based models and conventional geostatistical models: architecture of the deposit, heterogeneity/continuity of the elements (Pyrcz et al., 2015)

Presenter’s notes: In this graph, Pyrcz compares the rule-based model with other geostatistical modeling methods. By integrating the stacking pattern, forward model, topography, and flow path of the sediment, rule-based models can capture a more realistic architecture of the deposit while preserving heterogeneity and continuity in the depositional elements.

SLIDE 5

Rule-based model – Input parameters

  • Geometry of depositional element:
    – Ellipsoidal, lobate element (similar to Xie et al., 2001)
    – Turbidite lobe complex
    – Controlled by its width, length, and thickness (Deptuck et al., 2008)
  • Depositional stacking pattern:
    – Random stacking vs. perfect compensational stacking
    – Tendency measured by the compensation index (Straub et al., 2009)
    – Controlled by the compensation exponent (0 for random, >5 for perfect compensational stacking)
  • Distribution of petrophysical properties:
    – After building the compositional surfaces, allocate petrophysical properties (i.e., porosity and permeability) with a hierarchical trend model (Pyrcz, 2004)
    – Coarsening-up at the complex scale, but fining-up is expected within the element scale

(Jo et al., 2019)

Presenter’s notes: Our rule-based model is designed for a deep-water depositional setting, or distal submarine fans, where turbidite lobe complexes are dominant. Three input parameters should be defined in our model: 1) geometry of the depositional element, 2) stacking pattern, and 3) distributions of reservoir properties. Compensational stacking is the tendency for sediments to preferentially deposit in topographic lows, whereas random stacking means sediments are deposited regardless of topography. Different stacking patterns are commonly observed at different locations and different scales, and they can be measured by the compensation index of Straub et al. (2009). (Presenter’s notes continued on next slide)

SLIDE 6

(Presenter’s notes continued from previous slide) Distribution: After building the compositional surfaces, a hierarchical trend model is applied to allocate the petrophysical properties. The compensation index is mainly controlled by reorganization of the sediment transport field to minimize the potential energy of the natural system (Mutti and Normark 1987, Stow and Johansson 2000, Straub et al. 2012).

SLIDE 7

Rule-based model – Flow chart

Presenter’s notes:

  • 1. First, we set the initial bathymetry, reservoir extent, and lobe element geometry.
  • 2. These are then used to calculate the probability map for the center of the lobe.
  • 3. With the probability map, we apply Monte Carlo simulation to stochastically locate a lobe element.
  • 4. Then we update the topography accordingly and recalculate the probability map.
  • 5. We repeat this until the maximum iteration number we set is reached.
  • 6. After building the compositional surfaces, the hierarchical trend model is applied to allocate petrophysical properties.
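Steps 1–5 can be sketched in a few lines of numpy. This is a deliberately simplified stand-in, not the authors' implementation: the Gaussian lobe shape, the grid size, and the softmax-style weighting with an assumed compensation exponent `beta` are all illustrative choices. It only shows the loop structure: probability map favoring topographic lows, Monte Carlo placement, topography update.

```python
import numpy as np

rng = np.random.default_rng(0)

nx = ny = 50                       # grid cells (assumed extent)
topo = np.zeros((ny, nx))          # 1. initial (flat) bathymetry
lobe_radius, lobe_thickness = 8.0, 1.0
beta = 2.0                         # assumed compensation exponent: higher -> stronger preference for lows

yy, xx = np.mgrid[0:ny, 0:nx]

for _ in range(40):                # 5. repeat to a maximum iteration count
    # 2. probability map for the lobe center: topographic lows are more likely
    w = np.exp(-beta * (topo - topo.min()))
    p = w / w.sum()
    # 3. Monte Carlo simulation: draw one cell index from the probability map
    k = rng.choice(nx * ny, p=p.ravel())
    cy, cx = divmod(k, nx)
    # 4. deposit an ellipsoidal (Gaussian) lobe and update the topography
    r2 = ((xx - cx) ** 2 + (yy - cy) ** 2) / lobe_radius ** 2
    topo += lobe_thickness * np.exp(-r2) * (r2 < 4)
```

With `beta = 0` the placement is purely random stacking; increasing `beta` pushes the loop toward perfect compensational stacking, mirroring the role of the compensation exponent on the previous slide.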

SLIDE 8

Rule-based model – Deep-water lobe reservoir

(Jo et al., 2019)

  • 5 km x 5 km x 60 m reservoir extent
  • Lobe element: 750 m in radius, 10 m in thickness
  • Perfect compensational stacking
  • Two different scales of hierarchical trends

Presenter’s notes: The figure shows an example of our rule-based model, which has a 5 km x 5 km x 60 m extent. The lobe element is 750 m in radius and 10 m in thickness. Perfect compensation is assumed, and two different scales of hierarchical trends are observed.

SLIDE 9

Rule-based model – Limitation

  • Conditioning to well data is an obstacle to broadening the application of rule-based models in reservoir modeling.
  • Pyrcz (2004) generated multiple candidate surfaces and accepted them based on minimum misfit, adding stochastic residuals to match the data.
  • Michael et al. (2010) combined rule-based models with conventional geostatistical methods.
  • Bertoncello et al. (2013) selected the most significant parameters and used a sequential optimization scheme.
  • However, all of these attempts have strengths and weaknesses, and robust, direct conditioning to dense well data is still unsolved.

Presenter’s notes: There have been several attempts to solve the data conditioning problem (Pyrcz 2004; Michael et al. 2010; Bertoncello et al. 2013).

SLIDE 10

Bridges from RB to ML

  • A machine that could make the model:
    – Learn the features
    – Put the features into the reservoir while conserving the heterogeneity/continuity of the rule-based model
  • Broadening applications:
    – Conditioning to hard data (e.g., well logs, core samples)
    – Navigating the reservoir manifold

Presenter’s notes: The overall goal of this study is to build a bridge between rule-based modeling and machine learning to broaden the application of rule-based models. If a machine can 1) learn the features of rule-based models and 2) put those features into the reservoir models directly, we can solve the conditioning problem and navigate the reservoir manifold. Moreover, we can use the machine for dimensionality reduction and for optimization problems such as history matching. In this study, we focus on the first two items.

SLIDE 11

DCGAN

  • Generative Adversarial Networks (GANs): a framework for training generative models in an adversarial manner against discriminative models (Goodfellow et al., 2014)
  • DCGAN: applies Convolutional Neural Networks (CNNs) to the GAN to improve its performance on high-resolution images (Radford et al., 2015)
  • DCGAN consists of:
    – Generative model (G): maps a latent vector z to image space
    – Discriminative model (D): maps an input image to a probability of it being a true image
  • The loss function of the DCGAN is:

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

    where x is a sample from the real images and z is a random variable from the latent space.

Presenter’s notes:

  • Two machine learning algorithms are used in our study: the first is the DCGAN and the second is semantic image inpainting.
  • Generative Adversarial Networks, or GANs, are a framework for training generative models in an adversarial manner against discriminative models.
  • After the GAN was first suggested by Goodfellow in 2014, Radford improved its performance by using CNNs and named the resulting algorithm DCGAN.
  • In the DCGAN there are two different models, a generative model and a discriminative model, as shown in this figure.
  • Real images from the training data set and fake images from the generative model are input to the discriminative model sequentially. (Presenter’s notes continued on next slide)

SLIDE 12

(Presenter’s notes continued from previous slide) The discriminative model is trained to distinguish real from fake, while the generative model is trained to generate more realistic images to deceive the discriminative model. The formula shown below represents these processes.
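The two sides of this minimax game can be illustrated with a tiny numerical sketch. Nothing here is from the authors' code: `gan_value` is simply a Monte Carlo estimate of V(D, G) from assumed scalar discriminator probabilities, showing that a confident discriminator keeps the value high while a generator that fools it drives the value down.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator probabilities on real images, D(x)
    d_fake: discriminator probabilities on generated images, D(G(z))
    """
    d_real = np.asarray(d_real)
    d_fake = np.asarray(d_fake)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator (real -> ~1, fake -> ~0) keeps V near its maximum of 0;
# a generator that fools D (fake -> ~1) pushes V sharply negative, which D resists.
v_good_d = gan_value([0.9, 0.95], [0.05, 0.10])
v_fooled = gan_value([0.9, 0.95], [0.90, 0.95])
```

D is trained to increase this value (maximize), while G is trained to decrease it (minimize), which is exactly the alternating update described in the notes above.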
SLIDE 13

DCGAN for reservoir models

  • 3D kernels for the 3D features of reservoir models
  • The rule-based models have:
    – 5.6 km x 5.6 km x 40 m extent
    – 28 x 28 x 20 grid cells (200 m x 200 m x 2 m in x, y, and z)
    – 1.7 km lobe radius and 8 m thickness
    – Perfect compensational stacking
  • 40,000 rule-based models were used to train the DCGAN over 30,000 iterations with a mini-batch size of 40

Presenter’s notes:

  • Unlike 2D images, reservoir models have 3D features, so 3D kernels should be used in the CNNs.
  • The upper figure represents the schematic diagram of the DCGAN.
  • The lower figure shows the internal CNN structure of the generative model. The numbers indicate the number of channels and the dimensions of the feature maps.
  • This approach includes activation functions, batch normalization, and pooling/striding methods, following Radford et al. (2015). After tuning hyperparameters, this structure gives the best results for a 28 x 28 x 20 grid-cell reservoir model. However, if the extent of the reservoir is changed, the size of the feature maps and the number of channels must be updated accordingly.
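To see why the feature-map sizes are tied to the reservoir extent, the standard transposed-convolution output formula can be checked programmatically. The kernel, stride, and padding values below are assumptions chosen so that each layer exactly doubles the size; they are not the network in the figure, only an illustration of the arithmetic a redesign would have to satisfy.

```python
def conv_transpose_out(size_in, kernel, stride, padding, output_padding=0):
    """Output size of one transposed-convolution dimension (standard formula)."""
    return (size_in - 1) * stride - 2 * padding + kernel + output_padding

# Upsampling a 7 x 7 x 5 feature map toward a 28 x 28 x 20 grid in two
# stride-2 steps (assumed kernel 4, padding 1: each step exactly doubles):
x, y, z = 7, 7, 5
for _ in range(2):
    x = conv_transpose_out(x, 4, 2, 1)
    y = conv_transpose_out(y, 4, 2, 1)
    z = conv_transpose_out(z, 4, 2, 1)
# x, y, z is now 28, 28, 20
```

A different grid (say 32 x 32 x 24) would need different seed feature-map sizes or different kernel/stride/padding choices, which is the "must be updated accordingly" in the notes above.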

SLIDE 14

Semantic image inpainting

  • Image inpainting refers to algorithms that restore lost or corrupt parts of image data.
  • Two types of information should be considered in the inpainting task:
    – Contextual: the missing pixels should be inferred from the surrounding pixels
    – Perceptual: the filled parts should be “realistic” in that they have features like the training data
  • Semantic image inpainting integrates both kinds of information by defining the loss function (Yeh et al., 2016):

    \mathcal{L}(z) \equiv \| M \odot G(z) - M \odot y \|_1 + \lambda \log(1 - D(G(z)))

    where M is the mask (a matrix with elements 0 for the missing portions and 1 for the rest), ⊙ is element-wise multiplication, λ is a hyperparameter, and y is the corrupted image.

Presenter’s notes: λ is a hyperparameter that controls the relative significance of the contextual and perceptual information for the inpainting.
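A minimal numerical sketch of this loss, with a toy linear “generator” `G(z) = W @ z` and a toy discriminator standing in for the trained networks; every name, shape, and value here is hypothetical and only meant to make the two terms of the formula concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pix, n_latent = 16, 4
W = rng.normal(size=(n_pix, n_latent))   # toy stand-in for the trained generator
y = rng.normal(size=n_pix)               # "corrupted" image (flattened)
M = np.ones(n_pix); M[5:9] = 0.0         # mask: 0 over the void, 1 elsewhere
lam = 0.1                                # hyperparameter lambda

def G(z):
    return W @ z

def D(img):
    # toy stand-in: squashes a mean score into (0, 1) like a discriminator output
    return 1.0 / (1.0 + np.exp(-img.mean()))

def inpainting_loss(z):
    contextual = np.abs(M * G(z) - M * y).sum()   # ||M . G(z) - M . y||_1
    perceptual = np.log(1.0 - D(G(z)))            # log(1 - D(G(z)))
    return contextual + lam * perceptual

z0 = rng.normal(size=n_latent)
loss0 = inpainting_loss(z0)
```

Because the mask zeroes the void, the contextual term only penalizes mismatch on the known pixels, while the perceptual term asks the filled image to look real to D; λ trades the two off.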

SLIDE 15

Semantic image inpainting

  • Yeh et al. (2016) successfully demonstrated semantic image inpainting using a DCGAN for several different types of images.

(Figure from Yeh et al., 2016; each panel shows a Real / Input / Result triplet.)

Presenter’s notes:

  • The image types include numbers, faces, and objects such as vehicles.
  • Though the images are not perfectly replicated, the method understands the context of the images and correctly fills in the voids.
SLIDE 16

Proposed method

Presenter’s notes: 1) We first realize multiple reservoir models. Here we use the reservoir quality, which ranges from 0 (shale-like) to 1 (more sandy), to represent the reservoir model; it can later be transformed into porosity or permeability for reservoir simulation. 2) We train the DCGAN on this set of realizations. 3) Next, we generate a rule-based model with voids near the wells. Well data from either an injector or a producer is placed at the center of each void. 4) Lastly, all voids are restored through semantic image inpainting.

SLIDE 17

Results – A realization from DCGAN

  • The DCGAN is successfully trained on the rule-based models.
  • A realization from the DCGAN has lobe elements, the same geometries, and the fining-/coarsening-upward trends.
  • The horizontal and vertical views clearly show that the realization captures the geometry of the lobe element.

Presenter’s notes: With this method, we created a lobe reservoir model. The top-left figure shows a rule-based model, and next to it is a realization from the DCGAN.

  • The two different scales of hierarchical trends are observed in the DCGAN realization.
  • Moreover, the horizontal and vertical views clearly show that the realization captures the geometry of the lobe element.
SLIDE 18

Results – Realizations from DCGAN

  • Seed two different latent vectors (z1 and z2) and visualize the continuous change of the realizations by continuously varying z.

(Figure panel labels: Merged; Split into three.)

Presenter’s notes:

  • Two different realizations are generated from z1 and z2.
  • The first row shows the 3D image, and the second and third rows show the horizontal and vertical views, respectively.
  • With a continuous change of z from z1 to z2, the realization also shows a continuous transition.
  • This indicates that we can use the DCGAN to navigate the reservoir manifold and apply gradient-based optimization to the image inpainting task.
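The interpolation itself is simple. In the sketch below a toy linear generator is a hypothetical stand-in for the trained DCGAN G (the latent dimension of 100 and the 28 x 28 x 20 grid are taken from the study); the realizations transition smoothly between the two seeds because G is continuous in z.

```python
import numpy as np

rng = np.random.default_rng(2)
n_latent, n_cells = 100, 28 * 28 * 20   # 100 latent variables, 28 x 28 x 20 grid

W = rng.normal(size=(n_cells, n_latent)) / np.sqrt(n_latent)  # toy stand-in generator

def G(z):
    return W @ z                         # the trained DCGAN generator would go here

z1 = rng.normal(size=n_latent)
z2 = rng.normal(size=n_latent)

# Linear interpolation in latent space: z(t) = (1 - t) * z1 + t * z2
frames = [G((1 - t) * z1 + t * z2) for t in np.linspace(0.0, 1.0, 5)]
# frames[0] and frames[-1] reproduce the two seed realizations exactly
```

This is the same traversal used to “navigate the reservoir manifold”: any path in z-space maps to a path through plausible reservoir models.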
SLIDE 19

Results – Well data conditioning

  • Make voids near the well locations in a rule-based model.
  • The mask is a 3D matrix consisting of 0 (for the voids) and 1 (for the rest).
  • After making the voids, input the well data (i.e., reservoir quality) at the center of each void.

Presenter’s notes: The element-wise multiplication of the rule-based model and the mask enables us to make the voids. The radius of a void area is defined as around 1 km; this should be adjusted depending on the density of the well data.
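The masking step can be sketched directly with numpy. The grid size matches the 28 x 28 x 20 models, but the well locations, the ~1 km void radius in cells, and the random stand-in for the rule-based model are illustrative assumptions.

```python
import numpy as np

nx, ny, nz = 28, 28, 20
model = np.random.default_rng(3).uniform(size=(nx, ny, nz))  # reservoir quality in [0, 1]

wells = [(7, 7), (20, 18)]    # assumed (x, y) well locations, in grid cells
void_radius = 5               # in grid cells (~1 km at 200 m per cell)

# Mask: 0 inside a cylinder around each well, 1 elsewhere
xx, yy = np.mgrid[0:nx, 0:ny]
mask = np.ones((nx, ny, nz))
for wx, wy in wells:
    inside = (xx - wx) ** 2 + (yy - wy) ** 2 <= void_radius ** 2
    mask[inside, :] = 0.0

corrupted = mask * model      # element-wise multiplication makes the voids

# Re-insert the well data (the known column of reservoir quality) at each well center
for wx, wy in wells:
    corrupted[wx, wy, :] = model[wx, wy, :]
```

`corrupted` then plays the role of the image y in the inpainting loss: known everywhere outside the voids, with the well columns pinned at the void centers.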

SLIDE 20

Results – Well data conditioning

  • With gradient descent optimization, we found the latent variables (z) that minimize the semantic error.
  • The rule-based model is reconstructed with the restored regions.

Presenter’s notes: From an incomplete reservoir model, we can fill in the voids using semantic image inpainting. Here we used gradient-based optimization to find the optimum z. This restores the voids with appropriate images given the context. As observed from the circular boundaries in the horizontal view and the parabolic boundaries in the vertical view, we can infer that semantic image inpainting successfully solves the well data conditioning problem.
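The optimization over z can be sketched with finite-difference gradient descent on the contextual term alone. A toy linear generator and tiny sizes stand in for the real networks, and the perceptual term is omitted for brevity; in practice Yeh et al. (2016) backpropagate through G and D instead of using finite differences.

```python
import numpy as np

rng = np.random.default_rng(4)
n_lat, n_pix = 4, 12
W = rng.normal(size=(n_pix, n_lat))   # toy stand-in for the trained generator
G = lambda z: W @ z

y = G(rng.normal(size=n_lat))         # "observed" image that G can match exactly
M = np.ones(n_pix); M[3:7] = 0.0      # mask out a void

def loss(z):                          # contextual term ||M . G(z) - M . y||_1
    return np.abs(M * (G(z) - y)).sum()

z = rng.normal(size=n_lat)
loss_init = loss(z)
eps, lr = 1e-5, 0.01
for _ in range(500):                  # finite-difference gradient descent on z
    grad = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                     for e in np.eye(n_lat)])
    z -= lr * grad
# The known (unmasked) pixels are now fit closely; G(z) fills the void "in context"
```

The final G(z) is a complete model whose restored regions come from the generator's learned manifold, which is why the infill honors the lobe geometry rather than just smoothing across the void.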

SLIDE 21

Conclusions

  • A trained DCGAN can successfully realize 3D lobe reservoirs that conserve realistic heterogeneity and continuity with 100 latent variables (z).
  • The continuous response of the DCGAN realizations to the latent variables z makes navigating the reservoir manifold possible.
  • Semantic image inpainting enables us to condition to well data directly.
  • This framework extends to other applications such as:
    – Different depositional settings
    – History matching
    – Integration of stratigraphic surfaces in modeling
  • GIGO: developing realistic rule-based models and geological interpretations should always be a prerequisite to expanding ML-based applications.

SLIDE 22

References

  • Bertoncello, A., Sun, T., Li, H., Mariethoz, G., & Caers, J. (2013). Conditioning surface-based geological models to well and thickness data. Mathematical Geosciences, 45(7), 873–893.
  • Denton, E. L., Chintala, S., & Fergus, R. (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. Advances in Neural Information Processing Systems, 1486–1494.
  • Deptuck, M. E., Piper, D. J. W., Savoye, B., & Gervais, A. (2008). Dimensions and architecture of late Pleistocene submarine lobes off the northern margin of East Corsica. Sedimentology, 55, 869–898.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 2672–2680.
  • Jo, H., & Pyrcz, M. J. (2019). Robust rule-based aggradational lobe reservoir models. Natural Resources Research.
  • Michael, H. A., Li, H., Boucher, A., Sun, T., Caers, J., & Gorelick, S. M. (2010). Combining geologic-process models and geostatistics for conditional simulation of 3-D subsurface heterogeneity. Water Resources Research, 46(5).
  • Pyrcz, M. J. (2004). Integration of geologic information into geostatistical models. Ph.D. Thesis, University of Alberta.
  • Pyrcz, M. J., Sech, R. P., Covault, J. A., Willis, B. J., Sylvester, Z., & Sun, T. (2015). Stratigraphic rule-based reservoir modeling. Bulletin of Canadian Petroleum Geology, 63(4), 287–303.
  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  • Wen, R. (2005). SBED studio: an integrated workflow solution for multi-scale geo modelling. 67th EAGE Conference and Exhibition, Madrid, Spain, 13–16 June 2005.
  • Wu, J., Zhang, C., Xue, T., Freeman, B., & Tenenbaum, J. (2016). Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Advances in Neural Information Processing Systems, 82–90.
  • Xie, Y., Cullick, A. S., & Deutsch, C. V. (2001). Surface-geometry and trend modeling for integration of stratigraphic data in reservoir models. SPE Western Regional Meeting, Bakersfield, California, 26–30 March 2001.
  • Yeh, R. A., Chen, C., Lim, T. Y., Schwing, A. G., Hasegawa-Johnson, M., & Do, M. N. (2016). Semantic image inpainting with deep generative models. arXiv preprint arXiv:1607.07539.
SLIDE 23

Acknowledgments:

Special thanks to

SLIDE 24

Questions?