

slide-1
SLIDE 1

RESSTE workshop - Avignon - November 2018

Can the SPDE approach replace traditional Geostatistics for industrial applications?

  • N. Desassis, R. Carrizo Vergara, M. Pereira
  • D. Renard, T. Romary, X. Freulon

MINES ParisTech - Géosciences

RESSTE - Avignon 1 / 39

slide-2
SLIDE 2

Context

The Geostatistics team of MINES ParisTech

  • Production of methodology for society
  • Production of software (RGeostats, Geovariances)
  • Mineral-resources oriented

RESSTE - Avignon 2 / 39

slide-3
SLIDE 3

Constraints imposed by the industry

  • Search for innovative solutions to increase productivity
  • Quite conservative (changes only allowed within a stable workflow)

RESSTE - Avignon 3 / 39

slide-5
SLIDE 5

Computational resources

Generally more limited

Uranium deposit - Arlit - Niger

RESSTE - Avignon 5 / 39

slide-6
SLIDE 6

Workflow

1) Modeling

RESSTE - Avignon 6 / 39


slide-8
SLIDE 8

Workflow

1) Modeling - Multivariate case

RESSTE - Avignon 7 / 39



slide-13
SLIDE 13

Workflow

2) Conditional simulations

Let Z = (Z_D, Z_T) where Z_D is the vector of data and Z_T the vector of targets

Covariance matrix:

Cov(Z) = Σ = [ Σ_DD  Σ_DT ]
             [ Σ_TD  Σ_TT ]

Conditional expectation (kriging):

Z*_T = E[Z_T | Z_D] = Σ_TD Σ_DD^{-1} Z_D

Conditional variance (covariance matrix of the errors):

Var[Z_T | Z_D] = Cov(Z*_T − Z_T) = Σ_TT − Σ_TD Σ_DD^{-1} Σ_DT

RESSTE - Avignon 8 / 39
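The two identities above are easy to exercise numerically. A minimal NumPy sketch, assuming a 1-D zero-mean Gaussian field with an illustrative exponential covariance (all locations and values below are made up, not from the talk):

```python
import numpy as np

def cov(x, y, a=10.0):
    # illustrative exponential covariance C(h) = exp(-|h| / a)
    return np.exp(-np.abs(x[:, None] - y[None, :]) / a)

x_data = np.array([2.0, 15.0, 30.0])   # data locations (Z_D)
x_targ = np.linspace(0.0, 40.0, 5)     # target locations (Z_T)
z_data = np.array([1.2, -0.5, 0.8])    # observed values, mean 0

S_DD = cov(x_data, x_data)
S_TD = cov(x_targ, x_data)
S_TT = cov(x_targ, x_targ)

# Z*_T = Sigma_TD Sigma_DD^{-1} Z_D
z_star = S_TD @ np.linalg.solve(S_DD, z_data)

# Var[Z_T | Z_D] = Sigma_TT - Sigma_TD Sigma_DD^{-1} Sigma_DT
cond_var = S_TT - S_TD @ np.linalg.solve(S_DD, S_TD.T)
```

For a small example the dense solve is fine; the rest of the talk is precisely about what to do when Σ_DD is too large for this.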

slide-14
SLIDE 14

Handling large data sets and large grids

  • Kriging with large data sets is performed using moving neighborhoods
  • Conditional simulations are performed using non-conditional simulations plus kriging of the residuals

RESSTE - Avignon 9 / 39

slide-15
SLIDE 15

Principle

Let Z(x) = Z_SK(x) + (Z(x) − Z_SK(x)) where

Z_SK(x) = Σ_{j=1}^{n} λ_j(x) Z(x_j)   (simple kriging)
Z(x) − Z_SK(x)   (kriging residuals)

Z_SK and Z − Z_SK are two independent Gaussian random functions

RESSTE - Avignon 10 / 39
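This decomposition yields the classical conditioning algorithm: simulate a non-conditional field, then add the kriging of the data residuals. A sketch, again assuming an illustrative 1-D exponential covariance and made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)

def cov(x, y, a=10.0):
    # illustrative exponential covariance C(h) = exp(-|h| / a)
    return np.exp(-np.abs(x[:, None] - y[None, :]) / a)

def krige(x_from, z_from, x_to):
    # simple kriging with known zero mean
    return cov(x_to, x_from) @ np.linalg.solve(cov(x_from, x_from), z_from)

x_data = np.array([5.0, 20.0, 35.0])     # data locations (on the grid)
z_data = np.array([0.7, -1.1, 0.4])      # observed values
x_grid = np.linspace(0.0, 40.0, 81)
idx = np.searchsorted(x_grid, x_data)    # grid indices of the data points

# 1) non-conditional simulation on the grid (Cholesky, fine for small cases)
L = np.linalg.cholesky(cov(x_grid, x_grid) + 1e-10 * np.eye(x_grid.size))
sim = L @ rng.standard_normal(x_grid.size)

# 2) conditional simulation = simulation + kriging of the residuals
z_cond = sim + krige(x_data, z_data - sim[idx], x_grid)
```

By kriging exactness, the conditional simulation honors the data at the data locations while keeping the simulated variability elsewhere.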

slide-16
SLIDE 16

Principle

[Figure: step-by-step 1-D illustration of the conditioning principle (animated over slides 16-26)]

RESSTE - Avignon 11 / 39

slide-27
SLIDE 27

Context

Selective exploitation

  • Punctual grade Z(x), x ∈ D, with mean m and covariance function C
  • Selective Mining Unit (SMU): v
  • Regularized grade on SMUs: Z(v) = (1/|v|) ∫_v Z(x) dx
  • From exploration data Z(x_1), . . . , Z(x_n)

RESSTE - Avignon 12 / 39

slide-34
SLIDE 34

Support effect

What can we say about Z(v)?

  • Same mean m
  • Block covariance function: C_v(h) = Cov(Z(v), Z(v + h)) = (1/|v|^2) ∫_v ∫_{v+h} C(x − y) dx dy
  • P(Z(v) ≥ z) for any cutoff z?
  • Block simulations are required to generate several scenarios

RESSTE - Avignon 13 / 39

slide-41
SLIDE 41

Direct block simulations

  • The number of SMUs can be large (e.g. 1 million)
  • Conditional simulations using a discretization of the blocks can be time consuming
  • Solution: use a change-of-support model to describe the multivariate distribution of the points and the blocks, and perform conditional simulations of the regularized variable without discretization
  • Several hours for 100 simulations with around 100,000 observations

RESSTE - Avignon 14 / 39


slide-43
SLIDE 43

Handling covariance non-stationarities

Current solutions:
  • Deform the space
  • Cut the domain into several sub-domains in which stationarity is acceptable

RESSTE - Avignon 15 / 39

slide-44
SLIDE 44

More complex environments

RESSTE - Avignon 16 / 39


slide-47
SLIDE 47

SPDE

Lindgren et al. (2011)

RESSTE - Avignon 17 / 39

Let Z = (Z_D, Z_T) where Z_D is the vector of data and Z_T the vector of targets

Covariance matrix:

Cov(Z) = Σ = [ Σ_DD  Σ_DT ]
             [ Σ_TD  Σ_TT ]

Conditional expectation (kriging):

Z*_T = Σ_TD Σ_DD^{-1} Z_D

Conditional variance (covariance matrix of the errors):

Cov(Z*_T − Z_T) = Σ_TT − Σ_TD Σ_DD^{-1} Σ_DT


slide-50
SLIDE 50

SPDE

Lindgren et al. (2011)

RESSTE - Avignon 17 / 39

Let Z = (Z_D, Z_T) where Z_D is the vector of data and Z_T the vector of targets

Precision matrix:

Q = Σ^{-1} = [ Q_DD  Q_DT ]
             [ Q_TD  Q_TT ]

Conditional expectation (kriging):

Z*_T = −Q_TT^{-1} Q_TD Z_D

Conditional variance (covariance matrix of the errors):

Cov(Z*_T − Z_T) = Q_TT^{-1}
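The equivalence between the precision-based and covariance-based kriging formulas can be checked on a toy Gauss-Markov random field. A sketch with an illustrative sparse tridiagonal precision matrix (not the talk's SPDE discretization):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Check Z*_T = -Q_TT^{-1} Q_TD Z_D on a toy 1-D GMRF. The tridiagonal
# precision Q below is illustrative only.
n = 50
Q = (np.diag(2.2 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))   # SPD, sparse in structure

data_idx = np.array([5, 25, 45])                  # indices of Z_D
targ_idx = np.setdiff1d(np.arange(n), data_idx)   # indices of Z_T
z_d = np.array([1.0, -0.3, 0.6])

Q_TT = csc_matrix(Q[np.ix_(targ_idx, targ_idx)])
Q_TD = Q[np.ix_(targ_idx, data_idx)]

# one sparse solve, no dense covariance matrix needed
z_star = -spsolve(Q_TT, Q_TD @ z_d)
```

The point of the identity is that Q_TT keeps the sparsity of Q, whereas Σ_DD is dense: the kriging reduces to a single sparse linear solve.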

slide-51
SLIDE 51

Comparison with classical approach

RESSTE - Avignon 18 / 39


slide-55
SLIDE 55

Comparison of times

[Figure: time for kriging (s) versus number of samples (1,000 to 100,000), comparing kriging in unique neighborhood with SPDE kriging]

RESSTE - Avignon 19 / 39

slide-57
SLIDE 57

Varying anisotropy

RESSTE - Avignon 20 / 39


slide-60
SLIDE 60

Gartner Hype Cycle

RESSTE - Avignon 21 / 39

slide-61
SLIDE 61

Expectations

  • Outperform the time performances of “old geostatistics” in 3D
  • Handle one million targets
  • OK to work with Matérn only (or Markovian approximations)
  • Handle nested models (nugget effect + 2 basic structures)
  • Handle several variables (co-kriging with a linear model of coregionalization)
  • Develop block simulation
  • Handle varying anisotropies

RESSTE - Avignon 22 / 39

slide-62
SLIDE 62

Gartner Hype Cycle

RESSTE - Avignon 23 / 39

slide-63
SLIDE 63

Issues with the 3D

  • The system size increases quickly
  • The sparsity of the precision matrix decreases
  • The Cholesky factorization of Q_TT is no longer possible for system sizes greater than 200,000

RESSTE - Avignon 24 / 39


slide-65
SLIDE 65

Finite elements approximation

Cameletti et al. (2013)

For kriging, we can use a coarse meshing to reduce the system size and interpolate the result inside the elements:

Z(s) = Σ_{i=1}^{N} z_i ψ_i(s)

But simulations have to be performed on the final target grid in order to reproduce the local variability

RESSTE - Avignon 25 / 39
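The finite-element representation above can be illustrated in 1-D, where the hat basis functions ψ_i reduce the sum Σ z_i ψ_i(s) to linear interpolation between mesh vertices. A sketch (illustrative values; the talk uses 2-D/3-D meshes):

```python
import numpy as np

# Finite-element representation Z(s) = sum_i z_i psi_i(s) with 1-D
# piecewise-linear ("hat") basis functions on a coarse mesh. The z_i
# below stand in for kriged values at the coarse-mesh vertices.
nodes = np.linspace(0.0, 10.0, 6)               # coarse mesh vertices
z = np.array([0.0, 1.0, 0.5, -0.2, 0.3, 0.0])   # values z_i at vertices

def interpolate(s):
    # sum_i z_i psi_i(s): with hat functions this is linear interpolation
    return np.interp(s, nodes, z)

s_fine = np.linspace(0.0, 10.0, 101)   # fine target grid
z_fine = interpolate(s_fine)
```

The interpolant reproduces the vertex values exactly, which is why a coarse mesh is acceptable for the (smooth) kriging but not for simulations, whose local variability would be smoothed away.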

slide-66
SLIDE 66

Separate the problems

  • Work with several meshings: a fine one for the simulation and a coarse one for the kriging
  • Find an efficient algorithm to perform non-conditional simulation on the fine meshing (Pereira and Desassis, 2018)
  • Perform the kriging of the residuals on the coarse mesh and linearly interpolate the result onto the fine mesh

RESSTE - Avignon 26 / 39

slide-67
SLIDE 67

Nested Models

Measurement error

Model

Z̃(s_i) = Z(s_i) + ε(s_i) with Z solution of an SPDE

  • ε(s_i) is a measurement error with variance σ_i^2; the errors are uncorrelated
  • We want to predict Z_T knowing the observations Z̃_D
  • Problem: the precision matrix of (Z_T, Z̃_D) is not sparse
  • Solution: consider the larger vector (Z_{T∪D}, Z̃_D); its precision matrix is sparse
  • The size of the system to solve is N_T + N_D
  • Can we avoid putting vertices at data locations?

RESSTE - Avignon 27 / 39

slide-68
SLIDE 68

Nested Models

Measurement error

Finite element formulation:

Z(s) = Σ_{i=1}^{N} z_i ψ_i(s)

  • Z = (z_1, . . . , z_N) has covariance matrix Σ and precision matrix Q
  • ε = (ε(s_1), . . . , ε(s_n)) has diagonal variance matrix E (with ith term σ_i^2)

The data model is Z̃_D = A^T Z + ε where A is the N × n sparse matrix with elements a_ij = ψ_i(s_j)

RESSTE - Avignon 28 / 39


slide-71
SLIDE 71

Covariance and precision matrices

The covariance matrix of (Z, Z̃_D) is

Σ̃ = [ Σ        Σ A         ]
     [ A^T Σ    A^T Σ A + E ]

And the precision matrix is

Q̃ = [ Q + A E^{-1} A^T    −A E^{-1} ]
     [ −E^{-1} A^T         E^{-1}   ]

Therefore, the kriging of Z is given by

Z* = (Q + A E^{-1} A^T)^{-1} A E^{-1} Z̃_D

RESSTE - Avignon 29 / 39
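The last identity agrees with the classical covariance-based kriging of Z from the noisy data Z̃_D, which can be checked on a toy example. A sketch with an illustrative tridiagonal precision and random hat-function evaluations (all sizes and names are made up):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Numerical check of Z* = (Q + A E^{-1} A^T)^{-1} A E^{-1} Z~_D.
rng = np.random.default_rng(2)
N, n = 30, 8                     # mesh vertices, observations

Q = (np.diag(2.5 * np.ones(N))
     + np.diag(-1.0 * np.ones(N - 1), 1)
     + np.diag(-1.0 * np.ones(N - 1), -1))    # illustrative SPD precision

# A: N x n basis evaluations psi_i(s_j); each observation touches two
# neighbouring vertices (1-D hat functions, weights 1 - w and w)
A = np.zeros((N, n))
for j, v in enumerate(rng.integers(0, N - 1, size=n)):
    w = rng.uniform()
    A[v, j], A[v + 1, j] = 1.0 - w, w

E = np.diag(rng.uniform(0.05, 0.2, size=n))   # measurement-error variances
E_inv = np.diag(1.0 / np.diag(E))
z_tilde = rng.standard_normal(n)              # observations Z~_D

# sparse solve of (Q + A E^{-1} A^T) Z* = A E^{-1} Z~_D
lhs = csc_matrix(Q + A @ E_inv @ A.T)
z_star = spsolve(lhs, A @ E_inv @ z_tilde)
```

The left-hand matrix keeps the sparsity of Q (A has only two non-zeros per column), so the kriging remains a single sparse solve even with measurement error.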

slide-72
SLIDE 72

Does it work?

Comparison with kriging (Matérn with smoothness ν = 1 and range = 40)

RESSTE - Avignon 30 / 39

100 observations, 50 × 50 grid


slide-76
SLIDE 76

Does it work?

Comparison with kriging (Matérn with smoothness ν = 1 and range = 40)

RESSTE - Avignon 30 / 39

100 observations, 33 × 33 grid

slide-77
SLIDE 77

Does it work?

Comparison with kriging (Matérn with smoothness ν = 1 and range = 40)

RESSTE - Avignon 30 / 39

100 observations, 25 × 25 grid

slide-78
SLIDE 78

Does it work?

Comparison with kriging (Matérn with smoothness ν = 1 and range = 5)

RESSTE - Avignon 30 / 39

100 observations, 33 × 33 grid


slide-82
SLIDE 82

First conclusions

  • When the range (or ν) is large, the meshing can be coarse
  • When the range is small, it is useless to put vertices far from the data locations (or those vertices can be patched with the mean)

RESSTE - Avignon 31 / 39


slide-84
SLIDE 84

Nested Models

Z(s) = Σ_{k=1}^{K} Z_k(s) where the Z_k are independent random fields with covariance C_k

Z has covariance C = Σ_{k=1}^{K} C_k

We don't know how to approximate Z with a Markovian random field

Cameletti et al. (2013)

Z(s) = Σ_{k=1}^{K} Σ_{i=1}^{N_k} z_i^(k) ψ_i^(k)(s)

RESSTE - Avignon 32 / 39

slide-85
SLIDE 85

Model

Z̃(s_i) = Σ_{k=1}^{K} Z_k(s_i) + ε(s_i) with each Z_k solution of an SPDE

  • ε(s_i) is a measurement error with variance σ_i^2; the errors are uncorrelated
  • Z_k = (z_1^(k), . . . , z_{N_k}^(k)) has covariance matrix Σ_k and precision matrix Q_k
  • ε = (ε(s_1), . . . , ε(s_n)) has diagonal variance matrix E (with ith term σ_i^2)

The data model is Z̃_D = Σ_{k=1}^{K} A_k^T Z_k + ε where A_k is the N_k × n sparse matrix with elements a_ij = ψ_i^(k)(s_j)

RESSTE - Avignon 33 / 39

slide-86
SLIDE 86

Covariance and precision matrices

Joint vector (Z_1, . . . , Z_K, Z̃_D):

Σ̃ = [ Σ_1                                       Σ_1 A_1 ]
     [            Σ_2                            Σ_2 A_2 ]
     [                       . . .               . . .   ]
     [                                  Σ_K      Σ_K A_K ]
     [ A_1^T Σ_1  A_2^T Σ_2  . . .  A_K^T Σ_K    Σ_{k=1}^{K} A_k^T Σ_k A_k + E ]

RESSTE - Avignon 34 / 39


slide-88
SLIDE 88

Covariance and precision matrices

Joint vector (Z_1, . . . , Z_K, Z̃_D):

Q̃ = [ Q_1 + A_1 E^{-1} A_1^T    A_1 E^{-1} A_2^T          . . .   A_1 E^{-1} A_K^T          −A_1 E^{-1} ]
     [ A_2 E^{-1} A_1^T          Q_2 + A_2 E^{-1} A_2^T    . . .   A_2 E^{-1} A_K^T          −A_2 E^{-1} ]
     [ . . .                     . . .                     . . .   . . .                     . . .       ]
     [ A_K E^{-1} A_1^T          A_K E^{-1} A_2^T          . . .   Q_K + A_K E^{-1} A_K^T    −A_K E^{-1} ]
     [ −E^{-1} A_1^T             −E^{-1} A_2^T             . . .   −E^{-1} A_K^T             E^{-1}      ]

  • Use a block Gauss-Seidel algorithm to solve the system
  • Each subsystem is solved from the Cholesky factorization of Q_k + A_k E^{-1} A_k^T
  • The algorithm converges in a few iterations

RESSTE - Avignon 34 / 39
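The block Gauss-Seidel idea can be sketched on a small two-block SPD system standing in for the nested-model system above (toy dense blocks; the talk's version solves each block with a sparse Cholesky of Q_k + A_k E^{-1} A_k^T):

```python
import numpy as np

# Block Gauss-Seidel for an SPD system S x = b split into two blocks.
rng = np.random.default_rng(3)
n1, n2 = 20, 15
M = rng.standard_normal((n1 + n2, n1 + n2))
S = M @ M.T + (n1 + n2) * np.eye(n1 + n2)   # SPD system matrix
b = rng.standard_normal(n1 + n2)

S11, S12 = S[:n1, :n1], S[:n1, n1:]
S21, S22 = S[n1:, :n1], S[n1:, n1:]
b1, b2 = b[:n1], b[n1:]

x1, x2 = np.zeros(n1), np.zeros(n2)
for _ in range(200):
    # each sweep solves one diagonal block using the latest other iterate
    x1 = np.linalg.solve(S11, b1 - S12 @ x2)
    x2 = np.linalg.solve(S22, b2 - S21 @ x1)

x = np.concatenate([x1, x2])
```

Gauss-Seidel (point or block) converges for any symmetric positive definite matrix, which is the relevant case here since Q̃ is a precision matrix.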


slide-91
SLIDE 91

Direct Block Simulation (stationary case)

The Discrete Gaussian Model

We consider v_1, . . . , v_N a partition of the domain D where the sets v_i are equal up to a translation

Hypothesis and notations

  • x is a fixed location and x̲ is a uniform (random) location within a block v
  • Z(x) = ϕ(Y(x)) where Y(x) is a standard Gaussian variable
  • C_Y is the covariance of the stationary random field {Y(x), x ∈ D}
  • Z(v) = ϕ_v(Y_v) where Y_v is a standard Gaussian variable
  • If the observation locations x_1, . . . , x_n are uniform within their blocks and mutually independent, then (Y(x_1), . . . , Y(x_n), Y_{v_1}, . . . , Y_{v_N}) is a Gaussian vector
  • Y(x_i) and Y(x_j) are independent conditionally on Y_{v(i)} and Y_{v(j)}, the Gaussian values of the blocks to which x_i and x_j belong

RESSTE - Avignon 35 / 39


slide-93
SLIDE 93

Consequences (Emery, 2007)

  • The correlation r between Y(x_i) and Y_{v(i)} is deduced from the covariance function of the punctual Gaussian Y:

    r^2 = (1/|v|^2) ∫_v ∫_v C_Y(x − y) dx dy

  • The covariance C_v(h) between Y_v and Y_{v+h} is given by:

    C_v(h) = Cov(Y_v, Y_{v+h}) = (1/(r^2 |v|^2)) ∫_v ∫_{v+h} C_Y(x − y) dx dy

  • Cov(Y(x_i), Y_v) = r Cov(Y_{v(i)}, Y_v)
  • Cov(Y(x_i), Y(x_j)) = r^2 Cov(Y_{v(i)}, Y_{v(j)})

[Figure: blocks v_1 and v_2 with points x_i, x_j, x_k, showing the correlations r and Cov(Y_{v_1}, Y_{v_2})]

RESSTE - Avignon 36 / 39
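The change-of-support coefficient r^2 is just a double average of C_Y over the block, so it is easy to estimate by Monte Carlo. A sketch for a 1-D block and an illustrative exponential covariance (block length and range are made up):

```python
import numpy as np

# Monte Carlo evaluation of
#   r^2 = (1/|v|^2) * int_v int_v C_Y(x - y) dx dy
# for a 1-D block v = [0, L] and an exponential covariance.
rng = np.random.default_rng(4)

def C_Y(h, a=10.0):
    return np.exp(-np.abs(h) / a)

L = 5.0                              # block length |v|
x = rng.uniform(0.0, L, 200_000)     # uniform points in v
y = rng.uniform(0.0, L, 200_000)
r2 = C_Y(x - y).mean()               # averages C_Y over v x v
r = np.sqrt(r2)
```

For this kernel the integral has a closed form, r^2 = (2a^2/L^2)(L/a − 1 + e^{−L/a}), which the estimate should match to about three digits.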

slide-94
SLIDE 94

Covariance matrix of (Y_{v_1}, . . . , Y_{v_N}, Y(x_1), . . . , Y(x_n)):

[ Σ_v        r Σ_v A^T                   ]
[ r A Σ_v    r^2 A Σ_v A^T + (1 − r^2) I ]

where
  • Σ_v is built from the block covariance C_v
  • A is the n × N matrix defined by a_ij = 1_{x_i ∈ v_j}

RESSTE - Avignon 37 / 39

slide-95
SLIDE 95

Precision matrix of (Y_{v_1}, . . . , Y_{v_N}, Y(x_1), . . . , Y(x_n)):

(1/(1 − r^2)) [ (1 − r^2) Q_v + r^2 A^T A    −r A^T ]
              [ −r A                         I      ]

where
  • Q_v is the precision matrix built from the SPDE
  • A is the n × N matrix defined by a_ij = 1_{x_i ∈ v_j}

RESSTE - Avignon 37 / 39
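One can verify numerically that this precision matrix is indeed the inverse of the covariance matrix on the previous slide. A sketch with toy sizes (Σ_v is an arbitrary SPD matrix standing in for the block covariance):

```python
import numpy as np

# Check that the DGM precision matrix inverts the DGM covariance matrix.
rng = np.random.default_rng(5)
N, n, r = 6, 10, 0.8

M = rng.standard_normal((N, N))
Sigma_v = M @ M.T + N * np.eye(N)     # block covariance matrix, SPD
Q_v = np.linalg.inv(Sigma_v)

A = np.zeros((n, N))
A[np.arange(n), rng.integers(0, N, size=n)] = 1.0   # a_ij = 1_{x_i in v_j}

I_n = np.eye(n)
Sigma = np.block([
    [Sigma_v,          r * Sigma_v @ A.T],
    [r * A @ Sigma_v,  r**2 * A @ Sigma_v @ A.T + (1 - r**2) * I_n],
])
Q = np.block([
    [(1 - r**2) * Q_v + r**2 * A.T @ A,  -r * A.T],
    [-r * A,                             I_n],
]) / (1 - r**2)
```

The identity Σ Q = I holds exactly by block multiplication; the sparse structure of Q (Q_v from the SPDE plus rank-structured corrections) is what makes direct block simulation tractable.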

slide-96
SLIDE 96

Conclusions

  • The SPDE approach should be able to replace traditional geostatistics
  • Direct Block Simulation for non-stationary models still has to be developed
  • Inference for varying parameters should be developed
  • It should make it possible to integrate geological knowledge

RESSTE - Avignon 38 / 39

slide-97
SLIDE 97

References

  • M. Cameletti, F. Lindgren, D. Simpson, H. Rue (2013). Spatio-temporal modeling of particulate matter concentration through the SPDE approach. AStA Advances in Statistical Analysis 97(2), 109-131.
  • X. Emery (2007). On some consistency conditions for geostatistical change-of-support models. Mathematical Geology.
  • F. Lindgren, H. Rue, J. Lindström (2011). An explicit link between Gaussian fields and Gaussian Markov random fields: the SPDE approach. JRSS B 73(4).
  • M. Pereira, N. Desassis (2018). Efficient simulation of Gaussian Markov random fields by Chebyshev polynomial approximation. arXiv preprint 1805.07423.

RESSTE - Avignon 39 / 39