SLIDE 1

Assimilation of Multiple Linearly Dependent Data Vectors

Trond Mannseth

NORCE Energy

SLIDE 3

Linearly dependent data vectors

Main issue

Assume that we want to assimilate the data vectors {d_l}_{l=1}^L, where d_l = B_l d_L for l = 1, …, L−1, and {B_l}_{l=1}^{L−1} denotes a sequence of matrices.

What is the appropriate way to assimilate such a data sequence, taking into account that some, but not necessarily all, information is used multiple times?

SLIDE 4

Outline

- Motivation for considering linearly dependent data vectors
- Relation to multiple data assimilation (MDA)
- Brief recap of the MDA condition (ensuring correct sampling in the linear-Gaussian case)
- Generalization of the MDA condition to linearly dependent data vectors (the PMDA condition)
- The PMDA condition in practice: some issues

SLIDE 8

Linearly dependent data vectors—example

Multilevel data

[Figure: data grid refined over levels l = L, L−1, L−2, …]

d_l = B_l d_L,  l = 1, …, L−1

With multilevel data, B_l denotes an averaging operator from level L to level l. Time-domain multilevel data is also a possibility.
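The averaging operator can be made concrete with a minimal NumPy sketch (my own illustration, not from the slides; the 1-D grid, sizes, and the name `averaging_operator` are assumptions):

```python
import numpy as np

def averaging_operator(n_fine, n_coarse):
    """Matrix that block-averages a fine vector of length n_fine
    into n_coarse equal blocks (n_fine must be divisible by n_coarse)."""
    assert n_fine % n_coarse == 0
    r = n_fine // n_coarse
    B = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        B[i, i * r:(i + 1) * r] = 1.0 / r   # average r fine cells per coarse cell
    return B

d_L = np.array([1.0, 3.0, 2.0, 6.0])        # finest-level data, level L
B_1 = averaging_operator(4, 2)              # averaging operator from level L to level 1
d_1 = B_1 @ d_L                             # coarse data vector d_1 = B_1 d_L
print(d_1)                                  # [2. 4.]
```

Each row of B_1 sums to one, so d_1 is a cell-wise average of d_L, i.e. a coarser view of the same information.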

SLIDE 11

Multilevel data

Why bother?

[Figure: data grid refined over levels l = L, L−1, L−2, …;  d_l = B_l d_L, l = 1, …, L−1]

- Gradually introducing more and more information, as with sequential assimilation of d_1, d_2, …, d_L, can be advantageous for nonlinear problems.
- Multilevel data are required in order to correspond to results from multilevel simulations.

SLIDE 13

Multilevel simulations

. . . and corresponding multilevel data

[Figure: simulation output grids coarsened over successive levels, each with a corresponding data grid]


SLIDE 15

Multiple data assimilation¹ (MDA)

Brief description

With MDA, the same data are assimilated multiple times. Since the data are reused, the data-error covariances must be inflated. The motivation for MDA is to improve performance on nonlinear problems by gradually introducing the available information in the data, leading to a sequence of smaller updates instead of a single large update.

¹Emerick and Reynolds, Computers & Geosciences 55, 2013

SLIDE 17

MDA

. . . as a special case of assimilation of multiple linearly related data vectors

Multiple data assimilation: {d_l}_{l=1}^L with d_l = d_L for l = 1, …, L−1. Multiple use of the same information. Abbreviation: MDA.

Assimilation of multiple linearly related data vectors: {d_l}_{l=1}^L with d_l = B_l d_L for l = 1, …, L−1. Partially multiple use of the same information. Abbreviation: PMDA (partially MDA).


SLIDE 21

MDA condition

Brief recap

While the motivation for MDA is to improve performance on nonlinear problems, it is desirable that it samples correctly from the posterior PDF for the parameter vector, m, in the linear-Gaussian case. This case can be analyzed using assembled quantities, where each row corresponds to an assimilation cycle. The analysis² leads to an inflated assembled covariance and the MDA condition for the inflation coefficients:

δ = [d_L; …; d_L],  Γ = [G_L; …; G_L],  Ξ = blockdiag(α_1 C_L, …, α_L C_L),  Σ_{l=1}^{L} α_l^{−1} = 1

²Emerick and Reynolds, Computers & Geosciences 55, 2013
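As a quick numerical illustration of the condition Σ_l α_l^{−1} = 1 (my own sketch, not from the talk): the common uniform choice α_l = L satisfies it directly, and any positive inflation profile can be rescaled to satisfy it.

```python
import numpy as np

L = 4
alphas_uniform = np.full(L, float(L))     # uniform choice: alpha_l = L
print(np.sum(1.0 / alphas_uniform))       # 1.0

raw = np.array([8.0, 4.0, 2.0, 1.0])      # a decreasing inflation profile, arbitrary scale
alphas = raw * np.sum(1.0 / raw)          # rescale so that sum of 1/alpha_l equals 1
print(np.sum(1.0 / alphas))
```

The rescaling works because dividing every α_l by a constant c multiplies Σ_l α_l^{−1} by c; choosing c = Σ_l raw_l^{−1} normalizes the sum to one.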


SLIDE 26

MDA condition

Slight change of notation

To prepare for the description of the PMDA condition, which follows next, I use the subscript MDA for 'MDA quantities', I introduce the coefficients λ_l = α_l^{1/2}, l = 1, …, L, I multiply the MDA condition by C_L^{−1}, and I reformulate the assembled data covariance and the MDA condition slightly:

δ_MDA = [d_L; …; d_L],  Γ_MDA = [G_L; …; G_L],  Ξ_MDA = blockdiag(λ_1 C_L λ_1, …, λ_L C_L λ_L),  Σ_{l=1}^{L} (λ_l C_L λ_l)^{−1} = C_L^{−1}

SLIDE 29

MDA condition and PMDA condition

δ_MDA = [d_L; …; d_L],  Γ_MDA = [G_L; …; G_L],  Ξ_MDA = blockdiag(λ_1 C_L λ_1, …, λ_L C_L λ_L),  Σ_{l=1}^{L} (λ_l C_L λ_l)^{−1} = C_L^{−1}

δ_PMDA = [d_1; …; d_L],  Γ_PMDA = [G_1; …; G_L],  Ξ_PMDA = blockdiag(A_1 C_1 A_1^T, …, A_L C_L A_L^T),  Σ_{l=1}^{L} B_l^T (A_l C_l A_l^T)^{−1} B_l = C_L^{−1}  (with B_L = I_L, since d_L is trivially related to itself)


SLIDE 32

PMDA condition in practice

Specification of Ξ_PMDA

Ξ_PMDA = blockdiag(A_1 C_1 A_1^T, …, A_L C_L A_L^T),  Σ_{l=1}^{L} B_l^T (A_l C_l A_l^T)^{−1} B_l = C_L^{−1}

The specification of {α_l}_{l=1}^L in Ξ_MDA raises no other issue than how to make MDA perform optimally on a given nonlinear problem. Resolving this issue is not straightforward, but the specification of {A_l}_{l=1}^L in Ξ_PMDA raises some issues in addition.

Before discussing these additional issues, note that since d_l = B_l d_L for l = 1, …, L−1, it follows that C_l = B_l C_L B_l^T for l = 1, …, L−1, leading to the following reformulated PMDA condition:

Σ_{l=1}^{L−1} B_l^T (A_l B_l C_L B_l^T A_l^T)^{−1} B_l + (A_L C_L A_L^T)^{−1} = C_L^{−1}
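The covariance relation C_l = B_l C_L B_l^T is just linear error propagation through d_l = B_l d_L. A small NumPy check of this step (my own sketch; the grid sizes and the Monte Carlo cross-check are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n_L = 6
M = rng.standard_normal((n_L, n_L))
C_L = M @ M.T + n_L * np.eye(n_L)      # an SPD data-error covariance at level L

# block-averaging operator from level L (6 cells) to a coarser level (3 cells)
B_l = np.kron(np.eye(3), np.full((1, 2), 0.5))

# propagated covariance of d_l = B_l d_L
C_l = B_l @ C_L @ B_l.T

# consistency checks: symmetric and positive definite, as a covariance must be
assert np.allclose(C_l, C_l.T)
assert np.all(np.linalg.eigvalsh(C_l) > 0)

# Monte Carlo cross-check: sample level-L errors, average them, compare covariances
e_L = rng.multivariate_normal(np.zeros(n_L), C_L, size=200_000)
C_l_mc = np.cov((e_L @ B_l.T).T)
print(np.max(np.abs(C_l_mc - C_l)))    # small sampling error
```

The sampled covariance of the averaged errors matches B_l C_L B_l^T up to Monte Carlo noise, which is the identity used to reformulate the PMDA condition above.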


SLIDE 36

PMDA condition in practice

Specification of Ξ_PMDA—some issues

Σ_{l=1}^{L−1} B_l^T (A_l B_l C_L B_l^T A_l^T)^{−1} B_l + (A_L C_L A_L^T)^{−1} = C_L^{−1}

- All but one of the matrices {A_l}_{l=1}^L can be specified freely, while the remaining one must be selected to fulfill the PMDA condition.
- Solving the PMDA condition for one of the A_l's seems difficult.
- Solving it for A_L C_L A_L^T is, however, viable:

A_L C_L A_L^T = (C_L^{−1} − Σ_{l=1}^{L−1} B_l^T (A_l B_l C_L B_l^T A_l^T)^{−1} B_l)^{−1}

SLIDE 37

PMDA condition in practice

Specification of Ξ_PMDA—some issues

Equivalently, the solution for A_L C_L A_L^T can be written

A_L C_L A_L^T = (I_L − C_L Σ_{l=1}^{L−1} B_l^T (A_l B_l C_L B_l^T A_l^T)^{−1} B_l)^{−1} C_L
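The two closed forms for A_L C_L A_L^T are algebraically equivalent, since (C_L^{−1} − S)^{−1} = (I_L − C_L S)^{−1} C_L for S = Σ_{l<L} B_l^T (A_l B_l C_L B_l^T A_l^T)^{−1} B_l. A numerical sketch of this equivalence (my own construction; the single coarse level and the value α_1 = 5 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

n_L = 6
M = rng.standard_normal((n_L, n_L))
C_L = M @ M.T + n_L * np.eye(n_L)                  # SPD level-L covariance

B_1 = np.kron(np.eye(3), np.full((1, 2), 0.5))     # averaging operator, level L -> level 1
A_1 = np.sqrt(5.0) * np.eye(3)                     # freely chosen inflation, alpha_1 = 5

# S = sum over l < L of B_l^T (A_l B_l C_L B_l^T A_l^T)^{-1} B_l  (here a single term)
S = B_1.T @ np.linalg.inv(A_1 @ B_1 @ C_L @ B_1.T @ A_1.T) @ B_1

form1 = np.linalg.inv(np.linalg.inv(C_L) - S)           # (C_L^{-1} - S)^{-1}
form2 = np.linalg.solve(np.eye(n_L) - C_L @ S, C_L)     # (I_L - C_L S)^{-1} C_L
assert np.allclose(form1, form2)

# plugging the solution back in reproduces the PMDA condition
assert np.allclose(S + np.linalg.inv(form1), np.linalg.inv(C_L))
```

Both forms require C_L^{−1} − S to be invertible; here that holds because B_1^T (B_1 C_L B_1^T)^{−1} B_1 never exceeds C_L^{−1} and the inflation shrinks S further.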


SLIDE 41

PMDA condition in practice

Specification of Ξ_PMDA—a possibility

A_L C_L A_L^T = (I_L − C_L Σ_{l=1}^{L−1} B_l^T (A_l B_l C_L B_l^T A_l^T)^{−1} B_l)^{−1} C_L

Selecting A_l = α_l^{1/2} I_l for l = 1, …, L−1 leads to

A_L C_L A_L^T = (I_L − C_L Σ_{l=1}^{L−1} α_l^{−1} B_l^T (B_l C_L B_l^T)^{−1} B_l)^{−1} C_L ≝ (I_L − Q_L)^{−1} C_L

One may then write Ξ_PMDA = blockdiag(Ξ_MDA^{[1,L−1]}, (I_L − Q_L)^{−1} C_L)
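A numerical sanity check of this construction can be sketched as follows (my own illustration; the grid sizes, two coarse levels, and the values α_l = 6 are assumptions): form Q_L with A_l = α_l^{1/2} I_l, set the last block to (I_L − Q_L)^{−1} C_L, and verify that the blocks satisfy the PMDA condition.

```python
import numpy as np

rng = np.random.default_rng(2)

n_L = 8
M = rng.standard_normal((n_L, n_L))
C_L = M @ M.T + n_L * np.eye(n_L)                  # SPD level-L covariance
I_L = np.eye(n_L)

# averaging operators: level L (8 cells) -> 2 cells and -> 4 cells
Bs = [np.kron(np.eye(2), np.full((1, 4), 0.25)),
      np.kron(np.eye(4), np.full((1, 2), 0.5))]
alphas = [6.0, 6.0]                                # freely chosen coefficients

# S = sum_{l<L} alpha_l^{-1} B_l^T (B_l C_L B_l^T)^{-1} B_l,  Q_L = C_L S
S = sum(B.T @ np.linalg.inv(B @ C_L @ B.T) @ B / a for B, a in zip(Bs, alphas))
Q_L = C_L @ S

last_block = np.linalg.solve(I_L - Q_L, C_L)       # A_L C_L A_L^T = (I_L - Q_L)^{-1} C_L

# PMDA condition: S + (last block)^{-1} = C_L^{-1}
lhs = S + np.linalg.inv(last_block)
assert np.allclose(lhs, np.linalg.inv(C_L))

# with these alphas the last block is a valid covariance (symmetric positive definite)
assert np.allclose(last_block, last_block.T)
assert np.all(np.linalg.eigvalsh(0.5 * (last_block + last_block.T)) > 0)
```

With α_l this large, Q_L is small enough that I_L − Q_L stays well conditioned and the last block remains a covariance matrix, which anticipates the issues discussed on the following slides.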


SLIDE 46

PMDA condition in practice

Specification of Ξ_PMDA—a possibility with some issues

Ξ_PMDA = blockdiag(Ξ_MDA^{[1,L−1]}, (I_L − Q_L)^{−1} C_L)

- For a given matrix sequence {B_l}_{l=1}^L, one can risk selecting {α_l}_{l=1}^{L−1} such that (I_L − Q_L)^{−1} C_L does not become a covariance matrix.
- The matrix I_L − Q_L can be computationally costly to invert for large problems.
- Specifying sufficiently large elements in {α_l}_{l=1}^{L−1} will make Q_L small enough that (I_L − Q_L)^{−1} C_L becomes a covariance matrix, and it will allow for approximation of (I_L − Q_L)^{−1} by a truncated Neumann series. Specifying too large elements in {α_l}_{l=1}^{L−1} will, however, effectively remove the influence of {d_l}_{l=1}^{L−1} on the assimilation, which is unwanted. A balanced specification of {α_l}_{l=1}^{L−1} is therefore required.
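The Neumann-series approximation mentioned above can be sketched as follows (my own illustration; the single coarse level, the α values, and the truncation order K = 6 are assumptions): when the spectral radius of Q_L is below one, (I_L − Q_L)^{−1} ≈ Σ_{k=0}^{K} Q_L^k, and larger α_l shrink Q_L, making a short truncation more accurate.

```python
import numpy as np

rng = np.random.default_rng(3)

n_L = 8
M = rng.standard_normal((n_L, n_L))
C_L = M @ M.T + n_L * np.eye(n_L)
B_1 = np.kron(np.eye(4), np.full((1, 2), 0.5))     # one coarse level, for brevity
I_L = np.eye(n_L)

def neumann_inverse(Q, K):
    """Truncated Neumann series: sum_{k=0}^{K} Q^k approximates (I - Q)^{-1}."""
    acc, term = np.eye(len(Q)), np.eye(len(Q))
    for _ in range(K):
        term = term @ Q
        acc += term
    return acc

for alpha in [2.0, 10.0]:                          # larger alpha -> smaller Q_L
    Q_L = C_L @ (B_1.T @ np.linalg.inv(B_1 @ C_L @ B_1.T) @ B_1) / alpha
    exact = np.linalg.inv(I_L - Q_L)
    approx = neumann_inverse(Q_L, K=6)
    err = np.max(np.abs(approx - exact))
    print(f"alpha = {alpha:5.1f}: truncation error {err:.2e}")
```

This avoids the explicit inversion of I_L − Q_L, at the cost of a few matrix products, and it makes the trade-off visible: the larger α damps Q_L and shrinks the truncation error, but, as noted on the slide, overly large α_l would also wash out the contribution of the coarse-level data.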

SLIDE 47

Summary

- Assimilation of multiple linearly dependent data vectors involves using some information multiple times (partially multiple data assimilation, PMDA). The corresponding data covariance matrices should therefore be modified.
- A condition that the modified covariance matrices must satisfy in order to sample correctly in the linear-Gaussian case has been developed (Mannseth, in review). This PMDA condition is a generalization of the MDA condition (Emerick and Reynolds, Computers & Geosciences 55, 2013) that the covariances must satisfy in the special case when a single data vector is assimilated multiple times.
- A simplified version of the PMDA condition has been proposed (Mannseth, in review). Application of the simplified version also involves both computational and accuracy issues.

SLIDE 48

Acknowledgements

Partial financial support was provided by the research projects:

- 4D Seismic History Matching (2015-2018), funded by Eni, Petrobras, the Research Council of Norway (RCN-Petromaks 2) and Total EP Norge
- Assimilating 4D seismic data: big data into big models (2019-2022), funded by Aker BP, Equinor, Repsol, RCN-Petromaks 2 and Total EP Norge