Iterative Regularizing Ensemble Kalman Methods for Inverse Problems


SLIDE 1

Iterative Regularizing Ensemble Kalman Methods for Inverse Problems

Marco Iglesias
School of Mathematical Sciences, University of Nottingham

June 24th, 2014. EnKF workshop, Bergen, Norway

(Nottingham University) Bayesian inverse problems Marco Iglesias 1 / 50


SLIDE 3

Nonlinear ill-posed inverse problems

The forward map. Let G : X → Y be the forward (parameter-to-observations) map that arises from a PDE-constrained problem, where u is an unknown parameter (or property): u → G(u). (G is a nonlinear, compact, sequentially weakly closed operator between separable Hilbert spaces X and Y.)

Example: steady Darcy flow.
−∇ · (e^u ∇p) = f in D,
−e^u ∇p · n = B_N on Γ_N,
p = B_D on Γ_D,
where ∂D = Γ_N ∪ Γ_D. The parameter-to-observations map is
u = log(K) ∈ L^∞(D) → G(u) = {p(x_i)}_{i=1}^{M} ∈ R^M.
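The Darcy example can be made concrete with a small numerical sketch. The following is a minimal 1-D analogue of the forward map, assuming homogeneous Dirichlet boundary conditions on both ends (a simplification of the slide's mixed Neumann/Dirichlet conditions) and a standard finite-difference discretization; the grid size and observation indices are illustrative choices, not from the slides.

```python
import numpy as np

def darcy_forward(u, f=1.0, obs_idx=(24, 49, 74)):
    """1-D analogue of the Darcy forward map G: u -> {p(x_i)}.

    Solves -(e^u p')' = f on (0, 1) with p(0) = p(1) = 0 by finite
    differences; u holds log-conductivity values at the n cell
    interfaces x_{i+1/2}, and G(u) returns p at a few interior nodes.
    """
    n = len(u)
    h = 1.0 / n
    k = np.exp(u)                      # conductivity e^u at interfaces
    A = np.zeros((n - 1, n - 1))       # tridiagonal stiffness matrix
    for i in range(n - 1):
        A[i, i] = k[i] + k[i + 1]
        if i > 0:
            A[i, i - 1] = -k[i]
        if i < n - 2:
            A[i, i + 1] = -k[i + 1]
    p = np.linalg.solve(A, np.full(n - 1, f * h * h))
    return p[list(obs_idx)]

# For u = 0 (K = 1) the exact solution is p(x) = x(1 - x)/2, which this
# scheme reproduces exactly at the nodes (the solution is quadratic).
print(darcy_forward(np.zeros(100)))
```

Any PDE solver returning observations at fixed locations plays the same role as G in the rest of the slides.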


SLIDE 6

Nonlinear ill-posed inverse problems

The data and the noise level. Let u† be the "truth", i.e. G(u†) are the exact (noise-free) observations. Assume that we are given data y = G(u†) + ξ†, where ξ† is noise, and a noise level η such that ||y − G(u†)|| ≤ η.

The inverse problem: given y and η, find approximate solutions u to G(u) = G(u†).

Ill-posedness (Hadamard): the problem is ill-posed when existence, uniqueness, or continuity with respect to the data y fails.


SLIDE 9

Nonlinear ill-posed inverse problems

Lack of continuity (lack of stability) with respect to the data: we can construct a sequence u_n ∈ X such that u_n ⇀ u only weakly, yet G(u_n) → G(u).

If we compute with standard optimization,
u = argmin_{u∈X} ||y − G(u)||²,
we may observe semiconvergence behavior [Kirsch, 1996].

Regularization: construct an approximation u_η that is stable, i.e. such that u_η → u as η → 0, where G(u) = G(u†).
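Semiconvergence is easy to reproduce on a toy linear ill-posed problem. The sketch below (my illustration, not from the slides) runs the Landweber iteration x_{k+1} = x_k + Aᵀ(y − A x_k) on a diagonal operator with decaying singular values and perturbed data: the error to the truth first decreases, then grows as the noisy small singular components are amplified.

```python
import numpy as np

# Diagonal "forward operator" with singular values decaying to 1e-2,
# a truth with decaying coefficients, and a fixed data perturbation.
n = 50
s = np.logspace(0, -2, n)           # singular values of A = diag(s)
c = 1.0 / (1.0 + np.arange(n))      # coefficients of the truth
y = s * c + 0.05                    # perturbed data

x = np.zeros(n)
errors = []
for _ in range(5000):
    x = x + s * (y - s * x)         # Landweber step (omega = 1, ||A|| = 1)
    errors.append(np.linalg.norm(x - c))

best = int(np.argmin(errors))
# The error dips at an intermediate iteration and then climbs:
# stopping early acts as regularization.
print(best, errors[best], errors[-1])
```

The best iterate is far better than the limit of the iteration, which is exactly why stopping rules such as the discrepancy principle matter in what follows.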


SLIDE 11

Regularization

Regularization approaches (for nonlinear operators):
Regularize-then-compute (e.g. Tikhonov, TSVD).
Compute while regularizing (iterative regularization) [Kaltenbacher, 2010]:
◮ regularizing Levenberg-Marquardt
◮ Landweber iteration
◮ truncated Newton-CG
◮ iteratively regularized Gauss-Newton method

Aim of this work: apply ideas from iterative regularization to develop ensemble Kalman methods as derivative-free tools for solving nonlinear ill-posed inverse problems in a general abstract framework.


SLIDE 13

PDE-constrained Inverse Problems

The classical (deterministic) inverse problem: given data y ∈ Y, find
u = argmin_{u∈X} ||y − G(u)||².

Consider the prior µ₀(u) = P(u) on u and y = G(u) + ξ, ξ ∼ N(0, Γ).

The Bayesian inverse problem: characterize the posterior µʸ(u) = P(u|y):
dµʸ/dµ₀(u) ∝ exp(−Φ(u; y)), where Φ(u; y) = ½ ||Γ^(−1/2)(y − G(u))||².

SLIDE 14

Overview of this work

Classical (deterministic) inversion: least-squares,
u = argmin_{u∈X} ||y − G(u)||².
Bayesian inversion: characterize the posterior
dµʸ/dµ₀(u) ∝ exp(−Φ(u; y)), Φ(u; y) = ½ ||Γ^(−1/2)(y − G(u))||².
Iterative regularization (e.g. regularizing LM, Landweber iteration).
Ensemble Kalman-based methods: u^(j,a) = u^(j,f) + K(y^(j) − G(u^(j,f))).

SLIDE 15

Reference

Iterative Regularization for Data Assimilation in Petroleum Reservoirs. Multiscale Inverse Problems Workshop, Warwick University, June 17-19, 2013. http://www2.warwick.ac.uk/fac/sci/maths/research/events/2012-2013/nonsymp/mip/schedule/

SLIDE 16

Iterative ensemble Kalman "Smoother"

Assume that the data satisfy y = G(u†) + ξ with ξ ∼ N(0, Γ). Consider an initial ensemble u_0^(1), …, u_0^(Ne).

Prediction: u_n^(j) → G(u_n^(j)), with ensemble means
ū_n = (1/Ne) Σ_{j=1}^{Ne} u_n^(j),  w̄_n = (1/Ne) Σ_{j=1}^{Ne} G(u_n^(j)),
and empirical covariances
C^ww = (1/(Ne − 1)) Σ_{j=1}^{Ne} (G(u_n^(j)) − w̄_n)(G(u_n^(j)) − w̄_n)^T,
C^uw = (1/(Ne − 1)) Σ_{j=1}^{Ne} (u_n^(j) − ū_n)(G(u_n^(j)) − w̄_n)^T.

Analysis: u_n^(j) → u_{n+1}^(j),
u_{n+1}^(j) = u_n^(j) + C^uw (C^ww + Γ)^(−1) (y^(j) − G(u_n^(j))),  y^(j) = y + η^(j).
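A compact numerical sketch of this prediction/analysis cycle (my illustration; the linear forward map, dimensions, ensemble size and noise level are arbitrary stand-ins):

```python
import numpy as np

def enkf_iteration(U, G, y, Gamma, rng):
    """One step of the iterative ensemble Kalman smoother.

    U: (dim_u, Ne) ensemble; G: forward map; y: data; Gamma: noise cov.
    Uses perturbed observations y^(j) = y + eta^(j), eta^(j) ~ N(0, Gamma).
    """
    Ne = U.shape[1]
    W = np.column_stack([G(U[:, j]) for j in range(Ne)])   # prediction
    dU = U - U.mean(axis=1, keepdims=True)
    dW = W - W.mean(axis=1, keepdims=True)
    Cww = dW @ dW.T / (Ne - 1)                             # empirical covs
    Cuw = dU @ dW.T / (Ne - 1)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), Gamma, Ne).T
    return U + Cuw @ np.linalg.solve(Cww + Gamma, Y - W)   # analysis

# Toy linear example: recover u from y = A u with small noise.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u_true = np.array([1.0, 2.0])
y = A @ u_true
Gamma = 1e-4 * np.eye(3)
U = rng.standard_normal((2, 50))          # initial ensemble
for _ in range(10):
    U = enkf_iteration(U, lambda u: A @ u, y, Gamma, rng)
print(U.mean(axis=1))
```

For this well-determined linear toy problem the ensemble mean lands near the truth; for ill-posed nonlinear G the same iteration shows the instabilities discussed on the following slides.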


SLIDE 19

Ensemble Kalman Methods for Inverse Problems

Augmented analysis:
u_{n+1}^(j) = u_n^(j) + C^uw (C^ww + Γ)^(−1) (y^(j) − G(u_n^(j))),
w_{n+1}^(j) = G(u_n^(j)) + C^ww (C^ww + Γ)^(−1) (y^(j) − G(u_n^(j))).

Equivalently, in the augmented variable,
z_{n+1}^(j) = z_n^(j) + C^f H^T (H C^f H^T + Γ)^(−1) (y^(j) − H z_n^(j)),
where z = (u, w)^T ∈ Z ≡ X × Y, H = (0, I), and
C^f = [ C^uu, C^uw ; (C^uw)^T, C^ww ].

Kalman as a Tikhonov-regularized linear inverse problem:
z̄_{n+1} = (1/Ne) Σ_{j=1}^{Ne} z_{n+1}^(j) = argmin_z { ||Γ^(−1/2)(y − Hz)||² + ||(C^f)^(−1/2)(z − z̄_n)||²_Z },
where z̄_n ≡ (1/Ne) Σ_{j=1}^{Ne} z_n^(j).
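The identity between the Kalman update of the mean and the Tikhonov minimizer can be checked numerically. This sketch (my construction; it uses a generic SPD matrix in place of the rank-deficient ensemble covariance so that (C^f)^(−1/2) exists) verifies that the update formula solves the normal equations of the regularized least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, ny = 4, 3                                    # dim(X), dim(Y)
M = rng.standard_normal((nu + ny, nu + ny))
Cf = M @ M.T + (nu + ny) * np.eye(nu + ny)       # SPD stand-in for C^f
H = np.hstack([np.zeros((ny, nu)), np.eye(ny)])  # H = (0, I)
Gamma = np.diag(rng.uniform(0.5, 1.5, ny))
zbar = rng.standard_normal(nu + ny)
y = rng.standard_normal(ny)

# Kalman update of the mean
z_kal = zbar + Cf @ H.T @ np.linalg.solve(H @ Cf @ H.T + Gamma, y - H @ zbar)

# Minimizer of ||Gamma^{-1/2}(y - Hz)||^2 + ||Cf^{-1/2}(z - zbar)||^2,
# obtained from its normal equations
Gi, Ci = np.linalg.inv(Gamma), np.linalg.inv(Cf)
z_tik = np.linalg.solve(H.T @ Gi @ H + Ci, H.T @ Gi @ y + Ci @ zbar)

print(np.max(np.abs(z_kal - z_tik)))
```

The agreement is exact up to rounding; algebraically it is the Sherman-Morrison-Woodbury identity.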


SLIDE 21

Ensemble Kalman Methods for Inverse Problems

Kalman as a Tikhonov-regularized linear inverse problem:
z̄_{n+1} = argmin_z { ||Γ^(−1/2)(y − Hz)||² + ||(C^f)^(−1/2)(z − z̄_n)||²_Z }.

In summary, this iterative method solves a sequence of linear inverse problems: given y, find z such that y = Hz.

Preliminary work on Kalman methods for inverse problems:
M. Iglesias, K. Law and A. M. Stuart, Ensemble Kalman methods for inverse problems. Inverse Problems 29 (2013) 045001. http://arxiv.org/abs/1209.2736

SLIDE 22

Ensemble Kalman Methods for Inverse Problems

This algorithm can be formulated in a general abstract framework on Hilbert spaces. No localization, inflation, or truncation of any type was considered. The main focus was the initial ensemble: we consider both an initial ensemble sampled from the prior and one built from the first elements of a basis.

Invariant subspace property: let A = span{u_0^(j)}_{j=1}^{Ne}.

Theorem (Iglesias, Law, Stuart). u_n^(j) ∈ A for all (n, j) ∈ N × {1, …, Ne}.

Even though the problem is defined on a compact subspace A, we found that further regularization is needed.
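The invariance property can be observed numerically: every Kalman update adds a linear combination of the columns of C^uw, which lie in the span of the initial ensemble. A small sketch (my illustration, with an arbitrary nonlinear map and unperturbed data for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
d, Ne = 10, 4                      # parameter dimension > ensemble size
G = lambda u: u[:3] ** 2           # arbitrary nonlinear forward map
y = np.ones(3)
Gamma = 0.1 * np.eye(3)

U = rng.standard_normal((d, Ne))   # initial ensemble; A = span of columns
Q, _ = np.linalg.qr(U)             # orthonormal basis of A

for _ in range(3):
    W = np.column_stack([G(U[:, j]) for j in range(Ne)])
    dU = U - U.mean(axis=1, keepdims=True)
    dW = W - W.mean(axis=1, keepdims=True)
    Cuw = dU @ dW.T / (Ne - 1)
    Cww = dW @ dW.T / (Ne - 1)
    U = U + Cuw @ np.linalg.solve(Cww + Gamma, y[:, None] - W)

# All members remain in the subspace A spanned by the initial ensemble:
# the projection Q Q^T leaves them unchanged.
print(np.max(np.abs(Q @ Q.T @ U - U)))
```

This is the theorem in miniature: the method searches only inside span{u_0^(j)}, which is why the choice of initial ensemble matters so much.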

SLIDE 23

Synthetic experiment with a toy (groundwater) model

Consider an initial ensemble generated from a prior P(u) = N(ū, C). Let G(u) be the forward operator that arises from a reservoir model.

[Figures: the truth log-permeability field and the measurement locations on the unit square.]

Consider a truth u† ∼ P(u) from which synthetic data are generated by y = G(u†) + η, η ∼ N(0, Γ), with Γ the prescribed covariance of the Gaussian noise.

SLIDE 24

Reconstructing the truth with the mean of an ensemble of Ne = 75 (with small noise)

[Figures: the truth and the unregularized ensemble mean at iterations 1, 2, 3 and 12.]

SLIDE 25

Performance. With ū_n ≡ (1/Ne) Σ_{j=1}^{Ne} u_n^(j), we monitor the data misfit ||Γ^(−1/2)(y − G(ū_n))||_{l²} and the error with respect to the truth ||ū_n − u†||_{L²(D)}.

[Figures: log data misfit and relative error vs. iteration for the unregularized method.]


SLIDE 28

Recall that the standard update formula yields a mean defined by
z̄_{n+1} = argmin_z { ||Γ^(−1/2)(y − Hz)||² + ||(C^f)^(−1/2)(z − z̄_n)||²_Z }.

We propose in
M. A. Iglesias, Iterative regularization for ensemble-based data assimilation in reservoir models. In review (http://arxiv.org/abs/1401.5375). 2014. (Submitted to Computational Geosciences.)
to modify the ensemble method to
z_{n+1}^(j) = z_n^(j) + C^f H^T (H C^f H^T + αΓ)^(−1) (y^(j) − H z_n^(j)),
so that the mean is given by
z̄_{n+1} = argmin_z { ||Γ^(−1/2)(y − Hz)||² + α||(C^f)^(−1/2)(z − z̄_n)||²_Z },
i.e. a regularized Kalman update as in Levenberg-Marquardt, with the selection of α guided by iterative regularization methods. More precisely, we propose to choose α such that
ρ||Γ^(−1/2)(y − H z̄_n)|| ≤ ||Γ^(−1/2)(y − H z̄_{n+1}(α))|| ≤ ||Γ^(−1/2)(y − H z̄_n)||,
where ρ is a tunable parameter in (0, 1).


SLIDE 30

Proposition [Iglesias 2014]. α → ||Γ^(−1/2)(y − H z̄_{n+1}(α))|| is continuous and monotone nondecreasing. Moreover,
lim_{α→0} ||Γ^(−1/2)(y − H z̄_{n+1}(α))|| = 0,
lim_{α→∞} ||Γ^(−1/2)(y − H z̄_{n+1}(α))|| = ||Γ^(−1/2)(y − H z̄_n)||.
Thus, there exists α such that
ρ||Γ^(−1/2)(y − H z̄_n)|| ≤ ||Γ^(−1/2)(y − H z̄_{n+1}(α))|| ≤ ||Γ^(−1/2)(y − H z̄_n)||.

Stopping criterion: discrepancy principle. We terminate the algorithm whenever
||Γ^(−1/2)(y − H z̄_n)|| ≤ τ η ||Γ^(−1/2)||
for a prescribed value τ > 1/ρ, where η = ||y − G(u†)|| is the noise level. Then we can show that the selection of α is consistent with the discrepancy principle applied to the linear inverse problem corresponding to the analysis step.
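The behavior of the residual as a function of α can be checked numerically using its closed form in the analysis step: for H = (0, I), y − H z̄_{n+1}(α) = αΓ(C^ww + αΓ)^(−1)(y − w̄_n). The sketch below (my construction, with an SPD stand-in for the empirical covariance and a diagonal Γ) confirms monotonicity and the two limits:

```python
import numpy as np

rng = np.random.default_rng(3)
ny = 5
B = rng.standard_normal((ny, ny))
Cww = B @ B.T + np.eye(ny)                   # SPD stand-in for C^ww
Gamma = np.diag(rng.uniform(0.5, 2.0, ny))
r = rng.standard_normal(ny)                  # current residual y - w_bar
Gi = np.diag(1.0 / np.sqrt(np.diag(Gamma)))  # Gamma^{-1/2} (diagonal case)

def residual(alpha):
    # ||Gamma^{-1/2}(y - H z(alpha))|| via the closed form
    # y - H z(alpha) = alpha * Gamma (Cww + alpha Gamma)^{-1} (y - w_bar)
    return np.linalg.norm(
        Gi @ (alpha * Gamma @ np.linalg.solve(Cww + alpha * Gamma, r)))

alphas = np.logspace(-8, 8, 200)
vals = np.array([residual(a) for a in alphas])
# vals increases from 0 (alpha -> 0) toward the current misfit (alpha -> inf),
# so the intermediate-value theorem yields an alpha in the rho-band.
print(vals[0], vals[-1], np.linalg.norm(Gi @ r))
```

A bisection or doubling search over α therefore always finds a value satisfying the ρ condition.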

SLIDE 31

An iterative regularizing ensemble Kalman method

Let ρ < 1 and τ > 1/ρ. Generate an initial ensemble u_0^(j) ∼ µ₀.

A regularizing Kalman method:
(1) Prediction step: evaluate w_m^(j,f) = G(u_m^(j)) and define w̄_m^f.
(2) Stopping criterion: if ||Γ^(−1/2)(y − w̄_m^f)|| ≤ τη, stop. Otherwise define C_m^uw, ū_m and C_m^ww.
(3) Analysis step: compute the updated ensemble
u_{m+1}^(j) = u_m^(j) + C_m^uw (C_m^ww + α_m Γ)^(−1) (y^(j) − w_m^(j,f)),
for α_m such that
α_m ||Γ^(1/2)(C_m^ww + α_m Γ)^(−1)(y − w̄_m^f)|| ≥ ρ ||Γ^(−1/2)(y − w̄_m^f)||.
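A runnable sketch of the whole method (my implementation; the doubling rule for α_m, the choice of τ, and the toy linear forward map are illustrative stand-ins — the slides only require that α_m satisfy the inequality and that τ > 1/ρ):

```python
import numpy as np

def regularizing_enkf(G, y, Gamma, U0, eta, rho=0.7, max_iter=50, seed=0):
    """Iterative regularizing ensemble Kalman method (sketch).

    At each step alpha is doubled until the damped linearized residual
    alpha * ||Gamma^{1/2}(Cww + alpha*Gamma)^{-1}(y - w_bar)|| is at
    least rho times the current misfit; the iteration stops by the
    discrepancy principle once the misfit falls below the threshold.
    """
    rng = np.random.default_rng(seed)
    tau = 1.0 / rho + 0.5                    # any tau > 1/rho
    L = np.linalg.cholesky(Gamma)            # Gamma = L L^T
    Li = np.linalg.inv(L)                    # ||Li v|| is the Gamma^{-1/2}-norm
    U, misfits = U0.copy(), []
    for _ in range(max_iter):
        W = np.column_stack([G(U[:, j]) for j in range(U.shape[1])])
        w_bar = W.mean(axis=1)
        misfit = np.linalg.norm(Li @ (y - w_bar))
        misfits.append(misfit)
        if misfit <= tau * eta * np.linalg.norm(Li, 2):
            break                            # discrepancy principle
        dU = U - U.mean(axis=1, keepdims=True)
        dW = W - w_bar[:, None]
        Cww = dW @ dW.T / (U.shape[1] - 1)
        Cuw = dU @ dW.T / (U.shape[1] - 1)
        alpha = 1.0                          # increase damping until the
        while alpha * np.linalg.norm(        # rho condition holds
                L.T @ np.linalg.solve(Cww + alpha * Gamma, y - w_bar)) \
                < rho * misfit:
            alpha *= 2.0
        Y = y[:, None] + rng.multivariate_normal(
            np.zeros(len(y)), Gamma, U.shape[1]).T
        U = U + Cuw @ np.linalg.solve(Cww + alpha * Gamma, Y - W)
    return U, misfits

# Toy linear test problem with known noise level eta = ||y - G(u_true)||
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u_true = np.array([1.0, 2.0])
noise = np.array([0.01, -0.01, 0.01])
y = A @ u_true + noise
U, misfits = regularizing_enkf(lambda u: A @ u, y, 1e-4 * np.eye(3),
                               rng.standard_normal((2, 30)),
                               eta=np.linalg.norm(noise))
print(misfits[0], misfits[-1])
```

The α-search always terminates: as α grows, the damped residual tends to the current misfit, which exceeds ρ times itself.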

SLIDE 32

Synthetic experiment with a toy (groundwater) model

Consider an initial ensemble generated from a prior P(u) = N(ū, C). Let G(u) be the forward operator that arises from a reservoir model.

[Figures: the truth log-permeability field and the measurement locations on the unit square.]

Consider a truth u† ∼ P(u) from which synthetic data are generated by y = G(u†) + η, η ∼ N(0, Γ), with Γ the prescribed covariance of the Gaussian noise.

SLIDE 33

Reconstructing the truth with the mean of an ensemble of Ne = 75 (with small noise)

[Figures: the truth and the ensemble mean (panels labeled "no reg") at iterations 5, 9, 13 and 17.]

SLIDE 34

Performance. With ū_n ≡ (1/Ne) Σ_{j=1}^{Ne} u_n^(j), we monitor the data misfit ||Γ^(−1/2)(y − G(ū_n))||_{l²} and the error with respect to the truth ||ū_n − u†||_{L²(D)}.

[Figures: log data misfit and relative error vs. iteration, regularized vs. unregularized.]

SLIDE 35

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.2.]

SLIDE 36

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.3.]

SLIDE 37

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.4.]

SLIDE 38

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.5.]

SLIDE 39

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.6.]

SLIDE 40

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.7.]

SLIDE 41

Regularizing properties as a function of ρ

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.8.]

SLIDE 42

Regularization parameter α

[Figure: log α vs. iteration for ρ = 0.7 and ρ = 0.5.]

SLIDE 43

Regularizing properties as a function of the ensemble size

[Figures: log data misfit and relative error vs. iteration; Ne = 50, ρ = 0.7.]

SLIDE 44

Regularizing properties as a function of the ensemble size

[Figures: log data misfit and relative error vs. iteration; Ne = 75, ρ = 0.7.]

SLIDE 45

Regularizing properties as a function of the ensemble size

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.7.]

SLIDE 46

Regularizing properties as a function of the ensemble size

[Figures: log data misfit and relative error vs. iteration; Ne = 150, ρ = 0.7.]

SLIDE 47

Regularizing properties as a function of the ensemble size

[Figures: log data misfit and relative error vs. iteration; Ne = 200, ρ = 0.7.]

SLIDE 48

Regularizing properties as a function of the ensemble size

[Figures: log data misfit and relative error vs. iteration; Ne = 300, ρ = 0.7.]

SLIDE 49

Convergence as the noise level decreases. With ū_n ≡ (1/Ne) Σ_{j=1}^{Ne} u_n^(j), we monitor the data misfit ||Γ^(−1/2)(y − G(ū_n))||_{l²} and the error with respect to the truth ||ū_n − u†||_{L²(D)}.

[Figures: log data misfit and relative error vs. iteration for noise levels 0.25%, 0.5%, 1%, 2.5% and 5%.]

SLIDE 50

Computational cost: approximately 6000 forward model evaluations.

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.9.]

SLIDE 51

Accelerating EnKF. Recall the augmented analysis:
u_{n+1}^(j) = u_n^(j) + C^uw (C^ww + Γ)^(−1) (y^(j) − G(u_n^(j))),
w_{n+1}^(j) = G(u_n^(j)) + C^ww (C^ww + Γ)^(−1) (y^(j) − G(u_n^(j))).

Idea: use the second equation to perform inexpensive linear iterations when ρ is large (and so is α). This is ad hoc.

SLIDE 52

Computational cost: 1 nonlinear iteration every iteration; cost approximately 6000 forward model evaluations.

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.9.]

SLIDE 53

Computational cost: 1 nonlinear iteration every 5; cost approximately 1200 forward model evaluations.

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.9.]

SLIDE 54

Computational cost: 1 nonlinear iteration every 10; cost approximately 600 forward model evaluations.

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.9.]

SLIDE 55

Computational cost: 1 nonlinear iteration every 15; cost approximately 400 forward model evaluations.

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.9.]

SLIDE 56

Computational cost: 1 nonlinear iteration every 20; cost approximately 300 forward model evaluations.

[Figures: log data misfit and relative error vs. iteration; Ne = 100, ρ = 0.9.]


SLIDE 58

Connections with variational iterative regularization

Assume that at a given iteration level
G(u_m^(j)) ≈ G(ū_m) + DG(ū_m)(u_m^(j) − ū_m).
The update formula for the mean becomes
ū_{m+1} = ū_m + C_m^uu DG(ū_m)* (DG(ū_m) C_m^uu DG(ū_m)* + α_m Γ)^(−1) (y − G(ū_m)),
where
C_m^uu = (1/(Ne − 1)) Σ_{j=1}^{Ne} (u_m^(j,f) − ū_m^f)(u_m^(j,f) − ū_m^f)^T.

If we replace C_m^uu by the prior error covariance C, then
ū_{m+1} = ū_m + C DG(ū_m)* (DG(ū_m) C DG(ū_m)* + α_m Γ)^(−1) (y − G(ū_m)).
This is Levenberg-Marquardt applied to the minimization
u = argmin_{u∈X} ||Γ^(−1/2)(y − G(u))||²
in X with the norm ||C^(−1/2) · ||_X. (Note: no regularization term in the functional.)
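A sketch of this update with the Jacobian formed by finite differences (my illustration: α is held fixed rather than chosen by the discrepancy-principle rule, and C = Γ = I on a small made-up problem):

```python
import numpy as np

def lm_step(u, G, y, C, Gamma, alpha, h=1e-6):
    """One Levenberg-Marquardt update
    u+ = u + C J^T (J C J^T + alpha*Gamma)^{-1} (y - G(u)),
    with J = DG(u) approximated by forward finite differences."""
    g = G(u)
    J = np.column_stack([(G(u + h * e) - g) / h for e in np.eye(len(u))])
    return u + C @ J.T @ np.linalg.solve(J @ C @ J.T + alpha * Gamma, y - g)

# Tiny nonlinear example: find u with G(u) = y.
G = lambda u: np.array([u[0] + u[1], u[0] * u[1]])
y = np.array([5.0, 6.0])                 # G((2, 3)) = (5, 6)
u = np.array([1.0, 0.0])
for _ in range(200):
    u = lm_step(u, G, y, np.eye(2), np.eye(2), alpha=0.5)
print(u, G(u))
```

In the ensemble method, C^uu plays the role of C and the derivative DG never has to be formed, which is exactly the derivative-free appeal of the approach.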

SLIDE 59

The regularizing LM. Selecting α_m and the stopping criterion according to the discrepancy principle yields the regularizing Levenberg-Marquardt scheme of [Hanke, 1997].

Theorem [Hanke 1997]. u_m converges after a finite number of iterations, and u_m → u as η → 0 (where G(u) = G(u†)).

The regularizing LM scheme for reservoir modeling applications:
M. A. Iglesias, Iterative regularization for ensemble-based data assimilation in reservoir models. In review (http://arxiv.org/abs/1401.5375). 2014.
M. A. Iglesias and C. Dawson, The regularizing Levenberg-Marquardt scheme for history matching of petroleum reservoirs. Computational Geosciences (2013) 17:1033-1053.

SLIDE 60

The proposed ES as an approximate regularizing LM scheme. Comparing ES with the regularizing LM scheme (on the same subspace).

[Figures: relative error and log data misfit vs. iteration, regularizing ES vs. regularizing LM.]

SLIDE 61

Ensemble Kalman method for geometric inverse problems. Suppose we are interested in recovering a piecewise-constant permeability field:

[Figure: the truth, a two-valued permeability on a 2 × 2 domain.]

We parametrize the permeability in terms of a level-set function u, i.e.
K(u) = K_i 1_{u<0} + K_e 1_{u≥0}.
As before, p = G(u) is the solution to −∇ · (K(u)∇p) = f evaluated at some locations. We invert noisy measurements y = G(u) + ξ.
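The level-set parametrization itself is one line of array code. A sketch (the values K_i = 1200, K_e = 200 and the circular shape are my stand-ins, roughly matching the colorbar of the slide):

```python
import numpy as np

def level_set_permeability(u, K_i=1200.0, K_e=200.0):
    """K(u) = K_i 1_{u<0} + K_e 1_{u>=0} for a level-set function u
    sampled on a grid (K_i and K_e are illustrative values)."""
    return np.where(u < 0.0, K_i, K_e)

# Level-set function whose zero contour is a circle on a 2 x 2 domain:
# negative inside the circle (the inclusion), positive outside.
x, ygrid = np.meshgrid(np.linspace(0, 2, 64), np.linspace(0, 2, 64))
u = (x - 1.0) ** 2 + (ygrid - 1.0) ** 2 - 0.5 ** 2
K = level_set_permeability(u)
print(np.unique(K))
```

The ensemble method then updates the smooth function u, and the shape is recovered through its zero level set rather than by inverting for K directly.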

SLIDE 62

Ensemble Kalman method for geometric inverse problems. Consider an artificial prior (for the initial ensemble) µ₀ = N(0, C), with a covariance that reflects the regularity of the shape, and K(u) = K_i 1_{u<0} + K_e 1_{u≥0}.

SLIDE 63

Ensemble Kalman method for geometric inverse problems

[Figure-only slide.]

SLIDE 64

Summary

Iterative regularization provides strategies for regularizing Kalman-based methods. Regularization has a strong effect on the robustness and accuracy of ensemble methods for solving both classical and Bayesian inverse problems. Further investigations are required to establish the mathematical properties of these approximations.

References
M. A. Iglesias, Iterative regularization for ensemble-based data assimilation in reservoir models. In review (http://arxiv.org/abs/1401.5375). 2014.
M. Iglesias, K. Law and A. M. Stuart, Ensemble Kalman methods for inverse problems. Inverse Problems 29 (2013) 045001. http://arxiv.org/abs/1209.2736
M. A. Iglesias and C. Dawson, The regularizing Levenberg-Marquardt scheme for history matching of petroleum reservoirs. Computational Geosciences (2013) 17:1033-1053.