

SLIDE 1

Princeton University

Department of Geosciences

Course on Inverse Problems

Albert Tarantola

Lesson X: Optimization

SLIDE 2

Optimization

  • If the volumetric probability f_post(M) is expected to have a small number of maxima (say one, or two, or three),
  • we may try to locate them by using standard optimization methods (simplex methods, gradient-based methods),
  • and we may try to study f_post(M) in the neighborhood of each optimum.

Practical tip: simplex methods and gradient-based methods work much better with the function ψ(M) = log( f_post(M) / f_0 ) than with the function f_post(M).
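A minimal Python sketch of this tip (the two-parameter posterior, its maximum at (1, -2), and the names f_post and neg_psi are all illustrative assumptions): since standard optimizers minimize, we minimize -ψ(M), i.e. the negative log-posterior up to the constant log f_0.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical unnormalized posterior with a single maximum at (1, -2):
def f_post(m):
    return np.exp(-0.5 * ((m[0] - 1.0) ** 2 + 10.0 * (m[1] + 2.0) ** 2))

# -psi(M) = -log( f_post(M) / f_0 ); the constant f_0 only shifts psi, so drop it:
def neg_psi(m):
    return 0.5 * ((m[0] - 1.0) ** 2 + 10.0 * (m[1] + 2.0) ** 2)

m0 = np.zeros(2)
res_simplex = minimize(neg_psi, m0, method="Nelder-Mead")  # simplex method
res_gradient = minimize(neg_psi, m0, method="BFGS")        # gradient-based method
print(res_simplex.x, res_gradient.x)  # both approach (1.0, -2.0)
```

Working with -ψ avoids underflow of the exponential far from the maximum and presents the optimizers with a nearly quadratic target.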

SLIDE 3

Least-squares theory

  • The model parameter manifold may be a linear space, with vectors denoted m, m′, ..., and the a priori information may have the Gaussian form

        f_prior(m) = k exp( -(1/2) (m - m_prior)^t C_m^{-1} (m - m_prior) ) .

  • The observable parameter manifold may be a linear space, with vectors denoted o, o′, ..., and the information brought by measurements may have the Gaussian form

        g_obs(o) = k exp( -(1/2) (o - o_obs)^t C_o^{-1} (o - o_obs) ) .

  • The forward modeling relation becomes, with these notations,

        o = o(m) .
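A minimal numpy sketch of these two Gaussian forms, evaluated as log-densities up to the constant log k (the helper name log_gaussian and all array values are illustrative assumptions):

```python
import numpy as np

def log_gaussian(x, x0, C_inv):
    # log of k exp( -(1/2) (x - x0)^t C^{-1} (x - x0) ), dropping log k
    d = x - x0
    return -0.5 * d @ C_inv @ d

# Illustrative a priori information on a two-parameter model:
m_prior = np.array([1.0, -2.0])
Cm_inv = np.linalg.inv(np.diag([0.5, 2.0]))

# Illustrative observational information on a single observable:
o_obs = np.array([3.0])
Co_inv = np.linalg.inv(np.array([[0.1]]))

print(log_gaussian(np.array([0.9, -1.8]), m_prior, Cm_inv))  # log f_prior + const
```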

SLIDE 4

Then, the posterior volumetric probability for the model parameters, whose general expression is

    f_post(m) = (1/ν) f_prior(m) g_obs( o(m) ) ,

here becomes

    f_post(m) = k exp( -S(m) ) ,

where the misfit function S(m) is the sum of squares

    2 S(m) = (m - m_prior)^t C_m^{-1} (m - m_prior) + (o(m) - o_obs)^t C_o^{-1} (o(m) - o_obs) .
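Under the same assumptions, a sketch of this misfit function (the names misfit_S and forward are hypothetical; forward stands for the relation o = o(m)):

```python
import numpy as np

def misfit_S(m, m_prior, Cm_inv, o_obs, Co_inv, forward):
    # 2 S(m) = (m - m_prior)^t C_m^{-1} (m - m_prior)
    #        + (o(m) - o_obs)^t C_o^{-1} (o(m) - o_obs)
    dm = m - m_prior
    do = forward(m) - o_obs
    return 0.5 * (dm @ Cm_inv @ dm + do @ Co_inv @ do)

# e.g. with a linear forward relation o(m) = A m:
A = np.array([[1.0, 2.0]])
S = misfit_S(np.zeros(2), np.array([1.0, -2.0]), np.eye(2),
             np.array([3.0]), np.eye(1), lambda m: A @ m)
print(S)
```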

SLIDE 5

The maximum likelihood model is the model m maximizing f_post(m). It is also the model minimizing S(m). It can be obtained using a quasi-Newton algorithm,

    m_{n+1} = m_n - H_n^{-1} γ_n ,

where the Hessian of S is

    H_n = O_n^t C_o^{-1} O_n + C_m^{-1} ,

and the gradient of S is

    γ_n = O_n^t C_o^{-1} ( o(m_n) - o_obs ) + C_m^{-1} ( m_n - m_prior ) .
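A sketch of this iteration under the same notation (quasi_newton, forward, and jacobian are hypothetical names; there is no step-length control or convergence test here):

```python
import numpy as np

def quasi_newton(m0, m_prior, Cm_inv, o_obs, Co_inv, forward, jacobian,
                 n_iter=20):
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        O = jacobian(m)  # tangent linear operator O_n (next slide)
        # gradient of S:  gamma_n = O_n^t Co^{-1} (o(m_n) - o_obs) + Cm^{-1} (m_n - m_prior)
        gamma = O.T @ Co_inv @ (forward(m) - o_obs) + Cm_inv @ (m - m_prior)
        # Hessian of S:   H_n = O_n^t Co^{-1} O_n + Cm^{-1}
        H = O.T @ Co_inv @ O + Cm_inv
        m -= np.linalg.solve(H, gamma)  # m_{n+1} = m_n - H_n^{-1} gamma_n
    return m
```

Solving H x = γ rather than forming H^{-1} explicitly is the usual numerical choice.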

SLIDE 6

Here, the tangent linear operator O_n is defined via

    o( m_n + δm ) = o(m_n) + O_n δm + ...

When the notations

    m = {m^α} = {m^1, m^2, ..., m^p}
    o = {o^i} = {o^1, o^2, ..., o^q}
    o^i = o^i( m^1, m^2, ..., m^p )

apply, then O_n is the matrix of partial derivatives O^i_α = ∂o^i / ∂m^α (evaluated at the point m_n).
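When the partial derivatives are not available analytically, one common stand-in is a finite-difference approximation of O_n; a sketch (the name jacobian_fd and the step eps are assumptions):

```python
import numpy as np

def jacobian_fd(forward, m, eps=1e-6):
    # O^i_alpha ~ ( o^i(m + eps e_alpha) - o^i(m) ) / eps
    m = np.asarray(m, dtype=float)
    o0 = np.asarray(forward(m))
    O = np.empty((o0.size, m.size))
    for alpha in range(m.size):
        dm = np.zeros_like(m)
        dm[alpha] = eps
        O[:, alpha] = (np.asarray(forward(m + dm)) - o0) / eps
    return O
```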

SLIDE 7

As we have seen, the model m_∞ at which the algorithm converges maximizes the posterior volumetric probability f_post(m). To estimate the posterior uncertainties: the covariance operator of the Gaussian volumetric probability that is tangent to f_post(m) at m_∞ is

    C̃_m = H_∞^{-1} ,

while the covariance operator of the Gaussian volumetric probability that is tangent to g_post(o) at o_∞ = o(m_∞) is

    C̃_o = O_∞ C̃_m O_∞^t .

Example: plant leaves and the radiative transfer model.

⇒ Mathematica notebook
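A sketch of these two tangent-Gaussian uncertainty estimates (posterior_covariances is a hypothetical name; H_∞ is recomputed at the convergence point m_inf, with jacobian as in the sketches above):

```python
import numpy as np

def posterior_covariances(m_inf, Cm_inv, Co_inv, jacobian):
    O = jacobian(m_inf)            # O_inf
    H = O.T @ Co_inv @ O + Cm_inv  # H_inf
    Cm_post = np.linalg.inv(H)     # C~_m = H_inf^{-1}
    Co_post = O @ Cm_post @ O.T    # C~_o = O_inf C~_m O_inf^t
    return Cm_post, Co_post
```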