Iterative Regularizing Ensemble Kalman Methods for Inverse Problems



  1. Iterative Regularizing Ensemble Kalman Methods for Inverse Problems. Marco Iglesias, School of Mathematical Sciences, University of Nottingham. EnKF workshop, Bergen, Norway, June 24th, 2014.

  2. Nonlinear ill-posed inverse problems. The forward map: let $G : X \to Y$ be the forward (parameter-to-observations) map arising from a PDE-constrained problem, where $u$ is an unknown parameter (or property):
$$u \longmapsto G(u).$$
($G$ is a nonlinear, compact, sequentially weakly closed operator between separable Hilbert spaces $X$ and $Y$.)
Example: steady Darcy flow,
$$-\nabla \cdot \big(e^{u} \nabla p\big) = f \ \text{in } D, \qquad -e^{u} \nabla p \cdot n = B_N \ \text{on } \Gamma_N, \qquad p = B_D \ \text{on } \Gamma_D,$$
where $\partial D = \Gamma_N \cup \Gamma_D$, with forward map
$$u = \log(K) \in L^{\infty}(D) \ \longmapsto \ G(u) = \{p(x_i)\}_{i=1}^{M} \in \mathbb{R}^{M}.$$
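To make this concrete, here is a minimal sketch of a one-dimensional analogue of this forward map. It is not from the talk: the mixed boundary conditions are replaced by homogeneous Dirichlet conditions for brevity, and the function name `darcy_forward`, the grid size, source term, "truth", and observation points are all illustrative choices.

```python
import numpy as np

def darcy_forward(u, obs_idx, f=1.0):
    """Pressure observations for -(e^u p')' = f on (0,1) with p(0) = p(1) = 0."""
    n = u.size
    h = 1.0 / n
    kappa = np.exp(u)                          # conductivity e^u, one value per cell
    # Tridiagonal finite-difference system for the n-1 interior pressure nodes.
    main = kappa[:-1] + kappa[1:]
    off = -kappa[1:-1]
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h ** 2
    p = np.linalg.solve(A, np.full(n - 1, f))  # constant source term f
    return p[obs_idx]                          # point observations {p(x_i)}

n = 100
x = (np.arange(n) + 0.5) / n                   # cell midpoints
u_true = np.where(x < 0.5, 0.0, 1.0)           # "true" log-conductivity u† (illustrative)
obs_idx = np.linspace(10, n - 12, 8, dtype=int)
data_clean = darcy_forward(u_true, obs_idx)    # exact observations G(u†)
```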

  3. Nonlinear ill-posed inverse problems. The data and the noise level: let $u^{\dagger}$ be the "truth", i.e. $G(u^{\dagger})$ are the exact (noise-free) observations. Assume that we are given data
$$y = G(u^{\dagger}) + \xi^{\dagger},$$
where $\xi^{\dagger}$ is noise, together with a noise level $\eta$ such that $\|y - G(u^{\dagger})\| \le \eta$.
The inverse problem: given $y$ and $\eta$, find approximate solutions $u$ of $G(u) = G(u^{\dagger})$.
Ill-posedness in the sense of Hadamard concerns existence, uniqueness, and continuity of solutions with respect to the data $y$.
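Continuing the sketch above, synthetic data with a known noise level can be generated as follows. The Gaussian noise model and the 1% noise scale are assumptions for illustration; the slide itself only requires the deterministic bound $\|y - G(u^{\dagger})\| \le \eta$.

```python
rng = np.random.default_rng(0)
sigma = 0.01 * np.abs(data_clean).max()        # 1% noise scale (an assumption)
xi = rng.normal(0.0, sigma, size=data_clean.shape)
y = data_clean + xi                            # data y = G(u†) + ξ†
eta = np.linalg.norm(xi)                       # here the bound ||y - G(u†)|| <= η is exact
```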

  4. Nonlinear ill-posed inverse problems. Lack of continuity (lack of stability) with respect to the data: we can construct a sequence $u_n \in X$ such that $u_n \rightharpoonup u$ only weakly, yet $G(u_n) \to G(u)$.
If we compute with standard optimization,
$$u = \arg\min_{u \in X} \|y - G(u)\|^2,$$
we may observe semiconvergence behavior [Kirsch, 1996]: the iterates first approach the true parameter and then deteriorate as the optimization starts fitting the noise.
Regularization: construct an approximation $u^{\eta}$ that is stable, i.e. such that $u^{\eta} \to u$ as $\eta \to 0$, where $G(u) = G(u^{\dagger})$.
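Semiconvergence is easy to reproduce on a linear toy problem. The sketch below runs Landweber iteration $v_{k+1} = v_k + \omega A^T(d - A v_k)$ on a discretized integration operator; the operator, noise level, iteration count, and the discrepancy-principle factor $\tau = 1.5$ are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200
A = np.tril(np.ones((m, m))) / m               # discretized integration: ill-posed
t = np.linspace(0, 1, m)
v_true = np.sin(2 * np.pi * t)
noise = rng.standard_normal(m)
delta = 1e-2                                   # noise level: ||d - A v_true|| = delta
d = A @ v_true + delta * noise / np.linalg.norm(noise)

omega = 1.0 / np.linalg.norm(A, 2) ** 2        # Landweber step size with ω||A||² <= 1
v = np.zeros(m)
errors, k_stop = [], None
for k in range(20000):
    r = d - A @ v
    if k_stop is None and np.linalg.norm(r) <= 1.5 * delta:
        k_stop = k                             # discrepancy-principle stopping index
    errors.append(np.linalg.norm(v - v_true))  # error typically falls, then rises again
    v = v + omega * A.T @ r

print("smallest error at iteration", int(np.argmin(errors)),
      "| discrepancy principle stops at", k_stop)
```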

  5. Regularization. Regularization approaches (for nonlinear operators):
Regularize-then-compute (e.g. Tikhonov, TSVD); a sketch of this route on the linear toy problem appears below.
Compute while regularizing (iterative regularization) [Kaltenbacher, 2010]:
- regularizing Levenberg-Marquardt
- Landweber iteration
- truncated Newton-CG
- iteratively regularized Gauss-Newton method
Aim of this work: apply ideas from iterative regularization to develop ensemble Kalman methods as derivative-free tools for solving nonlinear ill-posed inverse problems in a general abstract framework.
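For concreteness, here is the regularize-then-compute route on the linear toy problem above. Both parameter choices are ad hoc for illustration; in practice the Tikhonov parameter and the truncation level would be tied to the noise level, e.g. via the discrepancy principle.

```python
U, s, Vt = np.linalg.svd(A)
b = U.T @ d                                    # data in the singular basis
alpha = 1e-4                                   # Tikhonov parameter (ad hoc)
v_tik = Vt.T @ (s * b / (s ** 2 + alpha))      # filter factors σ²/(σ² + α) on b/σ
ktr = 20                                       # truncation level (ad hoc)
v_tsvd = Vt.T[:, :ktr] @ (b[:ktr] / s[:ktr])   # keep the ktr largest singular values
```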

  6. PDE-constrained inverse problems. The classical (deterministic) inverse problem: given data $y \in Y$, find
$$u = \arg\min_{u \in X} \|y - G(u)\|^2.$$
The Bayesian inverse problem: consider a prior $\mu_0(u) = \mathbb{P}(u)$ on $u$ and the noise model $y = G(u) + \xi$ with $\xi \sim N(0, \Gamma)$. Characterize the posterior $\mu^{y}(u) = \mathbb{P}(u \mid y)$:
$$\frac{d\mu^{y}}{d\mu_0}(u) \propto \exp\big(-\Phi(u; y)\big), \qquad \Phi(u; y) = \frac{1}{2} \big\|\Gamma^{-1/2}\big(y - G(u)\big)\big\|^2.$$
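As a small illustration, the potential $\Phi$ can be evaluated for the 1D Darcy sketch above. The diagonal noise covariance $\Gamma = \sigma^2 I$ is an assumption matching the synthetic data generated earlier; `potential` is a hypothetical helper name.

```python
def potential(u, y, sigma, obs_idx):
    """Φ(u; y) = ½||Γ^{-1/2}(y - G(u))||² with Γ = σ²I; exp(-Φ) is the
    unnormalized posterior density with respect to the prior."""
    misfit = y - darcy_forward(u, obs_idx)
    return 0.5 * (misfit @ misfit) / sigma ** 2

print(potential(u_true, y, sigma, obs_idx))    # at the truth, Φ reflects only the noise
```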

  7. Overview of this work. Two views of inversion, side by side:
Classical (deterministic) inversion: least squares, $\Phi(u; y) = \frac{1}{2}\|\Gamma^{-1/2}(y - G(u))\|^2$.
Bayesian inversion: characterize the posterior, $\frac{d\mu^{y}}{d\mu_0}(u) \propto \exp(-\Phi(u; y))$.
Ensemble Kalman-based methods, with updates of the form $u^{(j,a)} = u^{(j,f)} + K\big(y^{(j)} - G(u^{(j,f)})\big)$, are analyzed with tools from iterative regularization (e.g. regularizing Levenberg-Marquardt, Landweber iteration).

  8. Reference: Iterative Regularization for Data Assimilation in Petroleum Reservoirs. Multiscale Inverse Problems Workshop, Warwick University, June 17-19, 2013. http://www2.warwick.ac.uk/fac/sci/maths/research/events/2012-2013/nonsymp/mip/schedule/

  9. Iterative ensemble Kalman "smoother". Assume the data satisfy $y = G(u^{\dagger}) + \xi$ with $\xi \sim N(0, \Gamma)$. Consider an initial ensemble $u_0^{(1)}, \dots, u_0^{(N_e)}$.
Prediction: map each member through the forward operator, $u_n^{(j)} \to G(u_n^{(j)})$, and form
$$\bar{u}_n = \frac{1}{N_e} \sum_{j=1}^{N_e} u_n^{(j)}, \qquad \bar{w}_n = \frac{1}{N_e} \sum_{j=1}^{N_e} G(u_n^{(j)}),$$
$$C^{ww} = \frac{1}{N_e - 1} \sum_{j=1}^{N_e} \big(G(u_n^{(j)}) - \bar{w}_n\big)\big(G(u_n^{(j)}) - \bar{w}_n\big)^T, \qquad C^{uw} = \frac{1}{N_e - 1} \sum_{j=1}^{N_e} \big(u_n^{(j)} - \bar{u}_n\big)\big(G(u_n^{(j)}) - \bar{w}_n\big)^T.$$
Analysis: update $u_n^{(j)} \to u_{n+1}^{(j)}$ using perturbed observations $y^{(j)} = y + \eta^{(j)}$, $\eta^{(j)} \sim N(0, \Gamma)$:
$$u_{n+1}^{(j)} = u_n^{(j)} + C^{uw} \big(C^{ww} + \Gamma\big)^{-1} \big(y^{(j)} - G(u_n^{(j)})\big).$$
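The prediction and analysis steps translate almost line by line into code. Below is a minimal sketch for the 1D Darcy example; the Gaussian-process initial ensemble, the ensemble size, and the fixed iteration count are illustrative assumptions (the talk's point is precisely that the stopping rule should be a regularizing one, e.g. the discrepancy principle, rather than a fixed count).

```python
Ne = 50
C0 = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)  # assumed prior covariance
L = np.linalg.cholesky(C0 + 1e-6 * np.eye(n))
ens = L @ rng.standard_normal((n, Ne))                # initial ensemble u_0^(j)
Gamma = sigma ** 2 * np.eye(len(obs_idx))

for it in range(10):                                  # fixed count; see note above
    # Prediction: push each member through G and form ensemble means.
    W = np.stack([darcy_forward(ens[:, j], obs_idx) for j in range(Ne)], axis=1)
    u_bar = ens.mean(axis=1, keepdims=True)
    w_bar = W.mean(axis=1, keepdims=True)
    # Empirical covariances C^uw and C^ww from the slide.
    Cuw = (ens - u_bar) @ (W - w_bar).T / (Ne - 1)
    Cww = (W - w_bar) @ (W - w_bar).T / (Ne - 1)
    # Analysis with perturbed observations y^(j) = y + η^(j), η^(j) ~ N(0, Γ).
    Ypert = y[:, None] + sigma * rng.standard_normal((len(obs_idx), Ne))
    ens = ens + Cuw @ np.linalg.solve(Cww + Gamma, Ypert - W)

print("data misfit of the ensemble mean:",
      np.linalg.norm(y - darcy_forward(ens.mean(axis=1), obs_idx)))
```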

  10. Ensemble Kalman methods for inverse problems. Augmented analysis:
$$u_{n+1}^{(j)} = u_n^{(j)} + C^{uw} \big(C^{ww} + \Gamma\big)^{-1} \big(y^{(j)} - G(u_n^{(j)})\big),$$
$$w_{n+1}^{(j)} = G(u_n^{(j)}) + C^{ww} \big(C^{ww} + \Gamma\big)^{-1} \big(y^{(j)} - G(u_n^{(j)})\big).$$
Both updates combine into a single Kalman update for the augmented variable $z = (u, w)^T \in Z \equiv X \times Y$:
$$z_{n+1}^{(j)} = z_n^{(j)} + C^{f} H^{T} \big(H C^{f} H^{T} + \Gamma\big)^{-1} \big(y^{(j)} - H z_n^{(j)}\big), \qquad C^{f} = \begin{pmatrix} C^{uu} & C^{uw} \\ (C^{uw})^{T} & C^{ww} \end{pmatrix}, \qquad H = (0, I).$$
Kalman as a Tikhonov-regularized linear inverse problem: the ensemble mean $\bar{z}_{n} \equiv \frac{1}{N_e} \sum_{j=1}^{N_e} z_n^{(j)}$ satisfies
$$\bar{z}_{n+1} = \arg\min_{z} \Big\{ \big\|\Gamma^{-1/2}(y - Hz)\big\|^2 + \big\|(C^{f})^{-1/2}(z - \bar{z}_n)\big\|_{Z}^2 \Big\}.$$
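The equivalence between the Kalman form of the mean update and the Tikhonov minimizer, whose normal equations give $z = \bar{z} + (H^T\Gamma^{-1}H + (C^f)^{-1})^{-1} H^T\Gamma^{-1}(y - H\bar{z})$, is the Sherman-Morrison-Woodbury identity. It can be checked numerically on small random matrices; all sizes and matrices below are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 5, 3                                    # dim(X) = p, dim(Y) = q
M = rng.standard_normal((p + q, p + q))
Cf = M @ M.T + 0.1 * np.eye(p + q)             # SPD stand-in for the ensemble covariance
Gam = np.diag(rng.uniform(0.5, 1.5, q))        # noise covariance Γ
H = np.hstack([np.zeros((q, p)), np.eye(q)])   # H = (0, I)
zbar = rng.standard_normal(p + q)
yobs = rng.standard_normal(q)

innov = yobs - H @ zbar
# Kalman form of the update.
z_kalman = zbar + Cf @ H.T @ np.linalg.solve(H @ Cf @ H.T + Gam, innov)
# Tikhonov form from the normal equations.
z_tikh = zbar + np.linalg.solve(H.T @ np.linalg.solve(Gam, H) + np.linalg.inv(Cf),
                                H.T @ np.linalg.solve(Gam, innov))
print(np.allclose(z_kalman, z_tikh))           # True: the two forms agree
```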

  11. Ensemble Kalman methods for inverse problems. Kalman as a Tikhonov-regularized linear inverse problem:
$$\bar{z}_{n+1} = \arg\min_{z} \Big\{ \big\|\Gamma^{-1/2}(y - Hz)\big\|^2 + \big\|(C^{f})^{-1/2}(z - \bar{z}_n)\big\|_{Z}^2 \Big\}.$$
In summary, this iterative method solves a sequence of linear inverse problems: given $y$, find $z$ such that $y = Hz$.
