  1. INDAM intensive period “Computational Methods for Inverse Problems in Imaging”, Conclusive Workshop, Como, Italy, August 16-18, 2018. (Numerical linear algebra, regularization in Banach spaces and) variable exponent Lebesgue spaces for adaptive regularization. Claudio Estatico, Dipartimento di Matematica, Università di Genova, Italy.

  2. Related joint works:
  • Image restoration, acceleration and preconditioning: Paola Brianzi, Fabio Di Benedetto; Pietro Dell’Acqua; Marco Donatelli. DIMA, Univ. Genova; DISAT, Univ. dell’Aquila; Univ. Insubria, Como.
  • Microwave inverse scattering: Alessandro Fedeli, Matteo Pastorino and Andrea Randazzo. Dip. di Ingegneria DITEN, Univ. Genova.
  • Remote sensing: Flavia Lenti, Serge Gratton; David Titley-Peloquin; Maurizio Migliaccio, Ferdinando Nunziata; Federico Benvenuto, Alberto Sorrentino. CERFACS and ENSEEIHT, Université de Toulouse; Dep. of Bioresource Engineering, McGill University, Canada; Dip. per le Tecnologie, Univ. Napoli Parthenope; Dip. di Matematica (DIMA), Univ. Genova.
  • Subsurface prospecting: Gian Piero Deidda; Patricia Diaz De Alba, Caterina Fenu, Giuseppe Rodriguez. Dip. di Ing. Civile, Dip. di Matematica e Informatica, Univ. Cagliari.

  3. Introduction: linear equations and inverse problems in Hilbert vs Banach space settings

  4. Inverse problem. From the knowledge of some “observed” data y (i.e., the effect), find an approximation of some model parameters x (i.e., the cause). Given the (noisy) data y ∈ Y, find (an approximation of) the unknown x ∈ X such that
  Ax = y,
  where A : X → Y is a known linear operator and X, Y are two functional (here Hilbert or Banach) spaces. [Figure: true image, blurred (noisy) image, restored image.] Inverse problems are usually ill-posed; they need regularization techniques.
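Not part of the original slides: a minimal numerical sketch of this ill-posedness, assuming a small Gaussian convolution matrix as the blurring operator A. A tiny perturbation of the data y destroys the naive solution of Ax = y.

```python
import numpy as np

# Illustrative 1-D "blurring" operator A: a Gaussian convolution matrix.
n = 64
t = np.arange(n)
A = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)
A /= A.sum(axis=1, keepdims=True)              # normalize rows

x_true = np.zeros(n)
x_true[20:30] = 1.0                            # a piecewise-constant "image"
y = A @ x_true                                 # exact data
y_noisy = y + 1e-6 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.solve(A, y_noisy)          # naive inversion: noise explodes
print("cond(A) =", np.linalg.cond(A))
print("relative error of naive solution:",
      np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true))
```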

  5. Numerical linear algebra lives in Hilbert spaces. A Hilbert space is a complete vector space endowed with a scalar product, which allows (lengths and) angles to be defined and computed. In any Hilbert space, what happens in our Euclidean world (that is, in our part of the universe...) always holds:
  • Pythagorean theorem: if u ⊥ v, then ‖u + v‖^2 = ‖u‖^2 + ‖v‖^2
  • Parallelogram identity: ‖u + v‖^2 + ‖u − v‖^2 = 2(‖u‖^2 + ‖v‖^2)
  • Gradient of (the square of) the norm: ∇(1/2 ‖u‖^2) = u
  The scalar product also allows for a natural definition of:
  • the orthogonal projection (best approximation) of u on v: P(u) = ⟨u, v⟩ v / ‖v‖^2
  • the SVD decomposition (or eigenvalue/eigenvector decomposition) of linear operators (for separable Hilbert spaces, such as any finite-dimensional Hilbert space...)
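A quick numerical check of these Hilbert-space identities (not in the slides; plain NumPy on R^5, all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)
u_perp = u - (u @ v) / (v @ v) * v             # component of u orthogonal to v

# Pythagorean theorem (requires orthogonality):
assert np.isclose(np.linalg.norm(u_perp + v) ** 2,
                  np.linalg.norm(u_perp) ** 2 + np.linalg.norm(v) ** 2)

# Parallelogram identity (holds for any u, v):
assert np.isclose(np.linalg.norm(u + v) ** 2 + np.linalg.norm(u - v) ** 2,
                  2 * (np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2))

# Gradient of 0.5*||u||^2 is u itself (finite-difference check, coordinate 0):
h = 1e-7
e0 = np.zeros(5); e0[0] = h
f = lambda w: 0.5 * np.linalg.norm(w) ** 2
assert np.isclose((f(u + e0) - f(u)) / h, u[0], atol=1e-4)

# Orthogonal projection (best approximation) of u on v:
P_u = (u @ v) / (v @ v) * v
```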

  6. Regularization: the “classical” framework in Hilbert spaces. In general, all the regularization methods for ill-posed functional equations have been deeply investigated in the context of Hilbert spaces.
  Benefits: Any linear (or linearized) operator between Hilbert spaces can be decomposed into a set of eigenfunctions by means of conventional spectral theory. This way, the convergence and regularization properties of any solving method can be analyzed by considering the behavior of each single eigenfunction (i.e., we can use the SVD decomposition...).
  Drawback: Regularization methods in Hilbert spaces usually give rise to smooth (and over-smooth) solutions. In image deblurring, regularization methods in Hilbert spaces do not allow one to obtain a good localization of the edges.
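As an illustration of this eigencomponent viewpoint (not in the slides), a minimal truncated-SVD sketch; the function name and the truncation level k are assumptions:

```python
import numpy as np

def tsvd_solve(A, y, k):
    """Truncated-SVD regularization: keep only the k largest singular components.

    Dropping the small singular values removes the eigencomponents along which
    the noise in y is most amplified, at the price of a smoother
    (over-smoothed) solution, as the slide remarks.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ y)[:k] / s[:k]
    return Vt[:k].T @ coeffs
```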

  7. Regularization: the “recent” framework in Banach spaces. More recently, some regularization methods have been introduced and investigated in Banach spaces. In a general Banach space, only distances between its elements can be defined and measured; no scalar product (thus no “angle”) is available.
  Benefits: Thanks to the geometrical properties of Banach spaces, these regularization methods allow us to obtain solutions with lower over-smoothness, which results, for instance, in a better localization and restoration of the discontinuities in imaging applications. Another useful property of regularization in Banach spaces is that solutions are sparser, that is, in general they can be represented by very few components.
  Drawback: The mathematics is much more involved (spectral theory cannot be used...). Convex analysis is required.

  8. Regularization: Hilbert vs Banach spaces

                Hilbert spaces                  Banach spaces
    Spaces      L^2(R^n), H^s = W^{s,2}         L^p(R^n), W^{s,p}, p ≥ 1; BV
    Benefits    Easier computation              Better restoration of the
                (spectral theory,               discontinuities;
                eigencomponents)                sparse solutions
    Drawbacks   Over-smoothness (bad            Theoretically tricky
                localization of edges)          (convex analysis required)

  Recap: In Banach spaces we have no scalar product (so, no orthogonal projection), no Pythagorean theorem, no SVD... The (sub-)gradient of (the square of) the norm is not linear, so that the least squares solution (of a linear problem) is no longer a linear problem... Examples: L^1 for sparse recovery, or L^p, 1 < p < 2, for edge restoration.

  9. [Figure: for L^2(R^2), L^1.2(R^2) and L^5(R^2), plots over [−5, 5] of the functionals (1/2)‖x‖^2, (1/1.2)‖x‖^1.2, (1/5)‖x‖^5 and of their gradients ∇((1/2)‖x‖^2), ∇((1/1.2)‖x‖^1.2), ∇((1/5)‖x‖^5).]
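A sketch that reproduces this kind of figure (an assumption of what was plotted, based on the panel titles; shown here in one dimension):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 400)
fig, axes = plt.subplots(2, 3, figsize=(9, 5))
for col, p in enumerate([2, 1.2, 5]):
    axes[0, col].plot(x, np.abs(x) ** p / p)                 # 1/p |x|^p
    axes[0, col].set_title(f"1/{p} |x|^{p}")
    axes[1, col].plot(x, np.sign(x) * np.abs(x) ** (p - 1))  # its derivative
    axes[1, col].set_title(f"d/dx 1/{p} |x|^{p}")
plt.tight_layout()
plt.show()
```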

  10. Regularization in Hilbert and Banach spaces
  I. Iterative minimization algorithms: gradient-like methods (Landweber, CG)
  II. Iterative projection algorithms: SIRT (Cimmino, DROP, ...)
  III. Preconditioning: trigonometric matrix algebra preconditioners
  IV. Multi-parameter and adaptive regularization: the variable exponent Lebesgue spaces

  11. Regularization in Hilbert and Banach spaces: I. Iterative minimization algorithms

  12. Iterative minimization algorithms. In the variational approach, instead of solving the operator equation Ax = y directly, we minimize the residual functional H : X → [0, +∞),
  H(x) = (1/r) ‖Ax − y‖_Y^r .
  Basic iterative scheme:
  x_{k+1} = x_k − λ_k Φ_A(x_k, y),
  where λ_k > 0 is the step length and the operator Φ_A : X × Y → X returns (an approximation of) the “gradient” of the functional (1/r)‖Ax − y‖_Y^r at the point x_k:
  Φ_A(x_k, y) ≈ ∇H(x_k) = ∇( (1/r) ‖Ax_k − y‖_Y^r ).
  This way, the iterative schemes are all different generalizations of the basic gradient descent method.
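A minimal sketch of this basic scheme (not in the slides); phi and step are placeholders for Φ_A and the step-length rule λ_k:

```python
import numpy as np

def iterative_scheme(phi, x0, y, step, n_iter=200):
    """Basic scheme x_{k+1} = x_k - lambda_k * Phi_A(x_k, y).

    phi(x, y) : (approximate) gradient of the residual functional at x
    step(k)   : step length lambda_k > 0 at iteration k
    """
    x = x0.copy()
    for k in range(n_iter):
        x = x - step(k) * phi(x, y)
    return x

# For r = 2 in Hilbert spaces, phi is x -> A.T @ (A @ x - y); see the next slide.
```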

  13. The Landweber algorithm in Hilbert spaces. In the conventional case (i.e., both X and Y are Hilbert spaces) we consider the least squares functional
  H_2(x) = (1/2) ‖Ax − y‖_Y^2 .
  Direct computation of ∇H_2 by the chain rule for derivatives (in R^n),
  ∇H_2(x) = [J(Ax − y)]* (∇ (1/2)‖u‖^2)|_{u = Ax−y} = ((Ax − y)* A)* = A*(Ax − y),
  leads to the “simplest” iterative method, the Landweber algorithm:
  x_{k+1} = x_k − λ A*(Ax_k − y) .
  Since H_2 is convex, there exists a non-empty set of stationary points, i.e. points where ∇H_2(x) = 0, which are all minimum points of H_2. A constant step size λ ∈ (0, 2/‖A*A‖) yields H_2(x_{k+1}) < H_2(x_k) and also guarantees the convergence of the iterations.
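A direct NumPy transcription of this iteration (not in the slides); the iteration count is an arbitrary choice, since in practice it acts as the regularization parameter and is chosen by a stopping rule:

```python
import numpy as np

def landweber(A, y, n_iter=500):
    """Landweber iteration x_{k+1} = x_k - lam * A^T (A x_k - y)."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2      # constant step in (0, 2/||A*A||)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - lam * (A.T @ (A @ x - y))
    return x
```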

  14. Towards the Landweber algorithm in Banach spaces (I)
  x_{k+1} = x_k − λ A*(Ax_k − y)
  Formally, A* is the dual operator of A, that is, the operator A* : Y* → X* such that
  (A*y*)(x) = y*(Ax), ∀ x ∈ X, ∀ y* ∈ Y*,
  where X* and Y* are the dual spaces of X and Y. If X and Y are Hilbert spaces, then X is isometrically isomorphic to X* and Y is isometrically isomorphic to Y* (by virtue of the Riesz representation theorem), so that the operator A* : Y* → X* can be identified with A* : Y → X. However, in general, Banach spaces are not isometrically isomorphic to their duals. This way, the Landweber iteration above is well defined in Hilbert spaces (...only!). The key point: to generalize from Hilbert to Banach spaces we have to consider the so-called duality maps.

  15. Towards the Landweber algorithm in Banach spaces (II). The computation of the (sub-)gradient of the residual functional requires the (sub-)gradient of the norm of the Banach space involved. Here the key tool is the so-called “duality map”, which maps a Banach space B into its dual B*.
  Theorem (Asplund, Acta Math., 1968). Let B be a Banach space and let r > 1. The duality map J_r^B is the (sub-)gradient of the convex functional f : B → R defined as f(u) = (1/r)‖u‖_B^r, that is (with abuse of notation...):
  J_r^B = ∇f = ∇( (1/r) ‖·‖_B^r ).
  Again by the chain rule, the (sub-)gradient of the residual (1/r)‖Ax − y‖_Y^r can be computed, by means of the duality map J_r^Y, as
  ∇( (1/r) ‖Ax − y‖_Y^r ) = A* J_r^Y (Ax − y).
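Not in the slides: for the sequence space ℓ^p with 1 < p < ∞, the duality map has the well-known closed form J_r(u) = ‖u‖_p^{r−p} |u|^{p−1} sign(u), acting componentwise; a minimal sketch:

```python
import numpy as np

def duality_map(u, p, r):
    """Duality map J_r of ell^p (1 < p < inf), applied componentwise:
    J_r(u) = ||u||_p^(r-p) * |u|^(p-1) * sign(u)."""
    norm = np.linalg.norm(u, p)
    if norm == 0.0:
        return np.zeros_like(u)                # J_r(0) = 0
    return norm ** (r - p) * np.abs(u) ** (p - 1) * np.sign(u)
```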

  16. The Landweber method in Banach spaces. Let r > 1 be a fixed weight value. Let x_0 ∈ X be an initial guess (the null vector x_0 = 0 ∈ X is often used in the applications), and set x*_0 = J_r^X(x_0) ∈ X*. For k = 0, 1, 2, ...:
  x*_{k+1} = x*_k − λ_k A* J_r^Y (Ax_k − y),
  x_{k+1} = J_{r*}^{X*} (x*_{k+1}),
  where r* is the Hölder conjugate of r, that is, 1/r + 1/r* = 1, and the step sizes λ_k are suitably chosen. Here the duality map J_r^X : X → 2^{X*} acts on the iterates x_k ∈ X, and the duality map J_{r*}^{X*} : X* → 2^{X**} acts on the dual iterates x*_k ∈ X*. In order for the method to be well defined, it is only required that the space X is reflexive, that is, X** is isometrically isomorphic to X, so that the range of J_{r*}^{X*} can be identified with (a subset of) X.
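A sketch of this dual-space iteration for X = ℓ^p and Y = ℓ^2, reusing the duality_map function above; the constant step size is a rough assumption (in practice λ_k is chosen adaptively, e.g. from the residual norm):

```python
import numpy as np

def landweber_banach(A, y, p=1.2, r=2.0, n_iter=500):
    """Dual-space Landweber iteration for X = ell^p, Y = ell^2."""
    p_star = p / (p - 1)                       # Hoelder conjugate of p
    r_star = r / (r - 1)                       # Hoelder conjugate of r
    lam = 1.0 / np.linalg.norm(A, 2) ** r      # rough constant step (assumption)
    x = np.zeros(A.shape[1])                   # x_0 = 0, hence x*_0 = 0 as well
    x_dual = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # x*_{k+1} = x*_k - lam * A^T J_r^Y (A x_k - y); J_r^Y is the p = 2 map
        x_dual = x_dual - lam * (A.T @ duality_map(A @ x - y, 2.0, r))
        # back to the primal space: x_{k+1} = J_{r*}^{X*}(x*_{k+1})
        x = duality_map(x_dual, p_star, r_star)
    return x
```

For small p (close to 1) this iteration promotes sparse solutions, matching the L^1/L^p discussion on slide 8.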
