
Regularization of Inverse Problems. Matthias J. Ehrhardt, January 28, 2019.



  1. Regularization of Inverse Problems. Matthias J. Ehrhardt, January 28, 2019.

  2. What is an Inverse Problem?
  - A : U → V, a mapping between Hilbert spaces U and V, with A ∈ L(U, V)
  - physical model A, cause u, and effect A(u) = Au
  Direct / Forward Problem: given u, calculate Au.

  3.–5. What is an Inverse Problem? (continued)
  - Example 1: the ray transform (used in CT, PET, ...)
    A : L²(Ω) → L²([0, 2π] × [−1, 1]),   Au(θ, s) = ∫_ℝ u(sθ + tθ⊥) dt
    [Figure: ray geometry with direction θ, normal θ⊥, offset s, and integration variable t.]

  6. What is an Inverse Problem? (continued)
  Inverse Problem: given v, calculate u with Au = v. Infer the cause from the effect.
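A minimal numerical sketch of the forward problem (my own toy example, not from the slides): at angles 0° and 90°, the ray transform of a pixel image reduces to column and row sums.

```python
import numpy as np

# Toy discrete ray transform: for a 2D image u, the projections at
# angles 0 and 90 degrees are simply the column sums and the row sums.
u = np.zeros((4, 4))
u[1:3, 1:3] = 1.0            # a small square object

proj_0 = u.sum(axis=0)       # integrate along vertical rays (theta = 0)
proj_90 = u.sum(axis=1)      # integrate along horizontal rays (theta = 90 deg)

print(proj_0)                # [0. 2. 2. 0.]
print(proj_90)               # [0. 2. 2. 0.]
```

Real CT/PET models discretize many angles and interpolate along oblique rays, but the principle is the same: the forward problem is easy to evaluate.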

  7.–10. What is the problem with Inverse Problems? Examples: a solution may
  - not exist: e.g. Au = 0 for all u, but v ≠ 0
  - not be unique: e.g. Au = 0 for all u, and v = 0
  - be sensitive to noise: e.g. Positron Emission Tomography (PET).
    Data: a PET scanner in London. Model: the ray transform, Au(L) = ∫_L u(r) dr.
    Find u such that Au = v.
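The third failure mode can be reproduced in a few lines (a hypothetical diagonal operator standing in for the ray transform; the decay 1/j² and the noise level 10⁻³ are arbitrary illustrative choices):

```python
import numpy as np

# Diagonal toy operator whose singular values decay like 1/j^2.
n = 50
j = np.arange(1, n + 1)
A = np.diag(1.0 / j**2)

u_true = np.ones(n)
v = A @ u_true

v_noisy = v.copy()
v_noisy[-1] += 1e-3          # perturb a single measurement by 0.001

u_naive = np.linalg.solve(A, v_noisy)   # exact inversion of the noisy data
data_error = np.linalg.norm(v_noisy - v)           # 0.001
solution_error = np.linalg.norm(u_naive - u_true)  # 2.5
print(data_error, solution_error)
```

A data error of 10⁻³ becomes a solution error of 2.5: the inversion amplifies the perturbation by the reciprocal of the smallest singular value, n² = 2500.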

  11.–12. What is the problem with Inverse Problems?
  Definition (Jacques Hadamard, 1865–1963): An inverse problem "Au = v" is called well-posed if the solution
  (1) exists,
  (2) is unique, and
  (3) depends continuously on the data: "small errors in v lead to small errors in u."
  Otherwise, we call it ill-posed. Almost all interesting inverse problems are ill-posed.

  13.–14. Generalized Solutions
  Definition: Let v ∈ V. The set of all approximate solutions of "Au = v" is
      L := { u ∈ U : ‖Au − v‖ ≤ ‖Az − v‖ for all z ∈ U }.
  If a solution z ∈ U exists, i.e. ‖Az − v‖ = 0, then L = { u ∈ U : Au = v }.
  Definition: An approximate solution ū ∈ L is called the minimal-norm solution if ‖ū‖ ≤ ‖u‖ for all u ∈ L.
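In finite dimensions, the minimal-norm solution is exactly what the Moore–Penrose pseudoinverse computes; NumPy's `pinv` illustrates this on an underdetermined 1×2 system (my own example):

```python
import numpy as np

# "Au = v" with one equation and two unknowns: u_1 + u_2 = 2.
# Every point on that line solves the problem; the minimal-norm
# solution is the one closest to the origin, (1, 1).
A = np.array([[1.0, 1.0]])
v = np.array([2.0])

u_min = np.linalg.pinv(A) @ v
print(u_min)                 # [1. 1.]
```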

  15.–19. Properties of Minimal-Norm Solutions
  Recall:
  - Range / image of A: R_A := { v ∈ V | ∃ u ∈ U : Au = v }
  - Orthogonal complement: A⊥ := { v ∈ V | ⟨v, z⟩ = 0 for all z ∈ A }
  - Minkowski sum: A + B := { u + v | u ∈ A, v ∈ B }
  Proposition: R_A is closed if and only if R_A + R_A⊥ = V.
  Example: A : ℓ² → ℓ², (Au)_j = u_j / j. The range R_A is not closed.
  Theorem: Let v ∈ R_A + R_A⊥. Then there exists a unique minimal-norm solution u of "Au = v". We write A†v = u.
  Theorem: If R_A is not closed, then u does not depend continuously on v, i.e. A† is not continuous.
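The discontinuity of A† for the ℓ² example can be glimpsed through its finite truncations (a sketch; in any fixed dimension A† is of course bounded):

```python
import numpy as np

# Truncations of (Au)_j = u_j / j to n dimensions.  The pseudoinverse of each
# truncation is diag(1, 2, ..., n), so its operator norm is n: it grows without
# bound, reflecting that A† on l^2 cannot be continuous (range not closed).
norms = []
for n in (10, 100, 1000):
    A = np.diag(1.0 / np.arange(1, n + 1))
    norms.append(np.linalg.norm(np.linalg.pinv(A), 2))
print(norms)                 # approximately [10.0, 100.0, 1000.0]
```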

  20.–25. Regularization
  [Figure: between U and V, exact data v with u = A†v; for noisy data v^δ, the reconstruction A†v^δ is far from u, while the regularized reconstruction R_α v^δ stays close.]
  Definition: A family { R_α }_{α>0} is called a regularization of A† if
  - for all α > 0 the mapping R_α : V → U is continuous, and
  - for all v ∈ R_A + R_A⊥: lim_{α→0} R_α v = A†v.
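The limit property in the definition can be checked numerically with the Tikhonov family R_α = (AᵀA + αI)⁻¹Aᵀ from the slides that follow, applied to a truncation of the earlier ℓ² example with exact data:

```python
import numpy as np

# Truncated l^2 example (Au)_j = u_j / j with exact data v = A u_true.
n = 20
A = np.diag(1.0 / np.arange(1, n + 1))
v = A @ np.ones(n)                 # v lies in the range of A
u_dagger = np.linalg.pinv(A) @ v   # here just the vector of ones

errors = []
for alpha in (1e-1, 1e-4, 1e-8):
    R_alpha_v = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ v)
    errors.append(np.linalg.norm(R_alpha_v - u_dagger))
print(errors)                      # decreases towards 0 as alpha -> 0
```

Each R_α is a continuous linear map, and on exact data the reconstructions converge to A†v: both defining properties at once.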

  26.–30. Popular examples of regularization
  Tikhonov regularization (Andrey Tikhonov, 1906–1993):
      R_α v^δ = argmin_u { ‖Au − v^δ‖² + α‖u‖² } = (A*A + αI)⁻¹ A* v^δ
  Proposition: (A*A + αI)⁻¹ ∈ L(U, U) for all α > 0.
  Variational regularization:
      R_α v^δ = argmin_u { D(Au, v^δ) + α J(u) }
  - data fit D: a "divergence", D(x, y) ≥ 0 with D(x, y) = 0 iff x = y.
    Examples: D(x, y) = ‖x − y‖², ‖x − y‖₁, ∫ (x − y + y log(y/x))
  - regularizer J: penalizes unwanted features and ensures stability.
    Examples: J(u) = ‖u‖², ‖u‖₁, TV(u) = ‖∇u‖₁
  - This decouples the solution of the inverse problem into 2 steps:
    1. Modelling: choose D, J, A, α.
    2. Optimization: connections to statistics, machine learning, ...
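A numerical sketch of Tikhonov at work (toy diagonal operator and a hand-picked α = 10⁻⁴; both are illustrative assumptions, not recommendations):

```python
import numpy as np

# Diagonal toy operator with singular values 1/j^2 and one noisy measurement.
n = 50
j = np.arange(1, n + 1)
A = np.diag(1.0 / j**2)

u_true = np.zeros(n)
u_true[0] = 1.0                    # a cause seen by the well-conditioned part of A

v_delta = A @ u_true
v_delta[-1] += 1e-3                # one perturbed measurement

u_naive = np.linalg.solve(A, v_delta)
alpha = 1e-4
u_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ v_delta)

err_naive = np.linalg.norm(u_naive - u_true)   # 2.5: noise amplified by n^2
err_tik = np.linalg.norm(u_tik - u_true)       # ~0.004: noise damped by alpha
print(err_naive, err_tik)
```

The price of the damping is a small bias in the recovered components; choosing α to balance bias against noise amplification is the central practical question.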

