Total Variation in Image Analysis (The Homo Erectus Stage?)
François Lauze, Department of Computer Science, University of Copenhagen
Hólar Summer School on Sparse Coding, August 2010

Outline
1 Motivation: origin and uses of Total Variation; denoising; Tikhonov regularization; 1-D computation on step edges
2 Total Variation I: first definition; Rudin-Osher-Fatemi; inpainting/denoising
3 Total Variation II: relaxing the derivative constraints; the definition in action; using the new definition in denoising (Chambolle algorithm); image simplification
4 Bibliography
5 The End


Motivation / Tikhonov regularization

It can be shown that this is equivalent to minimizing

    E(u) = ‖Ku − u₀‖² + λ‖Ru‖²

for a λ = λ(σ) (Wahba?). Minimization of E(u) can also be derived from a Maximum a Posteriori formulation:

    arg max_u p(u | u₀) = arg max_u p(u₀ | u) p(u) / p(u₀).

Rewriting in a continuous setting:

    E(u) = ∫_Ω (Ku − u₀)² dx + λ ∫_Ω |∇u|² dx.

Motivation / Tikhonov regularization / How to solve?

The solution satisfies the Euler-Lagrange equation for E:

    K*(Ku − u₀) − λ Δu = 0        (K* is the adjoint of K).

A linear equation, easy to implement, and many fast solvers exist, but...
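
As a concrete illustration (not from the slides), here is a minimal NumPy sketch for the pure denoising case K = Id, where the Euler-Lagrange equation u − λΔu = u₀ can be solved directly in the Fourier domain under an assumed periodic boundary condition; the function name is illustrative.

```python
import numpy as np

def tikhonov_denoise(u0, lam):
    """Solve u - lam * laplacian(u) = u0 (the K = Id case) in the Fourier
    domain, assuming periodic boundary conditions."""
    ny, nx = u0.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Eigenvalues of the 5-point discrete Laplacian with periodic boundaries.
    lap = (2 * np.cos(2 * np.pi * fy) - 2) + (2 * np.cos(2 * np.pi * fx) - 2)
    u_hat = np.fft.fft2(u0) / (1 - lam * lap)   # lap <= 0, so 1 - lam*lap >= 1
    return np.real(np.fft.ifft2(u_hat))
```

Since lap ≤ 0 everywhere, the division is always well posed; this is the "easy to implement" part. The edge blurring discussed next is a property of the model, not of the solver.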

Motivation / Tikhonov regularization / Tikhonov example

Denoising example, K = Id. [Figures: original image, results for λ = 50 and λ = 500.]

Not good: images contain edges, but Tikhonov blurs them. Why? The term ∫_Ω (u − u₀)² dx: not guilty! Then it must be the term ∫_Ω |∇u|² dx. Derivatives and step edges do not go too well together?

Motivation / 1-D computation on step edges

Set Ω = [−1, 1], a a real number, and u the step-edge function

    u(x) = 0 for x ≤ 0,    u(x) = a for x > 0.

It is not differentiable at 0, but forget about that and try to compute ∫_{−1}^{1} |u′(x)|² dx. Around 0, "approximate" u′(x) by (u(h) − u(−h)) / (2h), with h > 0 small.

Motivation / 1-D computation on step edges (continued)

With this finite difference approximation, u′(x) ≈ a/(2h) for x ∈ [−h, h], and then

    ∫_{−1}^{1} |u′(x)|² dx = ∫_{−1}^{−h} |u′|² dx + ∫_{−h}^{h} |u′|² dx + ∫_{h}^{1} |u′|² dx
                           = 0 + 2h × (a/(2h))² + 0
                           = a²/(2h) → ∞ as h → 0.

So a step edge has "infinite energy": it cannot minimize Tikhonov. What went "wrong": the square.

Motivation / 1-D computation on step edges (continued)

Replace the square in the previous computation by an exponent p > 0 and redo it. Then

    ∫_{−1}^{1} |u′(x)|^p dx = ∫_{−1}^{−h} |u′|^p dx + ∫_{−h}^{h} |u′|^p dx + ∫_{h}^{1} |u′|^p dx
                            = 0 + 2h × |a/(2h)|^p + 0
                            = |a|^p (2h)^{1−p} < ∞ when p ≤ 1.

When p ≤ 1 this is finite! Edges can survive here! Quite ugly when p < 1 (but not uninteresting). When p = 1, this is the Total Variation of u.
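
A quick numerical check of this computation (not part of the slides): with the step edge discretized by the same centred difference of width 2h, the p = 2 energy grows like a²/(2h) as h shrinks, while the p = 1 value stays at |a|.

```python
def step_edge_energy(a, h, p):
    """Energy of the step edge of height a on [-1, 1] when u' is replaced by
    the centred difference a/(2h) on [-h, h] and by 0 elsewhere."""
    return 2 * h * abs(a / (2 * h)) ** p

a = 3.0
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"h={h:g}  p=2: {step_edge_energy(a, h, 2):10.1f}  p=1: {step_edge_energy(a, h, 1):g}")
# The p = 2 column blows up like a**2/(2h); the p = 1 column is always |a| = 3.
```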

Total Variation I / First definition

Let u : Ω ⊂ ℝⁿ → ℝ. Define the total variation as

    J(u) = ∫_Ω |∇u| dx,    |∇u| = √( Σ_{i=1}^{n} u_{x_i}² ).

When J(u) is finite, one says that u has bounded variation, and the space of functions of bounded variation on Ω is denoted BV(Ω).
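
For images on a pixel grid, the discrete counterpart of J(u) is straightforward; here is a small sketch (not from the slides) using forward differences, the form that the denoising schemes later in the talk discretize.

```python
import numpy as np

def total_variation(u):
    """Discrete isotropic total variation of a 2-D array u (unit grid spacing,
    forward differences, replicated boundary so the last row/column add 0)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return float(np.sum(np.sqrt(ux ** 2 + uy ** 2)))

u = np.zeros((4, 4)); u[:, 2:] = 3.0
print(total_variation(u))   # 12.0 = jump height (3) times edge length (4 pixels)
```

For a discrete step edge this returns the jump height times the length of the edge, matching the 1-D computation above.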

Total Variation I / First definition (continued)

Expected: when minimizing J(u) with other constraints, edges are less penalized than with Tikhonov. Indeed, edges are "naturally present" in bounded variation functions. In fact, functions of bounded variation can be decomposed into
1 smooth parts, where ∇u is well defined,
2 jump discontinuities (our edges),
3 something else (the Cantor part), which can be nasty...
The functions that do not possess this nasty part form a subspace of BV(Ω) called SBV(Ω), the Special functions of Bounded Variation (used for instance when studying the Mumford-Shah functional).

Total Variation I / Rudin-Osher-Fatemi / ROF denoising

State the denoising problem as minimizing J(u) under the constraints

    ∫_Ω u dx = ∫_Ω u₀ dx,    ∫_Ω (u − u₀)² dx = |Ω| σ²

(|Ω| = area/volume of Ω). Solve via Lagrange multipliers.

Total Variation I / Rudin-Osher-Fatemi / TV denoising

Chambolle-Lions: there exists λ such that the solution minimizes

    E_TV(u) = (1/2) ∫_Ω (Ku − u₀)² dx + λ ∫_Ω |∇u| dx.

Euler-Lagrange equation:

    K*(Ku − u₀) − λ div( ∇u / |∇u| ) = 0.

The term div(∇u / |∇u|) is highly non-linear, with problems especially when |∇u| = 0. In fact, (∇u / |∇u|)(x) is the unit normal of the level line of u at x, and div(∇u / |∇u|) is the (mean) curvature of the level line: not defined when the level line is singular or does not exist!
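
To see the |∇u| = 0 problem concretely, here is a small sketch (not from the slides) that evaluates div(∇u / |∇u|) with finite differences: on an image with a flat region the raw quotient produces 0/0, while the regularization introduced on the next slide keeps it finite.

```python
import numpy as np

def curvature(u, beta=0.0):
    """div( grad u / sqrt(|grad u|^2 + beta) ) with centred differences;
    beta = 0 is the raw TV Euler-Lagrange term."""
    uy, ux = np.gradient(u)
    norm = np.sqrt(ux ** 2 + uy ** 2 + beta)
    vy, _ = np.gradient(uy / norm)     # d/dy of the y-component
    _, vx = np.gradient(ux / norm)     # d/dx of the x-component
    return vx + vy

u = np.zeros((5, 5))
u[:, 3:] = 1.0                         # a step edge, flat away from the jump
print(curvature(u))                    # nan where |grad u| = 0 (numpy warns about 0/0)
print(curvature(u, beta=1e-4))         # finite everywhere
```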

Total Variation I / Rudin-Osher-Fatemi / Acar-Vogel

Replace |∇u| by the regularized version

    |∇u|_β = √( |∇u|² + β ),    β > 0.

Acar and Vogel show that

    lim_{β→0} J_β(u) = J(u),    where J_β(u) = ∫_Ω |∇u|_β dx.

Replace the energy by

    E′(u) = ∫_Ω (Ku − u₀)² dx + λ J_β(u).

Euler-Lagrange equation:

    K*(Ku − u₀) − λ div( ∇u / |∇u|_β ) = 0.

The null-denominator problem disappears.

Total Variation I / Rudin-Osher-Fatemi / Example

Implementation by finite differences, fixed-point strategy, linearization; λ = 1.5, β = 10⁻⁴. [Figures: original and denoised images.]
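
A minimal sketch of such a scheme for K = Id (not the implementation used for the slides): explicit gradient descent on the regularized energy, which is simpler than, but in the spirit of, the fixed-point/linearization strategy mentioned above. Parameter names are illustrative, and the step size must be small because the diffusivity can reach 1/√β.

```python
import numpy as np

def tv_denoise(u0, lam=1.5, beta=1e-4, tau=1e-3, n_iter=2000):
    """TV denoising (K = Id) by explicit gradient descent: each step moves
    against the Euler-Lagrange residual (u - u0) - lam * div(grad u / |grad u|_beta).
    tau should be of order sqrt(beta)/lam or smaller for stability."""
    u = u0.copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        norm = np.sqrt(ux ** 2 + uy ** 2 + beta)
        vy, _ = np.gradient(uy / norm)
        _, vx = np.gradient(ux / norm)
        curv = vx + vy                      # div( grad u / |grad u|_beta )
        u = u - tau * ((u - u0) - lam * curv)
    return u

# Toy usage: a noisy step edge is smoothed while the jump is preserved.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```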

Total Variation I / Inpainting/Denoising

Fill in u on the subset H ⊂ Ω where data is missing, and denoise the known data. Inpainting energy (Chan & Shen):

    E_ITV(u) = (1/2) ∫_{Ω∖H} (u − u₀)² dx + λ ∫_Ω |∇u| dx.

Euler-Lagrange equation:

    (u − u₀) χ − λ div( ∇u / |∇u| ) = 0

(χ(x) = 1 if x ∉ H, 0 otherwise). Very similar to denoising; one can use the same approximation/implementation.
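
Because the only change with respect to denoising is the mask χ in the data term, the same kind of gradient-descent sketch needs a single extra factor; again an illustrative sketch, assuming chi is a 0/1 array equal to 0 on the hole H.

```python
import numpy as np

def tv_inpaint(u0, chi, lam=1.5, beta=1e-4, tau=1e-3, n_iter=2000):
    """TV inpainting/denoising: the data term is switched off where chi == 0
    (the hole H), so only the TV term drives the evolution there."""
    u = u0.copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        norm = np.sqrt(ux ** 2 + uy ** 2 + beta)
        vy, _ = np.gradient(uy / norm)
        _, vx = np.gradient(ux / norm)
        u = u - tau * (chi * (u - u0) - lam * (vx + vy))
    return u
```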

Total Variation I / Inpainting/Denoising / Example

[Figures: degraded image and inpainted result.]

Total Variation I / Inpainting/Denoising / Segmentation

Inpainting-driven segmentation (Lauze, Nielsen 2008, IJCV). [Figures: aortic calcification, detection, segmentation.]

Total Variation II / Relaxing the derivative constraints

With the definition of total variation as

    J(u) = ∫_Ω |∇u| dx,

u must have (weak) derivatives. But we just saw that the computation is possible for a step edge u(x) = 0 for x < 0, u(x) = a for x > 0:

    ∫_{−1}^{1} |u′(x)| dx = |a|.

Can we avoid the use of derivatives of u?

Total Variation II / Relaxing the derivative constraints (continued)

Assume first that ∇u exists. Then |∇u| = ∇u · (∇u / |∇u|) (except where ∇u = 0), and ∇u / |∇u| is the normal to the level lines of u; it has norm 1 everywhere. Let V be the set of vector fields v(x) on Ω with |v(x)| ≤ 1. I claim that

    J(u) = sup_{v ∈ V} ∫_Ω ∇u(x) · v(x) dx

(a consequence of the Cauchy-Schwarz inequality: ∇u · v ≤ |∇u| |v| ≤ |∇u|, with equality for v = ∇u / |∇u|).

Total Variation II / Relaxing the derivative constraints (continued)

Restrict to the set W of such v's that are differentiable and vanish on ∂Ω, the boundary of Ω. Then still

    J(u) = sup_{v ∈ W} ∫_Ω ∇u(x) · v(x) dx.

But then I can use the divergence theorem: for H ⊂ D ⊂ ℝⁿ, f : D → ℝ a differentiable function, g = (g₁, ..., g_n) : D → ℝⁿ a differentiable vector field, and div g = Σ_{i=1}^{n} (g_i)_{x_i},

    ∫_H ∇f · g dx = − ∫_H f div g dx + ∫_{∂H} f g · n(s) ds,

with n(s) the exterior normal field to ∂H. Applying it to J(u) above (the boundary integral vanishes because v = 0 on ∂Ω):

    J(u) = sup_{v ∈ W} ( − ∫_Ω u(x) div v(x) dx ).

The gradient of u has disappeared! This is the classical definition of total variation. Note that when ∇u(x) ≠ 0, the optimal v(x) is (∇u / |∇u|)(x), and div v(x) is the mean curvature of the level set of u at x. Geometry is there!

Total Variation II / Definition in action / Step edge

Let u be the step-edge function defined in the previous slides. We compute J(u) with the new definition. Here

    W = { φ : [−1, 1] → ℝ differentiable, φ(−1) = φ(1) = 0, |φ(x)| ≤ 1 },

    J(u) = sup_{φ ∈ W} ∫_{−1}^{1} u(x) φ′(x) dx.

We compute

    ∫_{−1}^{1} u(x) φ′(x) dx = a ∫_0^1 φ′(x) dx = a (φ(1) − φ(0)) = −a φ(0).

As −1 ≤ φ(0) ≤ 1, the maximum is |a|.
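
A quick numerical illustration of this computation (not from the slides): evaluating ∫ u φ′ dx on a grid for a few admissible test functions φ gives values that never exceed |a| and that reach it when φ(0) = −sign(a).

```python
import numpy as np

a = 3.0
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
u = np.where(x > 0, a, 0.0)                     # the step edge of the slides

def pairing(phi):
    """Approximate the dual pairing: integral over [-1, 1] of u * phi'."""
    return float(np.sum(u * np.gradient(phi, x)) * dx)

# Admissible phi: differentiable, phi(-1) = phi(1) = 0, |phi| <= 1 everywhere.
tests = {
    "sin(pi x)":       np.sin(np.pi * x),       # phi(0) =  0    -> ~0
    "-0.5 (1 - x^2)":  -0.5 * (1.0 - x ** 2),   # phi(0) = -0.5  -> ~1.5
    "-(1 - x^2)":      -(1.0 - x ** 2),         # phi(0) = -1    -> ~3 = |a|
}
for name, phi in tests.items():
    print(name, pairing(phi))
# The supremum over all admissible phi is |a| = 3, the total variation of u.
```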
