

SLIDE 1

Total Variation

Total Variation in Image Analysis (The Homo Erectus Stage?)

François Lauze

Department of Computer Science, University of Copenhagen

Hólar Summer School on Sparse Coding, August 2010

SLIDE 2

Total Variation

Outline

1. Motivation
   - Origin and uses of Total Variation
   - Denoising
   - Tikhonov regularization
   - 1-D computation on step edges
2. Total Variation I
   - First definition
   - Rudin-Osher-Fatemi
   - Inpainting/Denoising
3. Total Variation II
   - Relaxing the derivative constraints
   - Definition in action
   - Using the new definition in denoising: Chambolle algorithm
   - Image Simplification
4. Bibliography
5. The End


SLIDE 4

Total Variation / Motivation / Origin and uses of Total Variation

- In mathematics: the Plateau problem of minimal surfaces, i.e. surfaces of minimal area with a given boundary.
- In image analysis: denoising, image reconstruction, segmentation...
- A ubiquitous prior for many image processing tasks.


SLIDE 8

Total Variation / Motivation / Denoising

Denoising

Determine an unknown image from a noisy observation.

SLIDE 9

Total Variation / Motivation / Denoising

Methods

All methods are based on some statistical inference:

- Fourier/Wavelets
- Markov Random Fields
- Variational and Partial Differential Equation methods
- ...

We focus on variational and PDE methods.

SLIDE 10

Total Variation / Motivation / Denoising

A simple corruption model

A digital image u of size N × M pixels is corrupted by Gaussian white noise of variance σ². Write the observed image as u0 = u + η, with

$$\|u - u_0\|^2 = \sum_{ij} (u_{ij} - u_{0,ij})^2 = NM\sigma^2 \quad \text{(noise variance = } \sigma^2\text{)},$$

$$\sum_{ij} u_{ij} = \sum_{ij} u_{0,ij} \quad \text{(zero-mean noise)}.$$

One could add a blur degradation, u0 = Ku + η for instance, so as to have ‖Ku − u0‖² = NMσ².
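A minimal NumPy sketch of this corruption model (the function name and the optional blur argument are illustrative; K = None stands for K = Id):

```python
import numpy as np

def corrupt(u, sigma, K=None, rng=None):
    """Simulate u0 = K u + eta, with eta Gaussian white noise of std sigma.
    K is an optional linear (blur) operator given as a callable; None = Id."""
    rng = np.random.default_rng() if rng is None else rng
    Ku = u if K is None else K(u)
    return Ku + sigma * rng.standard_normal(u.shape)

# Sanity check: for K = Id, ||u - u0||^2 should be close to N*M*sigma^2.
N, M, sigma = 256, 256, 10.0
u = np.zeros((N, M))
u0 = corrupt(u, sigma)
print(np.sum((u - u0) ** 2), N * M * sigma**2)
```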

SLIDE 15

Total Variation / Motivation / Denoising

Recovery

The problem: find u such that

$$\|u - u_0\|^2 = NM\sigma^2, \qquad \sum_{ij} u_{ij} = \sum_{ij} u_{0,ij} \tag{1}$$

is not well-posed: many solutions are possible. In order to recover u, extra information is needed, e.g. in the form of a prior on u. For images, smoothness priors are often used. Let Ru be a digital gradient of u; then find the smoothest u that satisfies the constraints (1), "smoothest" meaning with the smallest

$$T(u) = \|Ru\|^2 = \sum_{ij} |(Ru)_{ij}|^2.$$


SLIDE 21

Total Variation / Motivation / Tikhonov regularization

Tikhonov regularization

It can be shown that this is equivalent to minimizing

$$E(u) = \|Ku - u_0\|^2 + \lambda \|Ru\|^2$$

for some λ = λ(σ) (Wahba?). Minimization of E(u) can be derived from a Maximum a Posteriori formulation:

$$\arg\max_u \; p(u \mid u_0) = \arg\max_u \; \frac{p(u_0 \mid u)\, p(u)}{p(u_0)}.$$

Rewriting in a continuous setting:

$$E(u) = \int_\Omega (Ku - u_0)^2 \, dx + \lambda \int_\Omega |\nabla u|^2 \, dx.$$

SLIDE 26

Total Variation / Motivation / Tikhonov regularization

How to solve?

The solution satisfies the Euler-Lagrange equation for E:

$$K^*(Ku - u_0) - \lambda \Delta u = 0$$

(K* is the adjoint of K). A linear equation, easy to implement, and many fast solvers exist, but...
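As one concrete instance: with K = Id and periodic boundary conditions, the 5-point discrete Laplacian is diagonalized by the 2-D FFT, so the linear equation (I − λΔ)u = u0 can be solved in one shot. A minimal sketch under those assumptions (the function name and boundary convention are illustrative, not from the slides):

```python
import numpy as np

def tikhonov_denoise(u0, lam):
    """Solve (I - lam * Laplacian) u = u0, the Euler-Lagrange equation of the
    Tikhonov energy for K = Id, assuming periodic boundary conditions so the
    5-point Laplacian is diagonal in the 2-D Fourier basis."""
    fy = np.fft.fftfreq(u0.shape[0])
    fx = np.fft.fftfreq(u0.shape[1])
    # Eigenvalues of the 5-point discrete Laplacian under periodic BC (all <= 0).
    lap = (2 * np.cos(2 * np.pi * fy)[:, None]
           + 2 * np.cos(2 * np.pi * fx)[None, :] - 4)
    return np.real(np.fft.ifft2(np.fft.fft2(u0) / (1.0 - lam * lap)))
```

Larger λ means stronger smoothing; the edge blurring visible on the next slide comes precisely from the |∇u|² penalty this solves for.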

SLIDE 29

Total Variation / Motivation / Tikhonov regularization

Tikhonov example

Denoising example, K = Id. [Figure: original image and results for λ = 50 and λ = 500.]

Not good: images contain edges, but Tikhonov blurs them. Why? The term ∫_Ω (u − u0)² dx: not guilty! Then it must be ∫_Ω |∇u|² dx. Derivatives and step edges do not go too well together?


SLIDE 34

Total Variation / Motivation / 1-D computation on step edges

Set Ω = [−1, 1], let a be a real number, and let u be the step-edge function

$$u(x) = \begin{cases} 0 & x \le 0 \\ a & x > 0. \end{cases}$$

It is not differentiable at 0, but forget about that and try to compute $\int_{-1}^{1} |u'(x)|^2 \, dx$. Around 0, "approximate" u′(x) by

$$\frac{u(h) - u(-h)}{2h}, \qquad h > 0 \text{ small.}$$

SLIDE 37

Total Variation / Motivation / 1-D computation on step edges

With this finite-difference approximation, u′(x) ≈ a/(2h) for x ∈ [−h, h]. Then

$$\int_{-1}^{1} |u'|^2 \, dx = \int_{-1}^{-h} |u'|^2 \, dx + \int_{-h}^{h} |u'|^2 \, dx + \int_{h}^{1} |u'|^2 \, dx = 0 + 2h \left(\frac{a}{2h}\right)^2 + 0 = \frac{a^2}{2h} \to \infty \text{ as } h \to 0.$$

So a step edge has "infinite energy": it cannot minimize the Tikhonov energy. What went "wrong": the square.

SLIDE 42

Total Variation / Motivation / 1-D computation on step edges

Replace the square in the previous computation by an exponent p > 0 and redo. Then

$$\int_{-1}^{1} |u'|^p \, dx = \int_{-1}^{-h} |u'|^p \, dx + \int_{-h}^{h} |u'|^p \, dx + \int_{h}^{1} |u'|^p \, dx = 0 + 2h \left(\frac{|a|}{2h}\right)^p + 0 = |a|^p (2h)^{1-p},$$

which stays finite as h → 0 when p ≤ 1. Edges can survive here! Quite ugly when p < 1 (but not uninteresting). When p = 1, this is the Total Variation of u.
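A quick numeric check of this computation: smooth the step into a linear ramp of half-width h and evaluate the regularizer for several p and h. This sketch is not from the slides; the grid resolution and parameter values are illustrative.

```python
import numpy as np

# On a ramp of height a and half-width h the regularizer should behave like
# |a|**p * (2*h)**(1 - p): divergent as h -> 0 for p > 1, finite for p <= 1.
a = 1.0
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
for p in (2.0, 1.0, 0.5):
    for h in (1e-1, 1e-2, 1e-3):
        u = a * np.clip((x + h) / (2 * h), 0.0, 1.0)  # linear ramp on [-h, h]
        integral = np.sum(np.abs(np.gradient(u, dx)) ** p) * dx
        predicted = abs(a) ** p * (2 * h) ** (1 - p)
        print(f"p={p}, h={h:.0e}: integral={integral:9.4f} predicted={predicted:9.4f}")
```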


SLIDE 49

Total Variation / Total Variation I / First definition

Let u : Ω ⊂ Rⁿ → R. Define its total variation as

$$J(u) = \int_\Omega |\nabla u| \, dx, \qquad |\nabla u| = \sqrt{\sum_{i=1}^{n} u_{x_i}^2}.$$

When J(u) is finite, one says that u has bounded variation, and the space of functions of bounded variation on Ω is denoted BV(Ω).

SLIDE 51

Total Variation / Total Variation I / First definition

Expected: when minimizing J(u) with other constraints, edges are less penalized than with Tikhonov. Indeed, edges are "naturally present" in bounded-variation functions. In fact, functions of bounded variation can be decomposed into

1. smooth parts, where ∇u is well defined,
2. jump discontinuities (our edges),
3. something else (the Cantor part), which can be nasty...

The functions that do not possess this nasty part form a subspace of BV(Ω) called SBV(Ω), the Special functions of Bounded Variation (used for instance when studying the Mumford-Shah functional).


SLIDE 62

Total Variation / Total Variation I / Rudin-Osher-Fatemi

ROF Denoising

State the denoising problem as minimizing J(u) under the constraints

$$\int_\Omega u \, dx = \int_\Omega u_0 \, dx, \qquad \int_\Omega (u - u_0)^2 \, dx = |\Omega| \sigma^2$$

(|Ω| = area/volume of Ω). Solve via Lagrange multipliers.

SLIDE 65

Total Variation / Total Variation I / Rudin-Osher-Fatemi

TV-denoising

Chambolle-Lions: there exists λ such that the solution minimizes

$$E_{TV}(u) = \frac{1}{2} \int_\Omega (Ku - u_0)^2 \, dx + \lambda \int_\Omega |\nabla u| \, dx.$$

Euler-Lagrange equation:

$$K^*(Ku - u_0) - \lambda \, \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) = 0.$$

The term div(∇u/|∇u|) is highly nonlinear, and problems arise especially when |∇u| = 0. In fact, (∇u/|∇u|)(x) is the unit normal of the level line of u at x, and div(∇u/|∇u|) is the (mean) curvature of the level line: not defined when the level line is singular or does not exist!

SLIDE 70

Total Variation / Total Variation I / Rudin-Osher-Fatemi

Acar-Vogel

Replace it by the regularized version

$$|\nabla u|_\beta = \sqrt{|\nabla u|^2 + \beta}, \qquad \beta > 0.$$

Acar and Vogel show that

$$\lim_{\beta \to 0} J_\beta(u) = J(u), \qquad J_\beta(u) = \int_\Omega |\nabla u|_\beta \, dx.$$

Replace the energy by

$$E'(u) = \int_\Omega (Ku - u_0)^2 \, dx + \lambda J_\beta(u).$$

Euler-Lagrange equation:

$$K^*(Ku - u_0) - \lambda \, \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|_\beta}\right) = 0.$$

The null-denominator problem disappears.
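A minimal explicit-descent sketch of this β-regularized scheme for K = Id. The next slide's example uses a fixed-point strategy instead; this simpler variant, its parameter defaults, and the conservative step-size bound are illustrative assumptions, not the slides' implementation:

```python
import numpy as np

def tv_beta_denoise(u0, lam=0.1, beta=1e-2, iters=2000):
    """Explicit gradient descent on the beta-regularized TV energy with K = Id:
    E(u) = 0.5*||u - u0||^2 + lam * sum |grad u|_beta  (discrete form).
    The step size is chosen conservatively: the gradient of this discrete
    energy is Lipschitz with constant about 1 + 8*lam/sqrt(beta)."""
    tau = 1.0 / (1.0 + 8.0 * lam / np.sqrt(beta))
    u = u0.astype(float).copy()
    for _ in range(iters):
        # Forward differences with replicate (Neumann) boundary.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + beta)
        px, py = ux / mag, uy / mag
        # Minus the adjoint of the forward gradient: backward differences.
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        u -= tau * ((u - u0) - lam * div)
    return u
```

The forward-difference gradient and the backward-difference divergence are adjoint to each other (up to sign), which keeps the discrete energy decreasing for a small enough step.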

SLIDE 74

Total Variation / Total Variation I / Rudin-Osher-Fatemi

Example

Implementation by finite differences, fixed-point strategy, linearization. [Figure: original image and denoised result for λ = 1.5, β = 10⁻⁴.]


SLIDE 77

Total Variation / Total Variation I / Inpainting/Denoising

Fill in u on the subset H ⊂ Ω where data is missing, and denoise the known data. Inpainting energy (Chan & Shen):

$$E_{ITV}(u) = \frac{1}{2} \int_{\Omega \setminus H} (u - u_0)^2 \, dx + \lambda \int_\Omega |\nabla u| \, dx.$$

Euler-Lagrange equation:

$$(u - u_0)\chi - \lambda \, \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) = 0,$$

where χ is the characteristic function of the known region: χ(x) = 1 if x ∈ Ω \ H, 0 otherwise. Very similar to denoising; the same approximation/implementation can be used, as in the sketch below.
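Following that remark, the β-regularized descent from the denoising sketch adapts by simply masking the data term. Again a sketch under the same illustrative assumptions (discretization, parameter defaults, function names):

```python
import numpy as np

def tv_inpaint(u0, chi, lam=0.5, beta=1e-2, iters=3000):
    """Chan-Shen TV inpainting, beta-regularized, explicit descent.
    chi = 1 where data is known (Omega \\ H), 0 on the hole H. Same discrete
    gradient/divergence pair as in the denoising sketch above."""
    tau = 1.0 / (1.0 + 8.0 * lam / np.sqrt(beta))
    u = u0.astype(float).copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + beta)
        div = (np.diff(ux / mag, axis=1, prepend=0.0)
               + np.diff(uy / mag, axis=0, prepend=0.0))
        u -= tau * (chi * (u - u0) - lam * div)
    return u
```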

SLIDE 82

Total Variation / Total Variation I / Inpainting/Denoising

[Figure: degraded image and inpainted result.]

SLIDE 83

Total Variation / Total Variation I / Inpainting/Denoising

Segmentation

Inpainting-driven segmentation (Lauze, Nielsen 2008, IJCV). [Figure: aortic calcification, detection, segmentation.]


SLIDE 85

Total Variation / Total Variation II / Relaxing the derivative constraints

With the definition of total variation as J(u) = ∫_Ω |∇u| dx, u must have (weak) derivatives. But we just saw that the computation is possible for a step edge u(x) = 0 for x < 0, u(x) = a for x > 0:

$$\int_{-1}^{1} |u'(x)| \, dx = |a|.$$

Can we avoid the use of derivatives of u?

SLIDE 89

Total Variation / Total Variation II / Relaxing the derivative constraints

Assume first that ∇u exists. Then

$$|\nabla u| = \nabla u \cdot \frac{\nabla u}{|\nabla u|} \quad (\text{except when } \nabla u = 0),$$

and ∇u/|∇u| is the normal to the level lines of u; it has norm 1 everywhere. Let V be the set of vector fields v(x) on Ω with |v(x)| ≤ 1. I claim that

$$J(u) = \sup_{v \in V} \int_\Omega \nabla u(x) \cdot v(x) \, dx$$

(a consequence of the Cauchy-Schwarz inequality).

SLIDE 92

Total Variation / Total Variation II / Relaxing the derivative constraints

Restrict to the set W of such v's that are differentiable and vanish on ∂Ω, the boundary of Ω. Then

$$J(u) = \sup_{v \in W} \int_\Omega \nabla u(x) \cdot v(x) \, dx.$$

But then I can use the divergence theorem: for H ⊂ D ⊂ Rⁿ, f : D → R a differentiable function, g = (g₁, ..., gₙ) : D → Rⁿ a differentiable vector field, and $\operatorname{div} g = \sum_{i=1}^{n} \partial g_i / \partial x_i$,

$$\int_H \nabla f \cdot g \, dx = -\int_H f \, \operatorname{div} g \, dx + \int_{\partial H} f \, g \cdot n(s) \, ds,$$

with n(s) the exterior normal field to ∂H. Apply it to J(u) above:

$$J(u) = \sup_{v \in W} \int_\Omega u(x) \, \operatorname{div} v(x) \, dx.$$

The gradient has disappeared from u! This is the classical definition of total variation. Note that when ∇u(x) ≠ 0, the optimal v(x) is (∇u/|∇u|)(x), and div v(x) is the mean curvature of the level set of u at x. Geometry is there!


SLIDE 99

Total Variation / Total Variation II / Definition in action

Step-edge

Let u be the step-edge function defined in the previous slides; we compute J(u) with the new definition. Here

$$W = \{\varphi : [-1, 1] \to \mathbb{R} \text{ differentiable}, \; \varphi(-1) = \varphi(1) = 0, \; |\varphi(x)| \le 1\},$$

$$J(u) = \sup_{\varphi \in W} \int_{-1}^{1} u(x) \, \varphi'(x) \, dx.$$

We compute

$$\int_{-1}^{1} u(x) \, \varphi'(x) \, dx = a \int_0^1 \varphi'(x) \, dx = a(\varphi(1) - \varphi(0)) = -a \, \varphi(0).$$

As −1 ≤ φ(0) ≤ 1, the maximum is |a|.

SLIDE 105

Total Variation / Total Variation II / Definition in action

2D example

Let B be an open set with regular boundary curve ∂B, Ω large enough to contain B, and χ_B the characteristic function of B:

$$\chi_B(x) = \begin{cases} 1 & x \in B \\ 0 & x \notin B. \end{cases}$$

For v ∈ W, by the divergence theorem on B and its boundary ∂B,

$$\int_\Omega \chi_B(x) \, \operatorname{div} v(x) \, dx = \int_B \operatorname{div} v(x) \, dx = \int_{\partial B} v(s) \cdot n(s) \, ds$$

(n(s) is the exterior normal to ∂B). This integral is maximized when v = n along ∂B: its value is then the length of ∂B, the perimeter of B.

SLIDE 109

Total Variation / Total Variation II / Definition in action

Sets of finite perimeter

Let H ⊂ Ω. If its characteristic function χ_H satisfies J(χ_H) < ∞, H is called a set of finite perimeter (and Per_Ω(H) := J(χ_H) is its perimeter). This is used for instance in the Chan and Vese algorithm. If J(u) < ∞ and H_t = {x ∈ Ω, u(x) < t} is the lower t-level set of u, then

$$J(u) = \int_{-\infty}^{+\infty} J(\chi_{H_t}) \, dt$$

(the coarea formula).
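A quick sanity check of the coarea formula on the 1-D step edge from the earlier slides (assuming a > 0):

$$H_t = \begin{cases} \emptyset & t \le 0 \\ (-1, 0] & 0 < t \le a \\ (-1, 1) & t > a \end{cases} \quad\Longrightarrow\quad J(\chi_{H_t}) = \begin{cases} 1 & 0 < t \le a \\ 0 & \text{otherwise}, \end{cases}$$

so $\int_{-\infty}^{+\infty} J(\chi_{H_t}) \, dt = a = J(u)$, matching the direct computation on Slide 99.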


SLIDE 113

Total Variation / Total Variation II / Using the new definition in denoising: Chambolle algorithm

Chambolle algorithm

Let K ⊂ L²(Ω) be the closure of the set {div v, v ∈ C¹₀(Ω)², |v(x)| ≤ 1}, i.e. the image of W by div. Then

$$J(u) = \sup_{\varphi \in K} \int_\Omega u \, \varphi \, dx = \sup_{\varphi \in K} \langle u, \varphi \rangle_{L^2(\Omega)}.$$

The solution of the denoising problem $\arg\min_u \int_\Omega (u - u_0)^2 \, dx + \lambda J(u)$ is given by

$$u = u_0 - \pi_{\lambda K}(u_0),$$

with π_{λK} the orthogonal projection onto the convex set λK (Chambolle). Showing this needs a bit of convex analysis: subdifferentials and subgradients, Fenchel transforms, indicator/characteristic functions, and elementary results on them.
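Chambolle's 2004 paper computes this projection by a fixed-point iteration on a dual vector field p. A sketch under illustrative assumptions: it reuses the discrete gradient/divergence pair from the earlier sketches, τ ≤ 1/8 is the classical sufficient step bound, and the usual statement is for the energy ½‖u − u0‖² + λJ(u), so constants may differ from the slide's by a factor of 2.

```python
import numpy as np

def grad(u):
    # Forward differences, replicate boundary (last difference is 0).
    return (np.diff(u, axis=1, append=u[:, -1:]),
            np.diff(u, axis=0, append=u[-1:, :]))

def div(px, py):
    # Minus the adjoint of grad: backward differences.
    return (np.diff(px, axis=1, prepend=0.0)
            + np.diff(py, axis=0, prepend=0.0))

def chambolle_denoise(u0, lam, tau=0.125, iters=200):
    """Chambolle's projection algorithm: iterate on the dual field p so that
    lam * div(p) approaches the projection of u0 onto lam*K, then return
    u = u0 - lam * div(p)."""
    px = np.zeros_like(u0, dtype=float)
    py = np.zeros_like(u0, dtype=float)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - u0 / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return u0 - lam * div(px, py)
```

Unlike the β-regularized descent, no smoothing parameter is needed: the constraint |p(x)| ≤ 1 is enforced exactly by the normalization step.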

SLIDE 119

Total Variation / Total Variation II / Using the new definition in denoising: Chambolle algorithm

Fenchel Transform

Let X be a Hilbert space and F : X → R convex and proper. The Fenchel transform of F is

$$F^*(v) = \sup_{u \in X} \left( \langle u, v \rangle_X - F(u) \right).$$

Geometric meaning: take u* such that F*(u*) < +∞. The affine function a(u) = ⟨u, u*⟩ − F*(u*) is tangent to F, and a(0) = −F*(u*).

SLIDE 122

Total Variation / Total Variation II / Using the new definition in denoising: Chambolle algorithm

Fenchel transform

Interesting properties:

- F* is convex.
- If Φ is the transform of F and λ > 0, then the transform of u ↦ λF(λ⁻¹u) is λΦ.
- If F is 1-homogeneous, i.e. F(λu) = λF(u), then F*(u) only takes the values 0 and +∞, since the property above implies F* = λF* for all λ > 0. In that case, the set C where F* = 0 is a closed convex subset of X, and F* = δ_C, the indicator function of C:

$$\delta_C(x) = \begin{cases} 0 & x \in C \\ +\infty & x \notin C. \end{cases}$$

- For x ∈ R ↦ |x|, C = [−1, 1]. For J(u), C = K.

slide-130
SLIDE 130

Total Variation Total Variation II Using the new definition in denoising: Chambolle algorithm

Subdifferentials

Subdifferential of F at u: ∂F(u) = {v ∈ X : F(w) − F(u) ≥ ⟨w − u, v⟩, ∀w ∈ X}. Any v ∈ ∂F(u) is called a subgradient of F at u. Three fundamental (and easy) properties:

0 ∈ ∂F(u) iff u is a global minimizer of F
u∗ ∈ ∂F(u) ⇔ F(u) + F∗(u∗) = ⟨u, u∗⟩
Duality: u∗ ∈ ∂F(u) ⇔ u ∈ ∂F∗(u∗)

The duality above makes it possible to turn the optimization of homogeneous functions into domain constraints!
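
As a quick sanity check (not on the slides), take F(x) = |x| on X = ℝ, so F∗ = δC with C = [−1, 1] as above. Then ∂F(0) = {v : |w| ≥ vw, ∀w} = [−1, 1]. Property 2 at u = 0 reads: v ∈ ∂F(0) ⇔ F(0) + F∗(v) = δC(v) = 0 = ⟨0, v⟩, which holds exactly when v ∈ C. The duality reads: v ∈ ∂F(0) ⇔ 0 ∈ ∂F∗(v) = ∂δC(v), again exactly when v ∈ [−1, 1]. Minimizing the homogeneous function |x| has become the domain constraint v ∈ [−1, 1] on the dual side.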

slide-137
SLIDE 137

Total Variation Total Variation II Using the new definition in denoising: Chambolle algorithm

TV-denoising

To minimize:

(1/2)‖u − u₀‖²_L²(Ω) + λJ(u)

Optimality condition:

0 ∈ u − u₀ + λ∂J(u) ⇔ (u₀ − u)/λ ∈ ∂J(u)

Duality:

u₀/λ ∈ (u₀ − u)/λ + (1/λ) ∂J∗((u₀ − u)/λ)

Set w = (u₀ − u)/λ: w satisfies

0 ∈ w − u₀/λ + (1/λ) ∂J∗(w)

This is the subdifferential condition of the convex function (1/2)‖w − u₀/λ‖² + (1/λ)J∗(w). But J∗(w) = δK(w): we get w = πK(u₀/λ), the orthogonal projection of u₀/λ onto K, hence u = u₀ − λ πK(u₀/λ).
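
The projection πK has no closed form (K is the closure of the set of divergences div ξ with |ξ| ≤ 1), but Chambolle (2004) computes it by a fixed-point iteration on the dual field p, with step size τ ≤ 1/8 guaranteeing convergence. Below is a minimal NumPy sketch of that iteration, not the author's code; the function names (grad, div, chambolle_tv), the discrete boundary handling, and the iteration count are illustrative choices:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient, zero at the far boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, chosen as the negative adjoint of grad above."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv(u0, lam, tau=0.125, n_iter=200):
    """TV denoising: iterate on p so that div(p) approaches pi_K(u0/lam),
    then return u = u0 - lam * div(p)."""
    px = np.zeros_like(u0)
    py = np.zeros_like(u0)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - u0 / lam)
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return u0 - lam * div(px, py)
```

The normalization by 1 + τ|∇(·)| keeps |p| ≤ 1 at every step, so the constraint set is never left; in practice a few hundred iterations suffice on small images.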

slide-143
SLIDE 143

Total Variation Total Variation II Using the new definition in denoising: Chambolle algorithm

Example

[Figure: the usual original test image (left) and the result denoised by projection (right).]

slide-144
SLIDE 144

Total Variation Total Variation II Image Simplification

Outline

1

Motivation Origin and uses of Total Variation Denoising Tikhonov regularization 1-D computation on step edges

2

Total Variation I First definition Rudin-Osher-Fatemi Inpainting/Denoising

3

Total Variation II Relaxing the derivative constraints Definition in action Using the new definition in denoising: Chambolle algorithm Image Simplification

4

Bibliography

5

The End

slide-145
SLIDE 145

Total Variation Total Variation II Image Simplification

Cameraman Example

Solutions of the denoising energy numerically exhibit a stair-casing effect (Nikolova). [Figure: original cameraman image; result for λ = 100; result for λ = 500.] The gradient becomes “sparse”.
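
The stair-casing effect can be reproduced with the chambolle_tv sketch above. The following toy usage (synthetic data; the λ and noise level are illustrative) denoises a smooth ramp; the result comes out approximately piecewise constant, i.e. with a sparse gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # smooth intensity ramp
noisy = ramp + 0.05 * rng.normal(size=ramp.shape)

denoised = chambolle_tv(noisy, lam=0.3, n_iter=300)
# A row profile now looks like a staircase: most consecutive
# differences are ~0, with a few larger jumps.
print(np.round(np.diff(denoised[32, ::8]), 3))
```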

slide-146
SLIDE 146

Total Variation Bibliography

Bibliography

Tikhonov, A. N.; Arsenin, V. Y. (1977). Solutions of Ill-Posed Problems.
Wahba, G. (1990). Spline Models for Observational Data.
Rudin, L.; Osher, S.; Fatemi, E. (1992). Nonlinear Total Variation Based Noise Removal Algorithms.
Chambolle, A. (2004). An Algorithm for Total Variation Minimization and Applications.
Nikolova, M. (2004). Weakly Constrained Minimization: Application to the Estimation of Images and Signals Involving Constant Regions.

slide-147
SLIDE 147

Total Variation The End

The End