

SLIDE 1

Geometric Properties from Incomplete Data, Dagstuhl, March 2004

On robust estimation and smoothing with spatial and tonal kernels

Pavel Mrázek, Joachim Weickert, and Andrés Bruhn

Mathematical Image Analysis Group, Saarland University
http://www.mia.uni-saarland.de


SLIDE 3

Estimation from noisy data

• General idea: constant signal u + noise n = measured noisy data f_i. Task: estimate the signal u.

• Gaussian noise → the mean u = (1/N) Σ_{j=1}^{N} f_j minimizes E(u) = Σ_{j=1}^{N} (u − f_j)².

• Noise with heavier tails → employ robust error norms.

M-estimators: minimize E(u) = Σ_{j=1}^{N} Ψ(|u − f_j|²)

Examples (plots of each Ψ omitted):

• Ψ(s²) = s² → mean
• Ψ(s²) = |s| → median
• Ψ(s²) = 1 − e^{−s²/λ²} → “mode”
• Ψ(s²) = min(s², λ²) → “mode”
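To make the robustness difference concrete, here is a minimal numerical sketch (not part of the slides; the contamination model and the value of λ are illustrative assumptions). It compares the mean, the median, and the “mode”-type estimate obtained by iterating the reweighted mean with weights proportional to Ψ′(s²) = e^{−s²/λ²}/λ²; the constant factor 1/λ² cancels in the ratio.

```python
import numpy as np

# Constant signal plus Gaussian noise, contaminated by 10% gross outliers
rng = np.random.default_rng(0)
f = 5.0 + rng.normal(0.0, 0.3, 100)
f[:10] = 50.0

mean = f.mean()        # minimizes sum_j (u - f_j)^2: dragged toward outliers
median = np.median(f)  # minimizes sum_j |u - f_j|: robust

# "Mode" estimator Psi(s^2) = 1 - exp(-s^2/lam^2): fixed-point iteration
# u <- sum_j w_j f_j / sum_j w_j with w_j = Psi'(|u - f_j|^2) ~ exp(-(u-f_j)^2/lam^2)
lam = 1.0
u = median  # robust initialization
for _ in range(50):
    w = np.exp(-(u - f) ** 2 / lam ** 2)
    u = (w * f).sum() / w.sum()

print(f"mean={mean:.2f}  median={median:.2f}  mode={u:.2f}")
```

The mean is pulled far away from the true value 5 by the outliers, while the median and the mode-type estimate stay close to it.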


SLIDE 5

Local estimates

• Data f_i measured at position x_i → find a local estimate u_i as

  u_i = argmin_u Σ_{j=1}^{N} Ψ(|u − f_j|²) w(|x_i − x_j|²)

• Hard window: w(s²) = 1 if s² < θ, 0 otherwise

• Soft window: w(s²) = e^{−s²/θ²} (Chu et al. 1996)

Local M-smoothers: minimize

  E(u) = Σ_{i=1}^{N} Σ_{j∈B(i)} Ψ(|u_i − f_j|²) w(|x_i − x_j|²)



SLIDE 8

Local M-smoothers

• Gradient descent on E(u) = Σ_{i=1}^{N} Σ_{j∈B(i)} Ψ(|u_i − f_j|²) w(|x_i − x_j|²):

  u_i^{k+1} = u_i^k − τ ∂E/∂u_i
            = u_i^k − τ Σ_{j∈B(i)} Ψ′(|u_i^k − f_j|²) w(|x_i − x_j|²) · 2 (u_i^k − f_j)
            = (1 − 2τ Σ_{j∈B(i)} Ψ′(|u_i^k − f_j|²) w(|x_i − x_j|²)) u_i^k
              + 2τ Σ_{j∈B(i)} Ψ′(|u_i^k − f_j|²) w(|x_i − x_j|²) f_j

Setting τ = 1 / (2 Σ_{j∈B(i)} Ψ′(|u_i^k − f_j|²) w(|x_i − x_j|²)), we obtain

  u_i^{k+1} = Σ_{j∈B(i)} Ψ′(|u_i^k − f_j|²) w(|x_i − x_j|²) f_j
            / Σ_{j∈B(i)} Ψ′(|u_i^k − f_j|²) w(|x_i − x_j|²)
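The resulting update can be implemented directly. A 1-D sketch (window radius, λ, θ, the Gaussian choices for Ψ′ and w, and all parameter values are illustrative assumptions; the constant factor 1/λ² in Ψ′ cancels between numerator and denominator):

```python
import numpy as np

def local_m_smoother(f, x, radius=3, lam=0.1, theta=2.0, iters=20):
    """Fixed-point iteration for local M-smoothers:
    u_i <- sum_j Psi'(|u_i - f_j|^2) w(|x_i - x_j|^2) f_j
         / sum_j Psi'(|u_i - f_j|^2) w(|x_i - x_j|^2)
    with Psi(s^2) = 1 - exp(-s^2/lam^2) and a soft spatial window."""
    u = f.copy()
    n = len(f)
    for _ in range(iters):
        new = np.empty_like(u)
        for i in range(n):
            j0, j1 = max(0, i - radius), min(n, i + radius + 1)  # neighbourhood B(i)
            fj = f[j0:j1]
            w = np.exp(-(x[i] - x[j0:j1]) ** 2 / theta ** 2)   # spatial (window) weight
            psi_p = np.exp(-(u[i] - fj) ** 2 / lam ** 2)       # tonal weight Psi'
            new[i] = (psi_p * w * fj).sum() / (psi_p * w).sum()
        u = new
    return u
```

On a noisy step signal this smooths each plateau while leaving the jump intact, since the tonal weight Ψ′ suppresses neighbours on the other side of the edge.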

SLIDE 9

The big picture (methods arranged on a LOCAL – WINDOWED – GLOBAL axis)

• M-estimators (global): Σ_j Ψ(|u − f_j|²)

• Local M-smoothers (windowed): Σ_i Σ_j Ψ(|u_i − f_j|²) w(|x_i − x_j|²)


SLIDE 11

Bayesian framework / regularization theory

• Take the local M-estimator and decrease the spatial window size:

  w(|x_i − x_j|²) = 1 if x_i = x_j, 0 otherwise

⇒ minimizing E_D(u) = Σ_{i=1}^{N} Ψ(|u_i − f_i|²) has the trivial solution u_i = f_i.

Bayesian / regularization framework: combine with prior knowledge, e.g. assumptions about the smoothness of u:

  E(u) = α E_D(u) + (1 − α) E_S(u) = Σ_i [ α Ψ_D(|u_i − f_i|²) + (1 − α) Ψ_S(|∇u|²) ]

Covered are e.g.:

• Mumford–Shah functional: Ψ_D(s²) = s², Ψ_S(s²) = min(s², λ²)
• graduated non-convexity of Blake and Zisserman
• nonlinear diffusion filters (Perona–Malik, TV flow, ...)
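In discrete 1-D form this energy can be evaluated directly; a sketch with forward differences standing in for ∇u and the Mumford–Shah choices of Ψ_D and Ψ_S (the discretization and parameter values are my own, not from the slide):

```python
import numpy as np

def energy(u, f, alpha=0.5, lam=0.5):
    # Discrete 1-D version of E(u) = sum_i [ alpha * Psi_D(|u_i - f_i|^2)
    #                                      + (1 - alpha) * Psi_S(|grad u|^2) ]
    # with Psi_D(s^2) = s^2 and the truncated quadratic Psi_S(s^2) = min(s^2, lam^2),
    # approximating |grad u|^2 by squared forward differences.
    data = (u - f) ** 2
    grad2 = np.diff(u) ** 2
    smooth = np.minimum(grad2, lam ** 2)  # truncation caps the penalty at edges
    return alpha * data.sum() + (1 - alpha) * smooth.sum()
```

The truncation is what permits edges: once a jump exceeds λ, increasing it further costs nothing extra, so sharp discontinuities are not smoothed away.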

SLIDE 12

The big picture (LOCAL – WINDOWED – GLOBAL)

• M-estimators (global): Σ_j Ψ(|u − f_j|²)

• Local M-smoothers (windowed): Σ_i Σ_j Ψ(|u_i − f_j|²) w(|x_i − x_j|²)

• Bayesian / regularization theory: α Ψ_D(|u − f|²) + (1 − α) Ψ_S(|∇u|²)
  (DATA TERM + SMOOTHNESS TERM)



SLIDE 15

Smoothness term from larger window

Express smoothness using discrete samples:

• E_S(u) = Σ Ψ_S(|∇u|²) ≈ Σ_{i=1}^{N} Ψ_S(Σ_{j∈N(i)} |u_i − u_j|²)  (isotropic)

• E_S(u) ≈ Σ_{i=1}^{N} Σ_{j∈N(i)} Ψ_S(|u_i − u_j|²)  (anisotropic)

Increase the window size ⇒ the smoothness term becomes

  E_S(u) = Σ_{i=1}^{N} Σ_{j∈B(i)} Ψ(|u_i − u_j|²) w(|x_i − x_j|²)

which can be minimized by iterating

  u_i^{k+1} = Σ_{j∈B(i)} Ψ′(|u_i^k − u_j^k|²) w(|x_i − x_j|²) u_j^k
            / Σ_{j∈B(i)} Ψ′(|u_i^k − u_j^k|²) w(|x_i − x_j|²)

→ the bilateral filter of Tomasi and Manduchi.

• Not exactly a gradient descent on E_S(u): the samples u_i are not independent, and each needs a different descent step τ.

• Alternative functional proposed by Elad: windowed smoothness + local data term.
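The iteration above is, in 1-D with index distances, a bilateral filter step; a sketch with Gaussian spatial and tonal kernels (kernel shapes and parameter values are illustrative assumptions):

```python
import numpy as np

def bilateral_1d(f, sigma_s=2.0, sigma_t=0.1, radius=5, iters=1):
    """Iterate u_i <- sum_j Psi'(|u_i - u_j|^2) w(|x_i - x_j|^2) u_j / (normalization):
    a discrete 1-D bilateral filter (Tomasi & Manduchi) with Gaussian
    spatial weight w and Gaussian tonal weight Psi'."""
    u = f.copy()
    n = len(u)
    for _ in range(iters):
        new = np.empty_like(u)
        for i in range(n):
            j0, j1 = max(0, i - radius), min(n, i + radius + 1)
            uj = u[j0:j1]
            d = (np.arange(j0, j1) - i).astype(float)
            w = np.exp(-d ** 2 / sigma_s ** 2)         # spatial kernel
            t = np.exp(-(u[i] - uj) ** 2 / sigma_t ** 2)  # tonal kernel
            new[i] = (w * t * uj).sum() / (w * t).sum()
        u = new
    return u
```

Note the averaging runs over the current estimate u_j rather than the data f_j, which is exactly what distinguishes this smoothness-term iteration from the local M-smoother update.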


SLIDE 17

The big picture (LOCAL – WINDOWED – GLOBAL)

DATA TERM:
• M-estimators (global): Σ_j Ψ(|u − f_j|²)
• Local M-smoothers (windowed): Σ_i Σ_j Ψ(|u_i − f_j|²) w(|x_i − x_j|²)
• Bayesian / regularization theory: α Ψ_D(|u − f|²) + (1 − α) Ψ_S(|∇u|²)
  (data term + smoothness term)

SMOOTHNESS TERM:
• Local (isotropic / anisotropic): Ψ_S(Σ_{j∈N(i)} |u_i − u_j|²) and Σ_{j∈N(i)} Ψ_S(|u_i − u_j|²)
• Bilateral filter (windowed): Σ_i Σ_j Ψ_S(|u_i − u_j|²) w(|x_i − x_j|²)

→ combine DATA TERM + SMOOTHNESS TERM.


SLIDE 19

The unifying functional

  E(u) = Σ_i Σ_j [ α Ψ_D(|u_i − f_j|²) w_D(|x_i − x_j|²)
               + (1 − α) Ψ_S(|u_i − u_j|²) w_S(|x_i − x_j|²) ]

• covers many nonlinear filters for robust signal estimation and image smoothing
• new filter combinations possible
• assumptions about signal and noise → Ψ_D, w_D, Ψ_S, w_S, α
• interesting combination: windowed data term + local smoothness:

  E(u) = Σ_i Σ_j α Ψ_D(|u_i − f_j|²) w_D(|x_i − x_j|²) + (1 − α) Σ_i Ψ_S(|∇u|²)

Limit cases: mean ↔ linear diffusion, median ↔ TV flow, mode ↔ nonlinear diffusion (backward), and the bilateral filter.
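A single fixed-point update for the unifying functional can be assembled from the two earlier iterations, with the data term weighted by α and the smoothness term by 1 − α. This combined update is my own sketch in the spirit of the fixed-point schemes on the preceding slides (the slides do not spell it out), with Gaussian choices for all four kernels and illustrative parameter values:

```python
import numpy as np

def unified_step(u, f, alpha=0.5, lam_d=0.1, lam_s=0.1,
                 theta_d=2.0, theta_s=2.0, radius=3):
    # One fixed-point update combining the data-term and smoothness-term
    # iterations:
    # u_i <- [ alpha*sum_j PsiD' wD f_j + (1-alpha)*sum_j PsiS' wS u_j ]
    #      / [ alpha*sum_j PsiD' wD     + (1-alpha)*sum_j PsiS' wS     ]
    n = len(u)
    new = np.empty_like(u)
    for i in range(n):
        j0, j1 = max(0, i - radius), min(n, i + radius + 1)
        d2 = (np.arange(j0, j1) - i).astype(float) ** 2
        wD = np.exp(-d2 / theta_d ** 2)                     # spatial kernel, data term
        wS = np.exp(-d2 / theta_s ** 2)                     # spatial kernel, smoothness term
        psiD = np.exp(-(u[i] - f[j0:j1]) ** 2 / lam_d ** 2)  # tonal weight PsiD'
        psiS = np.exp(-(u[i] - u[j0:j1]) ** 2 / lam_s ** 2)  # tonal weight PsiS'
        num = alpha * (psiD * wD * f[j0:j1]).sum() + (1 - alpha) * (psiS * wS * u[j0:j1]).sum()
        den = alpha * (psiD * wD).sum() + (1 - alpha) * (psiS * wS).sum()
        new[i] = num / den
    return new
```

For α = 1 this reduces to the local M-smoother update, and for α = 0 to the bilateral-filter update, which is how the functional unifies the two families.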
