Higher-order Segmentation Functionals: Entropy, Color Consistency, Curvature, etc. (PowerPoint PPT Presentation)
SLIDE 1

Higher-order Segmentation Functionals:
Entropy, Color Consistency, Curvature, etc.

Yuri Boykov, jointly with Andrew Delong

  • M. Tang
  • I. Ben Ayed
  • O. Veksler
  • H. Isack
  • L. Gorelick
  • C. Nieuwenhuis
  • E. Toppe
  • C. Olsson
  • A. Osokin
SLIDE 2

Different surface representations

  • mesh (on complex): continuous optimization
  • level-sets (on grid): continuous optimization
  • point cloud labeling (s_p ∈ Z): mixed optimization
  • graph labeling (on grid, s_p ∈ {0,1}): combinatorial optimization
SLIDE 3

this talk

graph labeling (s_p ∈ {0,1}): combinatorial optimization

implicit surfaces/boundary on a grid
SLIDE 4

Image segmentation basics

E(S) = ⟨f, S⟩ + B(S),  s_p ∈ {0,1}

with unary coefficients f_p = ln ( Pr(I_p | bg) / Pr(I_p | fg) )
SLIDE 5

Linear (modular) appearance of region

R(S) = ⟨f, S⟩ = Σ_p f_p s_p

Examples of potential functions
  • Log-likelihoods: f_p = −ln Pr(I_p | θ)
  • Chan-Vese: f_p = (I_p − c)²
  • Ballooning: f_p = 1

SLIDE 6

Basic boundary regularization for s_p ∈ {0,1}

pair-wise discontinuities:

B(S) = Σ_{pq∈N} w_pq [s_p ≠ s_q]

SLIDE 7

Basic boundary regularization for s_p ∈ {0,1}

second-order (quadratic) terms:

[s_p ≠ s_q] = s_p (1 − s_q) + s_q (1 − s_p)

B(S) = Σ_{pq∈N} w_pq [s_p ≠ s_q]

SLIDE 8

Basic boundary regularization for s_p ∈ {0,1}

B(S) = Σ_{pq∈N} w_pq [s_p ≠ s_q]

Examples of discontinuity penalties
  • boundary length: w_pq = 1
  • image-weighted boundary length: w_pq = exp( −(I_p − I_q)² / 2σ² )
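The image-weighted penalty can be sketched numerically. A minimal illustration, assuming a 1D row of intensities; the helper name `contrast_weight` and the toy values are not from the talk:

```python
import math

def contrast_weight(Ip, Iq, sigma):
    # image-weighted discontinuity penalty from the slide:
    # w_pq = exp(-(I_p - I_q)^2 / (2 sigma^2))
    return math.exp(-((Ip - Iq) ** 2) / (2 * sigma ** 2))

row = [0.0, 0.0, 1.0, 1.0]   # one image row with a sharp edge in the middle
w = [contrast_weight(a, b, 0.5) for a, b in zip(row, row[1:])]
# flat pairs cost 1.0 to cut; the edge pair is much cheaper: exp(-2)
```

Cuts are thus attracted to strong intensity edges, which is the intended behavior of the image-weighted boundary length.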

SLIDE 9

Basic boundary regularization for s_p ∈ {0,1}

B(S) = Σ_{pq∈N} w_pq [s_p ≠ s_q]

  • corresponds to boundary length |∂S|
    – grids [B&K 2003], via integral geometry
    – complexes [Sullivan 1994]
  • submodular second-order energy
    – can be minimized exactly via graph cuts (weights w_pq on n-links, s-t cut)
      [Greig et al. '91, Sullivan '94, Boykov-Jolly '01]

SLIDE 10

Submodular set functions

Any (binary) segmentation energy E(S) is a set function E: 2^Ω → R over subsets S ⊂ Ω.

SLIDE 11

Submodular set functions

Set function E: 2^Ω → R is submodular if for any S, T ⊂ Ω

E(S ∪ T) + E(S ∩ T) ≤ E(S) + E(T)

Significance: any submodular set function can be globally optimized in polynomial time, e.g. O(|Ω|⁹) [Grotschel et al. 1981, 1988; Schrijver 2000]

SLIDE 12

Submodular set functions

An alternative equivalent definition provides an intuitive interpretation, "diminishing returns": E is submodular if for any T ⊂ S ⊂ Ω and v ∉ S

E(T ∪ {v}) − E(T) ≥ E(S ∪ {v}) − E(S)

This easily follows from the previous definition applied to the sets T ∪ {v} and S:

E(S ∪ {v}) + E(T) ≤ E(T ∪ {v}) + E(S)

Significance: any submodular set function can be globally optimized in polynomial time, e.g. O(|Ω|⁹) [Grotschel et al. 1981, 1988; Schrijver 2000]
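The set-function definition can be checked by brute force on a toy ground set. A sketch; `is_submodular` and the two example functions are illustrative, not from the talk:

```python
import math
from itertools import combinations

def subsets(omega):
    return [frozenset(c) for r in range(len(omega) + 1)
            for c in combinations(omega, r)]

def is_submodular(E, omega):
    # brute-force check of E(S|T) + E(S&T) <= E(S) + E(T) for all S, T
    return all(E(S | T) + E(S & T) <= E(S) + E(T) + 1e-12
               for S in subsets(omega) for T in subsets(omega))

omega = frozenset({1, 2, 3, 4})
concave = is_submodular(lambda S: math.sqrt(len(S)), omega)  # concave in |S|
convex = is_submodular(lambda S: len(S) ** 2, omega)         # convex in |S|
```

Since |S ∪ T| + |S ∩ T| = |S| + |T|, a concave function of cardinality passes the check while a strictly convex one fails, foreshadowing the color-consistency vs. volume-balancing split later in the talk.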

SLIDE 13

Graph cuts for minimization of submodular set functions

Assume set Ω and a 2nd-order (quadratic) function of indicator variables s_p ∈ {0,1}:

E(S) = Σ_p E_p(s_p) + Σ_{pq∈N} E_pq(s_p, s_q)

E(S) is submodular if for any (p,q) ∈ N

E_pq(0,0) + E_pq(1,1) ≤ E_pq(0,1) + E_pq(1,0)

Significance: a submodular 2nd-order boolean (set) function can be globally optimized in low-order polynomial time O(|Ω|²|N|) by graph cuts [Hammer 1968; Picard & Ratliff 1973; Boros & Hammer 2000; Kolmogorov & Zabih 2003]
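The pairwise condition is easy to test directly. A sketch, with pairwise terms encoded as dictionaries over the four binary configurations (an illustrative encoding, not from the talk); the Potts example is the boundary term used throughout the slides:

```python
def pairwise_submodular(Epq):
    # the slide's condition: Epq(0,0) + Epq(1,1) <= Epq(0,1) + Epq(1,0)
    return Epq[(0, 0)] + Epq[(1, 1)] <= Epq[(0, 1)] + Epq[(1, 0)]

w = 2.0
potts = {(0, 0): 0.0, (0, 1): w, (1, 0): w, (1, 1): 0.0}   # w_pq [s_p != s_q]
anti = {(0, 0): 0.0, (0, 1): -w, (1, 0): -w, (1, 1): 0.0}  # rewards discontinuities
ok = pairwise_submodular(potts)   # Potts with w >= 0 passes
bad = pairwise_submodular(anti)   # negative discontinuity costs fail
```

Any nonnegative w_pq keeps the boundary term graph-cut representable; flipping its sign breaks the condition, which is why discontinuity-rewarding terms need the approximation machinery discussed later.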

SLIDE 14

Global Optimization

combinatorial optimization: submodularity
continuous optimization: convexity

SLIDE 15

Graph cuts for minimization of posterior energy (MRF)

Assume a Gibbs distribution over binary random variables s_p ∈ {0,1}:

Pr(s_1, ..., s_n) ∝ exp(−E(S)),   S = {p | s_p = 1}

Theorem [Boykov, Delong, Kolmogorov, Veksler; unpublished book, 2014?]: all random variables s_p are positively correlated iff the set function E(S) is submodular. That is, submodularity implies an MRF with a "smoothness" prior.

SLIDE 16

Basic segmentation energy

E(S) = Σ_p f_p s_p + Σ_{pq∈N} w_pq [s_p ≠ s_q]

(segment region/appearance + boundary smoothness)

SLIDE 17

this talk: higher-order binary segmentation

segment region/appearance:
  • Shape priors (N-th order)
  • Connectivity (N-th order)
  • Cardinality potentials (N-th order)
  • Appearance entropy (N-th order)
  • Color consistency (N-th order)
  • Distribution consistency (N-th order)

boundary smoothness:
  • Curvature (3rd order)
  • Convexity (3rd order)

SLIDE 18

Overview of this talk

  • From likelihoods to entropy
  • From entropy to color consistency
  • Convex cardinality potentials
  • Distribution consistency
  • From length to curvature
  • Other extensions [arXiv'13]

Optimization of high-order functionals:
  • global minimum [our work: One Cut 2014]
  • submodular approximations [our work: Trust Region '13, Auxiliary Cuts '13]
  • block-coordinate descent [Zhu & Yuille '96, GrabCut '04]
SLIDE 19

Given likelihood models (assuming θ0, θ1 are known)

[Boykov & Jolly, ICCV 2001]: image segmentation via graph cut

E(S | θ0, θ1) = Σ_p −ln Pr(I_p | θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

unary (linear) term + pair-wise (quadratic) term,  I_p ∈ RGB,  s_p ∈ {0,1}

  • parametric models, e.g. Gaussian or GMM
  • non-parametric models: histograms

guaranteed globally optimal S
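The unary terms under a non-parametric (histogram) model can be sketched as follows; the function name, the 3-bin toy models, and the `eps` guard against empty bins are all illustrative assumptions:

```python
import math

def nll_unaries(I, theta_fg, theta_bg, eps=1e-8):
    # unary terms -ln Pr(I_p | theta) from normalized intensity histograms;
    # eps (an assumption here) avoids log(0) on empty bins
    D_fg = [-math.log(theta_fg[i] + eps) for i in I]
    D_bg = [-math.log(theta_bg[i] + eps) for i in I]
    return D_fg, D_bg

I = [0, 0, 1, 2]            # quantized intensities (bin indices)
theta_fg = [0.7, 0.2, 0.1]  # foreground color model theta^1
theta_bg = [0.1, 0.2, 0.7]  # background color model theta^0
D_fg, D_bg = nll_unaries(I, theta_fg, theta_bg)
# pixel 0 is cheaper to label foreground, pixel 3 cheaper to label background
```

These per-pixel costs are exactly the linear term of the energy above; the graph cut then trades them off against the pairwise smoothness term.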

SLIDE 20

Beyond fixed likelihood models

[Rother et al., SIGGRAPH 2004]: iterative image segmentation, GrabCut (block-coordinate descent)

E(S, θ0, θ1) = Σ_p −ln Pr(I_p | θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

mixed optimization term + pair-wise (quadratic) term,  I_p ∈ RGB,  s_p ∈ {0,1}

Models θ0, θ1 are extra variables, iteratively re-estimated (from an initial box)
  • parametric models, e.g. Gaussian or GMM
  • non-parametric models: histograms

NP-hard mixed optimization! [Vicente et al., ICCV'09]

SLIDE 21

Block-coordinate descent for E(S, θ0, θ1)

  • Minimize over models θ0, θ1 for fixed labeling S:
    E(S, θ0, θ1) = Σ_{p: s_p=1} −ln Pr(I_p | θ1) + Σ_{p: s_p=0} −ln Pr(I_p | θ0) + Σ_{pq∈N} w_pq [s_p ≠ s_q]
    the optimal θ1 is the distribution of intensities in the current object segment S = {p : s_p = 1},
    and the optimal θ0 is the distribution of intensities in the current background segment S̄ = {p : s_p = 0}
  • Minimize over segmentation S for fixed θ0, θ1:
    optimal S is computed using graph cuts, as in [BJ 2001]
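The alternation can be sketched in a few lines. This is only a toy illustration under a simplifying assumption: the smoothness term is dropped, so the labeling step reduces to a per-pixel likelihood test instead of the graph cut the talk actually uses; all names are made up:

```python
def reestimate(I, S, nbins, eps=1e-8):
    # fix S, minimize over models: the optimal histograms are the empirical
    # intensity distributions of the current object/background segments
    h1, h0 = [eps] * nbins, [eps] * nbins
    for i, s in zip(I, S):
        (h1 if s == 1 else h0)[i] += 1.0
    z1, z0 = sum(h1), sum(h0)
    return [v / z1 for v in h1], [v / z0 for v in h0]

def bcd(I, S, nbins, iters=10):
    # fix models, minimize over S: with smoothness dropped this is a
    # per-pixel likelihood test (with smoothness it is a graph cut [BJ 2001])
    for _ in range(iters):
        t1, t0 = reestimate(I, S, nbins)
        S = [1 if t1[i] > t0[i] else 0 for i in I]
    return S

I = [0, 0, 0, 5, 5, 5]
S = bcd(I, [1, 1, 1, 0, 0, 1], nbins=6)   # noisy initial labeling
```

Starting from a noisy labeling, the alternation settles on the color-coherent split, a (local) minimum of the simplified energy.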
SLIDE 22

Iterative learning of color models (binary case, s_p ∈ {0,1})

  • GrabCut: iterated graph cuts [Rother et al., SIGGRAPH 04]

E(S, θ0, θ1) = Σ_p −ln Pr(I_p | θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Start from models θ0, θ1 inside and outside some given box; iterate graph cuts and model re-estimation until convergence to a local minimum.

Solution is sensitive to the initial box.

SLIDE 23

Iterative learning of color models (binary case, s_p ∈ {0,1})

BCD minimization of E(S, θ0, θ1) converges to a local minimum (interactivity a la "snakes")

[figure: results from different initial boxes, with energies E = 2.37×10⁶, 2.41×10⁶, 1.39×10⁶, 1.41×10⁶]

SLIDE 24

Iterative learning of color models (could be used for more than 2 labels, s_p ∈ {0, 1, 2, ...})

  • Unsupervised segmentation [Zhu & Yuille, 1996]

E(S, θ0, θ1, θ2, ...) = Σ_p −ln Pr(I_p | θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Using level sets + a merging heuristic: initialize models θ0, θ1, θ2, ... from many randomly sampled boxes; iterate segmentation and model re-estimation until convergence; models compete, stable result if sufficiently many.
SLIDE 25

Iterative learning of color models (could be used for more than 2 labels, s_p ∈ {0, 1, 2, ...})

  • Unsupervised segmentation [Delong et al., 2012]

E(S, θ0, θ1, θ2, ...) = Σ_p −ln Pr(I_p | θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Using α-expansion (graph cuts): initialize models θ0, θ1, θ2, ... from many randomly sampled boxes; iterate segmentation and model re-estimation until convergence; models compete, stable result if sufficiently many.
SLIDE 26

Iterative learning of other models (could be used for more than 2 labels, s_p ∈ {0, 1, 2, ...})

  • Geometric multi-model fitting [Isack et al., 2012]

E(S, θ0, θ1, θ2, ...) = Σ_p dist(p, θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Initialize plane models θ0, θ1, θ2, ... from many randomly sampled SIFT matches in 2 images of the same scene; using α-expansion (graph cuts), iterate segmentation and model re-estimation until convergence; models compete, stable result if sufficiently many.
SLIDE 27

Iterative learning of other models (could be used for more than 2 labels, s_p ∈ {0, 1, 2, ...})

  • Geometric multi-model fitting [Isack et al., 2012]

E(S, θ0, θ1, θ2, ...) = Σ_p dist(p, θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Initialize fundamental matrices θ0, θ1, θ2, ... from many randomly sampled SIFT matches in 2 consecutive frames of a video; using α-expansion (graph cuts), iterate segmentation and model re-estimation until convergence; models compete, stable result if sufficiently many.

[VIDEO]
SLIDE 28

From color model estimation to entropy and color consistency: global optimization in One Cut

[Tang et al., ICCV 2013]

SLIDE 29

Interpretation of log-likelihoods: entropy of segment intensities

Let Ω = {p_1, p_2, ..., p_n}. For a segment S, let S^i = {p ∈ S | I_p = i} be the pixels of color (bin) i in S, and p_S^i = |S^i| / |S| the probability of intensity i in S, so that p_S = {p_S^1, p_S^2, ...} is the distribution of intensities observed at S.

For a given distribution of intensities θ:

Σ_{p∈S} −ln Pr(I_p | θ) = Σ_i −|S^i| ln θ^i = |S| · H(p_S | θ)

where H(p_S | θ) = Σ_i −p_S^i ln θ^i is the cross entropy of distribution p_S w.r.t. θ.
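The identity on this slide can be verified numerically; the toy intensities and the 3-bin model below are arbitrary illustrations:

```python
import math
from collections import Counter

I_S = [0, 0, 1, 2, 2, 2]   # intensities of the pixels in segment S
theta = [0.5, 0.3, 0.2]    # a given distribution of intensities

lhs = sum(-math.log(theta[i]) for i in I_S)   # sum of -log-likelihoods over S
counts = Counter(I_S)                         # bin counts |S^i|
n = len(I_S)                                  # |S|
cross_entropy = sum(-(c / n) * math.log(theta[i]) for i, c in counts.items())
rhs = n * cross_entropy                       # |S| * H(p_S | theta)
```

Both sides agree exactly: summing per-pixel log-likelihoods over a segment is the same as |S| times the cross entropy of the segment's empirical color distribution w.r.t. the model.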

SLIDE 30

Interpretation of log-likelihoods: entropy of segment intensities

min_{θ0,θ1} E(S, θ0, θ1) = min_{θ0,θ1} [ Σ_{p: s_p=1} −ln Pr(I_p | θ1) + Σ_{p: s_p=0} −ln Pr(I_p | θ0) + Σ_{pq∈N} w_pq [s_p ≠ s_q] ]

= |S| H(S) + |S̄| H(S̄) + Σ_{pq∈N} w_pq [s_p ≠ s_q] = E(S)

(entropy of intensities in S and in S̄: minimization of segment entropies)

Note: H(P|Q) ≥ H(P) for any two distributions (equality when Q = P), so minimizing each cross entropy over its model leaves the entropy.

Joint estimation of S and color models [Rother et al., SIGGRAPH'04, ICCV'09] thus becomes minimization of segment entropies [Tang et al., ICCV 2013].

SLIDE 31

Interpretation of log-likelihoods: entropy of segment intensities

mixed optimization over (S, θ0, θ1) [Rother et al., SIGGRAPH'04, ICCV'09]
= binary optimization over S of E(S) = |S| H(S) + |S̄| H(S̄) + Σ_{pq∈N} w_pq [s_p ≠ s_q] [Tang et al., ICCV 2013]

Note: H(P|Q) ≥ H(P) for any two distributions (equality when Q = P)

SLIDE 32

Interpretation of log-likelihoods: entropy of segment intensities

E(S) = |S| H(S) + |S̄| H(S̄) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

is a common energy for categorical clustering, e.g. [Li et al., ICML'04]

SLIDE 33

Minimizing entropy of segment intensities (intuitive motivation)

E(S) = |S| H(S) + |S̄| H(S̄) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Unsupervised image segmentation (like in Chan-Vese): break the image into two coherent segments with low entropy of intensities.

[figure: high-entropy vs. low-entropy segmentations (S, S̄)]

SLIDE 34

Minimizing entropy of segment intensities (intuitive motivation)

E(S) = |S| H(S) + |S̄| H(S̄) + Σ_{pq∈N} w_pq [s_p ≠ s_q]

Break the image into two coherent segments with low entropy of intensities: more general than Chan-Vese (colors can vary within each segment).

SLIDE 35

From entropy to color consistency

Minimization of entropy encourages pixels Ω^i of the same color bin i to be segmented together (proof: see next slide).

[figure: all pixels Ω partitioned into color bins i = 1, ..., 5]

SLIDE 36

From entropy to color consistency

|S| H(S) + |S̄| H(S̄) = −Σ_i |S^i| ln (|S^i|/|S|) − Σ_i |S̄^i| ln (|S̄^i|/|S̄|)
  = −Σ_i ( |S^i| ln |S^i| + |S̄^i| ln |S̄^i| ) + |S| ln |S| + |S̄| ln |S̄|

color consistency + volume balancing,  with S^i = S ∩ Ω^i and |S̄^i| = |Ω^i| − |S^i|

Pixels in each color bin i prefer to be together (either inside the object or in the background).
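The decomposition into color-consistency and volume-balancing parts can be checked numerically; the toy bin counts below are arbitrary:

```python
import math

def seg_entropy_term(counts):
    # |S| * H(S) for a segment with per-bin counts |S^i| (empty bins skipped)
    n = sum(counts)
    return sum(-c * math.log(c / n) for c in counts if c > 0)

def xlogx(values):
    return sum(v * math.log(v) for v in values if v > 0)

S_bins = [4, 0, 2, 1]    # bin counts |S^i| inside the object
Sb_bins = [1, 3, 0, 5]   # bin counts in the complement segment
lhs = seg_entropy_term(S_bins) + seg_entropy_term(Sb_bins)
color = -(xlogx(S_bins) + xlogx(Sb_bins))            # color consistency part
volume = xlogx([sum(S_bins), sum(Sb_bins)])          # volume balancing part
rhs = color + volume
```

The two sides match exactly, confirming that the entropy objective splits into a per-bin (submodular) part plus a cardinality (non-submodular) part.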
SLIDE 37

From entropy to color consistency

−Σ_i ( |S^i| ln |S^i| + |S̄^i| ln |S̄^i| ) + |S| ln |S| + |S̄| ln |S̄|

color consistency + volume balancing

Pixels in each color bin i prefer to be together (either inside the object or in the background): a segmentation S with better color consistency has lower energy.

SLIDE 38

From entropy to color consistency

−Σ_i ( |S^i| ln |S^i| + |S̄^i| ln |S̄^i| ) + |S| ln |S| + |S̄| ln |S̄|

  • color consistency: concave function of cardinality |S^i| (submodular)
  • volume balancing: convex function of cardinality |S| (non-submodular)

Graph-cut constructions exist for similar cardinality terms (for superpixel consistency) [Kohli et al., IJCV'09]. In many applications, the volume-balancing term can be either dropped or replaced with simple unary ballooning [Tang et al., ICCV 2013].

SLIDE 39

From entropy to color consistency

  • color consistency (also, a simpler construction): connect pixels in each color bin to a corresponding auxiliary node; L1 color separation works better in practice [Tang et al., ICCV 2013]
  • volume balancing: convex function of cardinality |S| (non-submodular); in many applications it can be dropped or replaced with simple unary ballooning [Tang et al., ICCV 2013]
  • plus boundary smoothness Σ_{pq∈N} w_pq [s_p ≠ s_q]

SLIDE 40

smoothness + color consistency = One Cut [Tang et al., ICCV'13]

Connect pixels in each color bin to corresponding auxiliary nodes.

Box segmentation with linear ballooning inside the box: guaranteed global minimum. GrabCut is sensitive to bin size.

SLIDE 41

smoothness + color consistency = One Cut [Tang et al., ICCV'13]

Connect pixels in each color bin to corresponding auxiliary nodes; guaranteed global minimum.

  • box segmentation: linear ballooning inside the box
  • segmentation from seeds: ballooning from hard constraints
  • saliency-based segmentation: linear ballooning from a saliency measure

SLIDE 42

photo-consistency + smoothness + color consistency

Color consistency can be integrated into binary stereo: connect pixels in each color bin to corresponding auxiliary nodes.

[figure: photo-consistency + smoothness vs. with added color consistency]

SLIDE 43

Approximating:

  • Convex cardinality potentials
  • Distribution consistency
  • Other high-order region terms
SLIDE 44

General Trust Region Approach (overview)

E(S) = B(S) + H(S),  B submodular (easy), H hard

Ẽ(S) = B(S) + U0(S),  with U0 a 1st-order approximation of H(S) around S0

Trust region: minimize Ẽ(S) subject to ||S − S0|| ≤ d

SLIDE 45

General Trust Region Approach (overview)

  • Constrained optimization: minimize Ẽ(S) = U0(S) + B(S) s.t. ||S − S0|| ≤ d
  • Unconstrained Lagrangian formulation: minimize L(S) = U0(S) + B(S) + λ ||S − S0||

The distance ||S − S0|| can be approximated with unary terms [Boykov, Kolmogorov, Cremers, Delong, ECCV'06]

SLIDE 46

Approximating L2 distance

The L2 distance between contour C and the current contour C0 can be measured with the signed distance map d of C0; discretely,

||S − S0|| ≈ Σ_p d_p (s_p − s⁰_p)

which gives unary potentials [Boykov et al., ECCV 2006], where d_p is the signed distance map from C0 (negative inside S0).
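A minimal 1D sketch of the unary distance approximation; the shape and its signed distances are written by hand for this toy example, not computed from the talk's method:

```python
S0 = [0, 0, 1, 1, 1, 0, 0]                   # current shape S0 (1D strip)
d = [2.0, 1.0, -1.0, -2.0, -1.0, 1.0, 2.0]   # hand-written signed distances
                                             # to the boundary (negative inside)

def approx_dist(S):
    # unary approximation sum_p d_p (s_p - s0_p): zero at S = S0, growing with
    # how far added/removed pixels lie from the old boundary
    return sum(dp * (sp - s0p) for dp, sp, s0p in zip(d, S, S0))

base = approx_dist(S0)                       # no change
grow = approx_dist([0, 1, 1, 1, 1, 0, 0])    # add one pixel just outside
shrink = approx_dist([0, 0, 0, 1, 1, 0, 0])  # remove one pixel just inside
```

Either growing or shrinking the shape increases the approximation by the |distance| of the flipped pixel, so the term acts as a symmetric penalty on deviation from S0 while staying unary in each s_p.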

SLIDE 47

Trust Region Approximation

E(S) = Σ_p −ln Pr(I_p | θ^{s_p}) + Σ_{pq∈N} w_pq [s_p ≠ s_q] + (volume constraint on |S|)

submodular terms (appearance log-likelihoods, boundary length) + non-submodular volume constraint

Submodular approximation in the trust region: linear approximation of the volume term at S0, plus the L2 distance to S0 expressed through unary terms Σ_p d_p (s_p − s⁰_p).

SLIDE 48

Volume Constraint for Vertebrae segmentation

[figure: Log-Lik. + length]
SLIDE 49

Back to entropy-based segmentation

Interactive segmentation with a box:

E(S) = −Σ_i ( |S^i| ln |S^i| + |S̄^i| ln |S̄^i| ) + |S| ln |S| + |S̄| ln |S̄| + Σ_{pq∈N} w_pq [s_p ≠ s_q]

(color consistency + volume balancing + boundary smoothness; submodular terms + one non-submodular term)

global minimum vs. approximations (local minima near the box)

SLIDE 50

Trust Region Approximation

Surprisingly, TR outperforms QPBO, DD, TRWS, BP, etc. on many high-order [CVPR'13] and/or non-submodular problems [arXiv'13].

SLIDE 51

Curvature

SLIDE 52

Pair-wise smoothness: limitations

  • discrete metrication errors (4-neighborhood vs. 8-neighborhood)
  • resolved by higher connectivity
  • or by continuous convex formulations
SLIDE 53

Pair-wise smoothness: limitations

  • boundary over-smoothing (a.k.a. shrinking bias)
SLIDE 54

Pair-wise smoothness: limitations

  • boundary over-smoothing (a.k.a. shrinking bias), e.g. in multi-view reconstruction [Vogiatzis et al. 2005]
  • curvature needs higher-order smoothness

SLIDE 55

Higher-order smoothness & curvature for discrete regularization

  • Geman and Geman 1984 (line process, simulated annealing)
  • Second-order stereo and surface reconstruction
    – Li & Zucker 2010 (loopy belief propagation)
    – Woodford et al. 2009 (fusion of proposals, QPBO)
    – Olsson et al. 2012-13 (fusion of planes, nearly submodular)
  • Curvature in segmentation:
    – Schoenemann et al. 2009 (complex, LP relaxation, many extra variables)
    – Strandmark & Kahl 2011 (complex, LP relaxation, ...)
    – El-Zehiry & Grady 2010 (grid, 3-clique, only 90-degree accurate, QPBO)
    – Shekhovtsov et al. 2012 (grid patches, approximately learned, QPBO)
    – Olsson et al. 2013 (grid patches, integral geometry, partial enumeration)
    – Nieuwenhuis et al. 2014? (grid, 3-cliques, integral geometry, trust region)

This talk: good approximation of curvature, better and faster optimization. Practical!

SLIDE 56

The rest of the talk:

  • Absolute curvature regularization on a grid
    [Olsson, Ulen, Boykov, Kolmogorov - ICCV 2013]
  • Squared curvature regularization on a grid
    [Nieuwenhuis, Toppe, Gorelick, Veksler, Boykov - arXiv 2013]

SLIDE 57

Absolute Curvature:  ∫_{∂S} |κ| ds

Motivating example: ∫_{∂S} |κ| ds = 2π for any convex shape

  • no shrinking bias
  • thin structures
SLIDE 58

Absolute Curvature:  ∫_{∂S} |κ| ds

Easy to estimate via approximating polygons (summing absolute turning angles at the vertices); polygons also work for ∫ |κ|^p ds [Bruckstein et al. 2001]
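The polygon estimate can be checked on convex shapes, where the total absolute turning is exactly 2π (the motivating example two slides back); `total_abs_turning` is an illustrative helper, not code from the talk:

```python
import math

def total_abs_turning(poly):
    # sum of |exterior (turning) angles| of a closed polygon: the polygonal
    # counterpart of the integral of |kappa| ds
    total = 0.0
    n = len(poly)
    for k in range(n):
        x0, y0 = poly[k - 1]
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        total += abs(turn)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
hexagon = [(math.cos(t * math.pi / 3), math.sin(t * math.pi / 3)) for t in range(6)]
# both are convex, so both totals equal 2*pi
```

Non-convex polygons give larger totals, which is exactly why ∫|κ| ds penalizes wiggly boundaries without the shrinking bias of length.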

SLIDE 59

Curvature on a cell complex (standard geometry)

[figure: turning angles π/2, π/4 on boundary segments of the complex]

  • Schoenemann et al. 2009
  • Strandmark & Kahl 2011

4- or 3-cliques on a cell complex, solved via LP relaxations

SLIDE 60

Curvature on a cell complex (standard geometry)

Cell-patch cliques on a complex [Olsson et al., ICCV 2013]: partial enumeration + TRWS

Zero-gap reduction to a pair-wise Constraint Satisfaction Problem:
  • new graph: patches are nodes
  • curvature is a unary potential
  • patches overlap, need consistency
  • tighter LP relaxation

[figure: overlapping patches P1, ..., P6]

SLIDE 61

Curvature on a pixel grid (integral geometry)

Representative cell-patches vs. representative pixel-patches.

[figure: patch angle contributions A, ..., H with constraints such as 2A + B = π/2, A + F + G + H = π/4, D + E + F = π/2, A + C = π/4]

SLIDE 62

Integral approach to absolute curvature on a grid

[figure: results with 2x2, 3x3, 5x5 patches; zero gap]
  • n a grid
SLIDE 63

Integral approach to absolute curvature on a grid

[figure: results with 2x2, 3x3, 5x5 patches; zero gap]

SLIDE 64

Squared Curvature with 3-cliques:  ∫_{∂S} κ² ds

SLIDE 65

3-cliques (p⁻, p, p⁺) with configurations (0,1,0) and (1,0,1) [Nieuwenhuis et al., arXiv 2013]

General intuition: more responses where curvature is higher.

SLIDE 66

Along a sampled contour 1, 2, ..., n−1, n, n+1, ..., N, the squared curvature integral ∫_C κ² ds is approximated by summing (Δθ_n)² over the step length at each sample; on the pixel grid these responses are collected from 3-cliques c_i within a 5x5 neighborhood of each pixel.

[figure: sampled contour C with angular steps Δθ and arc length r·Δθ]

SLIDE 67

[figure: zoom-in on a discrete arc of radius r = 1/κ through a 3-clique]

Thus, appropriately weighted 3-cliques estimate the squared curvature integral.

SLIDE 68

Experimental evaluation

Circle of radius r:  ∫ κ² ds = (1/r)² · 2πr = 2π/r

SLIDE 69

Experimental evaluation

[figure: estimated ∫ κ² ds for circles vs. radius r, compared to the analytic 2π/r]
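The circle formula can be reproduced with a polygon-style discrete estimate κ ≈ Δθ/Δs; the inscribed regular n-gon here is an illustrative discretization, not the talk's grid construction:

```python
import math

def sq_curvature_ngon(r, n):
    # inscribed regular n-gon: per-vertex |turn| = 2*pi/n, edge length
    # 2*r*sin(pi/n); with kappa ~ turn/edge the discrete integral is
    # sum over vertices of kappa^2 * edge = n * turn^2 / edge
    turn = 2 * math.pi / n
    edge = 2 * r * math.sin(math.pi / n)
    return n * turn ** 2 / edge

estimate = sq_curvature_ngon(2.0, 4096)   # should approach 2*pi/r = pi
```

As n grows the estimate converges to 2π/r, matching the analytic value used in the evaluation.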

SLIDE 70

The model is OK on given segments. But how do we optimize non-submodular 3-cliques (0,1,0) and (1,0,1)?

  • 1. Standard trick: convert to non-submodular pair-wise binary optimization
  • 2. Our observation: QPBO does not work (unless non-submodular regularization is very weak)

Fast Trust Region [CVPR'13, arXiv] uses local submodular approximations.

SLIDE 71

Segmentation Examples

length-based regularization

SLIDE 72

elastica [Heber,Ranftl,Pock, 2012]

Segmentation Examples

SLIDE 73

90-degree curvature [El-Zehiry&Grady, 2010]

Segmentation Examples

SLIDE 74

Segmentation Examples

our squared curvature, 7x7 neighborhood

SLIDE 75

Segmentation Examples

our squared curvature (stronger), 7x7 neighborhood

SLIDE 76

Segmentation Examples

our squared curvature (stronger), 2x2 neighborhood

SLIDE 77

Binary inpainting

[figure: length vs. squared curvature regularization]
SLIDE 78

Conclusions

  • Optimization of entropy is a useful information-theoretic interpretation of color model estimation
  • L1 color separation is an easy-to-optimize objective useful in its own right [ICCV 2013]
  • Global optimization matters: One Cut [ICCV'13]
  • General approximation techniques: trust region, auxiliary cuts, partial enumeration
    – for high-order energies [CVPR'13]
    – for non-submodular energies [arXiv'13]
    – outperforming state-of-the-art combinatorial optimization methods