Semantic 3D Modelling. Ľubor Ladický, joint work with Christian Häne, Nikolay Savinov, Jianbo Shi, Bernhard Zeisl, Marc Pollefeys. PowerPoint PPT presentation.


SLIDE 1

Semantic 3D Modelling

Ľubor Ladický

work with Christian Häne, Nikolay Savinov, Jianbo Shi,

Bernhard Zeisl, Marc Pollefeys

SLIDE 2

Schedule

  • Introduction
  • Discrete MRF Optimization using Graph Cuts
  • Classifiers for Semantic 3D Modelling
  • Higher Order MRFs with Ray Potentials
  • Discrete Formulation
  • Continuous Relaxation
SLIDE 3

Schedule

  • Introduction
  • Discrete MRF Optimization using Graph Cuts
  • Classifiers for Semantic 3D Modelling
  • Higher Order MRFs with Ray Potentials
  • Discrete Formulation
  • Continuous Relaxation
SLIDE 4

Graph-Cut (st-mincut)

[Figure: flow graph between source and sink with edge costs 9, 5, 6, 8, 4, 2, 2, 2, 5, 3, 5, 5, 3, 1, 1, 6, 2, 3]

source set, sink set, edge costs

Set formulation

SLIDE 5

Graph-Cut (st-mincut)

source set, sink set, edge costs

[Figure: the same flow graph partitioned into source set S and sink set T; cut cost = 18]

Set formulation

SLIDE 6

Graph-Cut (st-mincut)

[Figure: the flow graph with S / T partition; cut cost = 18]

Set formulation, algebraic formulation

SLIDE 7

Graph-Cut (st-mincut)

[Figure: the flow graph with S / T partition; cut cost = 18]

Set formulation, algebraic formulation, after substitution

SLIDE 8

Graph-Cut (st-mincut)

[Figure: the flow graph with S / T partition; cut cost = 18]

Algorithms: augmenting path method, push-relabel method
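The augmenting-path idea can be sketched in a few lines. Below is a minimal Edmonds-Karp max-flow, whose value equals the st-mincut cost by duality; the toy graph and its capacities are illustrative, not the ones drawn in the slide figure:

```python
from collections import deque

def max_flow(n, edges, s, t):
    # Edmonds-Karp: repeatedly find a shortest augmenting path by BFS
    # in the residual graph and push the bottleneck capacity along it.
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # sink unreachable: flow is maximal,
            return flow              # and its value equals the min-cut cost
        b, v = float("inf"), t       # bottleneck capacity along the path
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t                        # push flow, update residual capacities
        while v != s:
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b

# Toy graph (node 0 = source, node 3 = sink); capacities are illustrative.
edges = [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]
print(max_flow(4, edges, 0, 3))  # 5 = value of the st-mincut
```

The vertices on one side of the resulting minimum cut form the source set S, which is how the binary labelling is read off in the segmentation slides that follow.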

SLIDE 9

Foreground / Background Estimation

Rother et al. SIGGRAPH04
SLIDE 10

Foreground / Background Estimation

Data term: estimated using FG / BG colour models
Smoothness term: intensity-dependent smoothness

SLIDE 11

Foreground / Background Estimation

Data term Smoothness term

SLIDE 12

Foreground / Background Estimation

Data term Smoothness term Min-Cut problem

SLIDE 13

Foreground / Background Estimation

source sink
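The energy of these slides (FG/BG data term plus intensity-dependent smoothness) can be evaluated directly. A minimal sketch on a 1D strip of pixels; the unary table and the weights `lam` and `beta` are made-up illustrative values, not the paper's:

```python
import math

def energy(labels, intensities, unary, lam=2.0, beta=0.5):
    # Data term: unary[i][label], as if scored by FG / BG colour models.
    e = sum(unary[i][l] for i, l in enumerate(labels))
    # Contrast-sensitive smoothness: a label change across a strong intensity
    # edge is penalized less than one inside a flat region.
    for i in range(len(labels) - 1):
        if labels[i] != labels[i + 1]:
            diff = intensities[i] - intensities[i + 1]
            e += lam * math.exp(-beta * diff * diff)
    return e

intensities = [0.1, 0.2, 0.9, 1.0]   # a strong edge between pixels 1 and 2
unary = [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]]  # [BG cost, FG cost]
print(energy([0, 0, 1, 1], intensities, unary))  # cut aligned with the edge: cheap
print(energy([0, 1, 1, 1], intensities, unary))  # cut inside a flat region: dearer
```

Minimizing this energy over all labellings is exactly the min-cut problem of the next slides.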

SLIDE 14

Solvability using GraphCut

Submodularity

SLIDE 15

Solvability using GraphCut

Submodularity: all terms submodular (submodularity = necessary condition)

SLIDE 16

Solvability using GraphCut

Submodularity General pairwise potential

SLIDE 17

Solvability using GraphCut

Submodularity General pairwise potential

where
SLIDE 18

Solvability using GraphCut

Submodularity General pairwise potential

where the remaining coefficients could be arbitrary. Submodularity = sufficient condition
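For a binary pairwise term θ, the submodularity condition these slides use is θ(0,0) + θ(1,1) ≤ θ(0,1) + θ(1,0), and it can be checked directly. A minimal sketch with one attractive (Potts-like) and one repulsive example table:

```python
def is_submodular(theta):
    # theta maps (a, b) label pairs to costs; submodular iff
    # theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0).
    return theta[(0, 0)] + theta[(1, 1)] <= theta[(0, 1)] + theta[(1, 0)]

attractive = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 1.0, (1, 0): 1.0}  # Potts-like
repulsive = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}
print(is_submodular(attractive))  # True: representable as graph cut
print(is_submodular(repulsive))   # False: needs QPBO / relaxation (later slides)
```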
SLIDE 19

General GraphCut pipeline

Energy minimization transformed into GraphCut:

  • Each state of the original variables is encoded using binary variables
  • The encoding is designed such that the energy under it is pairwise submodular
  • The solution is obtained by solving st-mincut and inverting the encoding

Pipeline: energy → encoding → submodular E → graph cut → solution → invert encoding
SLIDE 20

Multi-label energy with linear pairwise potentials

Encoding

[Figure: layered graph between source and sink over binary variables for xi and xk; ∞ edges enforce a consistent encoding, K-weighted edges implement the smoothness term]

Data term and smoothness term

Ishikawa PAMI03
SLIDE 21

Multi-label energy with linear pairwise potentials

Encoding

[Figure: layered graph between source and sink over binary variables for xi and xk; ∞ edges enforce a consistent encoding, K-weighted edges implement the smoothness term]

Data term and smoothness term

Ishikawa PAMI03
SLIDE 22

Multi-label energy with convex pairwise potentials

[Figure: layered graph between source and sink; ∞ edges enforce the encoding, additional edges implement any convex pairwise cost]

Data term and smoothness term: any convex function

Ishikawa PAMI03
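The Ishikawa encoding referenced here can be written out. This is a hedged reconstruction from the standard formulation, since the slide's equations did not survive extraction:

```latex
% binary "level" variables per pixel i and label level l in {0..K}
y_i^l = [\,x_i \ge l\,], \qquad y_i^{l+1} \le y_i^l \quad (\text{enforced by } \infty \text{ edges})
% the multi-label energy with linear pairwise potentials
E(\mathbf{x}) = \sum_i \theta_i(x_i) + \lambda \sum_{(i,k)} |x_i - x_k|
% becomes pairwise submodular over the y variables:
E(\mathbf{y}) = \sum_i \sum_{l=0}^{K} \theta_i(l)\,\bigl(y_i^l - y_i^{l+1}\bigr)
              + \lambda \sum_{(i,k)} \sum_{l=1}^{K} \bigl|y_i^l - y_k^l\bigr|
% with y_i^0 = 1 and y_i^{K+1} = 0.  For a general convex pairwise cost g,
% the inter-column edge weights become the discrete second differences
% g(l+1) - 2 g(l) + g(l-1) \ge 0, which is where convexity is needed.
```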
SLIDE 23

Higher order minimization with GraphCut

Higher order term Pairwise term

Pipeline: energy → encoding → submodular E → graph cut → solution → invert encoding
SLIDE 24

Higher order minimization with GraphCut

Higher order term Pairwise term

Example :

SLIDE 25

Higher order minimization with GraphCut

Higher order term Pairwise term

Example :

Kolmogorov ECCV06, Ramalingam et al. DAM12
SLIDE 26

General GraphCut pipeline

What if no encoding leads to a pairwise submodular problem?

Pipeline: energy → encoding → submodular E → graph cut → solution → invert encoding
SLIDE 27

Move making algorithms

  • The original problem is decomposed into a series of subproblems solvable with graph cut
  • In each subproblem we find the optimal move from the current solution in a restricted search space

Pipeline: initial solution → propose move → encoding → submodular E → graph cut → invert encoding → update solution

Boykov et al., PAMI01
SLIDE 28

Move making algorithms

Pipeline: initial solution → propose move → encoding → submodular E → graph cut → invert encoding → update solution

α-β swap

  • Each variable taking label α or β can change its label to α or β
  • Move space defined by the transformation function

SLIDE 29

Move making algorithms

Pipeline: initial solution → propose move → encoding → submodular E → graph cut → invert encoding → update solution

α-expansion

  • Each variable may keep its old label or change to α
  • Move space defined by the transformation function

SLIDE 30

Move making algorithms

Sufficient condition for submodularity of each move:

  • α-β swap: semi-metricity
  • α-expansion: metricity

SLIDE 31

Semantic Segmentation

Data term and smoothness term

Discriminatively trained classifier

SLIDE 32

Semantic Segmentation

[Images: original image, initial solution (all grass)]

SLIDE 33

Semantic Segmentation

[Images: original image, initial solution, building expansion (grass, building)]

SLIDE 34

Semantic Segmentation

[Images: original image, initial solution, building expansion, sky expansion (grass, building, sky)]

SLIDE 35

Semantic Segmentation

[Images: original image, initial solution, building expansion, sky expansion, tree expansion (grass, building, sky, tree)]

SLIDE 36

Semantic Segmentation

[Images: original image, initial solution, building expansion, sky expansion, tree expansion, aeroplane expansion (grass, building, sky, tree, aeroplane)]
SLIDE 37

Non-submodular energy minimization

What can we do?

SLIDE 38

Non-submodular energy minimization

What can we do?

Relax!

SLIDE 39

Non-submodular energy minimization

QPBO

  • Each original variable is encoded using two binary variables xi and x̄i s.t. x̄i = 1 - xi
  • Energy transformed into a submodular one over xi and x̄i

SLIDE 40

Non-submodular energy minimization

QPBO

  • Each original variable is encoded using two binary variables xi and x̄i s.t. x̄i = 1 - xi
  • Energy transformed into a submodular one over xi and x̄i

SLIDE 41

Non-submodular energy minimization

QPBO

  • Each original variable is encoded using two binary variables xi and x̄i s.t. x̄i = 1 - xi
  • Energy transformed into a submodular one over xi and x̄i
  • Solved by dropping the constraint x̄i = 1 - xi

SLIDE 42

Non-submodular energy minimization

QPBO

  • Each original variable is encoded using two binary variables xi and x̄i s.t. x̄i = 1 - xi
  • Energy transformed into a submodular one over xi and x̄i
  • Solved by dropping the constraint x̄i = 1 - xi
  • All variables satisfying the constraint are guaranteed to be part of a globally optimal solution

SLIDE 43

Non-submodular energy minimization

QPBO

  • Each original variable is encoded using two binary variables xi and x̄i s.t. x̄i = 1 - xi
  • Energy transformed into a submodular one over xi and x̄i
  • Solved by dropping the constraint x̄i = 1 - xi
  • All variables satisfying the constraint are guaranteed to be part of a globally optimal solution
  • Remaining variables assigned by estimating iteratively per node (ICM), or by keeping old labels for move algorithms
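The ICM fallback from the last bullet can be sketched as follows: variables QPBO already labelled stay fixed, the rest are greedily re-estimated one node at a time given their neighbours. The chain energy and the `fixed` mask below are illustrative:

```python
def icm(labels, num_labels, unary, pair, lam, fixed=None, iters=10):
    # Iterated conditional modes on a 1D chain: sweep the pixels, re-labelling
    # each one to the cheapest label given its (currently fixed) neighbours.
    labels = list(labels)
    n = len(labels)
    for _ in range(iters):
        changed = False
        for i in range(n):
            if fixed and fixed[i]:
                continue  # already labelled (e.g. persistently by QPBO)
            def local(l):
                e = unary[i][l]
                if i > 0:
                    e += lam * pair(labels[i - 1], l)
                if i < n - 1:
                    e += lam * pair(l, labels[i + 1])
                return e
            best = min(range(num_labels), key=local)
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels

unary = [[0, 3], [3, 0], [1, 1]]
potts = lambda a, b: 0 if a == b else 1
print(icm([1, 1, 1], 2, unary, potts, 1.0, fixed=[False, True, True]))
```

ICM only guarantees a local optimum, which is why it is used here merely to complete the partial labelling QPBO leaves behind.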

SLIDE 44

Other Structural Properties Solvable with Graph-Cut

  • Kohli et al. 07, 08 – label consistency over large cliques (super-pixels)
  • Woodford et al. 08 – planarity constraint
  • Vicente et al. 08 – connectivity constraint
  • Woodford et al. 09 – marginal probability
  • Nowozin & Lampert 09 – connectivity constraint
  • Ladický et al. 09 – consistency over hierarchies (associative potentials)
  • Delong et al. 10 – label occurrence costs
  • Ladický et al. 10 – consistency between domains (semantic + depth)
  • Ladický et al. 10 – detectors in CRF
  • Ladický et al. 10 – co-occurrence potentials
  • Savinov et al. 15 – ray potentials (semantic 3D visibility)
SLIDE 45

Schedule

  • Introduction
  • Discrete MRF Optimization using Graph Cuts
  • Classifiers for Semantic 3D Modelling
  • Higher Order MRFs with Ray Potentials
  • Discrete Formulation
  • Continuous Relaxation
SLIDE 46

Semantic classifier

Shotton et al. ECCV06, Ladický et al. ICCV09
SLIDE 47

Data-driven Depth Estimation

  • No common structure of the scene
  • Ground plane not always visible
  • Large variation of viewpoints and of objects in the scene
  • Both things and stuff in the scene
SLIDE 48

Data-driven Depth Estimation

Desired properties :

SLIDE 49

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier

Super-pixels not necessarily planar

SLIDE 50

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier
  • 2. Translation invariant
Classifier response for a point x at depth d, computed over a window wh around the point x
SLIDE 51

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier
  • 2. Translation invariant
  • 3. Depth transforms with inverse scaling
SLIDE 52

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier
  • 2. Translation invariant
  • 3. Depth transforms with inverse scaling

Sufficient to train a binary classifier predicting a single canonical depth dC

SLIDE 53

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier
  • 2. Translation invariant
  • 3. Depth transforms with inverse scaling

Sufficient to train a binary classifier predicting a single canonical depth dC. For other depths d:

SLIDE 54

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier
  • 2. Translation invariant
  • 3. Depth transforms with inverse scaling
SLIDE 55

Data-driven Depth Estimation

Desired properties :

  • 1. Pixel-wise classifier
  • 2. Translation invariant
  • 3. Depth transforms with inverse scaling

Generalized to multiple semantic classes

semantic label
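Property 3 (depth transforms with inverse scaling) is what lets a single classifier trained at the canonical depth dC serve all depths: at depth d the window is simply rescaled by dC / d before classification. A sketch of that window extraction; the canonical depth `d_c`, window size `w_c`, and the helper name itself are illustrative assumptions, not the paper's:

```python
def patch_at_depth(image, x, y, d, d_c=5.0, w_c=16):
    # An object at depth d appears d_c / d times its size at the canonical
    # depth d_c, so we read a window of size w_c * d_c / d around (x, y) and
    # resample it to the canonical w_c x w_c grid before applying the single
    # classifier trained at d_c.  d_c and w_c are made-up values.
    w = max(2, int(round(w_c * d_c / d)))
    half = w // 2
    win = [row[x - half:x + half] for row in image[y - half:y + half]]
    h, ww = len(win), len(win[0])
    ys = [min(h - 1, int(i * h / w_c)) for i in range(w_c)]    # nearest-neighbour
    xs = [min(ww - 1, int(j * ww / w_c)) for j in range(w_c)]  # resampling
    return [[win[i][j] for j in xs] for i in ys]

img = [[(i * 131 + j * 17) % 255 for j in range(128)] for i in range(128)]
patch = patch_at_depth(img, 64, 64, d=10.0)
print(len(patch), len(patch[0]))  # always the canonical 16 x 16
```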
SLIDE 56

Training the classifier

1. Image pyramid is built
SLIDE 57

Training the classifier

1. Image pyramid is built
2. Training data randomly sampled
SLIDE 58

Training the classifier

1. Image pyramid is built
2. Training data randomly sampled
3. Samples of each class at dC used as positives
SLIDE 59

Training the classifier

1. Image pyramid is built
2. Training data randomly sampled
3. Samples of each class at dC used as positives
4. Samples of other classes or at d ≠ dC used as negatives
SLIDE 60

Training the classifier

1. Image pyramid is built
2. Training data randomly sampled
3. Samples of each class at dC used as positives
4. Samples of other classes or at d ≠ dC used as negatives
5. Multi-class classifier trained
SLIDE 61

Dense features: SIFT, LBP, Self-Similarity, Texton

Classifying the patch

SLIDE 62

Dense features: SIFT, LBP, Self-Similarity, Texton. Representation: soft BOW representations over a set of random rectangles

Classifying the patch

SLIDE 63

Dense features: SIFT, LBP, Self-Similarity, Texton. Representation: soft BOW representations over a set of random rectangles. Classifier: AdaBoost

Classifying the patch

SLIDE 64

Experiments

KITTI dataset
  • 30 training & 30 test images (1382 x 512)
  • 12 semantic labels, depth in the range 2-50 m (except sky)
  • ratio of neighbouring depths di+1 / di = 1.25
NYU2 dataset
  • 725 training & 724 test images (640 x 480)
  • 40 semantic labels, depth in the range 1-10 m
  • ratio of neighbouring depths di+1 / di = 1.25
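The depth discretization used in both setups follows directly from the fixed ratio between neighbouring depths; a small sketch computing the resulting bins:

```python
def depth_bins(d_min, d_max, ratio=1.25):
    # Discretize the working range into depths with a fixed ratio between
    # neighbouring bins, d_{i+1} / d_i = ratio, as in both experimental setups.
    bins = [d_min]
    while bins[-1] * ratio <= d_max:
        bins.append(bins[-1] * ratio)
    return bins

kitti = depth_bins(2.0, 50.0)  # 2 m .. 50 m
nyu2 = depth_bins(1.0, 10.0)   # 1 m .. 10 m
print(len(kitti), len(nyu2))   # 15 11
```

A multiplicative spacing like this keeps the relative depth resolution constant, matching the inverse-scaling property of the classifier.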
SLIDE 65

KITTI results

SLIDE 66

NYU2 results

SLIDE 67

NYU2 results

SLIDE 68

Surface Normal Estimation

Not explored much in the literature… so how to approach it?

SLIDE 69

Surface Normal Estimation

Not explored much in the literature… so how to approach it?

Pixels or Super-pixels?

SLIDE 70

Pixel-based Classifiers

Input image Feature representation

  • Context-based (context pixels or rectangles) feature representations

[Shotton06, Shotton08]

SLIDE 71

Pixel-based Classifiers

Input image Feature representation

  • Context-based (context pixels or rectangles) feature representations

[Shotton06, Shotton08]

  • Classifier typically noisy and does not follow object boundaries
SLIDE 72

Segment-based Classifiers

Input image Feature representation

  • Based on feature statistics in segments
SLIDE 73

Segment-based Classifiers

Input image Feature representation

  • Based on feature statistics in segments
  • Segments expected to be label-consistent
SLIDE 74

Segment-based Classifiers

Input image Feature representation

  • Based on feature statistics in segments
  • Segments expected to be label-consistent
  • One particular segmentation has to be chosen
SLIDE 75

Input image Independent classifiers

  • Existing optimization methods (Ladický09) designed for discrete labels

Joint Regularization

SLIDE 76
  • Existing optimization methods (Ladický09) designed for discrete labels
  • Not obvious how to generalize for continuous problems

Joint Regularization

Input image Independent classifiers

SLIDE 77
  • Existing optimization methods (Ladický09) designed for discrete labels
  • Not obvious how to generalize for continuous problems
  • Maybe we can directly learn joint classifier

Joint Regularization

Input image Independent classifiers

SLIDE 78

How to convert segment representation into pixel representation?

Joint Learning

Input image Segment representation

SLIDE 79

How to convert segment representation into pixel representation?

Joint Learning

Input image Segment representation

  • Representation of a pixel the same as of the segment it belongs to
SLIDE 80

How to convert segment representation into pixel representation?

Joint Learning

Input image Segment representation

  • Representation of a pixel the same as of the segment it belongs to
  • Equivalent to weighted segment based approach
SLIDE 81

How to convert segment representation into pixel representation?

Joint Learning

  • Representation of a pixel the same as of the segment it belongs to
  • Equivalent to weighted segment based approach
  • Concatenation to combine pixel and multiple segment representations
SLIDE 82

Joint Learning

To simplify the regression problem:

  • Normals clustered using K-means clustering
  • Each normal represented as a weighted sum of cluster centres using local coding
SLIDE 83

Joint Learning

To simplify the regression problem:

  • Normals clustered using K-means clustering
  • Each normal represented as a weighted sum of cluster centres using local coding
  • Learning formulated as a regression into local coding coordinates
SLIDE 84

Pipeline of our Method

SLIDE 85

RMRC Challenge Results

[Images with per-image error values: 40.366, 32.446, 33.636, 35.109, 35.849, 28.379, 35.429, 37.066, 38.043]
SLIDE 86

[Images with per-image error values: 37.688, 40.784, 51.897, 68.038, 33.174, 41.131, 38.873, 28.216, 32.034]

RMRC Challenge Results

SLIDE 87

Schedule

  • Introduction
  • Discrete MRF Optimization using Graph Cuts
  • Classifiers for Semantic 3D Modelling
  • Higher Order MRFs with Ray Potentials
  • Discrete Formulation
  • Continuous Relaxation
SLIDE 88

Semantic 3D Reconstruction

Input images Semantic 3D model Depth estimates Semantic estimates

SLIDE 89

Semantic 3D Reconstruction

Pixel predictions = predictions about the first occupied voxel along the ray:

  • predictions of the semantic label of the first occupied voxel
  • predictions of the depth of the first occupied voxel
SLIDE 90

Semantic 3D Reconstruction

Volumetric formulation

SLIDE 91

Semantic 3D Reconstruction

Volumetric formulation

Ray potentials Pairwise regularizer

SLIDE 92

Semantic 3D Reconstruction

Volumetric formulation

Ray potentials Pairwise regularizer

Ray potentials typically approximated by unary potentials

  • voxels behind the depth estimate should be occupied
  • voxels just in front of the depth estimate should be free space

(Zach 3DPVT08, Häne CVPR13, Kundu ECCV14, ...)

SLIDE 93

Semantic 3D Reconstruction

Volumetric formulation

Ray potentials Pairwise regularizer

We try to solve the right problem!

SLIDE 94

Semantic 3D Reconstruction

Volumetric formulation

Ray potentials Pairwise regularizer

Cost based on the first occupied voxel along the ray

depth label

freespace
SLIDE 95

Two-label problem

Discrete formulation using QPBO relaxation

[Figure: ray variables x0 … x6 and their complements x̄0 … x̄6]
SLIDE 96

Two-label problem

Discrete formulation using QPBO relaxation

[Figure: ray variables x0 … x6 and their complements x̄0 … x̄6]

Our goal is to find :

SLIDE 97

Two-label problem

Discrete formulation using QPBO relaxation

[Figure: ray variables x0 … x6 and their complements x̄0 … x̄6]

Our goal is to find :

such that it is:

1) A pairwise function
2) The number of edges grows linearly with the length of a ray
3) Symmetric, to inherit QPBO properties

SLIDE 98

Two-label problem

To find it we do these steps:

1) Polynomial representation of the ray potential
2) Transformation into a submodular function over x and x̄
3) Pairwise construction using auxiliary variables z
4) Merging variables (Ramalingam12) for linear complexity
5) Symmetrization of the graph

SLIDE 99

Polynomial representation of the ray potential

Two-label ray potential takes the form: where xi = 0 for occupied voxel xi = 1 for free-space

SLIDE 100

Polynomial representation of the ray potential

Two-label ray potential takes the form: where xi = 0 for an occupied voxel, xi = 1 for free space.
We want to transform the potential into:

SLIDE 101

Polynomial representation of the ray potential

Two-label ray potential takes the form: where xi = 0 for an occupied voxel, xi = 1 for free space.
We want to transform the potential into:
Plugging it in: thus
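The equations on this and the two previous slides were lost in extraction; here is a plausible reconstruction of the first-occupied-voxel cost and its prefix-product (polynomial) form, under the slide's convention xi = 1 for free space. The symbols c_i and a_i are assumed notation, not taken from the source:

```latex
% cost of ray r: c_i if voxel i is the first occupied voxel, c_{N+1} if none is
\psi_r(\mathbf{x}) \;=\; \sum_{i=0}^{N} c_i\,(1 - x_i)\prod_{j<i} x_j \;+\; c_{N+1}\prod_{j=0}^{N} x_j
% target polynomial form over prefix products
\psi_r(\mathbf{x}) \;=\; a_{\emptyset} \;+\; \sum_{i=0}^{N} a_i \prod_{j \le i} x_j
% plugging in: each term is c_i\,(K_{i-1} - K_i) with K_i = \prod_{j \le i} x_j
% and K_{-1} = 1, thus
a_{\emptyset} = c_0, \qquad a_i = c_{i+1} - c_i \quad (0 \le i \le N)
```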

SLIDE 102

Transformation into a submodular function

  • submodular for
SLIDE 103

Transformation into a submodular function

  • submodular for

For :

SLIDE 104

Transformation into a submodular function

  • submodular for

For : Starting from the last term, we can iteratively transform:

SLIDE 105

Pairwise graph construction

Standard graph constructions (Freedman CVPR05) for negative products:

[Figure: graph over the ray variables and auxiliary variable z6, with edge weights a6]
SLIDE 106

Pairwise graph construction

Standard graph constructions (Freedman CVPR05) for negative products:

[Figure: graph over the ray variables and auxiliary variable z6, with edge weights b6]
SLIDE 107

Pairwise graph construction

Standard graph constructions (Freedman CVPR05) for negative products lead to a quadratic growth of the number of edges!

SLIDE 108

Pairwise graph construction

Non-standard graph constructions for negative products:

[Figure: non-standard construction with auxiliary variables z0 … z6 and edge weights a6]

Optimal z:

SLIDE 109

Pairwise graph construction

Non-standard graph constructions for negative products:

[Figure: non-standard construction with auxiliary variables z’0 … z’6 and edge weights b6]

Optimal z’:
SLIDE 110

Merging theorem

If (optimal)

(Ramalingam12)

SLIDE 111

Merging theorem

If (optimal) then

(Ramalingam12)

SLIDE 112

Pairwise graph construction

Non-standard graph constructions for negative products. Optimal z and z’:

SLIDE 113

Pairwise graph construction

SLIDE 114

Pairwise graph construction

[Figure: full symmetric pairwise graph over the ray variables, their complements, and auxiliary variables z, z’, with edge weights a0 … a6, b0 … b6, f0 … f6]
SLIDE 115

Symmetrization of the graph

SLIDE 116

Multi-label problem

  • Standard alpha-expansion
  • Multi-label ray potential projects into 2-label ray potential
  • Variables not labelled by QPBO labelled using ICM
SLIDE 117

Implementation details

Semantic cost and depth cost, for the top n matches:

  • Semantic classifier [Ladický ICCV09]
  • Multi-view stereo depth matches using zero-mean NCC
SLIDE 118

Results

Input Depth Semantics 3D model

SLIDE 119

Results

Input Depth Semantics 3D model

SLIDE 120

Results

SLIDE 121

Results

SLIDE 122

Conclusions

  • Volumetric optimization over rays is feasible
  • Solvable using QPBO relaxation and suitable graph
  • Results do not suffer from artifacts of ray approximations
  • objects are not fattened
  • holes are not closed
SLIDE 123

Continuous Formulation

Continuous approach possible?

SLIDE 124

Continuous Formulation

SLIDE 125

Continuous Formulation

The last term can be dropped by:

SLIDE 126

Continuous Formulation

Introducing visibility variables:

SLIDE 127

Continuous Formulation

Introducing visibility variables: where

SLIDE 128

Continuous Formulation

Introducing visibility variables: where convex for ci^l ≤ 0
SLIDE 129

Continuous Formulation

Introducing visibility variables: convex for ci^l ≤ 0
SLIDE 130

Continuous Formulation

Can we make ci^l ≤ 0?

SLIDE 131

Continuous Formulation

Can we make ci^l ≤ 0?

Yes!

SLIDE 132

Continuous Formulation

Can we make ci^l ≤ 0?

First, we notice:

SLIDE 133

Continuous Formulation

Can we make ci^l ≤ 0?

First, we notice: the cost function does not change by adding:

SLIDE 134

Continuous Formulation

Can we make ci^l ≤ 0?

First, we notice: the cost function does not change by adding:

SLIDE 135

Continuous Formulation

Can we make ci^l ≤ 0?

First, we notice: the cost function does not change by adding: ci^l ≥ 0
SLIDE 136

Integer Formulation

SLIDE 137

Convex relaxation

SLIDE 138

Convex relaxation

Will it work?

SLIDE 139

Convex relaxation

Will it work? Unfortunately not.

SLIDE 140

Convex relaxation

SLIDE 141

Convex relaxation

Desired solution

SLIDE 142

Convex relaxation

Desired solution Global optimum

SLIDE 143

Convex relaxation

Problem solved if the cost is taken where there is a visibility drop:

SLIDE 144

Non-convex Formulation

SLIDE 145

Non-convex Formulation

Solved using a majorize-minimize strategy:

SLIDE 146

Non-convex Formulation

Solved using majorize-minimize strategy: We replace constraint by

SLIDE 147

Non-convex Formulation

Solved using majorize-minimize strategy: We replace constraint by where

SLIDE 148

Results on Middlebury dataset

SLIDE 149

Results

SLIDE 150

Results on Thin Structures

[Figure columns: Data, TV-Flux (high reg), TV-Flux (medium reg), TV-Flux (low reg), Our method]

SLIDE 151

Multi-class results

[Figure columns: Input data, Häne et al. CVPR13, Discrete result, Continuous result]

SLIDE 152

Questions?