
SLIDE 1

Department of Systems and Computational Biology, Albert Einstein College of Medicine

An ideal observer model for grouping and contour integration in natural images

Jonathan Vacher

With: Ruben Coen-Cagli (Albert Einstein College of Medicine, New York) and Pascal Mamassian (LSP, École Normale Supérieure, Paris).

ECVP 29/08/2019

jonathan.vacher@einstein.yu.edu https://jonathanvacher.github.io 1 /13

SLIDE 2

Image Segmentation vs Visual Segmentation

Great progress with deep learning:
◮ mostly supervised learning (∼ top-down approach)
◮ smart architectures, but different from biological vision
◮ performance oriented
◮ works as a black box

Visual segmentation is more involved!
◮ variable but consistent across humans
◮ top-down + bottom-up processing

Our goal is to craft an open-box model with all the ingredients of vision!

SLIDE 5

Grouping and contour integration: a basis for visual segmentation?

Contour perception (Field et al. 1993)
Texture perception (Landy et al. 2001)

Artificial stimuli! Tractable models and well-controlled experiments... but how to generalize to natural images?

SLIDE 11

Toward an ideal observer model for visual segmentation

An ideal observer for visual segmentation of natural images!
◮ To guide future model-driven psychophysical experiments (ongoing work, not presented here)

Several constraints:
◮ Image statistics
◮ Cortical features
◮ Vision psychophysics

SLIDE 18

Representation and non-Gaussian statistics of natural images

How are images represented? ⇒ decomposition in a wavelet basis (receptive fields) (Bell et al. 1997; Olshausen et al. 1996)
◮ X = (X_1, . . . , X_n)^T = (⟨w_1, I⟩, . . . , ⟨w_n, I⟩)^T

What are the coefficient statistics? Non-Gaussian! (Wainwright et al. 2000)

Definition (Gaussian Scale Mixture): X = Z G, where G ∼ N(0, Σ) is a Gaussian vector of visual features and Z ∼ L(ν) is the contrast shared between features.

[Figure: density of the mixer Z (a random variable); density of X1 given Z = z for increasing z; histogram of x1]

◮ Heavy-tailed distributions . . .
◮ and non-linear dependencies!
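The GSM definition above can be simulated directly. A minimal NumPy sketch: since the talk leaves the mixer distribution L(ν) abstract, a square-root-gamma mixer is assumed here purely for illustration; the heavy tails show up as positive excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Gaussian feature vector G ~ N(0, Sigma); here 2-D with correlation 0.5
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
G = rng.multivariate_normal(np.zeros(2), Sigma, size=n)

# Scalar mixer Z shared across features; a square-root-gamma mixer is
# one common choice (an assumption, not specified in the talk)
Z = np.sqrt(rng.gamma(shape=2.0, scale=1.0, size=n))
X = Z[:, None] * G  # GSM sample: X = Z G

def excess_kurtosis(v):
    """Sample excess kurtosis; zero for a Gaussian, positive for heavy tails."""
    v = v - v.mean()
    return (v**4).mean() / (v**2).mean() ** 2 - 3.0

print(excess_kurtosis(X[:, 0]))  # positive for the GSM marginal
print(excess_kurtosis(G[:, 0]))  # near zero for the Gaussian
```

Multiplying one shared Z into all coordinates is also what produces the non-linear dependencies: X1 and X2 are uncorrelated when Σ is diagonal, yet their magnitudes co-vary through Z.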

SLIDE 25

Gaussian Scale Mixture explains surround modulation in V1

Taken from Coen-Cagli, Dayan, et al. 2012

GSM ⇒ normalization: X = (X^(c), X^(s)), G = (G^(c), G^(s)),

G^(c) ∝ X^(c) / √(ν + Σ_k w_k X_k²)

Interpretation:
◮ G: vector of neural responses (Coen-Cagli and Schwartz 2013; Orbán et al. 2016)
◮ Z: normalization (canonical computation across the brain, Carandini et al. 2012)
◮ A normative model: vision is adapted to environmental statistics!
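The normalization rule reads as a few lines of NumPy. In this sketch the pooling weights w and the constant ν are free parameters chosen arbitrarily, not values fitted in the cited papers:

```python
import numpy as np

def normalize(X_center, X_all, w, nu):
    """Divisive normalization implied by the GSM: the inferred Gaussian
    feature is the measured coefficient divided by an estimate of the
    shared contrast Z (a sketch; w and nu are free parameters)."""
    z_hat = np.sqrt(nu + np.sum(w * X_all**2, axis=-1, keepdims=True))
    return X_center / z_hat

# toy example: one center coefficient pooled with 8 surround coefficients
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 9))
G_c = normalize(X[:, :1], X, w=np.ones(9) / 9, nu=0.1)
print(G_c.ravel())
```

Note the saturating behavior: scaling all coefficients by a large factor barely changes the output, which is exactly the contrast invariance that normalization is meant to provide.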

SLIDE 30

Image statistics and V1 physiology suggest Mixtures of GSMs

Key question: how are neurons pooled together (i.e. normalized together)?

Using a flexible pooling of neurons:
◮ qualitatively different image patches (Schwartz et al. 2006)
◮ better explanation of some neurons' activity (Coen-Cagli, Kohn, et al. 2015)

Center+Surround or Center/Surround pooling:
◮ Homogeneous: pooled together ⇒ strong surround suppression; a basis for segmentation?
◮ Heterogeneous: kept separate ⇒ weak surround suppression; a basis for contour detection?

SLIDE 36

Natural image statistics suggest Mixtures of GSMs

◮ Using a fixed pooling of neurons:
Normalization statistics vary across an image: Z ∼ L(ν)
Covariances vary across an image: G ∼ N(0, Σ)

Natural images are not stationary!

[Figure: density of Z; covariance ellipses of (G1, G2) at different image locations]
Two reasons for a Mixture of GSMs:
◮ Natural images are non-stationary (statistics vary across space)
◮ Flexible pooling seems necessary (for image statistics and physiology)
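A mixture of GSMs assigns each feature vector a posterior probability over components. As a closed-form stand-in for generic GSM components, this sketch uses the multivariate Student-t (the GSM whose squared mixer is inverse-gamma); the two covariances and the degrees of freedom are illustrative assumptions, not fitted values:

```python
import numpy as np
from scipy.stats import multivariate_t

# Two GSM-like components: correlated ("homogeneous-like") vs
# independent ("heterogeneous-like") features (illustrative values)
comps = [
    dict(Sigma=np.array([[1.0, 0.8], [0.8, 1.0]]), df=4.0),
    dict(Sigma=np.array([[1.0, 0.0], [0.0, 1.0]]), df=4.0),
]
priors = np.array([0.5, 0.5])

def responsibilities(X):
    """Posterior probability of each mixture component for rows of X."""
    logp = np.stack(
        [multivariate_t.logpdf(X, loc=np.zeros(2), shape=c["Sigma"], df=c["df"])
         for c in comps],
        axis=-1,
    ) + np.log(priors)
    logp -= logp.max(axis=-1, keepdims=True)  # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum(axis=-1, keepdims=True)

# aligned features favor the correlated component; opposed features the other
X = np.array([[1.0, 1.0], [1.0, -1.0]])
print(responsibilities(X))
```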

SLIDE 43

Naive Mixture of Gaussian Scale Mixtures for image segmentation

Using wavelet feature vectors at a single pixel location we obtain: [Figure: homogeneous and heterogeneous component maps]

Decomposing the components:
◮ homogeneous: smoothing (∼ proximity grouping); see our preprints: a texture-based segmentation algorithm (Vacher, Mamassian, et al. 2019) and an extension using hierarchical features (Vacher and Coen-Cagli 2019). Can achieve state-of-the-art performance.
◮ heterogeneous: forcing the covariance structure (∼ flexible pooling)
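The smoothing step for the homogeneous component can be caricatured as spatially averaging the per-pixel component probabilities. This box-filter version is a crude stand-in for the proximity-grouping prior used in the cited preprints (an assumption, not their algorithm):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_assignments(resp, size=5):
    """Spatially smooth per-pixel component probabilities resp (H, W, K),
    then renormalize so each pixel's probabilities still sum to one."""
    sm = np.stack(
        [uniform_filter(resp[..., k], size=size) for k in range(resp.shape[-1])],
        axis=-1,
    )
    return sm / sm.sum(axis=-1, keepdims=True)
```

An isolated pixel assigned to a different component than its neighbors gets pulled back toward the neighborhood's label, which is the grouping-by-proximity effect in its simplest form.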

SLIDE 48

Gaussian Scale Mixture: grouping and contour integration (heterogeneous component!)

Edge co-occurrence (Geisler et al. 2001)

◮ Train a GSM on many natural images using center-surround feature vectors
◮ The covariance contains the association field structure!
◮ However, it is not strong enough to distinguish contour from non-contour ⇒ enforce the association field
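Training a GSM amounts to estimating Σ while accounting for the shared contrast Z. A sketch of one simple fixed-point scheme (not necessarily the fit used in the talk): infer a per-sample contrast under the current Σ, then re-estimate Σ from the contrast-normalized samples.

```python
import numpy as np

def fit_gsm_covariance(X, n_iter=20):
    """Estimate the GSM covariance Sigma (up to scale) from feature
    vectors X, one per row. Each iteration treats the Mahalanobis norm
    as the sample's squared contrast and reweights accordingly."""
    n, d = X.shape
    Sigma = np.cov(X.T) + 1e-6 * np.eye(d)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        # squared contrast per sample (small epsilon guards x = 0)
        z2 = np.einsum("ij,jk,ik->i", X, inv, X) / d + 1e-12
        Sigma = (X.T * (1.0 / z2)) @ X / n
    return Sigma
```

Because Z rescales every sample, only the shape of Σ (its correlations) is identifiable, and it is that correlation structure which carries the association field.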

SLIDE 56

Contour component vs “garbage” component

◮ Forcing the association field structure ⇒ block-diagonal covariance
◮ Equivalent to specifying a linear subspace, i.e. a basis ⇒ Principal Components (PCA)

Principal components of the block-diagonal covariance trained on human-labeled edges only: 0°, 45°, 90°, 135°

◮ In practice, it is better to project the feature vectors onto this subspace before training.
◮ Similar to the template matching framework (Geisler 2018; Sebastian et al. 2017)
◮ Also coherent with the long-edge receptive fields expected in higher visual cortex (V2, V4) (Hosoya et al. 2015; Liu et al. 2016)
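The projection step above is plain PCA followed by a change of basis; a self-contained sketch (variable names are illustrative):

```python
import numpy as np

def pca_subspace(X, k):
    """Leading k principal directions of the feature vectors X (rows);
    in the talk, these directions encode the association field."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]        # (d, k) orthonormal basis

def project_features(X, B):
    """Project feature vectors onto the subspace before training the GSM."""
    return (X - X.mean(axis=0)) @ B    # (n, k) reduced features
```

Training the contour component on the projected features is what makes it comparable to a template-matching observer: everything outside the subspace is discarded before the likelihoods are computed.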

SLIDE 63

Contour component vs “garbage” component: results

Encouraging results on different types of images:

SLIDE 67

Conclusion

◮ A probabilistic model that accounts for image statistics, physiology, and psychophysics
◮ Need to improve the contour-based model and quantify its performance
◮ Next step: compare model predictions to human segmentation maps (see you next year!)

Thanks to Pascal Mamassian and Ruben Coen-Cagli
