Scene semantics from long-term observation of people. Jacob Menashe. PowerPoint presentation.


SLIDE 1

Scene semantics from long-term observation of people

Jacob Menashe, October 5, 2012

SLIDE 4

Introduction

◮ Function over form.
◮ Form can be unique; function can be descriptive.
◮ Learn semantics through observation.

SLIDE 6

Introduction
Background
  Motivation
  Related Work
  Overview
Approach
Learning Through Video
Experiments and Results
Discussion and Conclusion

SLIDE 13

Why semantics?

Semantics are great for...

◮ Abnormal event detection.
◮ Event prediction.
◮ Security and surveillance.
◮ Achieving semantic objectives.
  ◮ Robotics: performing tasks.
  ◮ Database search: offering suggestions.

SLIDE 19

Related Work

◮ Semantic labeling on outdoor scenes: Kohli and Torr [2008], Shotton et al. [2006].
◮ Action recognition on still images: Gupta et al. [2009], Delaitre et al. [2011].
◮ Object localization on still images: Gupta et al. [2009], Desai et al. [2010], Stark et al. [2008].
◮ Pose estimation on still images: Yao and Fei-Fei [2010], Yao et al. [2011].
◮ Coarse functional descriptions for surveillance: Peursum et al. [2005], Turek et al. [2010], Wang et al. [2006].
◮ Functions or affordances from 3D reconstructions: Grabner et al. [2011], Gupta et al. [2011], Gibson [1979].

SLIDE 20

Overview

* Image taken from Delaitre et al. [2012]

SLIDE 26

Introduction
Background
Approach
  Pose Detection
  Relative Object Location
  Object Appearance Model
Learning Through Video
Experiments and Results
Discussion and Conclusion

SLIDE 31

Pose Detection

◮ Pose detection begins with the person detector from Yang and Ramanan [2011].
◮ 3 models, 3 detectors, merged into 1.
  ◮ Standing
  ◮ Sitting
  ◮ Reaching
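The slide does not spell out how the three per-pose detectors are merged into one output. A common way is to pool all detections and greedily suppress lower-scoring overlaps; the sketch below is illustrative only, and `Detection`, `iou`, and `merge_detections` are hypothetical names, not the paper's code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple       # (x1, y1, x2, y2)
    score: float     # detector confidence s_d
    pose: str        # "standing", "sitting", or "reaching"

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_detections(dets, iou_thresh=0.5):
    """Pool detections from all three pose models; wherever two models
    fire on the same window, keep only the highest-scoring one."""
    kept = []
    for d in sorted(dets, key=lambda d: d.score, reverse=True):
        if all(iou(d.box, k.box) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

With this merge rule, a standing and a sitting detection on the same person collapse to the single higher-scoring hypothesis.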

SLIDE 32

Pose Model

SLIDE 38

Relative Object Location

◮ Joint-region overlaps are determined.
◮ Overlaps are aggregated.
◮ Histograms are weighted by pose likelihood.

* Image taken from Delaitre et al. [2012]

SLIDE 43

Pose Histogram

h^P_{k,j,c}(R) = \sum_{d \in D} \frac{I(B_{j,c}, R)}{1 + \exp(-3 s_d)} \, q^d_k

◮ D is the set of detections.
◮ s_d is the score of detection d.
◮ q^d_k is the pose assignment coefficient for pose k.
◮ I(B_{j,c}, R) is the intersection of joint box B_{j,c} with region R.
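Read this way, the histogram sums each detection's joint-box overlap with the region, squashing the detection score s_d through a sigmoid and weighting by the soft pose assignment q^d_k. A minimal sketch, assuming a dict-based detection layout that is not from the paper:

```python
import math

def intersection_area(a, b):
    """Area of overlap between boxes a and b, each (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return ix * iy

def pose_histogram(R, detections, j, c, k):
    """h^P_{k,j,c}(R): accumulate, over all detections d, the overlap of
    joint box B_{j,c} with region R, weighted by a sigmoid of the
    detection score s_d and by the pose-assignment coefficient q^d_k.

    Each detection is a dict with (illustrative layout):
      "boxes": {(j, c): (x1, y1, x2, y2)}   joint-cell boxes B_{j,c}
      "score": s_d
      "q":     {k: q^d_k}                   soft pose-cluster assignment
    """
    total = 0.0
    for d in detections:
        overlap = intersection_area(d["boxes"][(j, c)], R)
        total += overlap / (1.0 + math.exp(-3.0 * d["score"])) * d["q"][k]
    return total
```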

SLIDE 44

Object Appearance Model

Object appearances are modeled with a bag-of-words representation.

SLIDE 48

Appearance Histogram

h^A(R) = \sum_{k=1}^{S} \sum_{f \in F_k} s_k^2 \, I(B_f, R) \, q_f

◮ F_k is the set of SIFT features at scale k.
◮ s_k is the window size.
◮ I(B_f, R) is the region-box intersection.
◮ q_f is the soft bag-of-words assignment.
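The appearance histogram can be accumulated the same way: for each scale k and each SIFT feature f at that scale, add s_k^2 times the feature-box/region overlap times the soft bag-of-words weight q_f. A sketch under an assumed data layout (`appearance_histogram` and its inputs are illustrative):

```python
def intersection_area(a, b):
    """Area of overlap between boxes a and b, each (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return ix * iy

def appearance_histogram(R, features_by_scale, window_sizes):
    """h^A(R) = sum over scales k and features f in F_k of
    s_k^2 * I(B_f, R) * q_f.

    features_by_scale[k]: list of (B_f, q_f) pairs at scale k, where B_f
    is the feature's box and q_f its soft bag-of-words assignment.
    window_sizes[k]: the window size s_k.  (Layout is an assumption.)
    """
    h = 0.0
    for k, feats in enumerate(features_by_scale):
        sk2 = window_sizes[k] ** 2
        for box, qf in feats:
            h += sk2 * intersection_area(box, R) * qf
    return h
```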

SLIDE 52

Location Model

Finally, we model location data.

◮ Discretize the video frame into cells.
◮ h^L_i(R) is the proportion of pixels in cell i falling into region R.
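A minimal sketch of the location histogram, assuming the region R is given as a boolean pixel mask; `location_histogram` and its grid layout are illustrative, not the paper's code.

```python
import numpy as np

def location_histogram(region_mask, n_rows, n_cols):
    """h^L(R): split the frame into an n_rows x n_cols grid of cells and
    return, for each cell i, the fraction of its pixels lying inside
    region R (given as a boolean mask over the whole frame)."""
    H, W = region_mask.shape
    h = np.zeros(n_rows * n_cols)
    for r in range(n_rows):
        for c in range(n_cols):
            cell = region_mask[r * H // n_rows:(r + 1) * H // n_rows,
                               c * W // n_cols:(c + 1) * W // n_cols]
            h[r * n_cols + c] = cell.mean()   # proportion of pixels in R
    return h
```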

SLIDE 53

Introduction
Background
Approach
Learning Through Video
  Candidate Object Detection
  Learning Object Model
  Inferring Probable Pose
Experiments and Results
Discussion and Conclusion

SLIDE 56

Candidate Object Detection

◮ Video frames are over-segmented into superpixels.
◮ A "background" frame is defined.
◮ Repeat to reduce noise.
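As a rough pixel-level stand-in for the superpixel pipeline, the "background" frame can be estimated as the per-pixel median over many frames, with deviations flagged as candidate objects; the median estimator and threshold here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def candidate_object_mask(frames, thresh=25):
    """Sketch of candidate object detection over a stationary camera:
    take the per-pixel median across many frames as the static
    "background" frame, then flag pixels that differ from it.
    Re-estimating on new frames (repeating) suppresses noise.

    frames: array of shape (T, H, W) grayscale frames."""
    background = np.median(frames, axis=0)
    # A pixel is a candidate wherever it deviates from the background.
    return np.abs(frames - background) > thresh
```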

SLIDE 60

Learning Object Model

◮ An SVM is trained for each object class:
  ◮ Interactive: Bed, Sofa/Armchair, Coffee Table, Chair, Table, Wardrobe/Cupboard, Christmas Tree, Other
  ◮ Background: Wall, Ceiling, Floor
◮ Each classifier is binary.
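The per-class binary classifiers can be sketched as one-vs-rest linear SVMs. The subgradient hinge-loss trainer below is a minimal stand-in for whatever solver the authors used; all names and hyperparameters are illustrative.

```python
import numpy as np

def train_binary_svm(X, y, lr=0.05, lam=0.01, epochs=300):
    """Train one binary linear SVM by subgradient descent on the
    regularized hinge loss.  X: (n, d) feature histograms; y: labels
    in {-1, +1}.  Returns the weight vector and bias."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                      # point inside the margin
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w               # regularization only
    return w, b

def train_one_vs_rest(X, labels, classes):
    """One binary classifier per object class (e.g. "Bed", "Wall", ...):
    the class is +1, everything else is -1."""
    return {c: train_binary_svm(X, np.where(labels == c, 1, -1))
            for c in classes}
```

Classification then scores a region with every per-class model and reads off the signs (or the argmax of the scores).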

SLIDE 67

Inferring Probable Pose

◮ Objective: Choose a likely pose for a given area.
◮ Choose a pose cluster to maximize:

\hat{k} = \arg\max_k \sum_{j=1}^{J} \sum_{c=1}^{9} \sum_{\text{pixels } i \in B^k_{j,c}} w_{y_i}(k, j, c)

◮ k is the pose.
◮ j is the joint.
◮ c is the joint cell.
◮ B^k_{j,c} is the bounding box.
◮ w_{y_i}(k, j, c) is the learned SVM weight for (k, j, c) in \tilde{h}^P(R).
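The maximization can be sketched directly: for each candidate pose k, sum the learned weight w_{y_i}(k, j, c) over every pixel i (with semantic label y_i) inside each joint-cell box B^k_{j,c}, then take the argmax over k. The data layout below is assumed for illustration.

```python
import numpy as np

def infer_pose(label_map, boxes, w):
    """Pick the pose cluster k maximizing
        sum_j sum_c sum_{pixels i in B^k_{j,c}} w[y_i, k, j, c],
    where y_i is the semantic label of pixel i.

    label_map:      (H, W) int array of per-pixel semantic labels y_i
    boxes[k][j][c]: (r1, c1, r2, c2) bounding box B^k_{j,c}
    w:              array indexed [label, k, j, c] of learned SVM weights
    (This layout is an assumption, not the paper's code.)"""
    scores = []
    for k, joints in enumerate(boxes):
        s = 0.0
        for j, cells in enumerate(joints):
            for c, (r1, c1, r2, c2) in enumerate(cells):
                labels = label_map[r1:r2, c1:c2]     # pixels in B^k_{j,c}
                s += w[labels, k, j, c].sum()        # sum of w_{y_i}(k, j, c)
        scores.append(s)
    return int(np.argmax(scores))
```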


SLIDE 69

Introduction
Background
Approach
Learning Through Video
Experiments and Results
  Annotated Video Datasets
  Semantic Labeling
  Functional Surface Estimation
  Pose-Region Relationships
  Pose Prediction
Discussion and Conclusion

SLIDE 73

Annotated Video Datasets

◮ ~150 time-lapse videos of indoor environments
◮ Stationary cameras
◮ Manual annotation of single frames
◮ http://www.youtube.com/watch?v=17HXRdVzsrM

SLIDE 76

Semantic Labeling

Labelings are evaluated with AP score.

◮ Measured against two competing methods.
◮ (A + P) and (A + L + P) outperform in all cases except bed detection.

                    DPM [1]  Alternate [2]  (A + L)  (P)   (A + P)  (A + L + P)
Wall                  --         75           76      76     82        81
Ceiling               --         47           53      52     69        69
Floor                 --         59           64      65     76        76
Bed                   31         12           14      21     27        26
Sofa/Armchair         26         26           34      32     44        43
Coffee Table          11         11           11      12     17        17
Chair                 9.5        6.3          8.3     5.8    11        12
Table                 15         18           17      16     22        22
Wardrobe/Cupboard     27         27           28      22     36        36
Christmas Tree        50         55           72      20     76        77
Other Object          12         11           7.9     13     16        16
Average               23         31           35      30     43        43

[1] Felzenszwalb et al. [2010]  [2] Hedau et al. [2009]
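The AP score used throughout ranks candidates by classifier score and averages the precision at each rank where a correct item appears. A minimal sketch of the standard metric:

```python
def average_precision(scores, relevant):
    """Average precision: sort items by descending score, then average
    precision@rank over the ranks holding a relevant (correct) item.

    scores:   per-item classifier scores
    relevant: per-item 0/1 ground-truth flags"""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if relevant[i]:
            hits += 1
            total += hits / rank     # precision at this rank
    return total / max(1, sum(relevant))
```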

SLIDE 77

Semantic Labeling Output

[Figure: example labelings. Panels: Background, Ground Truth, (A + L + P), (P), (A + L).]

SLIDE 82

Functional Surface Estimation

◮ Measured with AP on functional labels:
  ◮ Walkable: 76%
  ◮ Sittable: 25%
  ◮ Reachable: 44%
◮ Average gain of 13% above the baseline competitor, Fouhey et al. [2012].

SLIDE 83

Pose-Region Relationships

SLIDE 89

Pose Prediction

SLIDE 97

Introduction
Background
Approach
Learning Through Video
Experiments and Results
Discussion and Conclusion
  Extensions
  Criticisms
  Conclusion

SLIDE 99

Extensions

◮ Using semantics as probabilistic information
◮ Learning new objects from observation

SLIDE 103

Criticisms

◮ Lots of frames to learn a scene
◮ Weak precision rates
◮ Manual annotations required

SLIDE 106

Conclusion

◮ Use observations to learn semantics.
◮ Classify by semantic value.
◮ General enhancement to common detection systems.

SLIDE 107

References I

V. Delaitre, J. Sivic, and I. Laptev. Learning person-object interactions for action recognition in still images. In Advances in Neural Information Processing Systems, 2011.

V. Delaitre, D. Fouhey, I. Laptev, J. Sivic, A. Gupta, and A. Efros. Scene semantics from long-term observation of people. In Proc. 12th European Conference on Computer Vision, 2012.

Chaitanya Desai, Deva Ramanan, and Charless Fowlkes. Discriminative models for static human-object interactions. In Workshop on Structured Models in Computer Vision, 2010.

Pedro F. Felzenszwalb, Ross B. Girshick, David A. McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell., 32(9):1627–1645, 2010.

SLIDE 108

References II

David F. Fouhey, Vincent Delaitre, Abhinav Gupta, Alexei A. Efros, Ivan Laptev, and Josef Sivic. People watching: Human actions as a cue for single-view geometry. In Proc. 12th European Conference on Computer Vision, 2012.

James Jerome Gibson. The Ecological Approach to Visual Perception. Houghton Mifflin, 1979.

Helmut Grabner, Juergen Gall, and Luc J. Van Gool. What makes a chair a chair? In CVPR, pages 1529–1536. IEEE, 2011. URL http://dblp.uni-trier.de/db/conf/cvpr/cvpr2011.html#GrabnerGG11.

A. Gupta, S. Satkin, A. A. Efros, and M. Hebert. From 3D scene geometry to human workspace. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '11, pages 1961–1968, Washington, DC, USA, 2011. IEEE Computer Society. ISBN 978-1-4577-0394-2. doi: 10.1109/CVPR.2011.5995448. URL http://dx.doi.org/10.1109/CVPR.2011.5995448.

SLIDE 109

References III

Abhinav Gupta, Aniruddha Kembhavi, and Larry S. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31:1775–1789, 2009. ISSN 0162-8828. doi: 10.1109/TPAMI.2009.83.

Varsha Hedau, Derek Hoiem, and David Forsyth. Recovering the spatial layout of cluttered rooms. 2009.

Pushmeet Kohli and Philip H. S. Torr. Robust higher order potentials for enforcing label consistency. In CVPR, 2008.

Patrick Peursum, Geoff West, and Svetha Venkatesh. Combining image regions and human activity for indirect object recognition in indoor wide-angle views. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV '05) Volume 1, pages 82–89, Washington, DC, USA, 2005. IEEE Computer Society. ISBN 0-7695-2334-X-01. doi: 10.1109/ICCV.2005.57. URL http://dx.doi.org/10.1109/ICCV.2005.57.

SLIDE 110

References IV

J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV, pages 1–15, 2006.

Michael Stark, Philipp Lies, Michael Zillich, Jeremy L. Wyatt, and Bernt Schiele. Functional object class detection based on learned affordance cues. In 6th International Conference on Computer Vision Systems (ICVS), Santorini, Greece, 2008. URL http://www.mis.informatik.tu-darmstadt.de/People/stark/stark08icvs.pdf. Oral presentation.

SLIDE 111

References V

Matthew Turek, Anthony Hoogs, and Roderic Collins. Unsupervised learning of functional categories in video scenes. In Kostas Daniilidis, Petros Maragos, and Nikos Paragios, editors, Computer Vision – ECCV 2010, volume 6312 of Lecture Notes in Computer Science, pages 664–677. Springer Berlin / Heidelberg, 2010. ISBN 978-3-642-15551-2. doi: 10.1007/978-3-642-15552-9_48.

Xiaogang Wang, Kinh Tieu, and Eric Grimson. Learning semantic scene models by trajectory analysis. In ECCV (3), pages 110–123, 2006.

Yi Yang and Deva Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, pages 1385–1392. IEEE, 2011.

Bangpeng Yao and Li Fei-Fei. Modeling mutual context of object and human pose in human-object interaction activities. 2010.

SLIDE 112

References VI

Bangpeng Yao, Aditya Khosla, and Li Fei-Fei. Classifying actions and measuring action similarity by modeling the mutual context of objects and human poses. In Proc. ICML, 2011.