

  1. Henderson and Davis. Shape Recognition Using Hierarchical Constraint Analysis. 1979. Object Recognition. 16-385 Computer Vision (Kris Kitani), Carnegie Mellon University

  2. What do we mean by ‘object recognition’?

  3. Is this a street light? (Verification / classification)

  4. Where are the people? (Detection)

  5. Is that Potala palace? (Identification)

  6. What’s in the scene? (Semantic segmentation) Labels in the example image: sky, mountain, trees, building, vendors, people, ground

  7. What type of scene is it? (Scene categorization) Labels: outdoor, marketplace, city

  8. Challenges (Object Recognition)

  9. Viewpoint variation

  10. Illumination variation

  11. Scale variation

  12. Background clutter

  13. Deformation

  14. Occlusion

  15. Intra-class variation

  16. Common approaches

  17. Common approaches to object recognition: feature matching, spatial reasoning, window classification

  18. Feature matching

  19. What object do these parts belong to?

  20. Some local features are very informative. An object as a collection of local features (bag-of-features) • deals well with occlusion • scale invariant • rotation invariant. Are the positions of the parts important?

  21. Pros • Simple • Efficient algorithms • Robust to deformations Cons • No spatial reasoning
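
A minimal sketch of the bag-of-features idea, using OpenCV’s ORB keypoints as the local features (an assumption; the lecture does not prescribe a particular detector). Matching is purely appearance-based, with no spatial reasoning; the file names are hypothetical.

```python
import cv2

# Treat an object as an unordered bag of local features:
# detect keypoints in a model image and a test image, then
# match descriptors with no spatial reasoning at all.
model = cv2.imread("model.png", cv2.IMREAD_GRAYSCALE)  # hypothetical paths
test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(model, None)
kp2, des2 = orb.detectAndCompute(test, None)

# Hamming distance suits binary ORB descriptors; crossCheck keeps
# only mutually-best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# A crude "recognition" score: the number of good matches.
good = [m for m in matches if m.distance < 50]  # assumed threshold
print(f"{len(good)} good matches out of {len(matches)}")
```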

  22. Common approaches to object recognition: feature matching, spatial reasoning, window classification

  23. Spatial reasoning

  24. The position of every part depends on the positions of all the other parts (positional dependence). Many parts, many dependencies!

  25-27. The pipeline: 1. Extract features 2. Match features 3. Spatial verification. An old idea…
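
The three steps could look like the following sketch, which reuses kp1, kp2, and matches from the feature-matching example above and performs spatial verification by fitting a homography with RANSAC (one common choice of geometric model, not the only one).

```python
import numpy as np

# Spatial verification: keep only the feature matches that are
# consistent with a single geometric transform, here a homography
# estimated with RANSAC. kp1/kp2/matches come from the matching
# sketch above.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
inliers = int(mask.sum())
print(f"{inliers} spatially verified matches of {len(matches)}")
```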

  28. Fu and Booth. Grammatical Inference. 1975. A scene as a structural (grammatical) description.

  29. Description for the left edge of a face (1972).

  30-33. A more modern probabilistic approach: think of part locations as random variables (RVs). The set of part locations is a vector of RVs: L = {L_1, L_2, …, L_M}. What are the dimensions of RV L_m? Each part location is a pixel coordinate, L_m = [x, y]. For an image with N pixels, how many possible combinations of part locations are there? N^M.
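
A quick back-of-the-envelope check of that N^M count:

```python
# Each of the M part locations L_m = [x, y] can be any of the
# N pixels, so there are N**M joint configurations to consider.
N = 640 * 480          # pixels in a modest image
for M in (1, 2, 3, 5):
    print(f"M={M}: {N**M:.3e} configurations")
# M=1: ~3.1e+05 ... M=5: ~2.7e+27 -- brute force is hopeless.
```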

  34. The most likely set of part locations L in image I is found by maximizing the posterior: p(L | I) ∝ p(I | L) p(L). Likelihood p(I | L): how likely it is to observe image I given that the M parts are at locations L (the scaled output of a classifier). Prior p(L): the spatial prior controls the geometric configuration of the parts. What kind of prior can we formulate?
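
In code, the posterior (up to a constant) is just a sum of log-terms. Here unary and log_prior are assumed stand-ins for a trained part classifier and a chosen spatial prior:

```python
# Generic scoring of a part configuration L = [(x, y), ...].
# unary[m] is an HxW map of log classifier scores for part m
# (the likelihood term); log_prior scores the geometry (the prior).
def log_posterior(L, unary, log_prior):
    # log p(L | I) = log p(I | L) + log p(L) + const
    likelihood = sum(unary[m][y, x] for m, (x, y) in enumerate(L))
    return likelihood + log_prior(L)
```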

  35. Given any collection of selfie images, where would you expect the nose to be? What would be an appropriate prior ? P ( L nose ) =?

  36. A simple factorized model: p(L) = ∏_m p(L_m). Break up the joint probability into smaller (independent) terms.

  37. Independent locations: p(L) = ∏_m p(L_m). Each feature is allowed to move independently; this does not model the relative locations of parts at all.
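
With the factorized prior folded into the unary score maps, inference reduces to M independent argmax operations, one per part. A sketch (the score maps are assumed inputs):

```python
import numpy as np

# Fully factorized prior: p(L | I) factorizes over parts, so each
# part is localized independently by an argmax over its own HxW
# log-score map.
def best_locations_independent(unary):
    locations = []
    for score_map in unary:
        y, x = np.unravel_index(np.argmax(score_map), score_map.shape)
        locations.append((x, y))
    return locations
```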

  38. Tree structure (star model): p(L) = p(L_root) ∏_{m=1}^{M−1} p(L_m | L_root). A root (reference) node represents the locations of all the other parts relative to a single reference part. Assumes that one reference part is defined (who will decide this?)
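
A naive sketch of star-model inference: for every candidate root location, each part independently picks its best location under its score map plus a Gaussian log-prior centered at an ideal offset from the root. All inputs (score maps, offsets, the weight w) are assumptions; real systems replace the O(N²) inner maximization with a generalized distance transform.

```python
import numpy as np

# Star model: score every candidate root location. Each part m has
# an ideal offset (dx, dy) from the root and pays a quadratic
# penalty for straying from it (a Gaussian log-prior up to a constant).
def star_model_root_scores(unary_root, unary_parts, offsets, w=0.01):
    H, W = unary_root.shape
    ys, xs = np.mgrid[0:H, 0:W]          # candidate part locations
    total = unary_root.astype(float)
    for part_map, (dx, dy) in zip(unary_parts, offsets):
        msg = np.empty((H, W))
        for ry in range(H):
            for rx in range(W):
                # log p(L_m | L_root): peaked at (root + offset)
                penalty = -w * ((xs - (rx + dx)) ** 2 + (ys - (ry + dy)) ** 2)
                msg[ry, rx] = (part_map + penalty).max()
        total += msg
    return total                          # argmax gives the best root
```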

  39. Fully connected (constellation model): p(L) = p(L_1, …, L_M). Explicitly represents the joint distribution of locations (full positional dependence). A good model: it captures the relative locations of parts, BUT it is intractable for even a moderate number of parts.

  40. Pros • Retains spatial constraints • Robust to deformations Cons • Computationally expensive • Generalizes poorly to large intra-class variation (e.g., modeling all chairs)

  41. Common approaches: feature matching, spatial reasoning, window classification

  42. Window-based

  43. Template matching: 1. get image window 2. extract features 3. classify. When does this work and when does it fail? How many templates do you need?
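
A minimal template-matching sketch with OpenCV’s built-in normalized cross-correlation (one standard choice of matching score; the file names are hypothetical):

```python
import cv2

# Classical template matching: slide the template over the image
# and score each window with normalized cross-correlation.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical paths
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best, _, top_left = cv2.minMaxLoc(scores)
print(f"best score {best:.2f} at {top_left}")
# Works when pose, scale, and lighting match the template; fails
# under the variations listed earlier -- hence many templates.
```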

  44. Per-exemplar: find the ‘nearest’ exemplar template and inherit its label. (Figure: an exemplar template and its top hits from test data.)
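
Per-exemplar recognition is essentially nearest-neighbor search in feature space. A sketch, where the exemplar features, their labels, and the test feature are assumed inputs:

```python
import numpy as np

# Each training window keeps its own feature vector and label;
# a test window inherits the label of the closest exemplar.
def nearest_exemplar_label(test_feat, exemplar_feats, exemplar_labels):
    dists = np.linalg.norm(exemplar_feats - test_feat, axis=1)
    return exemplar_labels[int(np.argmin(dists))]
```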

  45. Template matching, revisited: 1. get image window (or region proposals) 2. extract features 3. compare to template. Do steps 2 and 3 with one big classifier: ‘end-to-end learning’.
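
A sketch of the window-based pipeline, where classify stands in for any trained end-to-end model (the window size, stride, and threshold are illustrative assumptions):

```python
# Slide a window over the image and let `classify` -- a stand-in
# for any trained model mapping a crop to a score -- do the rest.
def detect(image, classify, win=64, stride=16):
    H, W = image.shape[:2]
    hits = []
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            score = classify(image[y:y + win, x:x + win])
            if score > 0.5:                 # assumed threshold
                hits.append((x, y, score))
    return hits
```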

  46. Convolutional Neural Networks. Convolution: an image patch (raw pixel values) produces the response of one ‘filter’; a 96 x 96 image convolved with 400 filters (features) of size 8 x 8 generates about 3 million values (89² x 400). Pooling: the max/min response over a region; pooling aggregates statistics and lowers the dimension of the convolution output.
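
The slide’s arithmetic, checked: a ‘valid’ convolution of a 96 x 96 image with an 8 x 8 filter yields an 89 x 89 response map (96 − 8 + 1 = 89), and 400 such filters give roughly 3 million values.

```python
# Dimension check for the slide's example: 'valid' convolution,
# stride 1, of a 96x96 image with 400 filters of size 8x8.
n, k, f = 96, 8, 400
out = n - k + 1                     # 89
print(out, out * out * f)           # 89, 3168400 (~3 million values)
```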

  47. AlexNet: 96 ‘filters’ in the first layer (stride 4, so 224/4 = 56 output resolution), 630 million connections, 60 million parameters to learn. Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks, NIPS 2012.
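
The parameter count is easy to verify with torchvision’s AlexNet implementation (assuming torch and torchvision are installed):

```python
from torchvision.models import alexnet

# Sanity-check the slide's number: AlexNet has ~60M parameters.
model = alexnet()                   # untrained; weights not needed
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f} million parameters")   # ~61.1 million
```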

  48. Pros • Retains spatial constraints • Efficient test-time performance Cons • Many, many possible windows to evaluate • Requires large amounts of data • Sometimes (very) slow to train

  49. How to write an effective CV resume

  50. Deep Learning +1-DEEP-LEARNING deeplearning@deeplearning http://deeplearning Summary : Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Experience : Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Education Deep Learning Deep Learning ? Deep Learning Deep Learning Deep Learning Experience Deep Learning Deep Learning . Deep Learning Deep Learning, Deep Learning · Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning · Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning · Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning in another country Deep Learning Deep Learning , Deep Learning , Deep Learning · Deep Learning ... wait.. Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning · Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning Deep Learning · Very Deep Learning Publications 1. Deep Learning in Deep Learning People who do Deep Learning things. Conference of Deep Learning. 2. Shallow Learning... Nawww.. Deep Learning bruh Under submission while Deep Learning Patent 1. System and Method for Deep Learning . Deep Learning, Deep Learning , Deep Learning , Deep Learning
