Object detection as supervised classification


  1. Object detection as supervised classification
     Tues Nov 10, Kristen Grauman, UT Austin

     Today
     • Supervised classification
     • Window-based generic object detection
       – basic pipeline
       – boosting classifiers
       – face detection as case study

  2. What kinds of things work best today?
     • Reading license plates, zip codes, checks
     • Frontal face detection
     • Recognizing flat, textured objects (like books, CD covers, posters)
     • Fingerprint recognition

  3. Generic category recognition: basic framework
     • Build/train object model
       – (Choose a representation)
       – Learn or fit parameters of model / classifier
     • Generate candidates in new image
     • Score the candidates

     Supervised classification
     • Given a collection of labeled examples (e.g., training examples labeled "four" and "nine"), come up with a function that will predict the labels of new examples (a novel input marked "?").
     • How good is the function we come up with to do the classification?
     • It depends on
       – the mistakes made
       – the cost associated with the mistakes

  4. Supervised classification
     • Given a collection of labeled examples, come up with a function that will predict the labels of new examples.
     • Consider the two-class (binary) decision problem:
       – L(4→9): loss of classifying a 4 as a 9
       – L(9→4): loss of classifying a 9 as a 4
     • The risk of a classifier s is its expected loss:
       R(s) = Pr(4→9 | using s) · L(4→9) + Pr(9→4 | using s) · L(9→4)
     • We want to choose a classifier so as to minimize this total risk.

     Supervised classification
     The optimal classifier will minimize total risk. At the decision boundary (in feature value x), either choice of label yields the same expected loss.
     If we choose class "four" at the boundary, the expected loss is:
       P(class is 9 | x) · L(9→4) + P(class is 4 | x) · L(4→4) = P(class is 9 | x) · L(9→4)
     If we choose class "nine" at the boundary, the expected loss is:
       P(class is 4 | x) · L(4→9)

  5. Supervised classification
     The optimal classifier will minimize total risk. At the decision boundary (in feature value x), either choice of label yields the same expected loss. So the best decision boundary is at the point x where
       P(class is 9 | x) · L(9→4) = P(class is 4 | x) · L(4→9)
     To classify a new point, choose the class with the lowest expected loss; i.e., choose "four" if
       P(4 | x) · L(4→9) > P(9 | x) · L(9→4)
     [Figure: the posteriors P(4 | x) and P(9 | x) plotted against feature value x, with the decision boundary where the loss-weighted curves cross.]
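     The boundary rule above amounts to comparing two expected losses. Below is a minimal sketch of that rule in Python; the posterior values and losses passed in are illustrative placeholders, not numbers from the lecture.

```python
# Minimum-expected-loss decision between "four" and "nine" at a feature value x.
# The posteriors and losses are supplied by the caller; defaults assume equal losses.

def choose_label(p_four_given_x, p_nine_given_x, loss_4_as_9=1.0, loss_9_as_4=1.0):
    expected_loss_if_four = p_nine_given_x * loss_9_as_4  # loss incurred when the truth is "nine"
    expected_loss_if_nine = p_four_given_x * loss_4_as_9  # loss incurred when the truth is "four"
    return "four" if expected_loss_if_four < expected_loss_if_nine else "nine"

# Example: the posterior favors "four" and losses are equal, so we choose "four".
print(choose_label(p_four_given_x=0.7, p_nine_given_x=0.3))
```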

  6. Basic probability
     • X is a random variable
     • P(X) is the probability that X achieves a certain value; this is called a PDF (probability distribution/density function)
     • X may be discrete or continuous
     • Conditional probability: P(X | Y) is the probability of X given that we already know Y
     Source: Steve Seitz

     Example: learning skin colors
     • We can represent a class-conditional density using a histogram (a "non-parametric" distribution)
     [Figure: histograms of P(x | skin) and P(x | not skin) over the feature x = Hue; the y-axis is the percentage of skin pixels in each bin.]
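     As a rough sketch of the histogram idea above (not code from the lecture), the two class-conditional densities can be estimated from labeled pixels; the bin count and the randomly generated training hues are placeholders so the sketch runs on its own.

```python
import numpy as np

def hue_histogram(hue_values, n_bins=32):
    """Normalized histogram over hue in [0, 1); approximates P(x | class)."""
    counts, _ = np.histogram(hue_values, bins=n_bins, range=(0.0, 1.0))
    return counts / max(counts.sum(), 1)

# In practice these hue values come from labeled skin / non-skin training pixels;
# the random draws below are placeholders only.
skin_hues = np.random.beta(2, 8, size=5000)
nonskin_hues = np.random.uniform(0, 1, size=20000)

p_x_given_skin = hue_histogram(skin_hues)        # P(x | skin)
p_x_given_notskin = hue_histogram(nonskin_hues)  # P(x | not skin)
```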

  7. Example: learning skin colors
     • We can represent a class-conditional density using a histogram (a "non-parametric" distribution): P(x | skin) and P(x | not skin) over the feature x = Hue
     • Now we get a new image, and want to label each pixel as skin or non-skin. What's the probability we care about to do skin detection?

     Bayes rule
       P(skin | x) = P(x | skin) · P(skin) / P(x)
       posterior ∝ likelihood × prior:  P(skin | x) ∝ P(x | skin) · P(skin)
     • Where does the prior come from? Why use a prior?
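     Continuing the histogram sketch from the previous slide, Bayes' rule can be applied per hue bin. The prior P(skin) used here is an assumed value; as the slide asks, in practice it would come from the fraction of skin pixels in the training data or from domain knowledge.

```python
import numpy as np

p_skin = 0.1                 # assumed prior P(skin), a placeholder value
p_notskin = 1.0 - p_skin

# P(x) = P(x | skin) P(skin) + P(x | not skin) P(not skin), evaluated per hue bin.
evidence = p_x_given_skin * p_skin + p_x_given_notskin * p_notskin
posterior_skin = p_x_given_skin * p_skin / np.maximum(evidence, 1e-12)  # P(skin | x)

def p_skin_given_hue(hue, n_bins=32):
    """Look up the posterior P(skin | x) for a pixel's hue value in [0, 1)."""
    return posterior_skin[min(int(hue * n_bins), n_bins - 1)]
```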

  8. Example: classifying skin pixels
     Now for every pixel in a new image, we can estimate the probability that it is generated by skin. Brighter pixels in the probability map correspond to a higher probability of being skin. Classify pixels based on these probabilities.

     Example: classifying skin pixels
     Gary Bradski, 1998
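     A short sketch of that last step, continuing the code above: threshold the per-pixel posterior to get a skin mask. The 0.5 threshold is an assumption and can be tuned to trade false positives against false negatives.

```python
import numpy as np

def detect_skin(hue_image, threshold=0.5):
    """hue_image: 2-D array of hues in [0, 1). Returns a boolean skin mask."""
    posterior_map = np.vectorize(p_skin_given_hue)(hue_image)  # P(skin | x) per pixel
    return posterior_map > threshold
```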

  9. Example: classifying skin pixels
     Using skin color-based face detection and pose estimation as a video-based interface
     Gary Bradski, 1998

     Generative vs. Discriminative Models
     • Generative approach: separately model the class-conditional densities and priors, then evaluate the posterior probabilities using Bayes' theorem
     • Discriminative approach: directly model the posterior probabilities
     • In both cases we usually work in a feature space
     Slide from Christopher M. Bishop, MSR Cambridge

  10. General classification
      This same procedure applies in more general circumstances:
      • More than two classes
      • More than one dimension

      Example: face detection
      • Here, X is an image region
        – dimension = # pixels
        – each face can be thought of as a point in a high-dimensional space
      H. Schneiderman, T. Kanade. "A Statistical Method for 3D Object Detection Applied to Faces and Cars." IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000). http://www-2.cs.cmu.edu/afs/cs.cmu.edu/user/hws/www/CVPR00.pdf
      Source: Steve Seitz

      Today
      • Supervised classification
      • Window-based generic object detection
        – basic pipeline
        – boosting classifiers
        – face detection as case study

  11. Generic category recognition: basic framework
      • Build/train object model
        – Choose a representation
        – Learn or fit parameters of model / classifier
      • Generate candidates in new image
      • Score the candidates

      Window-based models: building an object model
      Given the representation, train a binary classifier ("Yes, car." / "No, not a car.").

  12. Window-based models: generating and scoring candidates
      [Figure: candidate windows slid across a test image, each scored by the car/non-car classifier.]

      Window-based object detection: recap
      Training:
      1. Obtain training data
      2. Define features
      3. Define classifier
      Given a new image:
      1. Slide window
      2. Score by classifier (feature extraction, then the car/non-car classifier)
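      A minimal sketch of the slide-and-score loop in the recap above; the feature extractor, the trained classifier and its score method, the window size, stride, and threshold are all placeholders, not values from the lecture.

```python
def sliding_window_detect(image, classifier, extract_features,
                          window_size=(64, 64), stride=8, threshold=0.0):
    """Slide a fixed-size window over the image and keep the windows the
    binary classifier scores above threshold (e.g., 'car')."""
    h, w = image.shape[:2]
    win_h, win_w = window_size
    detections = []
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            window = image[y:y + win_h, x:x + win_w]
            score = classifier.score(extract_features(window))
            if score > threshold:
                detections.append((x, y, win_w, win_h, score))
    return detections
```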

  13. Discriminative classifier construction
      • Nearest neighbor (10^6 examples): Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; ...
      • Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; ...
      • Support Vector Machines: Guyon, Vapnik; Heisele, Serre, Poggio 2001; ...
      • Boosting: Viola, Jones 2001; Torralba et al. 2004; Opelt et al. 2006; ...
      • Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; ...
      Slide adapted from Antonio Torralba

      Boosting intuition
      Weak Classifier 1
      Slide credit: Paul Viola

  14. Boosting illustration
      Weights increased

      Boosting illustration
      Weak Classifier 2

  15. Boosting illustration
      Weights increased

      Boosting illustration
      Weak Classifier 3

  16. Boosting illustration
      The final classifier is a combination of the weak classifiers.

      Boosting: training
      • Initially, weight each training example equally
      • In each boosting round:
        – Find the weak learner that achieves the lowest weighted training error
        – Raise the weights of training examples misclassified by the current weak learner
      • Compute the final classifier as a linear combination of all weak learners (the weight of each learner is directly proportional to its accuracy)
      • Exact formulas for re-weighting and combining weak learners depend on the particular boosting scheme (e.g., AdaBoost)
      Slide credit: Lana Lazebnik
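      As a sketch of that training loop in AdaBoost form (the slide deliberately leaves the exact scheme open), with one-dimensional decision stumps as weak learners. Labels are assumed to be in {-1, +1}, and the brute-force stump search is a placeholder for whatever weak-learner family is used in practice.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=50):
    """X: (n, d) features; y: labels in {-1, +1}. Returns a list of weighted stumps."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # start with equal example weights
    learners = []
    for _ in range(n_rounds):
        # Find the stump (feature, threshold, polarity) with lowest weighted error.
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = np.where(polarity * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, polarity)
        err, j, thr, polarity = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # learner weight grows with its accuracy
        pred = np.where(polarity * (X[:, j] - thr) > 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)        # raise weights of misclassified examples
        w = w / w.sum()
        learners.append((alpha, j, thr, polarity))
    return learners

def predict_adaboost(learners, X):
    """Final classifier: sign of the weighted sum of weak-learner outputs."""
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1) for a, j, t, p in learners)
    return np.sign(score)
```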

  17. Viola-Jones face detector
      Main idea:
      – Represent local texture with efficiently computable "rectangular" features within a window of interest
      – Select discriminative features to be weak classifiers
      – Use a boosted combination of them as the final classifier
      – Form a cascade of such classifiers, rejecting clear negatives quickly
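      A minimal sketch of the cascade idea in the last bullet: each stage is a boosted classifier with its own threshold, and a window is rejected as soon as any stage scores it below that threshold, so clear negatives are discarded cheaply by the early stages. The stage scoring functions and thresholds here are placeholders, not the actual Viola-Jones stages.

```python
def cascade_classify(window_features, stages):
    """stages: list of (stage_score_fn, threshold) pairs. Returns True for 'face'."""
    for stage_score, threshold in stages:
        if stage_score(window_features) < threshold:
            return False   # rejected early; most non-face windows exit here
    return True            # survived every stage of the cascade
```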

  18. Viola-Jones detector: features
      • "Rectangular" filters: the feature output is the difference between sums over adjacent rectangular regions
      • Efficiently computable with an integral image: any rectangular sum can be computed in constant time
      • Integral image: the value at (x, y) is the sum of the pixels above and to the left of (x, y)

      Computing the integral image
      Lana Lazebnik

  19. Computing the integral image
      Cumulative row sum: s(x, y) = s(x − 1, y) + i(x, y)
      Integral image: ii(x, y) = ii(x, y − 1) + s(x, y)
      Lana Lazebnik

      Computing the sum within a rectangle
      • Let A, B, C, D be the values of the integral image at the corners of a rectangle
      • Then the sum of the original image values within the rectangle can be computed as: sum = A − B − C + D
      • Only 3 additions are required for any size of rectangle!
      Lana Lazebnik
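      A small sketch of both slides above: build the integral image with the cumulative-sum recurrences, then compute any rectangle sum from the four corner values of the integral image. The two-rectangle feature at the end is an illustrative example of the "rectangular" filters, not a specific Viola-Jones feature.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over all pixels above and to the left of (and including) (x, y).
    Equivalent to the recurrences s(x, y) = s(x-1, y) + i(x, y); ii(x, y) = ii(x, y-1) + s(x, y)."""
    return img.cumsum(axis=1).cumsum(axis=0)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1], using only the corner values of ii."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, height, width):
    """Illustrative two-rectangle feature: right half minus left half of a window."""
    mid = left + width // 2
    left_sum = box_sum(ii, top, left, top + height - 1, mid - 1)
    right_sum = box_sum(ii, top, mid, top + height - 1, left + width - 1)
    return right_sum - left_sum
```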
