  1. Tabula Rasa: Model Transfer for Object Category Detection Yusuf Aytar & Andrew Zisserman, Department of Engineering Science, University of Oxford (Presented by Elad Liebman)

  2. General Intuition I • We have: a discriminatively trained classification model for category A. • We need: a classifier for a new category B. • Can we use the model for A to make learning a model for category B easier? – Fewer examples? – Better accuracy?

  3. General Intuition II Tabula Rasa: Model Transfer for Object Category Detection, Aytar & Zisserman Motorbike images courtesy of the Caltech Vision Group, collated by Svetlana Lazebnik

  4. Background I • Good: – There has been considerable progress recently in object category detection. – Successful tools are readily available. • Bad: – Current methods require training each detector from scratch. – Training from scratch is very costly in terms of the sample size required. – This is not scalable to multi-category settings.

  5. Background II • Possible solution: – Represent categories by their attributes, and re-use attributes. – Attributes are learned from multiple classes, so training data is abundant. – Attributes learned can be used even for categories that didn’t “participate” in the learning, as long as they share the attribute.

  6. Background III [Figure: a learned wheel detector, used for detection of objects with “wheel” attributes]

  7. (This idea should sound familiar…) “Sharing visual features for multiclass and multiview object detection”, Torralba et al., 2007 – Training multiple category classifiers at the same time, with lower sample and runtime complexity, using shared features. – Uses a variation on boosting and shared regression stumps.

  8. Torralba et al. – cont. I [Figures: number of required features vs. number of classes, and the effect of feature sharing on learning, shown for 12 different categories and for 12 views of the same category]

  9. Torralba et al. – cont. II • There is a difference in motivations here. • Torralba et al. are mostly concerned with scalability. – Reduce the cost of training multiple detectors. – Use shared features when learning full sets of distinctive features per category is infeasible. • Knowledge transfer is more concerned with sample complexity. – Use preexisting related classifiers when new examples are hard to come by.

  10. (Back to our paper…) • Unfortunately, the attribute-sharing approach proves inferior in practice to discriminative training, for both detection and classification (true as of when the paper was published…).

  11. Background IV • An alternative approach: – Benefit from previously-learned category detectors. – Previously learned categories should be similar. • We need a way to transfer information from one classifier to the next.

  12. Aytar & Zisserman I • Consider the SVM discriminative training framework for the HOG template models of Dalal & Triggs and Felzenszwalb et al. • Observation: the learned template records the spatial layout of positive and negative gradient orientations. • Classes that are geometrically similar will give rise to similar templates.
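
A minimal sketch of the detection framework this builds on: a window is described by its HOG features and scored with a learned linear template. It assumes scikit-image’s hog descriptor is available; the template w and bias b would come from SVM training, and w must match the descriptor length for the chosen window size.

import numpy as np
from skimage.feature import hog

def score_window(window_gray, w, b):
    # phi(window): HOG descriptor of a grayscale window
    phi = hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2), feature_vector=True)
    # linear detector score: w . phi + b
    return float(np.dot(w, phi) + b)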

  13. Aytar & Zisserman II • Apply transfer learning from one detector to another. • To do this, the previously learned template is used as a regularizer in the cost function of the new classifier. • This enables learning with a reduced number of examples.

  14. Some (a few) Words on Regularization • From a Bayesian standpoint, it’s similar to introducing a prior. • Often used to prevent overfitting or to solve ill-posed problems. • A good example of regularization: ridge regression, argmin_β { ||y − Xβ||² + ||Γβ||² } Images taken from Andrew Rosenberg’s slides, ML course, CUNY
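
As a concrete illustration of the ridge formula above: with the common choice Γ = √α·I, the solution has the closed form β* = (XᵀX + αI)⁻¹ Xᵀy. A minimal NumPy sketch (names are mine):

import numpy as np

def ridge(X, y, alpha=1.0):
    # beta* = argmin_beta ||y - X beta||^2 + alpha * ||beta||^2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)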

  15. Model Transfer Support Vector Machines • We wish to detect a target object category. • We already have a well trained detector for a different source category. • Three strategies to transfer knowledge from the source detector to the target detector: – Adaptive SVMs – Projective Model Transfer SVMs – Deformable Adaptive SVMs

  16. Adaptive SVMs I • Learn from the source model w′ by regularizing the distance between the learned model w and w′. • x_i are the training examples, y_i ∈ {−1, 1} are the labels, and the loss function is the hinge loss: ℓ(x_i, y_i; w, b) = max(0, 1 − y_i(w^T x_i + b))
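
A minimal sketch of this hinge loss, vectorized over the training set (illustrative names, not the paper’s code):

import numpy as np

def hinge_loss(X, y, w, b):
    # l(x_i, y_i; w, b) = max(0, 1 - y_i * (w^T x_i + b)), with y_i in {-1, +1}
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins)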

  17. Adaptive SVMs II • Reminder: in a regular SVM we want to optimize: L_SVM = min_{w,b} { ||w||² + C Σ_i ℓ(x_i, y_i; w, b) } • But now, our goal is to optimize: L_A = min_{w,b} { ||w − Γw′||² + C Σ_i ℓ(x_i, y_i; w, b) } • Γ controls the amount of transfer regularization, C controls the weight of the loss function, and N is the number of samples.
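
To make the A-SVM objective concrete, a sketch of L_A and a plain subgradient step; a real implementation would use a proper QP solver, and all names here are illustrative:

import numpy as np

def asvm_objective(X, y, w, b, w_src, Gamma=1.0, C=1.0):
    # ||w - Gamma w'||^2 + C * sum_i max(0, 1 - y_i * (w^T x_i + b))
    transfer = np.sum((w - Gamma * w_src) ** 2)
    loss = np.sum(np.maximum(0.0, 1.0 - y * (X @ w + b)))
    return transfer + C * loss

def asvm_step(X, y, w, b, w_src, Gamma=1.0, C=1.0, lr=1e-3):
    active = y * (X @ w + b) < 1.0  # margin violators drive the hinge subgradient
    grad_w = 2.0 * (w - Gamma * w_src) - C * (y[active][:, None] * X[active]).sum(axis=0)
    grad_b = -C * y[active].sum()
    return w - lr * grad_w, b - lr * grad_b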

  18. An Illustration [Figure: the geometry of the adaptive objective – the distance between w and Γw′ is minimized]

  19. Adaptive SVMs III • We note that if w′ is normalized to 1, then expanding ||w − Γw′||² gives: ||w||² – the “normal” SVM margin term; −2Γ||w|| cos θ – the transfer term. • We wish to minimize θ, the angle between w′ and w. • However, −2Γ||w|| cos θ also encourages ||w|| to be larger, so Γ controls a tradeoff between margin maximization and knowledge transfer.
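
The small worked step behind this observation, assuming ||w′|| = 1 (in LaTeX):

\|w - \Gamma w'\|^2 = \|w\|^2 - 2\Gamma\, w^{\top} w' + \Gamma^2 \|w'\|^2 = \|w\|^2 - 2\Gamma \|w\| \cos\theta + \Gamma^2

The constant Γ² does not depend on w and so does not affect the minimizer; the cross term −2Γ||w|| cos θ is the transfer term discussed on the slide.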

  20. Projective Model Transfer SVMs I • Rather than transferring by maximizing ||w|| cos θ, we can instead minimize the projection of w onto the separating hyperplane orthogonal to w′. • This directly translates to optimizing: L_PMT = min_{w,b} { ||w||² + Γ||Pw||² + C Σ_i ℓ(x_i, y_i; w, b) }, subject to w^T w′ ≥ 0 • where P is the projection matrix: P = I − w′w′^T / ||w′||²
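
A minimal NumPy sketch of this projection matrix (reconstructed notation; names are mine):

import numpy as np

def projection_matrix(w_src):
    # P = I - w' w'^T / ||w'||^2 projects onto the hyperplane orthogonal to w'
    w_src = np.asarray(w_src, dtype=float)
    return np.eye(w_src.size) - np.outer(w_src, w_src) / np.dot(w_src, w_src)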

  21. Yet another illustration

  22. Projective Model Transfer SVMs II • We note that ||Pw||² is the squared norm of the projection of w onto the source hyperplane. • The constraint w^T w′ ≥ 0 restricts w to the positive halfspace defined by w′. • Here too, Γ controls the transfer: as Γ → 0, the PMT-SVM reduces to a classic SVM optimization problem.
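
Putting the pieces together, a sketch of the PMT-SVM objective as reconstructed above; the halfspace constraint is merely asserted here, whereas a real solver would enforce it during optimization:

import numpy as np

def pmt_objective(X, y, w, b, w_src, Gamma=1.0, C=1.0):
    P = np.eye(w_src.size) - np.outer(w_src, w_src) / np.dot(w_src, w_src)
    assert np.dot(w, w_src) >= 0.0, "w must lie in the positive halfspace of w'"
    loss = np.sum(np.maximum(0.0, 1.0 - y * (X @ w + b)))
    # ||w||^2 + Gamma * ||P w||^2 + C * hinge loss
    return np.dot(w, w) + Gamma * np.sum((P @ w) ** 2) + C * loss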

  23. Deformable Adaptive SVMs I • Regularization shouldn’t be applied with equal force everywhere. • Imagine we have a deformable source template – small local deformations are allowed, to better fit the source to the target. • For instance, when transferring from a motorbike wheel to a bicycle wheel. • We need more flexible regularization…

  24. Deformable Adaptive SVMs II • Local deformations are described as a flow of weight vectors from one cell to another, governed by the following flow definition: τ(w′)_j = Σ_k f_jk w′_k • τ represents the flow transformation, w′_k is the k-th cell in the source template, and f_jk denotes the amount of transfer from the k-th cell in the source to the j-th cell in the target.
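
With the source template flattened into K cells, the flow transformation is just a matrix product. A minimal sketch (illustrative names):

import numpy as np

def deform(w_src_cells, F):
    # w_src_cells: (K, d) array of per-cell weight vectors of the source template
    # F: (J, K) nonnegative flows, F[j, k] = f_jk = amount moved from source
    #    cell k to target cell j, so that tau(w')_j = sum_k f_jk * w'_k
    return F @ w_src_cells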

  25. Deformable Adaptive SVMs III [Figure: the flow f_jk carries weight from cell w′_k of the source template to cell w_j of the target]

  26. Deformable Adaptive SVMs IV • Now, the Deformable Adaptive SVM is simply a generalization of the adaptive SVM we’ve seen before, with w′ replaced with its deformable version τ(w′): L_DA = min_{w,b,f} { ||w − Γτ(w′)||² + λ Σ_{jk} f_jk d_jk + C Σ_i ℓ(x_i, y_i; w, b) } • (λ is the weight of the deformation, d_jk is the distance between cells j and k, and a penalty is paid for overflow.)

  27. Deformable Adaptive SVMs V • λ in effect controls the extent of deformability. • High λ values make the model more rigid (you pay more for the deformations you make), pushing the solution closer to that of the simple adaptive SVM. • Low λ values allow for a more flexible source template with less regularization. • (Amazingly enough, the term ||w − Γτ(w′)||² is still convex.)
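
A sketch of the full deformable-adaptive objective as reconstructed above, with D[j, k] the distance d_jk between cells; in practice w, b and the flows f are optimized jointly, and all names here are illustrative:

import numpy as np

def dasvm_objective(X, y, w, b, w_src_cells, F, D, Gamma=1.0, lam=1.0, C=1.0):
    # w must have length J * d so it can be compared with the deformed template
    w_def = (F @ w_src_cells).ravel()  # tau(w'), flattened back into a template
    transfer = np.sum((w - Gamma * w_def) ** 2)
    deformation = lam * np.sum(F * D)  # lam * sum_jk f_jk * d_jk
    loss = C * np.sum(np.maximum(0.0, 1.0 - y * (X @ w + b)))
    return transfer + deformation + loss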

  28. Experiments I.I • In general, transfer learning can offer three major benefits: – A higher starting point – A higher slope (we learn faster) – A higher asymptote (learning converges to a better classifier)

  29. Experiments I.II • Two types of transfer experiments: – Specialization ( we know how to recognize quadrupeds, now we want to recognize horses ) – Interclass transfer ( we know how to recognize horses, now we want to recognize donkeys )

  30. Experiments II – Interclass • Baseline detectors are SVM classifiers trained directly, without any transfer learning. • Two scenarios studied: – transferring from motorbikes to bicycles – transferring from cows to horses • Two variants discussed: – One-shot learning – we can only choose one (!) example from the target class, and we study our starting point. – Multiple-shot learning

  31. Experiments III – One-Shot Learning [Figure: detections when transferring from the top 15 (middle) and the lowest 15 target examples] (Looks good, but a bit unfair, especially when using lower-grade examples from the target category…)

  32. Experiments IV – Multiple Shot (We note that by ~10 examples, the basic SVM has caught up with us…)

  33. Experiments V – Multiple Shot

  34. Experiments VI – Specialization • A “quadruped” detector was trained with instances of cows, sheep, and horses. • Then specialization for cows and horses was attempted via transfer. (Once again, we note that by ~15–20 examples, the basic SVM has caught up with us…)

  35. Discussion • Pros: – An interesting and fairly straightforward extension of the basic category detection scheme. – Provides a far better starting point for classifying new categories. – A different perspective on multi-category settings. • Cons: – “Closeness” between classes is very poorly defined. – The one-shot experiments are not particularly convincing. – The advantage degrades the more samples you have. – PMT-SVM doesn’t scale very well…
