
Cost-Sensitive Active Visual Category Learning - PowerPoint PPT Presentation



  1. Cost-Sensitive Active Visual Category Learning. Sudheendra Vijayanarasimhan, Kristen Grauman. University of Texas at Austin.

  2. Learning visual categories. Annotators provide labeled data (e.g. wall, koala, person, sky, tree, car, pavement), from which current category models are trained.

  3. More supervision → better learning? Access to more labeled examples (and "strongly" labeled examples) often leads to more accurate recognition results. [Plot: accuracy vs. number of labeled examples on the Caltech-101 database.]

  4. Active learning. Traditional active learning reduces supervision by obtaining labels for the most informative or uncertain examples first. [Figure: positive, negative, and unlabeled (?) examples around a decision boundary.] [MacKay 1992; Freund et al. 1997; Tong & Koller 2001; Lindenbaum et al. 2004; Kapoor et al. 2007; Collins et al. 2008; Holub & Perona 2008; ...]
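For concreteness, here is a minimal sketch of the pool-based uncertainty sampling this slide describes, assuming scikit-learn and a linear SVM; the toy data, the margin criterion, and every name below are illustrative assumptions, not the setup from the talk.

```python
# Minimal pool-based uncertainty sampling (a sketch, not the talk's code).
import numpy as np
from sklearn.svm import SVC

def most_uncertain(clf, X_unlabeled, n_queries=1):
    """Indices of the unlabeled points closest to the decision boundary."""
    margins = np.abs(clf.decision_function(X_unlabeled))
    return np.argsort(margins)[:n_queries]

# Toy pool: Gaussian points with a linear ground-truth boundary.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

# Seed set containing both classes; everything else is unlabeled.
labeled = (np.where(y_pool == 1)[0][:5].tolist()
           + np.where(y_pool == 0)[0][:5].tolist())
unlabeled = [i for i in range(200) if i not in labeled]

# One active-learning round: fit, pick the most uncertain point,
# "ask the annotator" for its label, and move it to the labeled set.
clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
query = unlabeled[most_uncertain(clf, X_pool[unlabeled])[0]]
labeled.append(query)
unlabeled.remove(query)
```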

  5. Problem. Annotations range from less expensive to more expensive to obtain: examples are multi-label, multiple levels of annotation are possible, and the cost is variable, depending on both the level and the example.

  6. Our approach: cost-sensitive "multi-level" active learning. Main idea: compute a decision-theoretic active selection criterion that weighs both (a) which example to annotate and (b) what kind of annotation to request for it, as compared to the predicted effort the request would require.

  7. Our approach: cost-sensitive "multi-level" active learning. [Figure: candidate annotations scored on "info" and "effort" bars.] "Most regions are understood, but this region is unclear." "This looks expensive to annotate, and it does not seem informative." "This looks expensive to annotate, but it seems very informative." "This looks easy to annotate, but its content is already understood."

  8. Our approach: cost-sensitive "multi-level" active learning. The current category models issue an annotation request, e.g. "Get a full segmentation on image #31." Requests draw on unlabeled data, partially and weakly labeled data (e.g. "contains koala," "contains trees," a labeled poster region), and fully labeled data (wall, koala, person, sky, tree, car, pavement).

  9. Multiple-instance learning (MIL). [Figure: traditional supervised learning, with individually labeled positive and negative instances, vs. multiple-instance learning, with positive and negative bags.] [Dietterich et al. 1997]

  10. MIL for visual category learning. [Figure: a positive bag and a negative bag of image segments.] • Positive instance: segment belonging to the class. • Negative instance: segment not in the class. • Positive bag: image containing the class. • Negative bag: image not containing the class. [Maron & Ratan; Yang & Lozano-Perez; Andrews et al.; ...]
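As a rough sketch of how images map onto this abstraction, the structure below represents an image as a bag of segment descriptors with the standard MIL decision rule (a bag is positive iff some instance is); the names and toy scores are our own, not the authors' implementation.

```python
# MIL data abstraction for images: image = bag of segment features.
from dataclasses import dataclass
import numpy as np

@dataclass
class Bag:
    instances: np.ndarray  # (num_segments, feature_dim) segment descriptors
    label: int             # +1 image contains the class, -1 it does not

def bag_prediction(instance_scores: np.ndarray) -> int:
    """Standard MIL rule: a bag is positive iff some instance is positive."""
    return 1 if (instance_scores > 0).any() else -1

# Toy example: three segments, one scored positive -> positive bag.
bag = Bag(instances=np.zeros((3, 5)), label=1)
scores = np.array([-0.7, 0.4, -0.2])  # hypothetical instance scores
assert bag_prediction(scores) == bag.label
```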

  11. Multi-level active queries. Predict which query will be most informative, given the cost of obtaining the annotation. Possible queries: 1. Label an unlabeled instance. 2. Label an unlabeled bag. 3. Label all instances in a bag.

  12. Decision-theoretic multi-level criterion. We measure the value of information (VOI) of a potential query z by the expected reduction in total cost: VOI(z) = Risk(D_L) - Risk(D_L ∪ z^t) - C(z), where D_L is the current dataset and D_L ∪ z^t is the dataset after z is labeled with its true label t. The first term is the risk under the current classifier, the second is the risk after adding z to the labeled set, and the third is the cost of obtaining the annotation for z.
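Organizationally, the selection rule might look like the skeleton below; risk, expected_risk_after, and predicted_cost are placeholder callables standing in for the quantities the next slides define, so this is a sketch of the control flow, not the authors' code.

```python
# Skeleton of decision-theoretic selection: pick the candidate
# annotation (instance, bag, or full bag labeling) with maximal VOI.

def voi(z, labeled_set, risk, expected_risk_after, predicted_cost):
    """VOI(z) = Risk(D_L) - E_t[Risk(D_L + z^t)] - C(z)."""
    return (risk(labeled_set)
            - expected_risk_after(labeled_set, z)
            - predicted_cost(z))

def select_query(candidates, labeled_set, risk, expected_risk_after,
                 predicted_cost):
    """Candidates mix all three query types from the previous slide."""
    return max(candidates,
               key=lambda z: voi(z, labeled_set, risk,
                                 expected_risk_after, predicted_cost))
```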

  13. Decision-theoretic multi-level criterion (recap): VOI(z) = Risk(D_L) - Risk(D_L ∪ z^t) - C(z): risk under the current classifier, minus risk after adding z to the labeled set, minus the cost of obtaining the annotation for z.

  14. Decision-theoretic multi-level criterion. To estimate the risk of incorporating z into the labeled set before knowing its true label t, compute the expected value: E_t[Risk(D_L ∪ z)] = Σ_{t ∈ T} p(t | z) Risk(D_L ∪ z^t), where T denotes all possible labels for z. This is easy if we are considering an unlabeled instance or bag.

  15. Decision-theoretic multi-level criterion. But if we are considering a positive bag, enumerating T is intractable: any joint labeling of the bag's instances with at least one positive segment is possible. We compute the expected risk using Gibbs sampling: E[Risk(D_L ∪ z)] ≈ (1/K) Σ_{k=1}^{K} Risk(D_L ∪ z^(k)), where the k-th sample z^(k) is a label assignment for all instances in the bag.
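One hedged rendering of that sampler: sweep the bag's instances, resample each label from the model's conditional posterior while keeping the bag validly positive, and average the risk over the retained samples. p_positive and risk_after are assumed callables (the classifier's posterior and the risk after retraining), and the constraint handling is our simplification.

```python
# Gibbs approximation of the expected risk for a positive bag (sketch).
import numpy as np

def gibbs_expected_risk(bag_instances, p_positive, risk_after,
                        num_samples=25, burn_in=5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(bag_instances)
    labels = np.ones(n, dtype=int)       # start all-positive: a valid state
    risks = []
    for sweep in range(burn_in + num_samples):
        for i in range(n):
            # Resample instance i's label given all the others.
            p = p_positive(bag_instances[i], labels, i)
            labels[i] = 1 if rng.random() < p else -1
            if not (labels == 1).any():  # a positive bag must keep at
                labels[i] = 1            # least one positive instance
        if sweep >= burn_in:
            risks.append(risk_after(labels.copy()))  # k-th assignment
    return float(np.mean(risks))
```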

  16. Decision-theoretic multi-level criterion. We learn a function to predict the cost (effort) required to obtain any candidate annotation. [Figure: info and effort bars; "This looks expensive to annotate, and it does not seem informative."]

  17. Predicting effort. What manual effort cost would we expect to pay for an unlabeled image? Which image would you rather annotate?

  18. Predicting effort. What manual effort cost would we expect to pay for an unlabeled image? Which image would you rather annotate?

  19. Learning from annotation examples. Extract cost-indicative image features, and train a support vector regressor to map features to annotation times. Features include localized measures of edge density and a measure of how fast color changes locally (σ).
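A plausible rendering of that pipeline, assuming scikit-learn's SVR and simplified stand-ins for the slide's descriptors (per-cell edge density, mean local color gradient); the exact features, data, and hyperparameters below are assumptions, not the paper's.

```python
# Sketch: cost-indicative features -> SVR -> predicted annotation time.
import numpy as np
from sklearn.svm import SVR

def effort_features(gray, color, grid=4):
    """Localized edge density plus how fast color changes locally."""
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy)
    h, w = gray.shape
    cells = [edges[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    color_change = np.mean([np.abs(np.gradient(color[..., c])).mean()
                            for c in range(color.shape[-1])])
    return np.array(cells + [color_change])

# Stand-in training data: random "images" and random timing targets
# in place of the real images and Mechanical Turk times.
rng = np.random.default_rng(1)
imgs = [(rng.random((64, 64)), rng.random((64, 64, 3))) for _ in range(40)]
X_feats = np.stack([effort_features(g, c) for g, c in imgs])
times = rng.uniform(20, 60, size=40)       # seconds per image

cost_model = SVR(kernel="rbf", C=10.0).fit(X_feats, times)
predicted_seconds = cost_model.predict(X_feats[:1])
```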

  20. [Image-only slide.]

  21. Learning from annotation examples. Interface on Mechanical Turk; example completion times: 32 s, 24 s, 48 s. Collect about 50 responses per training image.
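If one wanted to collapse the ~50 raw timings per image into a single regression target, a robust aggregate such as the median is a natural choice; using the median (rather than, say, the mean) is our assumption, not something stated on the slide.

```python
# Aggregate one image's worker timings into a single training target.
import numpy as np

def aggregate_times(responses_seconds):
    """Median is robust to workers who stall or rush."""
    return float(np.median(np.asarray(responses_seconds, dtype=float)))

print(aggregate_times([32, 24, 48]))  # timings shown on the slide -> 32.0
```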

  22. Decision-theoretic multi-level criterion (recap). We learn a function to predict the cost (effort) required to obtain any candidate annotation.

  23. Recap: actively seeking annotations. The current category models issue an annotation request to the annotators, e.g. "Get a full segmentation on image #31," drawing on unlabeled data, partially and weakly labeled data, and fully labeled data.

  24. Results: MSRC dataset. • 21 classes, 591 images. • Multi-label data.

  25. Results: predicting effort. • Predicted examples are from a novel test set.

  26. Results: predicting effort.

  27. Results: predicting effort.

  28. Results: impact of cost predictions. Predicting the amount of effort entailed leads to wiser choices during active selection.

  29. Summary. • Multi-level active learning formulates annotation requests that specify both the example and the task. • Balance informativeness against annotation effort to use human attention most efficiently: learn more with less! • Predict which examples are hard/easy to annotate. • References: Vijayanarasimhan & Grauman. Multi-Level Active Prediction of Useful Image Annotations for Recognition. In NIPS 2008. Vijayanarasimhan & Grauman. What's It Going to Cost You?: Predicting Effort vs. Informativeness for Multi-Label Image Annotations. To appear, CVPR 2009.
