Cost-Sensitive Active Visual Category Learning



SLIDE 1

Cost-Sensitive Active Visual Category Learning

Sudheendra Vijayanarasimhan and Kristen Grauman, University of Texas at Austin

SLIDE 2

Learning visual categories

[Diagram: annotators supply labeled data (sky, koala, wall, person, car, tree, pavement) to train the current category models.]

SLIDE 3

More supervision → better learning?

Access to more labeled examples (and "strongly" labeled examples) often leads to more accurate recognition results.

[Plot: accuracy vs. number of labeled examples; results on the Caltech-101 database.]

SLIDE 4

Active learning

  • Traditional active learning reduces supervision by obtaining labels for the most informative or uncertain examples first.

[Diagram: positive, negative, and unlabeled examples; question marks mark candidate queries.]

[MacKay 1992, Freund et al. 1997, Tong & Koller 2001, Lindenbaum et al. 2004, Kapoor et al. 2007, Collins et al. 2008, Holub & Perona 2008, …]
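For context, a minimal uncertainty-sampling sketch (illustration only, not from the talk; the seed set and pool below are synthetic stand-ins): the learner queries the pool example closest to the current decision boundary.

```python
import numpy as np
from sklearn.svm import SVC

def most_uncertain(clf, X_pool):
    """Index of the pool example closest to the decision boundary."""
    margins = np.abs(clf.decision_function(X_pool))
    return int(np.argmin(margins))

# Synthetic stand-ins for a labeled seed set and an unlabeled pool.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(10, 5))
y_labeled = np.array([0, 1] * 5)
X_pool = rng.normal(size=(100, 5))

clf = SVC(kernel="linear").fit(X_labeled, y_labeled)
query_idx = most_uncertain(clf, X_pool)  # ask an annotator to label this one
```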

SLIDE 5

Problem

  • Multi-label examples
  • Multiple levels of annotation are possible
  • Variable cost depending on level and example

[Diagram: example annotations ranging from less expensive to obtain to more expensive to obtain.]
SLIDE 6

Our approach: Cost-sensitive "multi-level" active learning

Main idea: Compute a decision-theoretic active selection criterion that weighs both:
– which example to annotate, and
– what kind of annotation to request for it,
as compared to
– the predicted effort the request would require.

SLIDE 7

Our approach: Cost-sensitive "multi-level" active learning

[Diagram: candidate annotations plotted on effort vs. informativeness axes, with captions:]
  • "Most regions are understood, but this region is unclear."
  • "This looks expensive to annotate, and it does not seem informative."
  • "This looks expensive to annotate, but it seems very informative."
  • "This looks easy to annotate, but its content is already understood."

SLIDE 8

Our approach: Cost-sensitive "multi-level" active learning

[Diagram: the current category models issue a request to annotators, e.g. "Get a full segmentation on image #31." The data pool spans unlabeled data, partially and weakly labeled data (e.g. "contains trees", image-level tags like "koala" and "poster"), and fully labeled data (koala, wall, person, car, tree, sky, pavement).]

SLIDE 9

Multiple-instance learning (MIL)

[Diagram: traditional supervised learning (individually labeled examples) vs. multiple-instance learning (positive bags and negative bags of instances).]

[Dietterich et al. 1997]

SLIDE 10

MIL for visual category learning

[Diagram: a positive bag and a negative bag of image segments.]

  • Positive instance: segment belonging to the class
  • Negative instance: segment not belonging to the class
  • Positive bag: image containing the class
  • Negative bag: image not containing the class

[Maron & Ratan, Yang & Lozano-Perez, Andrews et al.,…]
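To make the bag/instance vocabulary concrete, here is a minimal sketch of how such training data might be represented (types and feature dimensions are hypothetical; the vectors are random placeholders):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Bag:
    """An image represented as a bag of segment feature vectors."""
    instances: np.ndarray  # shape (n_segments, n_features)
    label: int             # +1 if the image contains the class, -1 otherwise

# Positive bag: at least one segment depicts the class (e.g. the koala).
positive_bag = Bag(instances=np.random.rand(6, 128), label=+1)
# Negative bag: no segment depicts the class.
negative_bag = Bag(instances=np.random.rand(4, 128), label=-1)
```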

SLIDE 11

Multi-level active queries

Predict which query will be most informative, given the cost of obtaining the annotation. Possible queries:

  1. Label an unlabeled instance
  2. Label an unlabeled bag
  3. Label all instances in a bag

SLIDE 12

Decision-theoretic multi-level criterion

We measure the value of information (VOI) for choosing a potential query z by the expected reduction in total cost:

VOI(z) = Risk(L) − Risk(L ∪ {(z, t)}) − Cost(z)

  • Risk(L): risk under the current classifier, where L is the current dataset
  • Risk(L ∪ {(z, t)}): risk after adding z, labeled with its true label t, to the labeled set
  • Cost(z): cost of obtaining the annotation for z
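A self-contained sketch of this criterion for the simplest query type, a single unlabeled instance (my simplifications, not the authors' implementation: risk is approximated by 0/1 loss on a held-out pool, and the classifier is a logistic regression):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def empirical_risk(clf, X_val, y_val):
    """Stand-in for Risk: mean 0/1 loss on a validation pool."""
    return float(np.mean(clf.predict(X_val) != y_val))

def voi(z, X_l, y_l, clf, X_val, y_val, cost):
    """VOI(z) = Risk(L) - E_t[Risk(L + (z, t))] - Cost(z)."""
    risk_now = empirical_risk(clf, X_val, y_val)
    p_pos = clf.predict_proba(z.reshape(1, -1))[0, 1]
    expected_risk_after = 0.0
    for t, p_t in ((0, 1.0 - p_pos), (1, p_pos)):  # expectation over labels t
        clf_t = LogisticRegression().fit(np.vstack([X_l, z]),
                                         np.append(y_l, t))
        expected_risk_after += p_t * empirical_risk(clf_t, X_val, y_val)
    return risk_now - expected_risk_after - cost

# The active learner issues the candidate query with the highest VOI.
```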

SLIDE 13

Decision-theoretic multi-level criterion

VOI(z) = Risk(L) − Risk(L ∪ {(z, t)}) − Cost(z)

  • Risk(L): risk under the current classifier
  • Risk(L ∪ {(z, t)}): risk after adding z to the labeled set
  • Cost(z): cost of obtaining the annotation for z

SLIDE 14

Decision-theoretic multi-level criterion

VOI(z) = Risk(L) − Risk(L ∪ {(z, t)}) − Cost(z)

  • To estimate the risk of incorporating z into the labeled set before knowing its true label t, compute the expected value

E_t[Risk(L ∪ {(z, t)})] = Σ_{t ∈ T} Pr(t | z) · Risk(L ∪ {(z, t)}),

where T denotes the set of all possible labels for z. This is easy if we are considering an unlabeled instance or bag.

SLIDE 15

Decision-theoretic multi-level criterion

VOI(z) = Risk(L) − Risk(L ∪ {(z, t)}) − Cost(z)

  • But if we are considering a positive bag, then T contains every joint labeling of the bag's instances with at least one positive, too many to enumerate. We compute the expected value using Gibbs sampling:

E_t[Risk] ≈ (1/K) Σ_{k=1}^{K} Risk(L ∪ {(z, t_k)}),

where the kth sample t_k is a label assignment for all instances in the bag.
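A toy sketch of the sampling step (assumptions mine: binary instance labels, a per-instance positive posterior p from the current classifier, and the positive-bag constraint that at least one instance is positive):

```python
import numpy as np

def sample_bag_labelings(p, n_samples=100, rng=None):
    """Draw label assignments for the instances of a positive bag.

    Each step resamples one instance's label from its posterior p[i] and
    keeps the new state only if the bag still has >= 1 positive instance.
    """
    rng = rng or np.random.default_rng(0)
    labels = (rng.random(len(p)) < p).astype(int)
    if labels.sum() == 0:
        labels[np.argmax(p)] = 1               # start from a valid state
    samples = []
    for _ in range(n_samples):
        i = int(rng.integers(len(p)))          # pick one instance
        proposal = labels.copy()
        proposal[i] = int(rng.random() < p[i])
        if proposal.sum() >= 1:                # reject constraint violations
            labels = proposal
        samples.append(labels.copy())
    return samples

# Average Risk(L ∪ {(z, t_k)}) over these samples to estimate the expectation.
assignments = sample_bag_labelings(np.array([0.9, 0.2, 0.4, 0.1]))
```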

SLIDE 16

Decision-theoretic multi-level criterion

VOI(z) = Risk(L) − Risk(L ∪ {(z, t)}) − Cost(z)

  • We learn a function to predict the cost (effort) required to obtain any candidate annotation.

[Diagram: an image placed on effort vs. informativeness axes, captioned "This looks expensive to annotate, and it does not seem informative."]

SLIDE 17

Predicting effort

  • What manual effort cost would we expect to pay for an unlabeled image? Which image would you rather annotate?

SLIDE 18

Predicting effort

  • What manual effort cost would we expect to pay for an unlabeled image? Which image would you rather annotate?

SLIDE 19

Learning from annotation examples

Extract cost-indicative image features, and train a support vector regressor to map features to annotation times:

  • Localized measures of edge density
  • Measure of how fast color changes locally (σ)
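A minimal sketch of this regression step (the two features are crude stand-ins for the cues above; images and times are synthetic):

```python
import numpy as np
from sklearn.svm import SVR

def cost_features(image):
    """Two cost-indicative cues: local edge density and local color change."""
    gy, gx = np.gradient(image.mean(axis=2))             # grayscale gradients
    edge_density = np.hypot(gx, gy).mean()
    color_change = np.linalg.norm(np.gradient(image, axis=0), axis=-1).mean()
    return [edge_density, color_change]

# Synthetic stand-ins: random "images" with annotation times in seconds.
rng = np.random.default_rng(0)
images = [rng.random((32, 32, 3)) for _ in range(50)]
times = rng.uniform(10, 60, size=50)    # e.g. timed Mechanical Turk responses

X = np.array([cost_features(im) for im in images])
regressor = SVR(kernel="rbf").fit(X, times)
predicted_seconds = regressor.predict(X[:1])
```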

SLIDE 20

SLIDE 21

Learning from annotation examples

[Screenshot: timing-collection interface on Mechanical Turk; example response times: 32 s, 24 s, 48 s, …]

Collect about 50 responses per training image.

SLIDE 22

Decision-theoretic multi-level criterion

VOI(z) = Risk(L) − Risk(L ∪ {(z, t)}) − Cost(z)

  • We learn a function to predict the cost (effort) required to obtain any candidate annotation.

[Diagram: an image placed on effort vs. informativeness axes, captioned "This looks expensive to annotate, and it does not seem informative."]

SLIDE 23

Recap: actively seeking annotations

[Diagram: the current category models issue a request to annotators, e.g. "Get a full segmentation on image #31." The data pool spans unlabeled data, partially and weakly labeled data (e.g. "contains trees", image-level tags like "koala" and "poster"), and fully labeled data (koala, wall, person, car, tree, sky, pavement).]

SLIDE 24

Results: MSRC dataset

  • 21 classes, 591 images
  • Multi-label data
SLIDE 25

Results: predicting effort

  • Predicted examples are from a novel test set
SLIDE 26

Results: predicting effort

SLIDE 27

Results: predicting effort

SLIDE 28

Results: impact of cost predictions

Predicting the amount of effort entailed leads to wiser choices during active selection.

SLIDE 29

Summary

  • Multi-level active learning formulates annotation requests that specify both the example and the task.

  • Balance cost and effort to use human attention most efficiently: learn more with less!

  • Predict which examples are hard/easy to annotate.

  • References:

– Vijayanarasimhan & Grauman. Multi-Level Active Prediction of Useful Image Annotations for Recognition. In NIPS 2008.
– Vijayanarasimhan & Grauman. What's It Going to Cost You? Predicting Effort vs. Informativeness for Multi-Label Image Annotations. To appear, CVPR 2009.