Active learning, Co-training, Subtract-average detection score (UCLA, 5/17/06)



SLIDE 1

Active learning

SLIDE 2

Co-training

SLIDE 3

SLIDE 4

[Figure: grey-scale detection score vs. subtract-average detection score]

SLIDE 5

Summary

  • Boosting is a method for learning an accurate classifier by combining many weak classifiers (see the sketch after this list).
  • Boosting is resistant to over-fitting.
  • Margins quantify prediction confidence.
  • High noise is a serious problem for learning classifiers; it cannot be solved by minimizing convex loss functions.
  • RobustBoost can solve some high-noise problems; an exact characterization is still unclear.
  • JBoost: an implementation of ADTrees and various boosting algorithms in Java.
  • A book on boosting is coming this spring.
  • Thank you. Questions?

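To make the first bullet concrete, here is a minimal AdaBoost-style sketch in Python, with decision stumps as the weak classifiers. It is illustrative only: numpy, the function names, and the stump form are my assumptions, not code from the talk or from JBoost.

```python
# Minimal AdaBoost sketch: combine many weak classifiers (decision stumps)
# into one accurate weighted-majority vote. Labels y must be in {-1, +1}.
import numpy as np

def fit_stump(X, y, w):
    """Pick the single-feature threshold rule with lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thresh in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thresh, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thresh, sign)
    return best

def adaboost(X, y, T=20):
    """Return a list of (alpha, stump) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # example weights
    ensemble = []
    for _ in range(T):
        err, j, thresh, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weak classifier weight
        pred = sign * np.where(X[:, j] > thresh, 1, -1)
        w *= np.exp(-alpha * y * pred)           # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, (j, thresh, sign)))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the weak classifiers."""
    score = np.zeros(len(X))
    for alpha, (j, thresh, sign) in ensemble:
        score += alpha * sign * np.where(X[:, j] > thresh, 1, -1)
    return np.sign(score)
```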

SLIDE 6

Pedestrian detection - typical segment

SLIDE 7

Current best results

SLIDE 8

Image Features

Unique binary features: “rectangle filters”, similar to Haar wavelets (Papageorgiou et al.).

$$h_t(x_i) = \begin{cases} 1 & \text{if } f_t(x_i) > \theta_t \\ 0 & \text{otherwise} \end{cases}$$

Very fast to compute using the “integral image”. Features are combined using AdaBoost.
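As a concrete illustration of why rectangle filters are “very fast to compute”: after one cumulative-sum pass, the sum over any axis-aligned rectangle costs four array accesses. A minimal numpy sketch; the function names and the two-rectangle filter layout are my assumptions.

```python
# Integral-image trick for rectangle filters.
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[0:r, 0:c]; padded so indexing is uniform."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of the h-by-w rectangle with top-left corner (top, left)."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def two_rect_feature(ii, top, left, h, w):
    """A Haar-like filter: left half minus right half (w must be even)."""
    return (rect_sum(ii, top, left, h, w // 2)
            - rect_sum(ii, top, left + w // 2, h, w // 2))

# Weak classifier from the slide: h_t(x) = 1 if f_t(x) > theta_t, else 0.
def h(ii, theta, top, left, hgt, wid):
    return 1 if two_rect_feature(ii, top, left, hgt, wid) > theta else 0
```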

SLIDE 9

Yotam’s features

max(p1, p2) < min(q1, q2, q3, q4)

Search for a good feature based on genetic programming. Faster to calculate than Viola and Jones.

SLIDE 10

Definition

  • Feature works in one of 3 resolutions: full, half, quarter.
  • Two sets of up to 6 points each.
  • Each point is an individual pixel.
  • Feature says yes if all white points have higher values than all black points, or vice versa (see the sketch after this list).
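A minimal sketch of that comparison rule, assuming the detection window is available at the three resolutions; the data layout and names are mine, only the min/max rule comes from the slides.

```python
# Pixel-comparison feature: fires if every "white" pixel is brighter than
# every "black" pixel, or vice versa, i.e.
# min(white) > max(black)  or  min(black) > max(white).
def eval_feature(pyramid, resolution, white_pts, black_pts):
    """pyramid: images at full/half/quarter resolution (index 0-2).
    white_pts, black_pts: up to 6 (row, col) pixel coordinates each."""
    img = pyramid[resolution]
    white = [img[r, c] for r, c in white_pts]   # a few raw pixel accesses
    black = [img[r, c] for r, c in black_pts]   # no normalization needed
    return min(white) > max(black) or min(black) > max(white)
```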
SLIDE 11

Advantages

  • Deals better with variation in illumination; no need to normalize.
  • Highly efficient (3-4 image access operations); 2 times faster than Viola & Jones.
  • Uses 20% of the memory.

SLIDE 12

Steps of batch learning

  • Collect labeled examples.
  • Run the learning algorithm to generate a classification rule.
  • Test the classification rule on new data (a minimal sketch of these steps follows).
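The three batch-learning steps fit in one short runnable sketch. scikit-learn's AdaBoostClassifier and synthetic data stand in for JBoost and the pedestrian data; that substitution is mine.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# 1. Collect labeled examples (synthetic stand-in data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Run the learning algorithm to generate a classification rule.
clf = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)

# 3. Test the classification rule on new data.
print("held-out accuracy:", clf.score(X_test, y_test))
```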
SLIDE 13

Labeling process

Collected 6 hrs of video → 540,000 frames, with 170,000 candidate boxes per frame (roughly 9 × 10^10 boxes in total, far too many to label exhaustively). 1500 pedestrians. Deciding whether a box contains a pedestrian takes about 3 seconds; marking a box around a pedestrian takes about 20 seconds. How to choose “hard” negative examples?

SLIDE 14

Steps of active learning

  • Collect labeled examples.
  • Run the learning algorithm to generate a classification rule.
  • Apply the classifier to new data and label the informative examples (a sketch of this loop follows).
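A sketch of the active-learning loop, using the margin-band selection that appears later in the talk (“only examples whose score is in this range are hand-labeled”). The pool, the oracle, and the band width are assumptions for illustration.

```python
# Active-learning loop: train, score the unlabeled pool, and ask a human
# (the oracle) to label only the low-margin examples, i.e. those whose
# signed score falls close to zero.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def active_learning(X_labeled, y_labeled, X_pool, oracle, rounds=5, band=0.2):
    for _ in range(rounds):
        clf = AdaBoostClassifier(n_estimators=100).fit(X_labeled, y_labeled)
        score = clf.decision_function(X_pool)        # signed confidence
        ask = np.abs(score) < band                   # informative examples
        if not ask.any():
            break
        y_new = np.array([oracle(x) for x in X_pool[ask]])   # hand-label
        X_labeled = np.vstack([X_labeled, X_pool[ask]])
        y_labeled = np.concatenate([y_labeled, y_new])
        X_pool = X_pool[~ask]                        # shrink the pool
    return clf
```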

SLIDE 15

SEVILLE screen shot 1

SLIDE 16

SEVILLE screen shot 2

SLIDE 17

Consider the following:

Margins. An example is a pair $\langle x, y\rangle$, e.g. $\langle \text{image}, +1\rangle$; margin > 0 means correct classification.

Normalized score:

$$-1 \;\le\; \frac{\sum_{t=1}^{T} \alpha_t h_t(x)}{\sum_{t=1}^{T} \alpha_t} \;\le\; 1$$

The margin is:

$$y \cdot \frac{\sum_{t=1}^{T} \alpha_t h_t(x)}{\sum_{t=1}^{T} \alpha_t}$$
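A direct transcription of the two formulas above into Python, assuming weak classifiers that output −1 or +1 and nonnegative weights α_t:

```python
# Normalized score and margin of a weighted-vote classifier.
import numpy as np

def normalized_score(alphas, h_values):
    """(sum_t alpha_t * h_t(x)) / (sum_t alpha_t); always in [-1, 1]
    when each h_t(x) is in {-1, +1}."""
    alphas = np.asarray(alphas, dtype=float)
    h_values = np.asarray(h_values, dtype=float)
    return np.dot(alphas, h_values) / alphas.sum()

def margin(y, alphas, h_values):
    """y times the normalized score; positive iff the vote is correct."""
    return y * normalized_score(alphas, h_values)

# e.g. three weak classifiers, two of which vote +1 on a positive example:
print(margin(+1, [0.5, 0.3, 0.2], [+1, +1, -1]))   # 0.6
```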

SLIDE 18

Display the rectangles inside the margins

SLIDE 19

large margins => reliable predictions

[Plot: Learning and Validation curves; x-axis: 10-3000 training examples; y-axis: 0.5-1]

SLIDE 20

Margin Distributions

SLIDE 21

Summary of Training effort

SLIDE 22

Summary of Training

Only examples whose score is in this range are hand-labeled.

SLIDE 23

Few training examples

SLIDE 24

After re-labeling feedback

SLIDE 25

Final detector

SLIDE 26

Examples - easy

Positive / Negative

SLIDE 27

Examples - medium

Positive / Negative

SLIDE 28

Examples - hard

Positive / Negative (iterations 7, 8, 9, 10)

SLIDE 29

And the figure in the gown is...

SLIDE 30

Seville cycles

SLIDE 31

Summary

  • Boosting and SVM control over-fitting using margins.
  • Margins measure the stability of the prediction, not conditional probability.
  • Margins are useful for co-training and for active learning.
