
Slide 1: Administrivia

Mini-project 2 due April 7, in class

  • implement multi-class reductions, Naive Bayes, kernel perceptron, multi-class logistic regression, and two-layer neural networks

  • training set:
  • Project proposals due April 2, in class
  • one page describing the project topic, goals, etc.
  • list your team members (2+)
  • project presentations: April 23 and 27
  • final report: May 3


Slide 2: Kaggle


https://www.kaggle.com/competitions

Slide 3: Kernel Methods

CMPSCI 689: Machine Learning
Subhransu Maji (UMASS)
26 March 2015

Slide 4: Feature mapping

Learn non-linear classifiers by mapping features


Can we learn the XOR function with this mapping?
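To make the XOR question concrete, here is a minimal NumPy sketch (my addition, not from the slides): the four XOR points are not linearly separable in (x₁, x₂), but appending the product term x₁x₂ makes them separable; the weights below are hand-picked for illustration.

```python
import numpy as np

# XOR is not linearly separable in the raw features (x1, x2),
# but appending the product term x1*x2 makes it separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, +1, +1, -1])  # XOR labels

def phi(x):
    # toy feature map: (x1, x2) -> (x1, x2, x1*x2)
    return np.array([x[0], x[1], x[0] * x[1]])

# hand-picked separator in the mapped space:
# sign(x1 + x2 - 2*x1*x2 - 0.5) reproduces the XOR labels
w, b = np.array([1.0, 1.0, -2.0]), -0.5
print([int(np.sign(w @ phi(x) + b)) for x in X])  # [-1, 1, 1, -1]
```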

Slide 5: Quadratic feature map

Let x = [x₁, x₂, …, x_D]. Then the quadratic feature map is defined as:

φ(x) = [1, √2·x₁, √2·x₂, …, √2·x_D,
        x₁², x₁x₂, x₁x₃, …, x₁x_D,
        x₂x₁, x₂², x₂x₃, …, x₂x_D,
        …,
        x_Dx₁, x_Dx₂, x_Dx₃, …, x_D²]

  • Contains all single and pairwise terms

There are repetitions, e.g., x₁x₂ and x₂x₁, but hopefully the learning algorithm can handle redundant features
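A minimal NumPy sketch of this map (my transcription, not from the slides); note the output has 1 + D + D² entries, since the pairwise terms are kept with their repetitions.

```python
import numpy as np

def quadratic_feature_map(x):
    # phi(x) = [1, sqrt(2)*x_1 .. sqrt(2)*x_D, all pairwise products x_i*x_j]
    x = np.asarray(x, dtype=float)
    return np.concatenate(([1.0], np.sqrt(2.0) * x, np.outer(x, x).ravel()))

print(quadratic_feature_map([1.0, 2.0, 3.0]).shape)  # (13,) = 1 + 3 + 3**2
```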

Slide 6: Drawbacks of feature mapping

Computational
  • Suppose training time is linear in the feature dimension; the quadratic feature map squares the training time
Memory
  • The quadratic feature map squares the memory required to store the training data
Statistical
  • The quadratic feature map squares the number of parameters
  • For now, let's assume that regularization will deal with overfitting

Slide 7: Quadratic kernel

The dot product between feature maps of x and z is:

φ(x)ᵀφ(z) = 1 + 2x₁z₁ + 2x₂z₂ + … + 2x_Dz_D
            + x₁²z₁² + x₁x₂z₁z₂ + … + x₁x_Dz₁z_D + …
            + x_Dx₁z_Dz₁ + x_Dx₂z_Dz₂ + … + x_D²z_D²
          = 1 + 2(∑ᵢ xᵢzᵢ) + ∑ᵢ,ⱼ xᵢxⱼzᵢzⱼ
          = 1 + 2(xᵀz) + (xᵀz)²
          = (1 + xᵀz)² = K(x, z), the quadratic kernel

  • Thus, we can compute φ(x)ᵀφ(z) in almost the same time as needed to compute xᵀz (one extra addition and multiplication)

We will rewrite various algorithms using only dot products (or kernel evaluations), and not explicit features
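A quick numerical check of this identity (a sketch, redefining the feature map from slide 5 so the block is self-contained): the explicit high-dimensional dot product agrees with the O(D) kernel evaluation.

```python
import numpy as np

def quadratic_feature_map(x):
    x = np.asarray(x, dtype=float)
    return np.concatenate(([1.0], np.sqrt(2.0) * x, np.outer(x, x).ravel()))

rng = np.random.default_rng(0)
x, z = rng.normal(size=5), rng.normal(size=5)

lhs = quadratic_feature_map(x) @ quadratic_feature_map(z)  # explicit features
rhs = (1.0 + x @ z) ** 2                                   # kernel shortcut
print(np.isclose(lhs, rhs))  # True
```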

Slide 8: Perceptron revisited

Perceptron training algorithm (obtained by replacing x with φ(x)):

Input: training data (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ); feature map φ
Initialize w ← [0, …, 0]
for iter = 1, …, T
  • for i = 1, …, n
    • predict according to the current model: ŷᵢ = +1 if wᵀφ(xᵢ) > 0, −1 otherwise   ← dependence on φ
    • if yᵢ = ŷᵢ, no change
    • else, w ← w + yᵢφ(xᵢ)
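A minimal NumPy sketch of this algorithm (the XOR demo and the simple product-term map are my additions for illustration):

```python
import numpy as np

def perceptron_train(X, y, phi, T=100):
    # perceptron on explicitly mapped features, following the pseudocode above
    w = np.zeros(len(phi(X[0])))
    for _ in range(T):
        for x_i, y_i in zip(X, y):
            y_hat = 1 if w @ phi(x_i) > 0 else -1
            if y_hat != y_i:           # mistake: update
                w += y_i * phi(x_i)
    return w

# with a quadratic-style map (cf. slide 5), XOR becomes learnable
phi = lambda x: np.array([1.0, x[0], x[1], x[0] * x[1]])
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
w = perceptron_train(X, y, phi)
print([1 if w @ phi(x) > 0 else -1 for x in X])  # matches y after convergence
```
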
Slide 9: Properties of the weight vector

Linear algebra recap:

  • Let U be a set of vectors in Rᴰ, i.e., U = {u₁, u₂, …, u_D} with uᵢ ∈ Rᴰ
  • Span(U) is the set of all vectors that can be represented as ∑ᵢaᵢuᵢ, such that aᵢ ∈ R
  • Null(U) is everything that is left, i.e., Rᴰ \ Span(U)


Perceptron representer theorem: During the run of the perceptron training algorithm, the weight vector w is always in the span of φ(x₁), φ(x₂), …, φ(xₙ)

w = ∑ᵢ αᵢφ(xᵢ)   with updates   αᵢ ← αᵢ + yᵢ

wᵀφ(z) = (∑ᵢ αᵢφ(xᵢ))ᵀφ(z) = ∑ᵢ αᵢ φ(xᵢ)ᵀφ(z)

Slide 10: Kernelized perceptron

Kernelized perceptron training algorithm:

Input: training data (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ); feature map φ
Initialize α ← [0, 0, …, 0]
for iter = 1, …, T
  • for i = 1, …, n
    • predict according to the current model: ŷᵢ = +1 if ∑ₖ αₖ φ(xₖ)ᵀφ(xᵢ) > 0, −1 otherwise
    • if yᵢ = ŷᵢ, no change
    • else, αᵢ ← αᵢ + yᵢ

Any kernel can be substituted for the dot product, e.g., φ(x)ᵀφ(z) = (1 + xᵀz)ᵖ, the polynomial kernel of degree p
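A minimal NumPy sketch of this algorithm (the XOR demo is my addition); with the degree-2 polynomial kernel it learns XOR without ever forming φ(x):

```python
import numpy as np

def kernel_perceptron_train(X, y, K, T=100):
    # kernelized perceptron: one coefficient alpha_i per training point;
    # predictions use only kernel evaluations, no explicit feature map
    n = len(X)
    alpha = np.zeros(n)
    G = np.array([[K(a, b) for b in X] for a in X])  # Gram matrix
    for _ in range(T):
        for i in range(n):
            y_hat = 1 if alpha @ G[:, i] > 0 else -1
            if y_hat != y[i]:
                alpha[i] += y[i]
    return alpha

K = lambda x, z: (1.0 + x @ z) ** 2   # polynomial kernel, p = 2
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
alpha = kernel_perceptron_train(X, y, K)
print([1 if sum(a * K(xk, x) for a, xk in zip(alpha, X)) > 0 else -1 for x in X])
```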

Slide 11: Support vector machines

Kernels existed long before SVMs, but were popularized by them. Does the representer theorem hold for SVMs? Recall that the objective function of an SVM is:

min_w  ½||w||² + C ∑ₙ max(0, 1 − yₙwᵀxₙ)

  • Let w = w∥ + w⊥, with w∥ ∈ Span({x₁, x₂, …, xₙ}) and w⊥ orthogonal to the span

Only w∥ affects classification:

wᵀxᵢ = (w∥ + w⊥)ᵀxᵢ = w∥ᵀxᵢ + w⊥ᵀxᵢ = w∥ᵀxᵢ

The norm decomposes:

wᵀw = (w∥ + w⊥)ᵀ(w∥ + w⊥) = w∥ᵀw∥ + w⊥ᵀw⊥ ≥ w∥ᵀw∥

Hence the optimum can always be achieved with w in the span of the training points, so the representer theorem holds for SVMs too.

Slide 12: Kernel k-means

Initialize k centers by picking k points randomly. Repeat till convergence (or max iterations):

  • Assign each point to the nearest center (assignment step)
  • Estimate the mean of each group (update step)

With a feature map, kernel k-means solves

arg min_S ∑ᵢ₌₁ᵏ ∑_{x∈Sᵢ} ||φ(x) − μᵢ||²,   with   μᵢ ← (1/|Sᵢ|) ∑_{x∈Sᵢ} φ(x)

  • The representer theorem is easy here: each mean μᵢ is, by construction, a linear combination of the φ(x)
  • Exercise: show how to compute ||φ(x) − μᵢ||² using only dot products (one way is sketched below)
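One way to carry out the exercise: expand the square and substitute the definition of μᵢ, so that every remaining term is a kernel evaluation:

||φ(x) − μᵢ||² = φ(x)ᵀφ(x) − (2/|Sᵢ|) ∑_{z∈Sᵢ} φ(x)ᵀφ(z) + (1/|Sᵢ|²) ∑_{z,z′∈Sᵢ} φ(z)ᵀφ(z′)
              = K(x, x) − (2/|Sᵢ|) ∑_{z∈Sᵢ} K(x, z) + (1/|Sᵢ|²) ∑_{z,z′∈Sᵢ} K(z, z′)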

Slide 13: What makes a kernel?

A kernel is a mapping K: X × X → R. Functions that can be written as dot products are valid kernels:

K(x, z) = φ(x)ᵀφ(z)

  • Example: the polynomial kernel K_d⁽ᵖᵒˡʸ⁾(x, z) = (1 + xᵀz)ᵈ
  • Alternate characterization of a kernel

A function K: X × X → R is a kernel if K is positive semi-definite (psd). This property is also called Mercer's condition. It means that for all square-integrable functions f (i.e., ∫ f(x)² dx < ∞), except the zero function, the following holds:

∫∫ f(x) K(x, z) f(z) dz dx > 0
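A common finite-sample sanity check (not a proof of Mercer's condition): any Gram matrix built from a psd kernel must itself be psd, i.e., have no negative eigenvalues. A minimal NumPy sketch using the polynomial kernel above:

```python
import numpy as np

# finite-sample sanity check: a Gram matrix built from a psd kernel
# has no negative eigenvalues
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

K_poly = lambda x, z, d=3: (1.0 + x @ z) ** d
G = np.array([[K_poly(a, b) for b in X] for a in X])

print(np.linalg.eigvalsh(G).min() >= -1e-8)  # True, up to round-off
```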

Slide 14: Why is this characterization useful?

We can prove some properties about kernels that are otherwise hard to prove.

Theorem: If K₁ and K₂ are kernels, then K₁ + K₂ is also a kernel.

Proof:

∫∫ f(x) K(x, z) f(z) dz dx = ∫∫ f(x) (K₁(x, z) + K₂(x, z)) f(z) dz dx
                           = ∫∫ f(x) K₁(x, z) f(z) dz dx + ∫∫ f(x) K₂(x, z) f(z) dz dx
                           ≥ 0 + 0

  • More generally, if K₁, K₂, …, Kₙ are kernels then ∑ᵢ αᵢKᵢ with αᵢ ≥ 0 is also a kernel

We can build new kernels by linearly combining existing kernels.

Slide 15: Why is this characterization useful? (contd.)

We can show that the Gaussian function is a kernel:

K⁽ʳᵇᶠ⁾(x, z) = exp(−γ||x − z||²)

  • Also called the radial basis function (RBF) kernel
  • Let's look at the classification function of an SVM with the RBF kernel:

f(z) = ∑ᵢ αᵢ K⁽ʳᵇᶠ⁾(xᵢ, z) = ∑ᵢ αᵢ exp(−γ||xᵢ − z||²)

  • This is similar to a two-layer network with the RBF as the link function

Gaussian kernels are examples of universal kernels: they can approximate any function in the limit as training data goes to infinity.
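To make the shape of this classifier concrete, a minimal sketch of f(z) in NumPy; here alpha and the points X are assumed to come from some trained model (e.g., an SVM, or the kernelized perceptron above):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def f(z, X, alpha, gamma=1.0):
    # a weighted sum of Gaussian bumps centered on the training points
    return sum(a * rbf_kernel(x, z, gamma) for a, x in zip(alpha, X))

rng = np.random.default_rng(0)
X, alpha = rng.normal(size=(5, 2)), rng.normal(size=5)
print(f(np.zeros(2), X, alpha))
```
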
Slide 16: Kernels in practice

Feature mapping via kernels often improves performance. Test error on MNIST digits (60,000 training examples, http://yann.lecun.com/exdb/mnist/):

  • 8.4% SVM, linear
  • 1.4% SVM, RBF
  • 1.1% SVM, polynomial (d = 4)

Slide 17: Kernels over general structures

Kernels can be defined over any pair of inputs, such as strings, trees, and graphs!

Kernel over trees: K(tree₁, tree₂) = number of common subtrees (http://en.wikipedia.org/wiki/Tree_kernel)

  • This can be computed efficiently using dynamic programming
  • Can be used with SVMs, perceptrons, k-means, etc.

For strings, the number of common substrings is a kernel (a sketch follows below). Graph kernels that measure graph similarity (e.g., the number of common subgraphs) have been used to predict the toxicity of chemical structures.
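A hedged sketch of one such string kernel (my variant for illustration): it counts matching occurrence pairs of substrings, which equals ∑_u occ_s(u)·occ_t(u) over all substrings u, a dot product and hence a valid kernel. Computed by dynamic programming:

```python
def common_substring_kernel(s, t):
    # suffix[i][j] = length of the longest common prefix of s[i:] and t[j:];
    # summing it over all (i, j) counts every matching substring pair once
    n, m = len(s), len(t)
    suffix = [[0] * (m + 1) for _ in range(n + 1)]
    total = 0
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if s[i] == t[j]:
                suffix[i][j] = suffix[i + 1][j + 1] + 1
                total += suffix[i][j]
    return total

print(common_substring_kernel("abab", "bab"))
```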

Slide 18: Kernels for computer vision

Histogram intersection kernel between two histograms a and b:

K(a, b) = ∑ⱼ min(aⱼ, bⱼ)

  • Introduced by Swain and Ballard 1991 to compare color histograms

[Figure: histograms a and b, and their elementwise minimum min(a, b)]
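A minimal sketch of this kernel in NumPy (the example histograms are my additions):

```python
import numpy as np

# histogram intersection: sums the elementwise overlap of two histograms
def intersection_kernel(a, b):
    return np.minimum(a, b).sum()

a = np.array([0.1, 0.4, 0.3, 0.2])
b = np.array([0.3, 0.2, 0.3, 0.2])
print(intersection_kernel(a, b))  # 0.1 + 0.2 + 0.3 + 0.2 = 0.8
```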

Slide 19: Kernel classifier tradeoffs

  • Linear kernel: h(z) = wᵀz; evaluation time is O(feature dimension)
  • Non-linear kernel: h(z) = ∑ᵢ₌₁ᴺ αᵢ K(xᵢ, z); evaluation time is O(N × feature dimension)

[Figure: accuracy vs. evaluation time; the non-linear kernel is more accurate but much slower to evaluate than the linear kernel]

Slide 20: Kernel classification function

For the histogram intersection kernel:

h(z) = ∑ᵢ₌₁ᴺ αᵢ K(xᵢ, z) = ∑ᵢ₌₁ᴺ αᵢ ( ∑ⱼ₌₁ᴰ min(xᵢⱼ, zⱼ) )

Slide 21: Kernel classification function (contd.)

Key insight: the additive property lets us exchange the two sums:

h(z) = ∑ᵢ₌₁ᴺ αᵢ ( ∑ⱼ₌₁ᴰ min(xᵢⱼ, zⱼ) )
     = ∑ⱼ₌₁ᴰ ( ∑ᵢ₌₁ᴺ αᵢ min(xᵢⱼ, zⱼ) )
     = ∑ⱼ₌₁ᴰ hⱼ(zⱼ),   where   hⱼ(zⱼ) = ∑ᵢ₌₁ᴺ αᵢ min(xᵢⱼ, zⱼ)
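A small numerical check of this rearrangement (a sketch, with random data standing in for trained support vectors and coefficients):

```python
import numpy as np

# evaluating h(z) as a sum over support vectors vs. as a sum of
# one-dimensional functions h_j(z_j); the two must agree
def h_direct(z, X, alpha):
    return sum(a * np.minimum(x, z).sum() for a, x in zip(alpha, X))

def h_additive(z, X, alpha):
    # h_j(z_j) = sum_i alpha_i * min(x_ij, z_j), summed over dimensions j
    return sum((alpha * np.minimum(X[:, j], z[j])).sum()
               for j in range(X.shape[1]))

rng = np.random.default_rng(0)
X, alpha, z = rng.random((10, 4)), rng.normal(size=10), rng.random(4)
print(np.isclose(h_direct(z, X, alpha), h_additive(z, X, alpha)))  # True
```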

Slide 22: Kernel classification function (contd.)

Algorithm 1 (direct evaluation):

h(z) = ∑ᵢ₌₁ᴺ αᵢ K(xᵢ, z) = ∑ᵢ₌₁ᴺ αᵢ ( ∑ⱼ₌₁ᴰ min(xᵢⱼ, zⱼ) ),   hⱼ(zⱼ) = ∑ᵢ₌₁ᴺ αᵢ min(xᵢⱼ, zⱼ)

Evaluating each hⱼ directly costs O(N).

Slide 23: Kernel classification function (contd.)

Algorithm 1, sped up: split the sum at zⱼ:

hⱼ(zⱼ) = ∑ᵢ₌₁ᴺ αᵢ min(xᵢⱼ, zⱼ) = ∑_{i: xᵢⱼ < zⱼ} αᵢxᵢⱼ + ∑_{i: xᵢⱼ ≥ zⱼ} αᵢzⱼ

Sort the support vector values in each coordinate, and pre-compute these sums for each rank. To evaluate, find the position of zⱼ in the sorted list of support vector values xᵢⱼ; this can be done in O(log N) time using binary search.

[Maji et al. PAMI 13]
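A minimal sketch of this idea for a single coordinate j (my implementation of the description above, not the paper's code): sort once, precompute prefix sums, then answer each query with one binary search.

```python
import numpy as np
from bisect import bisect_left

class FastHj:
    """O(log N) evaluation of h_j via sorting and prefix sums."""
    def __init__(self, x_j, alpha):
        order = np.argsort(x_j)
        self.x = x_j[order]                      # sorted x_ij
        a = alpha[order]
        self.prefix_ax = np.concatenate(([0.0], np.cumsum(a * self.x)))
        self.prefix_a = np.concatenate(([0.0], np.cumsum(a)))

    def __call__(self, z_j):
        r = bisect_left(self.x, z_j)             # rank of z_j: O(log N)
        # sum_{x_ij < z_j} alpha_i*x_ij + z_j * sum_{x_ij >= z_j} alpha_i
        return self.prefix_ax[r] + z_j * (self.prefix_a[-1] - self.prefix_a[r])

# agrees with the direct O(N) evaluation
rng = np.random.default_rng(0)
x_j, alpha = rng.random(100), rng.normal(size=100)
hj = FastHj(x_j, alpha)
print(np.isclose(hj(0.5), (alpha * np.minimum(x_j, 0.5)).sum()))  # True
```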

Slide 24: Kernel classification function (contd.)

Algorithm 2: for many problems hⱼ is smooth, so we can approximate it with a small number of uniformly spaced segments. This reduces the cost per evaluation from O(log N) to an O(1) table lookup, and saves time and space!

[Figure: an exact, smooth hⱼ (blue) and its approximation with uniformly spaced segments (red)]

[Maji et al. PAMI 13]
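A sketch of the tabulation idea (my simplified variant: nearest-bin lookup; piecewise-linear interpolation would be the smoother choice, and queries are clipped to the tabulated range):

```python
import numpy as np

class ApproxHj:
    """O(1) approximate evaluation of h_j on a uniform grid."""
    def __init__(self, x_j, alpha, n_bins=32):
        self.lo, self.hi = float(x_j.min()), float(x_j.max())
        grid = np.linspace(self.lo, self.hi, n_bins)
        # exact O(N) evaluations, done once per grid point at build time
        self.table = np.array([(alpha * np.minimum(x_j, g)).sum() for g in grid])

    def __call__(self, z_j):
        t = (np.clip(z_j, self.lo, self.hi) - self.lo) / (self.hi - self.lo)
        return self.table[int(round(t * (len(self.table) - 1)))]  # O(1) lookup

rng = np.random.default_rng(0)
x_j, alpha = rng.random(1000), rng.normal(size=1000)
hj = ApproxHj(x_j, alpha)
print(abs(hj(0.5) - (alpha * np.minimum(x_j, 0.5)).sum()))  # small error
```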

Slide 25: Kernel classification function (contd.)

Algorithm 2 applies to any additive kernel, K(x, z) = ∑ᵢ₌₁ᴰ kᵢ(xᵢ, zᵢ), for example:

  • Intersection: k(a, b) = min(a, b)
  • Chi-squared: k(a, b) = 2ab/(a + b)
  • Jensen-Shannon: k(a, b) = a·log((a + b)/a) + b·log((a + b)/b)

Each gives O(1) evaluation per dimension instead of O(N).

[Maji et al. PAMI 13]
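Hedged sketches of these three additive kernels, applied bin-by-bin to histograms (my transcriptions; strictly positive entries are assumed, since zero-valued bins would need a small epsilon to avoid 0/0 and log(0)):

```python
import numpy as np

def intersection(a, b):
    return np.minimum(a, b).sum()

def chi_squared(a, b):
    return (2.0 * a * b / (a + b)).sum()

def jensen_shannon(a, b):
    return (a * np.log((a + b) / a) + b * np.log((a + b) / b)).sum()

a = np.array([0.1, 0.4, 0.3, 0.2])
b = np.array([0.3, 0.2, 0.3, 0.2])
print(intersection(a, b), chi_squared(a, b), jensen_shannon(a, b))
```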

Slide 26: Linear and intersection kernel SVM

Using histograms of oriented gradients features:

| Dataset                  | Measure         | Linear SVM | IK SVM | Speedup |
|--------------------------|-----------------|------------|--------|---------|
| INRIA pedestrians        | Recall @ 2 FPPI | 78.9       | 86.6   | 2594×   |
| DC pedestrians           | Accuracy        | 72.2       | 89.0   | 2253×   |
| Caltech101, 15 examples  | Accuracy        | 38.8       | 50.1   | 37×     |
| Caltech101, 30 examples  | Accuracy        | 44.3       | 56.6   | 62×     |
| MNIST digits             | Error           | 1.44       | 0.77   | 2500×   |
| UIUC cars (single scale) | Precision @ EER | 89.8       | 98.5   | 65×     |

On average, the intersection kernel SVM is more accurate than the linear SVM and 100-1000× faster than a standard kernel classifier. A similar idea can be applied to training as well. Research question: when can we approximate kernels efficiently?

Slide 27: Slides credit

Some of the slides are based on the CIML book by Hal Daume III. Experiments on various datasets are from "Efficient Classification for Additive Kernel SVMs", S. Maji, A. C. Berg and J. Malik, PAMI, Jan 2013.

Some resources:

  • LIBSVM: kernel SVM classifier training and testing
    ➡ http://www.csie.ntu.edu.tw/~cjlin/libsvm/
  • LIBLINEAR: fast linear classifier training
    ➡ http://www.csie.ntu.edu.tw/~cjlin/liblinear/
  • LIBSPLINE: fast additive kernel training and testing
    ➡ https://github.com/msubhransu/libspline