Support Vector Machines & Kernelization - Barna Saha - PowerPoint PPT Presentation



SLIDE 1

Support Vector Machines & Kernelization

Barna Saha. Most of the slides are made using David Sontag's course on Machine Learning at MIT.

SLIDE 2
SLIDE 3

Support Vector Machines

SLIDE 4

Support Vector Machines

SLIDE 5

What if the data is not linearly separable?


  • General idea: the original feature space can always be mapped to a different (often higher-dimensional) feature space where the training set is separable:

[x1, x2] → [√(x1² + x2²), arctan(x2/x1)]

Φ: x → φ(x)
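As an illustration of this map (a sketch assuming numpy; the concentric-ring radii are invented for the example, and arctan2 stands in for arctan(x2/x1) to avoid division by zero), two rings that are not linearly separable in the original coordinates become separable by the radius coordinate alone after mapping:

```python
import numpy as np

# Two concentric rings: not linearly separable in (x1, x2),
# but the first mapped coordinate (the radius) separates them.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
inner = np.c_[np.cos(theta), np.sin(theta)]               # class -1, radius 1
outer = np.c_[3.0 * np.cos(theta), 3.0 * np.sin(theta)]   # class +1, radius 3

def phi(x):
    """Map [x1, x2] -> [sqrt(x1^2 + x2^2), arctan2(x2, x1)]."""
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

# In the mapped space, the hyperplane r = 2 separates the two classes.
assert all(phi(x)[0] < 2.0 for x in inner)
assert all(phi(x)[0] > 2.0 for x in outer)
```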

SLIDE 6

What if the data is not linearly separable?


  • If there is a separator which "almost" separates, find a separator that minimizes some kind of loss function.
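One common choice of such a loss (the soft-margin hinge loss; this is a minimal sketch assuming numpy and invented toy data, not code from the slides) penalizes points that fall inside or on the wrong side of the margin:

```python
import numpy as np

# Minimize the average hinge loss max(0, 1 - y*(w.x + b)) plus an L2
# penalty lam*||w||^2, by plain subgradient descent.
def fit_svm(X, y, lam=0.01, lr=0.1, steps=500):
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(y)
    for _ in range(steps):
        margins = y * (X @ w + b)
        active = margins < 1.0          # points violating the margin
        grad_w = 2 * lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# 1-D toy data: separable, so the learned separator classifies all points.
X = np.array([[-3.0], [-2.0], [2.0], [3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = fit_svm(X, y)
assert np.all(np.sign(X @ w + b) == y)
```

On non-separable data the same loop still runs; the hinge loss simply trades margin violations against the size of w.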

SLIDE 7
SLIDE 8

w is normal to the hyperplane w·x + b = 0
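This fact can be checked numerically (a small sketch assuming numpy; the specific w, b, and points are invented for illustration): for any two points p and q on the hyperplane, the difference p − q lies inside the hyperplane, and w is orthogonal to it.

```python
import numpy as np

# Hyperplane w.x + b = 0 with assumed example values.
w = np.array([2.0, 1.0])
b = -4.0

def on_plane(x):
    return np.isclose(w @ x + b, 0.0)

p = np.array([2.0, 0.0])   # 2*2 + 1*0 - 4 = 0
q = np.array([0.0, 4.0])   # 2*0 + 1*4 - 4 = 0
assert on_plane(p) and on_plane(q)

# p - q is a direction inside the hyperplane, and w is orthogonal to it.
assert np.isclose(w @ (p - q), 0.0)
```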

SLIDE 9
SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
SLIDE 14
SLIDE 15
SLIDE 16
SLIDE 17
SLIDE 18
SLIDE 19

φ(x1, x2) → (x1², x1x2, x2x1, x2²)
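A key property of this explicit degree-2 map (sketch assuming numpy; the test vectors are invented) is that inner products in the mapped space reduce to a simple function of the original inner product, (x·z)², i.e. the degree-2 polynomial kernel:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 map from the slide: (x1^2, x1x2, x2x1, x2^2)."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x1, x2 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

# Inner product of the 4-dimensional images equals (x.z)^2,
# so the mapped coordinates never need to be formed explicitly.
assert np.isclose(phi(x) @ phi(z), (x @ z) ** 2)
```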

SLIDE 20
SLIDE 21

The kernel method enables one to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space; instead, one simply computes the inner products between the images of all pairs of data points in the feature space. This is often computationally cheaper than the explicit computation of the coordinates.
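The cost difference shows up at dataset scale (a sketch assuming numpy; the data shape is invented): for the degree-2 polynomial kernel, the Gram matrix of pairwise kernel values matches the Gram matrix of the explicit d²-dimensional features, but is computed from the original d-dimensional points directly.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))     # 50 points in d = 10 dimensions

# Kernel evaluation: K[i, j] = (x_i . x_j)^2, computed in the original space.
K_kernel = (X @ X.T) ** 2

# Explicit degree-2 features have d^2 = 100 coordinates per point:
# phi(x) = the flattened outer product x x^T.
Phi = np.einsum('ni,nj->nij', X, X).reshape(50, -1)
K_explicit = Phi @ Phi.T

# Both routes give the same Gram matrix; the kernel route never
# materializes the 100-dimensional features.
assert np.allclose(K_kernel, K_explicit)
```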

SLIDE 22
SLIDE 23