slide-1
SLIDE 1

Optimization for Kernel Methods

  • S. Sathiya Keerthi

Yahoo! Research, Burbank, CA, USA

Kernel methods:

  • Support Vector Machines (SVMs)
  • Kernel Logistic Regression (KLR)

Aim: To introduce a variety of optimization problems that arise in the solution of classification problems by kernel methods, briefly review relevant optimization algorithms, and point out which optimization methods are suited for these problems. The lectures in this topic will be divided into 6 parts:

  • 1. Optimization problems arising in kernel methods
  • 2. A review of optimization algorithms
  • 3. Decomposition methods
  • 4. Quadratic programming methods
  • 5. Path tracking methods
  • 6. Finite Newton method

The first two topics form an introduction, the next two topics cover dual methods and the last two topics cover primal methods.

1

slide-2
SLIDE 2

Part I Optimization Problems Arising in Kernel Methods

References:

  • 1. B. Schölkopf and A. Smola, Learning with Kernels, MIT Press, 2002, Chapter 7, Pattern Recognition.

2

slide-3
SLIDE 3

Kernel methods for Classification problems

  • Although kernel methods are used for a range of problems such as classification (binary/multiclass), regression, ordinal regression, ranking and unsupervised learning, we will focus only on binary classification problems.

  • Training data: {x_i, t_i}_{i=1}^n

  • x_i ∈ R^m is the i-th input vector.

  • t_i ∈ {1, −1} is the target for x_i, denoting the class to which the i-th example belongs; 1 denotes class 1 and −1 denotes class 2.

  • Kernel methods transform x to a Reproducing Kernel Hilbert Space H via φ : R^m → H and then develop a linear classifier in that space:

    y(x) = w · φ(x) + b
    y(x) > 0 ⇒ x ∈ Class 1;   y(x) < 0 ⇒ x ∈ Class 2

  • The dot product in H, i.e., k(x_i, x_j) = φ(x_i) · φ(x_j), is called the kernel function. All computations are done using k only.

  • Example: φ(x) is the vector of all monomials up to degree d on the components of x. For this example, k(x_i, x_j) = (1 + x_i · x_j)^d. This is the polynomial kernel function. The larger the value of d, the more flexible and powerful the classifier.

  • RBF kernel: k(x_i, x_j) = e^{−γ ||x_i − x_j||^2} is another popular kernel function. The larger the value of γ, the more flexible and powerful the classifier. (A small kernel-computation sketch is given at the end of this slide.)

  • Training problem: (w, b), which define the classifier, are obtained by solving the following optimization problem:

    min_{w,b} E = R + C L

  • L is the empirical error, defined as

    L = Σ_i l(y(x_i), t_i)

    where l is a loss function that describes the discrepancy between the classifier output y(x_i) and the target t_i.
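A minimal NumPy sketch of computing the polynomial and RBF kernel matrices defined above; the function names are illustrative only.

    import numpy as np

    def polynomial_kernel(X1, X2, d=3):
        """k(x_i, x_j) = (1 + x_i . x_j)^d for all pairs of rows of X1 and X2."""
        return (1.0 + X1 @ X2.T) ** d

    def rbf_kernel(X1, X2, gamma=1.0):
        """k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2) for all pairs of rows."""
        sq1 = np.sum(X1 ** 2, axis=1)[:, None]    # ||x_i||^2 as a column
        sq2 = np.sum(X2 ** 2, axis=1)[None, :]    # ||x_j||^2 as a row
        sq_dists = sq1 + sq2 - 2.0 * (X1 @ X2.T)  # pairwise squared distances
        return np.exp(-gamma * sq_dists)

    # Example: 5 x 5 kernel matrices on random 2-D inputs
    X = np.random.randn(5, 2)
    K_poly = polynomial_kernel(X, X, d=3)
    K_rbf = rbf_kernel(X, X, gamma=0.5)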

3

slide-4
SLIDE 4
  • Minimizing only L can lead to overfitting on the training data. The regularizer function R prefers simpler models and helps prevent overfitting.

  • The parameter C helps to establish a trade-off between R and L. C is a hyperparameter. (Other parameters such as d in the polynomial kernel and γ in the RBF kernel are also hyperparameters.) All hyperparameters need to be tuned at a higher level.

Some commonly used loss functions

  • SVM (Hinge) loss: l(y, t) = 1 − ty if ty < 1; 0 otherwise.
  • KLR (Logistic) loss: l(y, t) = log(1 + exp(−ty))
  • L2-SVM loss: l(y, t) = (1 − ty)^2/2 if ty < 1; 0 otherwise.
  • Modified Huber loss: l(y, t) is: 0 if ξ ≤ 0; ξ^2/2 if 0 < ξ < 2; and 2(ξ − 1) if ξ ≥ 2, where ξ = 1 − ty.

(Each of these losses is written out in the short sketch below.)
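The four losses above as a small NumPy sketch (vectorized over examples); purely illustrative.

    import numpy as np

    def hinge_loss(y, t):
        """SVM hinge loss: max(0, 1 - t*y)."""
        return np.maximum(0.0, 1.0 - t * y)

    def logistic_loss(y, t):
        """KLR logistic loss: log(1 + exp(-t*y)), computed stably."""
        return np.logaddexp(0.0, -t * y)

    def l2_svm_loss(y, t):
        """L2-SVM loss: max(0, 1 - t*y)^2 / 2."""
        return np.maximum(0.0, 1.0 - t * y) ** 2 / 2.0

    def modified_huber_loss(y, t):
        """0 for xi <= 0, xi^2/2 for 0 < xi < 2, 2*(xi - 1) for xi >= 2, with xi = 1 - t*y."""
        xi = 1.0 - t * y
        return np.where(xi <= 0.0, 0.0,
                        np.where(xi < 2.0, xi ** 2 / 2.0, 2.0 * (xi - 1.0)))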

4

slide-5
SLIDE 5

Margin based regularization

  • The margin between the planes defined by y(x) = ±1 is 2/||w||. Making the margin big is equivalent to making the function R = (1/2)||w||^2 small.

  • This function is a very effective regularizing function. This is the natural regularizer associated with the RKHS.

  • Although there are other regularizers that have been considered in the literature, in these lectures I will restrict attention to only the optimization problems directly related to the above mentioned natural regularizer.

Primal problem:

    min (1/2)||w||^2 + C Σ_i l(y(x_i), t_i)

Sometimes the term (1/2)b^2 is also added in order to handle w and b uniformly. (This is also equivalent to ignoring b and instead adding a constant to the kernel function.)

5

slide-6
SLIDE 6

Solution via Wolfe dual

  • w and y(·) have the representation:

    w = Σ_i α_i t_i φ(x_i),    y(x) = Σ_i α_i t_i k(x, x_i)

  • w could reside in an infinite dimensional space (e.g., in the case of the RBF kernel) and so we have to necessarily handle the solution via finite dimensional quantities such as the α_i's. This is effectively done via the Wolfe dual (details will be covered in lectures on kernel methods by other speakers).

SVM dual: (Convex quadratic program)

    min E(α) = (1/2) Σ_{i,j} t_i t_j α_i α_j k(x_i, x_j) − Σ_i α_i
    s.t. 0 ≤ α_i ≤ C,   Σ_i t_i α_i = 0

KLR dual: (Convex program)

    min E(α) = (1/2) Σ_{i,j} t_i t_j α_i α_j k(x_i, x_j) + C Σ_i g(α_i/C)
    s.t. Σ_i t_i α_i = 0,   where g(δ) = δ log δ + (1 − δ) log(1 − δ)

L2-SVM dual: (Convex quadratic program)

    min E(α) = (1/2) Σ_{i,j} t_i t_j α_i α_j k~(x_i, x_j) − Σ_i α_i
    s.t. α_i ≥ 0,   Σ_i t_i α_i = 0,   where k~(x_i, x_j) = k(x_i, x_j) + (1/C) δ_ij

Modified Huber: The dual can be written down, but it is a bit more complex.
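For very small problems, the SVM dual above can be handed to a general-purpose solver as a sanity check. The following sketch uses SciPy's SLSQP; this is only an illustration, not the kind of method the later parts of these lectures recommend at scale.

    import numpy as np
    from scipy.optimize import minimize

    def solve_svm_dual(K, t, C):
        """min (1/2) a'Qa - e'a  s.t. 0 <= a_i <= C, sum_i t_i a_i = 0,
        with Q_ij = t_i t_j k(x_i, x_j). Suitable only for tiny n."""
        n = len(t)
        Q = np.outer(t, t) * K
        fun = lambda a: 0.5 * a @ Q @ a - a.sum()
        jac = lambda a: Q @ a - np.ones(n)
        cons = [{"type": "eq", "fun": lambda a: t @ a, "jac": lambda a: t}]
        res = minimize(fun, x0=np.zeros(n), jac=jac, bounds=[(0.0, C)] * n,
                       constraints=cons, method="SLSQP")
        return res.x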

6

slide-7
SLIDE 7

Ordinal regression

All the ideas for binary classification can be easily extended to ordinal regression. There are several ways of defining losses for ordinal regression. One way is to define a threshold for each successive class and include a loss term for each pair of classes.

7

slide-8
SLIDE 8

An Alternative: Direct primal design

Primal problem:

    min (1/2)||w||^2 + C Σ_i l(y(x_i), t_i)    (1)

Plug into (1) the representation

    w = Σ_i β_i t_i φ(x_i),    y(x) = Σ_i β_i t_i k(x, x_i)

to get the problem

    min (1/2) Σ_{i,j} t_i t_j β_i β_j k(x_i, x_j) + C Σ_i l(y(x_i), t_i)    (2)

We can attempt to directly solve (2) to get the β vector. Such an approach can be particularly attractive when the loss function l is differentiable, such as in the cases of KLR, L2-SVM and Modified Huber loss SVM, since the optimization problem is an unconstrained one. (A short sketch of such a direct primal solve appears at the end of this slide.)

Sparse formulations (minimizing the number of nonzero β_i)

  • Approach 1: Replace the regularizer in (2) by the "sparsity-inducing regularizer" Σ_i |β_i| to get the optimization problem:

    min Σ_i |β_i| + C Σ_i l(y(x_i), t_i)    (3)

  • Approach 2: Include the sparsity regularizer Σ_i |β_i| in a graded fashion:

    min λ Σ_i |β_i| + (1/2) Σ_{i,j} t_i t_j β_i β_j k(x_i, x_j) + C Σ_i l(y(x_i), t_i)    (4)

    Large λ will force sparse solutions while small λ will get us back to the original kernel solution.
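A minimal sketch of directly solving (2) with the differentiable L2-SVM loss, assuming NumPy/SciPy and ignoring the bias b (equivalently, a constant can be added to the kernel); the optimizer choice is illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def train_primal_l2svm(K, t, C):
        """Minimize (1/2) sum_ij t_i t_j b_i b_j k(x_i,x_j) + (C/2) sum_i max(0, 1 - t_i y(x_i))^2."""
        n = len(t)

        def objective(beta):
            y = K @ (beta * t)                          # y(x_i) = sum_j beta_j t_j k(x_i, x_j)
            xi = np.maximum(0.0, 1.0 - t * y)           # margin violations
            reg = 0.5 * (beta * t) @ (K @ (beta * t))   # regularizer in beta form
            return reg + 0.5 * C * np.sum(xi ** 2)

        res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
        return res.x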

8

slide-9
SLIDE 9

Semi-supervised Learning

  • In many problems a set of unlabeled examples {x̃_k} is available.

  • E is an edge relation on that set, with weights ρ_kl.

  • Then (1/2) Σ_{kl∈E} ρ_kl (y(x̃_k) − y(x̃_l))^2 can be included as an additional regularizer. (Nearby input vectors should have nearby y values.)
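A tiny NumPy sketch of evaluating this graph regularizer; the argument names are illustrative.

    import numpy as np

    def graph_regularizer(y_unlabeled, edges, weights):
        """(1/2) * sum over edges (k, l) of rho_kl * (y_k - y_l)^2."""
        k_idx, l_idx = np.array(edges).T
        diffs = y_unlabeled[k_idx] - y_unlabeled[l_idx]
        return 0.5 * np.sum(np.asarray(weights) * diffs ** 2)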

Transductive design

  • Solve the problem

    min (1/2)||w||^2 + C Σ_i l(y(x_i), t_i) + C̃ Σ_k l(y(x̃_k), t̃_k)

    where the t̃_k ∈ {1, −1} are also variables.

  • This is a combinatorial optimization problem. There exist good special techniques for solving it, but we will not go into any details in these lectures.

9

slide-10
SLIDE 10

Part II A Review of Optimization Algorithms

References:

  • 1. B. Schölkopf and A. Smola, Learning with Kernels, MIT Press, 2002, Chapter 6, Optimization.

  • 2. D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 1995.

10

slide-11
SLIDE 11

Types of optimization problems

    min_{θ∈Z} E(θ)

  • E : Z → R is continuously differentiable, Z ⊂ R^n
  • Z = R^n ⇒ Unconstrained
  • E linear, Z polyhedral ⇒ Linear Programming
  • E quadratic, Z polyhedral ⇒ Quadratic Programming (example: SVM dual)
  • Otherwise, Nonlinear Programming

  • These problems have been traditionally treated separately. Their methodologies have come closer in recent years.

Unconstrained: Optimality conditions

At a minimum:

  • Stationarity: ∇E = 0
  • Non-negative curvature: ∇2E is positive semi-definite

E convex ⇒ local minimum is a global minimum.

11

slide-12
SLIDE 12

Geometry of descent

∇E(θ)′d < 0

12

slide-13
SLIDE 13

A sketch of a descent algorithm
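The original slide presents this sketch as a figure. A minimal generic loop consistent with the surrounding slides (pick a descent direction, choose a step size by line search, stop at stationarity), with direction and line_search as placeholder callables:

    import numpy as np

    def descent(E, grad_E, direction, line_search, theta0, tol=1e-6, max_iter=1000):
        """Generic descent: d must satisfy grad_E(theta)'d < 0 at non-stationary points."""
        theta = np.asarray(theta0, dtype=float)
        for _ in range(max_iter):
            g = grad_E(theta)
            if np.linalg.norm(g) < tol:          # stationarity reached
                break
            d = direction(theta, g)              # e.g. -g, or a Newton / CG direction
            eta = line_search(E, grad_E, theta, d)
            theta = theta + eta * d
        return theta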

13

slide-14
SLIDE 14

Exact line search:

    η⋆ = arg min_η φ(η),   where φ(η) = E(θ + ηd)

Inexact line search: Armijo condition
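A minimal backtracking sketch enforcing an Armijo-type sufficient-decrease condition (the precise two-sided condition is stated on the next slide); the constants are illustrative.

    import numpy as np

    def armijo_backtracking(E, grad_E, theta, d, eta0=1.0, mu=1e-4, shrink=0.5):
        """Shrink eta until E(theta + eta*d) <= E(theta) + mu * eta * grad_E(theta)'d.
        Assumes d is a descent direction (grad_E(theta)'d < 0)."""
        E0 = E(theta)
        slope = grad_E(theta) @ d
        eta = eta0
        while E(theta + eta * d) > E0 + mu * eta * slope:
            eta *= shrink
        return eta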

14

slide-15
SLIDE 15

Global convergence theorem

  • E is Lipschitz continuous
  • Sufficient angle of descent condition:

    −∇E(θ_k)′d_k ≥ δ ||∇E(θ_k)|| ||d_k||,   δ > 0

  • Armijo line search condition: for some 0 < µ < 0.5,

    −(1 − µ) η ∇E(θ_k)′d_k ≥ E(θ_k) − E(θ_k + η d_k) ≥ −µ η ∇E(θ_k)′d_k

Then, either E → −∞ or θ_k converges to a stationary point θ⋆: ∇E(θ⋆) = 0.

Rate of convergence

  • ǫ_k = E(θ_{k+1}) − E(θ_k)
  • |ǫ_{k+1}| = ρ|ǫ_k|^r in the limit as k → ∞
  • r = rate of convergence, a key factor for the speed of optimization algorithms
  • Linear convergence (r = 1) is quite a bit slower than quadratic convergence (r = 2).
  • Many optimization algorithms have superlinear convergence (1 < r < 2), which is pretty good.

15

slide-16
SLIDE 16

Gradient descent method

  • d = −∇E
  • Linear convergence
  • Very simple; locally good; but often very slow; rarely used in practice

16

slide-17
SLIDE 17

Newton method

    d = −[∇²E]^{−1} ∇E,   η = 1

Interpretations:

  • θ + d minimizes the second order approximation

    Ê(θ + d) = E(θ) + ∇E(θ)′d + (1/2) d′ ∇²E(θ) d

  • θ + d solves the linearized optimality condition

    ∇E(θ + d) ≈ ∇Ê(θ + d) = ∇E(θ) + ∇²E(θ) d = 0

  • Quadratic rate of convergence

Implementation:

  • Compute H = ∇²E(θ), g = ∇E(θ) and solve Hd = −g
  • Use the Cholesky factorization H = LL′
  • The Newton method may not converge (or worse, if H is not positive definite, it may not even be properly defined) when started far from a minimum
  • Modify the method in 2 ways: (a) change H to a "nearby" positive definite matrix whenever it is not; and (b) add a line search. (A short damped-Newton sketch follows this slide.)

Quasi-Newton methods:

  • Instead of calculating the Hessian and inverting it, QN methods build an approximation to the inverse Hessian over many steps using gradient information.

  • Several update formulas for the inverse Hessian exist; the BFGS method is popularly used.

  • Applied to a convex quadratic function with exact line search, they find the minimizer within n steps.

  • With inexact line search the QN methods can be used for minimizing nonlinear functions too. They work well even with a loose line search.
17

slide-18
SLIDE 18

Method of conjugate directions

  • Originally developed for convex quadratic minimization: P is positive definite,

    min E(θ) = (1/2) θ′Pθ − q′θ.   Equivalently, solve Pθ = q.

  • Define a set of P-conjugate search directions {d_0, d_1, . . . , d_{n−1}} such that d_i′ P d_j = 0 ∀ i ≠ j

  • Do an exact line search along each direction

  • Main result: the minimum of E will be reached in exactly n steps.

Conjugate gradient method

  • Choose any initial point θ_0, set d_0 = −∇E(θ_0)
  • θ_{k+1} = θ_k + η_k d_k where η_k = arg min_η E(θ_k + η d_k)
  • d_{k+1} = −∇E(θ_{k+1}) + β_k d_k
  • Simply choosing β_k so that d_{k+1}′ P d_k = 0 is sufficient to ensure P-conjugacy of d_{k+1} with all previous directions.

    β_k = ||∇E(θ_{k+1})||^2 / ||∇E(θ_k)||^2   (Fletcher-Reeves formula)

    β_k = ∇E(θ_{k+1})′ [∇E(θ_{k+1}) − ∇E(θ_k)] / ||∇E(θ_k)||^2   (Polak-Ribière formula)
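A minimal NumPy sketch of the linear conjugate gradient iteration for the quadratic problem above; in exact arithmetic the Fletcher-Reeves and Polak-Ribière formulas coincide with the expression used here.

    import numpy as np

    def conjugate_gradient(P, q, theta0=None, tol=1e-10):
        """Minimize (1/2) theta'P theta - q'theta (P symmetric positive definite),
        i.e. solve P theta = q; reaches the minimizer in at most n steps."""
        n = len(q)
        theta = np.zeros(n) if theta0 is None else theta0.copy()
        r = q - P @ theta                  # residual = -gradient
        d = r.copy()
        for _ in range(n):
            if np.linalg.norm(r) < tol:
                break
            Pd = P @ d
            eta = (r @ r) / (d @ Pd)       # exact line search along d
            theta = theta + eta * d
            r_new = r - eta * Pd
            beta = (r_new @ r_new) / (r @ r)
            d = r_new + beta * d
            r = r_new
        return theta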

18

slide-19
SLIDE 19

Nonlinear Conjugate gradient method

  • Simply use the nonlinear gradient function ∇E for getting the directions

  • Replace exact line search by inexact line search. The Armijo condition needs to be replaced by a more stringent condition called the Wolfe condition.

  • Convergence in n steps is no longer possible

Practicalities:

  • FR, PR usually behave very differently
  • PR is usually better
  • Methods are very sensitive to line search

19

slide-20
SLIDE 20

Overall comparison of the methods

  • Quasi-Newton methods are robust. But they require O(n^2) memory to store the approximate inverse Hessian, and so they are not directly suited for large scale problems. Modifications of these methods called Limited Memory Quasi-Newton methods use O(n) memory and are suited for large scale problems.

  • Conjugate gradient methods also work well and are well suited for large scale problems. However, they need to be implemented carefully, with a carefully set line search.

  • In some situations block coordinate descent methods (optimizing a selected subset of variables at a time) can be much better suited than the above methods. We will say more about this later.

20

slide-21
SLIDE 21

Linear Programming

min E(θ) = c′θ subject to Aθ ≤ b, θ ≥ 0

The solution occurs at a vertex of the feasible polyhedral region.

Simplex algorithm:

  • Starts at a vertex and moves along descending edges until an optimal vertex is found.
  • In the worst case the algorithm takes a number of steps that is exponential in n; but in practice it is very efficient.
  • Interior point methods are alternative methods that are provably polynomial time. They are also very efficient when implemented via certain predictor-corrector ideas.

Quadratic Programming

The (Wolfe) SVM dual is a good example. Many traditional QP methods exist. For instance, the active set method, which solves a sequence of equality constrained problems, is a good traditional QP method. We will talk about this method in detail in Part IV.

21

slide-22
SLIDE 22

Part III Decomposition methods

References:

  • 1. B. Schölkopf and A. Smola, Learning with Kernels, MIT Press, 2002, Chapter 10, Implementation.

  • 2. T. Joachims, Making large-scale SVM learning practical, In B. Schölkopf, C. J. C. Burges and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pp. 169-184, MIT Press, 1999. http://www.cs.cornell.edu/People/tj/publications/joachims_98c.pdf

  • 3. J. C. Platt, Fast training of support vector machines using sequential minimal optimization, In B. Schölkopf, C. J. C. Burges and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pp. 185-208, MIT Press, 1999. http://research.microsoft.com/~jplatt/smo.html

  • 4. S. S. Keerthi et al., Improvements to Platt's SMO algorithm for SVM classifier design, Neural Computation, Vol. 13, March 2001, pp. 637-649. http://guppy.mpe.nus.edu.sg/~mpessk/svm/svm.shtml

  • 5. LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/

  • 6. SVMlight: http://svmlight.joachims.org/

22

slide-23
SLIDE 23

SVM Dual Problem

  • We will take up the details only for SVM (hinge loss). The ideas are quite similar for optimization problems arising from other loss functions such as L2-SVM, KLR, Modified Huber etc.

  • Recall the SVM dual convex quadratic program

    min (1/2) Σ_{i,j} t_i t_j α_i α_j k(x_i, x_j) − Σ_i α_i
    subject to 0 ≤ α_i ≤ C,   Σ_i t_i α_i = 0

  • In matrix-vector notation:

    min_α (1/2) α′Qα − e′α
    subject to 0 ≤ α_i ≤ C, i = 1, . . . , n,   t′α = 0,
    where Q_ij = t_i t_j k(x_i, x_j) and e = [1, . . . , 1]′

Large Dense Quadratic Programming

  • min_α f(α) = (1/2) α′Qα − e′α, subject to t′α = 0, 0 ≤ α_i ≤ C

  • Q_ij ≠ 0 in general; Q is an n by n fully dense matrix

  • 30,000 training points: 30,000 variables: (30,000^2 × 8/2) bytes ≈ 3GB RAM to store Q: still difficult

  • Traditional optimization methods (Newton, Quasi-Newton) cannot be directly applied since they involve O(n^2) storage.

  • Even methods such as CG which require O(n) storage are unsuitable, since ∇f = Qα − e requires the entire kernel matrix.

  • Decomposition (block coordinate descent) methods are best suited

23

slide-24
SLIDE 24

Decomposition Methods

  • Working on a few variables each time

  • Similar to coordinate-wise minimization. Also referred to as chunking.

  • Working set B; N = {1, . . . , n}\B fixed. Size of B is usually ≤ 100.

  • Sub-problem in each iteration:

    min_{α_B} (1/2) [α_B′ (α_N^k)′] [ Q_BB  Q_BN ; Q_NB  Q_NN ] [α_B ; α_N^k] − [e_B′ e_N′] [α_B ; α_N^k]

    subject to 0 ≤ α_l ≤ C, l ∈ B,   t_B′ α_B = −t_N′ α_N^k

Avoid Memory Problems

  • The new objective function is

    (1/2) α_B′ Q_BB α_B + (−e_B + Q_BN α_N^k)′ α_B + constant

  • Only the B columns of Q are needed
  • They are calculated as and when needed

Decomposition Method: the Algorithm

  • 1. Find an initial feasible α^1. Set k = 1.

  • 2. If α^k satisfies the optimality conditions, stop. Otherwise, find a working set B. Define N ≡ {1, . . . , n}\B.

24

slide-25
SLIDE 25
  • 3. Solve the sub-problem in α_B:

    min_{α_B} (1/2) α_B′ Q_BB α_B + (−e_B + Q_BN α_N^k)′ α_B
    subject to 0 ≤ α_l ≤ C, l ∈ B,   t_B′ α_B = −t_N′ α_N^k

  • 4. Set α_N^{k+1} ≡ α_N^k, k ← k + 1; go to Step 2.

Do these methods Really Work?

  • Compared to Newton, Quasi-Newton: slow convergence (a lot more steps to come close to the solution)

  • However, there is no need to have a very accurate α:

    decision function = sgn( Σ_{i=1}^n α_i y_i K(x_i, x) + b )

  • Prediction is not affected much after a certain level of optimality is reached

  • In some situations, # support vectors ≪ # training points. With the initial α^1 = 0, many elements are never used.

  • An example where machine learning knowledge affects optimization

  • An example of solving 50,000 training instances by LIBSVM:

    $ ./svm-train -m 200 -c 16 -g 4 22features
    optimization finished, #iter = 24981
    Total nSV = 3370
    real 3m32.828s

  • On a Pentium M 1.7 GHz laptop

  • Calculating Q may have taken more than 3 minutes

  • A good case where many α_i remain at zero all the time

25

slide-26
SLIDE 26

Working Set Selection

  • Very important: better selection ⇒ fewer iterations

  • But: better selection ⇒ higher cost per iteration

  • Two issues:
  • 1. Size: |B| ր ⇒ # iterations ց;  |B| ց ⇒ # iterations ր
  • 2. Selecting elements

Size of the Working Set

  • Keeping all nonzero α_i in the working set: if all SVs are included ⇒ optimum; few iterations (i.e., few sub-problems); size varies; may still have memory problems

  • Existing software: small and fixed size; memory problems solved, though sometimes slower

Sequential Minimal Optimization (SMO)

  • Consider |B| = 2. |B| ≥ 2 is necessary because of the linear constraint. This is the extreme case of decomposition methods.
26

slide-27
SLIDE 27
  • The sub-problem is solved analytically; no need to use optimization software:

    min_{α_i,α_j} (1/2) [α_i α_j] [ Q_ii  Q_ij ; Q_ij  Q_jj ] [α_i ; α_j] + (Q_BN α_N^k − e_B)′ [α_i ; α_j]
    s.t. 0 ≤ α_i, α_j ≤ C,   t_i α_i + t_j α_j = −t_N′ α_N^k,   B = {i, j}

  • Optimization people may not think this a big advantage. Machine learning people do: they like simple code. (A short sketch of the analytic update follows this slide.)

  • A minor advantage in optimization: no need to have inner and outer stopping conditions.

  • Too slow convergence? With other tricks, |B| = 2 is fine in practice.
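A sketch of the analytic two-variable update (the clipped solution of the sub-problem above, in the style of Platt's SMO). It assumes the caller supplies the kernel matrix K, the labels t, the current outputs y_out with y_out[l] = y(x_l), and handles working-set selection, b and gradient maintenance separately.

    import numpy as np

    def smo_pair_update(i, j, alpha, t, K, C, y_out):
        """Return updated (alpha_i, alpha_j) for the pair B = {i, j}."""
        Ei, Ej = y_out[i] - t[i], y_out[j] - t[j]
        eta = K[i, i] + K[j, j] - 2.0 * K[i, j]        # curvature along the feasible line
        if eta <= 0:
            return alpha[i], alpha[j]                   # degenerate pair; skipped in this sketch

        # Bounds on alpha_j implied by 0 <= alpha <= C and the equality constraint
        if t[i] != t[j]:
            L = max(0.0, alpha[j] - alpha[i])
            H = min(C, C + alpha[j] - alpha[i])
        else:
            L = max(0.0, alpha[i] + alpha[j] - C)
            H = min(C, alpha[i] + alpha[j])

        aj = np.clip(alpha[j] + t[j] * (Ei - Ej) / eta, L, H)
        ai = alpha[i] + t[i] * t[j] * (alpha[j] - aj)   # keeps t_i a_i + t_j a_j unchanged
        return ai, aj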

Selection by KKT Violation

    min_α f(α) = (1/2) α′Qα − e′α,  subject to t′α = 0, 0 ≤ α_i ≤ C

  • KKT optimality condition: α is optimal if and only if

    ∇f(α) + b t = λ − µ,   λ_i α_i = 0,   µ_i (C − α_i) = 0,   λ_i ≥ 0, µ_i ≥ 0,   i = 1, . . . , n

  • ∇f(α) ≡ Qα − e

  • Rewritten as

    ∇f(α)_i + b t_i ≥ 0 if α_i < C
    ∇f(α)_i + b t_i ≤ 0 if α_i > 0

27

slide-28
SLIDE 28
  • Note t_i = ±1

  • KKT further rewritten as

    ∇f(α)_i + b ≥ 0 if α_i < C, t_i = 1
    ∇f(α)_i − b ≥ 0 if α_i < C, t_i = −1
    ∇f(α)_i + b ≤ 0 if α_i > 0, t_i = 1
    ∇f(α)_i − b ≤ 0 if α_i > 0, t_i = −1

  • A condition on the range of b:

    max{−t_l ∇f(α)_l | α_l < C, t_l = 1 or α_l > 0, t_l = −1} ≤ b ≤ min{−t_l ∇f(α)_l | α_l < C, t_l = −1 or α_l > 0, t_l = 1}

  • Define

    Iup(α) ≡ {l | α_l < C, t_l = 1 or α_l > 0, t_l = −1}, and
    Ilow(α) ≡ {l | α_l < C, t_l = −1 or α_l > 0, t_l = 1}.

  • α is optimal if and only if it is feasible and

    max_{i∈Iup(α)} −t_i ∇f(α)_i ≤ min_{i∈Ilow(α)} −t_i ∇f(α)_i.

Violating Pair

  • KKT is equivalent to: on the −t_l ∇f(α)_l axis, every index in Iup(α) lies to the left of every index in Ilow(α).

  • Violating pair:

    i ∈ Iup(α), j ∈ Ilow(α), and −t_i ∇f(α)_i > −t_j ∇f(α)_j

  • f(α^k) strictly decreases if and only if B has at least one violating pair. However, simply choosing a violating pair is not enough for convergence.
28

slide-29
SLIDE 29

Maximal Violating Pair

  • If |B| = 2, it is natural to choose the indices that most violate the KKT conditions:

    i ∈ arg max_{l∈Iup(α^k)} −t_l ∇f(α^k)_l,
    j ∈ arg min_{l∈Ilow(α^k)} −t_l ∇f(α^k)_l

  • {i, j} is called the "maximal violating pair"; it is obtained in O(n) operations.

Calculating Gradient

  • To find violating pairs, the gradient is maintained throughout all iterations

  • Memory problems occur as ∇f(α) = Qα − e involves Q

  • Solved by using the following tricks:

  • 1. α^1 = 0 implies ∇f(α^1) = Q · 0 − e = −e, so the initial gradient is easily obtained

  • 2. Update ∇f(α) using only Q_BB and Q_BN:

    ∇f(α^{k+1}) = ∇f(α^k) + Q(α^{k+1} − α^k) = ∇f(α^k) + Q_{:,B} (α^{k+1} − α^k)_B

  • Only |B| columns of Q are needed per iteration (see the sketch below)
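A sketch, assuming NumPy, of the maximal-violating-pair selection and the gradient update trick just described; the array names are illustrative.

    import numpy as np

    def maximal_violating_pair(grad, alpha, t, C, tol=1e-12):
        """Return (i, j) maximally violating KKT, or None if KKT holds (grad = Q@alpha - e)."""
        up = ((alpha < C - tol) & (t == 1)) | ((alpha > tol) & (t == -1))
        low = ((alpha < C - tol) & (t == -1)) | ((alpha > tol) & (t == 1))
        if not up.any() or not low.any():
            return None
        score = -t * grad
        i = np.where(up)[0][np.argmax(score[up])]
        j = np.where(low)[0][np.argmin(score[low])]
        return None if score[i] <= score[j] + tol else (i, j)

    def update_gradient(grad, Q_cols_B, delta_alpha_B):
        """grad <- grad + Q[:, B] @ (alpha_new - alpha_old)_B; only |B| columns of Q are used."""
        return grad + Q_cols_B @ delta_alpha_B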

SVMlight

  • |B| = q; the feasible direction vector d_B is obtained by solving

    min_{d_B} ∇f(α^k)_B′ d_B
    subject to t_B′ d_B = 0,
               d_t ≥ 0 if α_t^k = 0, t ∈ B,
               d_t ≤ 0 if α_t^k = C, t ∈ B,
               −1 ≤ d_t ≤ 1, t ∈ B.

29

slide-30
SLIDE 30
  • A combinatorial problem: (l choose q) possibilities

  • But the optimum is the q/2 most violating pairs

  • From Iup(α^k), take the largest q/2 elements:

    −t_{i_1} ∇f(α^k)_{i_1} ≥ −t_{i_2} ∇f(α^k)_{i_2} ≥ · · · ≥ −t_{i_{q/2}} ∇f(α^k)_{i_{q/2}}

    From Ilow(α^k), take the smallest q/2 elements:

    −t_{j_1} ∇f(α^k)_{j_1} ≤ · · · ≤ −t_{j_{q/2}} ∇f(α^k)_{j_{q/2}}

  • An O(lq) procedure

  • Used in popular SVM software: SVMlight, LIBSVM (before version 2.8), and others

Caching and Shrinking

  • Speed up decomposition methods

  • Caching: store recently used Hessian columns in computer memory

  • Example:

    $ time ./libsvm-2.81/svm-train -m 0.01 a4a    11.463s
    $ time ./libsvm-2.81/svm-train -m 40 a4a       7.817s

  • Shrinking: some bounded elements remain at their bounds until the end; the problem is heuristically resized to a smaller one

  • After certain iterations, most bounded elements are identified and not changed further

Stopping Condition

  • In optimization software such conditions are important. However, don't be surprised if you see no stopping conditions in the optimization code of ML software.

30

slide-31
SLIDE 31

  • Sometimes time/iteration limits are more suitable

  • From the KKT condition,

    max_{i∈Iup(α)} −t_i ∇f(α)_i ≤ min_{i∈Ilow(α)} −t_i ∇f(α)_i + ǫ

    is a natural stopping condition

Better Stopping Condition

  • In LIBSVM, ǫ = 10^{−3}. Experience: OK, but sometimes too strict; we often get good results even with ǫ = 10^{−1}.

  • Large C ⇒ large ∇f(α) components. Too strict ⇒ many iterations. A relative condition is needed.

  • A very important issue not fully addressed yet

Example of Slow Convergence

  • Using C = 1:

    $ ./libsvm-2.81/svm-train -c 1 australian_scale
    optimization finished, #iter = 508
    obj = -201.642538, rho = 0.044312

  • Using C = 5000:

    $ ./libsvm-2.81/svm-train -c 5000 australian_scale
    optimization finished, #iter = 35241
    obj = -242509.157367, rho = -7.186733

  • Optimization researchers may rush to solve the difficult cases

  • It turns out that large C is less used than small C

Finite Termination

  • Given ǫ, finite termination can be shown for both SMO and SVMlight

31

slide-32
SLIDE 32

Effect of hyperparameters

  • If we use C = 20, γ = 400:

    $ ./svm-train -c 20 -g 400 train.1.scale
    $ ./svm-predict train.1.scale train.1.scale.model o
    Accuracy = 100% (3089/3089) (classification)

  • 100% training accuracy, but:

    $ ./svm-predict test.1.scale train.1.scale.model o
    Accuracy = 82.7% (3308/4000) (classification)

  • Very bad test accuracy
  • Overfitting happens

Overfitting and Underfitting

  • When training and predicting on a dataset, we should
    – Avoid underfitting: large training error
    – Avoid overfitting: too small training error

  • In theory you can easily achieve 100% training accuracy

  • But this is useless

Parameter Selection

  • Sometimes you can get away with default choices
  • Usually it is a good idea to tune them correctly
  • The parameters now are C and the kernel parameters

32

slide-33
SLIDE 33
  • Examples:

    γ of e^{−γ ||x_i − x_j||^2};   a, b, d of (x_i′ x_j / a + b)^d

  • How do we select them?

Performance Evaluation

  • Training errors are not important; only test errors count

  • l training data, x_i ∈ R^n, y_i ∈ {+1, −1}, i = 1, . . . , l, and a learning machine x → f(x, α), f(x, α) = 1 or −1. Different α: different machines.

  • The expected test error (generalization error):

    R(α) = ∫ (1/2) |y − f(x, α)| dP(x, y)

    y: class of x (i.e., 1 or −1)

  • P(x, y) is unknown; empirical risk (training error):

    Remp(α) = (1/(2l)) Σ_{i=1}^l |y_i − f(x_i, α)|

  • (1/2)|y_i − f(x_i, α)| is the loss. Choose 0 ≤ η ≤ 1; with probability at least 1 − η:

    R(α) ≤ Remp(α) + another term

    – A good pattern recognition method minimizes the combined effect of both terms
    – Remp(α) → 0 ⇒ the other term becomes large

33

slide-34
SLIDE 34
  • In practice: available data ⇒ training, validation, and (testing) sets

  • Train + validation ⇒ model

  • k-fold cross validation (see the sketch below):
    – Data is randomly separated into k groups
    – Each time, k − 1 groups are used for training and one for testing
    – Select the parameters with the highest CV accuracy
    – Another optimization problem
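A minimal sketch of hyperparameter selection by k-fold cross validation, assuming scikit-learn is available; the data below is random and purely illustrative.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    X = np.random.randn(200, 5)
    t = np.sign(np.random.randn(200))

    param_grid = {"C": [2.0 ** k for k in range(-2, 8)],
                  "gamma": [2.0 ** k for k in range(-6, 2)]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, t)
    print(search.best_params_, search.best_score_)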

Trying RBF Kernel First

  • Linear kernel: a special case of RBF. The leave-one-out cross-validation accuracy of the linear kernel is the same as that of RBF under certain parameters. This is related to optimization as well.

  • Polynomial kernel: numerical difficulties ((< 1)^d → 0, (> 1)^d → ∞) and more parameters than RBF.

34

slide-35
SLIDE 35

Contour of Parameter Selection

35

slide-36
SLIDE 36

Part IV Quadratic Programming Methods

References:

  • 1. L. Kaufman, Solving the quadratic programming problem arising in support vector machine classification, In B. Schölkopf, C. J. C. Burges and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pp. 147-168, MIT Press, 1999.

  • 2. S. V. N. Vishwanathan, A. Smola and M. N. Murty, Simple SVM, ICML 2003. http://users.rsise.anu.edu.au/~vishy/papers/VisSmoMur03.pdf

36

slide-37
SLIDE 37

Active Set Method

Force each α_i into one of three groups:

  • O: α_i = 0
  • C: α_i = C
  • A: only the α_i in this (active) set are allowed to change

    α = (α_A, α_C, α_O),   α_C = C e_C,   α_O = 0

  • The optimization problem on only the α_A variable set is:

    min_{α_A} (1/2) α_A′ Q_AA α_A + (−e_A + C Q_AC e_C)′ α_A
    subject to 0 ≤ α_l ≤ C, l ∈ A,   t_A′ α_A = −C t_C′ e_C

  • The problem is as messy as the original problem, except for the fact that the working set is smaller in size. So, what do we do to simplify?

  • Typically, 0 < α_i < C for i ∈ A.

  • Think as if these constraints will be satisfied and solve the following equality constrained quadratic problem (see the sketch below):

    min_{α_A} (1/2) α_A′ Q_AA α_A + (−e_A + C Q_AC e_C)′ α_A
    subject to t_A′ α_A = −C t_C′ e_C

  • The solution of this problem can be obtained by solving a linear system Hγ = g where γ includes α_A together with b, the Lagrange multiplier corresponding to the equality constraint. A factorization of H is maintained (and incrementally updated when H undergoes changes in the overall active set algorithm).
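A sketch of the equality-constrained solve via the KKT linear system Hγ = g mentioned above, with dense NumPy blocks. The function and argument names (e.g. passing Q_AC e_C as a precomputed vector) are illustrative; a real implementation maintains and incrementally updates a factorization of H rather than solving from scratch.

    import numpy as np

    def solve_active_subproblem(Q_AA, Q_AC_eC, t_A, rhs_eq, C):
        """min (1/2) a'Q_AA a + (-e_A + C*Q_AC_eC)'a  s.t. t_A'a = rhs_eq.
        Stationarity plus the constraint give
            [ Q_AA  t_A ] [ a ]   [ e_A - C*Q_AC_eC ]
            [ t_A'   0  ] [ b ] = [ rhs_eq          ]
        """
        nA = len(t_A)
        H = np.zeros((nA + 1, nA + 1))
        H[:nA, :nA] = Q_AA
        H[:nA, nA] = t_A
        H[nA, :nA] = t_A
        g = np.concatenate([np.ones(nA) - C * Q_AC_eC, [rhs_eq]])
        gamma = np.linalg.solve(H, g)
        return gamma[:nA], gamma[nA]      # (alpha_A, b)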

37

slide-38
SLIDE 38

The basic iteration of the active set method consists of the following steps:

  • Solve the above-mentioned equality constrained problem
  • If the solution α_A violates a constraint, move the first violated i of A into C or O.

  • If the solution α_A satisfies the constraints, then check if any i in C or O violates the optimality (KKT) conditions; if so, bring it into A.

The algorithm can be initialized by choosing A to be a small set, say two points. With appropriate conditions on the incoming new variable, the algorithm can be shown to have finite convergence. (Recall that decomposition methods such as SMO have only asymptotic convergence.) The method of bringing in a new variable from C or O into A has a big impact on the overall efficiency of the algorithm. The original active set algorithm chooses the point of C/O which maximally violates the optimality (KKT) condition. This usually ends up being expensive, especially in large scale problems where large kernel arrays cannot be stored. In the Simple SVM algorithm of Vishwanathan, Smola and Murty, the indices are randomly traversed and the first violating point is included.

38

slide-39
SLIDE 39

Comparison with a decomposition method

39

slide-40
SLIDE 40

Some Overall Comments

Pros: Finite convergence, and so independent of stopping tolerances; speed usually unaffected by C; very good when the size of the final A is not more than a few thousand. Very well suited when gradient based methods (Chapelle et al.) are used for model selection.

Cons: Storage and factorization can be expensive/impossible when the size of A goes beyond a few thousand. Seeding is not as clean as in decomposition methods since the factorization needs to be entirely computed again.

Simpler SVM: Vishwanathan et al. have modified their Simple SVM method in 2 ways:

  • Replace factorization techniques by CG methods for solving the linear systems that arise.

  • Instead of choosing the incoming points randomly, use a heuristically defined priority queue on the points so that those points which are more likely to violate the optimality conditions come first.

40

slide-41
SLIDE 41

Part V Path Tracking Methods

References:

  • 1. S. Rosset and J. Zhu, Piecewise linear regularized solution paths, 2004. http://www-stat.stanford.edu/%7Esaharon/papers/piecewise-revised.pdf

  • 2. S. S. Keerthi, Generalized LARS as an effective feature selection tool for text classification with SVMs, ICML 2005. http://www.machinelearning.org/proceedings/icml2005/papers/053_GeneralizedLARS_Keerthi.pdf

41

slide-42
SLIDE 42

A general problem formulation

  • Consider the optimization problem

    min_β f(β) = λJ(β) + L(β)

    where J(β) = Σ_j |β_j| and L is a differentiable piecewise convex quadratic function. (Piecewise: this means that the β-space is divided into a finite number of zones, in each of which L is a convex quadratic function and, at the boundary of the zones, the pieces of L merge properly to maintain differentiability.)

  • Our aim is to track the minimizer of f as a function of λ.

  • Let g = ∇L.

  • At any one λ let β(λ) be the minimizer, A = {j : β_j(λ) ≠ 0} and A^c be the complement set.

  • Optimality conditions:

    g_j + λ sgn(β_j) = 0  ∀ j ∈ A    (1)
    |g_j| ≤ λ  ∀ j ∈ A^c             (2)

  • Within one 'quadratic zone', (1) defines a set of linear equations in β_j, j ∈ A. Let γ denote the direction in β space thus defined.

  • At large λ (specifically, λ > max_j |g_j(0)|), β = 0 is the solution.

42

slide-43
SLIDE 43

Rosset-Zhu path tracking algorithm

  • Initialize: β = 0, A = arg max_j |g_j|, get γ

  • While (max_j |g_j| > 0):
    – d1 = the smallest d ≥ 0 at which some j ∈ A^c attains |g_j(β + dγ)| = λ + d
    – d2 = the smallest d ≥ 0 at which some j ∈ A has (β + dγ)_j = 0 (an active component hits 0)
    – Find d3, the first d value at which a piecewise-quadratic zone boundary is crossed
    – Set d = min(d1, d2, d3)
    – If d = d1 then add the variable attaining equality at d to A
    – If d = d2 then remove the variable attaining 0 at d from A
    – β ← β + dγ
    – Update the information and compute the new direction vector γ

Implementation: the second order matrix and its factorization needed to obtain γ can be maintained and updated efficiently.

43

slide-44
SLIDE 44

Feature selection in Linear classifiers

L(β) = (1/2)||β||^2 + Σ_i l(y(x_i), t_i)

where y(x) = β′x and l is a differentiable, piecewise quadratic loss function. Examples: L2-SVM loss, Modified Huber loss. Even the logistic loss can be approximated quite well by piecewise quadratic functions.

When the minimum of f = λJ + L is tracked with respect to λ, we get β = 0 at large λ values and we retrieve the minimizer of L as λ → 0. Intermediate λ values give approximations where feature selection is achieved.

44

slide-45
SLIDE 45

Selecting features in Text classification

The following figure shows the application of path tracking to a dataset from the Reuters corpus. The plots show F-measure (larger is better) as a function of the number of features chosen (which is much the same as λ → 0). The black curve corresponds to keeping ||β||^2/2, the blue curve corresponds to leaving out ||β||^2/2, and the red curve corresponds to feature selection using the information gain measure. SVM corresponds to the L2-SVM loss while RLS corresponds to regularized least squares.

45

slide-46
SLIDE 46

Forming sparse nonlinear kernel classifiers

  • Consider the nonlinear kernel primal problem

    min (1/2)||w||^2 + C Σ_i l(y(x_i), t_i)

    where l is a differentiable, piecewise quadratic loss function. As before, l can be one of the L2-SVM loss, the Modified Huber loss or a piecewise quadratic approximation of the logistic loss.

  • Use the primal substitution w = Σ_i β_i t_i φ(x_i) to get

    min L(β) = (1/2) β′Qβ + Σ_i l(y(x_i), t_i),   where y(x) = Σ_i β_i t_i k(x, x_i)

  • When the minimum of f = λJ + L is tracked with respect to λ, we get β = 0 at large λ values and we retrieve the minimizer of L as λ → 0. Intermediate λ values give approximations where sparsity is achieved.

46

slide-47
SLIDE 47

Performance on Two Datasets

47

slide-48
SLIDE 48

Part VI Finite Newton Method (FNM)

References:

  • 1. O. L. Mangasarian, A finite Newton method for classification, Optimization Methods and Software, Vol. 17, pp. 913-929, 2002. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-11.pdf

  • 2. S. S. Keerthi and D. W. DeCoste, A Modified Finite Newton Method for Fast Solution of Large Scale Linear SVMs, Journal of Machine Learning Research, Vol. 6, pp. 341-361, 2005. http://jmlr.csail.mit.edu/papers/volume6/keerthi05a/keerthi05a.pdf

  • 3. J. Zhu and T. Hastie, Kernel logistic regression and the import vector machine, Journal of Computational and Graphical Statistics, Vol. 14, pp. 185-205. http://www.stat.lsa.umich.edu/~jizhu/research/05klr.pdf

  • 4. S. S. Keerthi, O. Chapelle and D. W. DeCoste, Building support vector machines with reduced classifier complexity, submitted to JMLR. http://www.kyb.tuebingen.mpg.de/bs/people/chapelle/primal/

48

slide-49
SLIDE 49

Introduction

FNM is much more efficient and better suited than dual decomposition methods in certain situations. Different dimensions:

  • Linear, nonlinear kernel machines
  • Classification, ordinal regression, regression
  • Differentiable loss functions:
    – Least squares
    – L2-SVM
    – Modified Huber
    – KLR

49

slide-50
SLIDE 50

FNM: A General Format

    min_β f(β) = (1/2) β′Rβ + Σ_i l_i(β)

R is positive definite. l_i is the loss for the i-th example.
Assume: l_i is convex, differentiable and piecewise quadratic. (Piecewise: same as mentioned in path tracking.)

FNM Iteration

β^k = starting vector at the k-th iteration
q_i = local quadratic approximation of l_i at β^k

    β̄ = arg min_β (1/2) β′Rβ + Σ_i q_i(β)

Note that β̄ can be obtained by solving a linear system.
Define the direction d_k = β̄ − β^k.
New point by exact line search: β^{k+1} = β^k + δ_k d_k, where δ_k = arg min_δ f(β^k + δ d_k). (A short sketch of this iteration follows.)
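A minimal sketch of this iteration for the linear-kernel L2-SVM case (R = I), assuming NumPy/SciPy; using a generic 1-D minimizer for the exact line search is an illustrative shortcut.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fnm_l2svm(X, t, C, max_iter=50, tol=1e-8):
        """f(beta) = (1/2)||beta||^2 + (C/2) sum_i max(0, 1 - t_i beta'x_i)^2."""
        n, d = X.shape
        beta = np.zeros(d)

        def f(b):
            xi = np.maximum(0.0, 1.0 - t * (X @ b))
            return 0.5 * b @ b + 0.5 * C * np.sum(xi ** 2)

        for _ in range(max_iter):
            active = t * (X @ beta) < 1.0                 # examples in the quadratic pieces
            XA, tA = X[active], t[active]
            # Minimizer of the local quadratic model: (I + C XA'XA) beta_bar = C XA' tA
            H = np.eye(d) + C * (XA.T @ XA)
            beta_bar = np.linalg.solve(H, C * (XA.T @ tA))
            dk = beta_bar - beta
            if np.linalg.norm(dk) < tol:
                break
            delta = minimize_scalar(lambda s: f(beta + s * dk)).x   # exact line search
            beta = beta + delta * dk
        return beta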

50

slide-51
SLIDE 51

Finite convergence of FNM

First, the global convergence theorem (discussed in Part II) ensures that β^k → β⋆, the minimizer of f. Since ∇f is continuous, β⋆ is also the minimizer (i.e., β̄) of every local quadratic approximation of f at β⋆. Thus, there is an open neighborhood around β⋆ such that, from any point there, FNM will reach β⋆ in exactly one iteration. Convergence of β^k → β⋆ ensures that the above mentioned neighborhood will be reached in a finite number of steps. Thus FNM has finite convergence.

Comments

The # of iterations is usually very small (5-50).

The linear system in each iteration is of the form:

    A_k β̄ = b_k,   A_k = R + Σ_i γ_i s_i s_i′

The factorization of A_k can be updated incrementally and is very efficient. In many cases, the linear system is of the RLS (Regularized Least Squares) form and so special methods can be called in.

51

slide-52
SLIDE 52

Linear Kernel Machines: Small input dimension

R = I, l_i(β) = C · h(t_i β′x_i). d = dimension of β is small. The factorization of A_k is very efficient: O(nd^2) complexity.

52

slide-53
SLIDE 53

Linear Kernel Machines: Large input dimension

Text classification: n ≈ 200,000; d ≈ 250,000. The data matrix X (containing the x_i's) is sparse: 0.1% non-zeros.

Linear system: use conjugate-gradient methods for the quadratic solve. Theoretically CG will need d + 1 iterations for exact convergence, but an exact solution is completely unnecessary. With inexact termination, CG requires a very small number of iterations. Example: # CG iterations in successive FNM iterations:

    522 → 314 → 117 → 60 → 35 → 19 → 9 → 5 → 2 → 1

Each CG iteration requires a couple of calls of the form Xy or X′z. There are about 1000 such calls. (A matrix-free sketch of this solve is given below.)

Compare: SMO does calculations equivalent to one Xβ calculation in each of its iterations involving the update of a pair of alphas (α_i, α_j). SMO uses tens of thousands of such iterations!

Unlike SMO, where the number of iterations is very sensitive to C, the number of FNM iterations is not at all sensitive to C.
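A sketch of the matrix-free CG solve described above, assuming SciPy's sparse module. The Hessian-like matrix I + C X_A'X_A is never formed; each matrix-vector product costs one call of the form X_A v and one X_A' w, exactly as in the operation counts given above.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg

    def solve_fnm_system(X, t, C, active):
        """Solve (I + C * X_A'X_A) beta = C * X_A' t_A without forming the matrix."""
        XA, tA = X[active], t[active]
        d = X.shape[1]

        def matvec(v):
            return v + C * (XA.T @ (XA @ v))

        A = LinearOperator((d, d), matvec=matvec)
        b = C * (XA.T @ tA)
        beta, info = cg(A, b)     # inexact termination is fine; exact solves are unnecessary
        return beta

    # illustrative usage with random sparse data
    X = sp.random(1000, 500, density=0.01, format="csr")
    t = np.sign(np.random.randn(1000))
    beta = solve_fnm_system(X, t, C=1.0, active=np.ones(1000, dtype=bool))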

53

slide-54
SLIDE 54

Heuristics

H1: Terminate the first FNM iteration within 10 CG iterations. Most non-support vectors usually get well identified in this phase. These vectors will not get involved in the CG iterations of the later FNM iterations. This heuristic is particularly powerful when the number of support vectors is a small fraction of n.

H2: First run FNM with a high tolerance; then do another run with a low tolerance.

Example: % SV = 6.8

  • No H: 456.85 secs
  • H1 only: 70.86 secs
  • H2 only: 202.62 secs
  • H1 & H2: 62.18 secs

β-seeding can be used when going from one C value to another nearby C value. β-seeding is more effective than the α-seeding in SMO.

54

slide-55
SLIDE 55

Comparison of FNM (L2-SVM), SVMlight and BSVM

Computing time versus training set size for Adult and Web datasets

55

slide-56
SLIDE 56

Feature selection in Linear Kernel Classifiers

The ideas here form a very good alternative to the L1 regularization path tracking ideas discussed earlier.

Start with no features; add features greedily.

Let β = the optimized vector with the current set of features, and β_j = a feature not yet chosen.

Evaluation criterion:

    f̄_j = min_{β_j} f(β, β_j),   with β held fixed,

where f is the training cost function. Choose the β_j with the smallest f̄_j.

After choosing the best j, solve min f(β, β_j) using (β, 0) as the seed.

The factorization needed for the linear system solution can be updated very efficiently.

56

slide-57
SLIDE 57

Sparse Nonlinear Kernel Machines

The ideas are parallel to what we discussed earlier on the same topic. Use the primal substitution w = Σ_i β_i t_i φ(x_i) to get

    min L(β) = (1/2) β′Qβ + Σ_i l(y(x_i), t_i),   where y(x) = Σ_i β_i t_i k(x, x_i)

Note that, except for the regularizer being a more general quadratic function (β′Qβ/2), this problem is essentially in the linear classifier form. New non-zero β_j variables can be selected greedily as in the feature selection process of the previous slide. At each step of the greedy process it is usually sufficient to restrict the evaluation to a small, randomly chosen number (say, 50) of β_j's. A similar choice doesn't exist for the L1 regularization method.

The result is an effective algorithm with a clearly defined small complexity: an O(d^2 n) algorithm for selecting d kernel basis functions. On many datasets, a small d gives nearly the same accuracy as the full kernel classifier using all basis functions.

57

slide-58
SLIDE 58

Performance on some UCI Datasets

Dataset     SpSVM TestErate   SpSVM #Basis   SVM TestErate   SVM #SV
Banana           10.87            17.3           10.54         221.7
Breast           29.22            12.1           28.18         185.8
Diabetis         23.47            13.8           23.73         426.3
Flare            33.90             8.4           33.98         629.4
German           24.90            14.0           24.47         630.4
Heart            15.50             4.3           15.80         166.6
Ringnorm          1.97            12.9            1.68         334.9
Thyroid           5.47            10.6            4.93          57.80
Titanic          22.68             3.3           22.35         150.0
Twonorm           2.96             8.7            2.42         330.30
Waveform         10.66            14.4           10.04         246.9

58

slide-59
SLIDE 59

An Example

59