The Perceptron
CMSC 422 MARINE CARPUAT
marine@cs.umd.edu
Credit: figures by Piyush Rai and Hal Daume III
This week
Project 1 posted
– Form teams!
– Due Wed March 2nd by 2:59pm
A new model/algorithm
– the perceptron
– and its variants: voted, averaged
New concepts
– Online vs. batch learning
– Error-driven learning
A hyperplane divides the space into two half-spaces
– The hyperplane is defined by its normal vector 𝑤 ∈ ℝ^D
– 𝑤 is orthogonal to any vector lying on the hyperplane
– The hyperplane passes through the origin, unless we also define a bias term b
The perceptron's decision boundary is a hyperplane
– Learning means finding a hyperplane 𝑤 that separates positive from negative examples
– Prediction means checking which side of the hyperplane examples fall on:
𝑦 = sign(𝑤^T 𝑥 + 𝑏)
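As a concrete illustration, the prediction rule 𝑦 = sign(𝑤^T 𝑥 + 𝑏) can be sketched in a few lines (a minimal sketch with made-up example vectors, not the course's code):

```python
import numpy as np

def predict(w, b, x):
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    activation = np.dot(w, x) + b
    return 1 if activation >= 0 else -1

# Hypothetical 2-D hyperplane: normal vector w, bias b
w = np.array([1.0, -2.0])
b = 0.5
print(predict(w, b, np.array([3.0, 1.0])))   # 3 - 2 + 0.5 = 1.5  -> +1
print(predict(w, b, np.array([0.0, 1.0])))   # 0 - 2 + 0.5 = -1.5 -> -1
```

Note that the sign of the activation, not its magnitude, determines the predicted class.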
Problem setting
– Each instance 𝑥 ∈ 𝑋 is a feature vector 𝑥 = [𝑥1, … , 𝑥D]
– 𝑌 is binary valued {-1, +1}
– Each hypothesis ℎ is a hyperplane in D-dimensional space
Input
– labeled training examples {(𝑥, 𝑦)} of unknown target function 𝑓
Output
– a hypothesis ℎ that best approximates the target function 𝑓
Online learning
– We look at one example at a time, and update the model as soon as we make an error
– As opposed to batch algorithms, which update parameters only after seeing the entire training set
Error-driven learning
– We only update the parameters/model if we make an error
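These two properties can be seen in a minimal perceptron training loop. The sketch below uses my own function and variable names under the usual setup (features X, labels y in {-1, +1}); it is an illustration, not the course's reference implementation:

```python
import numpy as np

def perceptron_train(X, y, epochs=10, seed=0):
    """Online, error-driven perceptron: update w and b only on mistakes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):                 # random order each epoch
            if y[i] * (np.dot(w, X[i]) + b) <= 0:    # mistake on example i
                w += y[i] * X[i]                     # error-driven update
                b += y[i]
            # correctly classified examples trigger no update at all
    return w, b

# Toy linearly separable data (label = sign of the first feature)
X = np.array([[1.0, 0.2], [2.0, -1.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
```

On linearly separable data like this toy set, the loop stops changing once an entire epoch passes without a mistake.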
– The order in which training examples are presented matters: random order is better than a fixed one
– Early stopping (halting training before convergence) is a good strategy to avoid overfitting
Variants: voted and averaged perceptron
– Instead of predicting with only the final weight vector, we predict by voting or averaging over the weight vectors encountered during training
– The averaged prediction sign(Σₖ cₖ 𝑤ₖ^T 𝑥) can be rewritten as sign((Σₖ cₖ 𝑤ₖ)^T 𝑥), so only a (weighted) running sum of weight vectors needs to be stored
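A hedged sketch of the averaging idea (assumed names, and the simple running-sum formulation rather than the more efficient cached-weights trick): after every example we add the current weights into an accumulator, and predict with the average at the end.

```python
import numpy as np

def averaged_perceptron_train(X, y, epochs=10):
    """Averaged perceptron: average the weight vector over all steps."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    w_sum, b_sum = np.zeros(d), 0.0
    count = 0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (np.dot(w, X[i]) + b) <= 0:   # mistake: standard update
                w += y[i] * X[i]
                b += y[i]
            w_sum += w          # accumulate current weights after EVERY example
            b_sum += b
            count += 1
    return w_sum / count, b_sum / count   # average weight vector and bias

# Toy usage on a small separable set
X = np.array([[1.0, 0.2], [2.0, -1.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w_avg, b_avg = averaged_perceptron_train(X, y)
```

Because averaging is linear, predicting with the average weight vector is exactly the averaged prediction rewritten as a single dot product, which is why one accumulator suffices.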