Spring 2017 CIS 493, EEC 492, EEC 592:
Autonomous Intelligent Robotics
Instructor: Shiqi Zhang
http://eecs.csuohio.edu/~szhang/teaching/17spring/
Machine learning: an introduction
Slides adapted from Ray Mooney, Pedro Domingos, James Hays, and Yi-Fan Chang
– Personalized news or mail filter
– Personalized tutoring
– Market basket analysis (e.g. diapers and beer)
– Medical text mining (e.g. migraines to calcium channel blockers to magnesium)
A well-defined learning problem is specified by a task T, a performance measure P, and training experience E:

Checkers:
T: Playing checkers
P: Percentage of games won against an arbitrary opponent
E: Playing practice games against itself

Handwriting recognition:
T: Recognizing hand-written words
P: Percentage of words correctly classified
E: Database of human-labeled images of handwritten words

Autonomous driving:
T: Driving on four-lane highways using vision sensors
P: Average distance traveled before a human-judged error
E: A sequence of images and steering commands recorded while observing a human driver
Spam filtering:
T: Categorize email messages as spam or legitimate
P: Percentage of email messages correctly classified
E: Database of emails, some with human-given labels
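The performance measure P for the spam task is simply classification accuracy. A minimal sketch of computing it, with hypothetical "spam"/"legit" labels invented for illustration:

```python
# Performance measure P for the spam task: percentage of email
# messages correctly classified. The labels below are made up.

def accuracy(predicted, actual):
    """Fraction of messages whose predicted label matches the true label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

predicted = ["spam", "legit", "spam", "legit"]
actual    = ["spam", "legit", "legit", "legit"]
print(accuracy(predicted, actual))  # 3 of 4 correct -> 0.75
```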
– Representation
– Evaluation
– Optimization
– E.g.: Greedy search
– E.g.: Gradient descent
– E.g.: Linear programming
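Of these optimization strategies, gradient descent is the one most learning algorithms rely on. A minimal sketch on a one-dimensional squared-error objective f(w) = (w − 3)², where the target value 3, the learning rate, and the step count are all illustrative assumptions:

```python
# Gradient descent: repeatedly step against the gradient of the
# objective. Here f(w) = (w - 3)**2, so the gradient is 2 * (w - 3)
# and the minimizer is w = 3 (an invented toy objective).

def gradient_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move against the gradient
    return w

w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(w_star)  # converges toward the minimizer w = 3
```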
– Supervised learning: training data includes desired outputs
– Unsupervised learning: training data does not include desired outputs
– Semi-supervised learning: training data includes a few desired outputs
– Reinforcement learning: rewards from a sequence of actions
– Discrete F(X): Classification
– Continuous F(X): Regression
– F(X) = Probability(X): Probability estimation
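The three output types of F(X) can be contrasted with toy functions; every function, coefficient, and threshold below is an invented illustration, not part of the slides:

```python
import math

def classify(x):           # discrete F(X): a class label
    return "positive" if x >= 0 else "negative"

def regress(x):            # continuous F(X): a real value
    return 2.0 * x + 1.0   # made-up linear model

def prob_estimate(x):      # F(X) = Probability(X): a value in [0, 1]
    return 1.0 / (1.0 + math.exp(-x))  # logistic squashing

print(classify(0.5))       # -> positive
print(regress(0.5))        # -> 2.0
print(prob_estimate(0.0))  # -> 0.5
```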
Slide credit: D. Hoiem and L. Lazebnik
Spectrum of supervision: unsupervised → “weakly” supervised → fully supervised (the definition depends on the task)
Slide credit: L. Lazebnik
Generalization: how well does a learned model perform on a new test set? Training set (labels known) vs. test set (labels unknown)
Slide credit: L. Lazebnik
– Bias: how much the average model over all training sets differs from the true model
– Variance: how much models estimated from different training sets differ from each other
– Underfitting: high bias and low variance; high training error and high test error
– Overfitting: low bias and high variance; low training error and high test error
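The bias/variance trade-off can be seen empirically by training two extreme predictors on many independent noisy samples of the same true function. A sketch under invented assumptions (true model y = x, Gaussian noise, a fixed seed): a constant mean predictor ignores the input (high bias, low variance), while a 1-nearest-neighbour predictor memorizes the data (low bias, high variance).

```python
import random

random.seed(0)

def make_training_set(n=20, noise=0.1):
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [x + random.gauss(0, noise) for x in xs]  # true model: y = x
    return xs, ys

def mean_predictor(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m          # ignores x: high bias, low variance

def nn_predictor(xs, ys):
    def predict(x):             # memorizes the data: low bias, high variance
        i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
        return ys[i]
    return predict

x_test, true_y = 0.9, 0.9
mean_preds, nn_preds = [], []
for _ in range(200):            # many independent training sets
    xs, ys = make_training_set()
    mean_preds.append(mean_predictor(xs, ys)(x_test))
    nn_preds.append(nn_predictor(xs, ys)(x_test))

def bias(preds):                # average prediction vs. the true value
    return abs(sum(preds) / len(preds) - true_y)

def variance(preds):            # spread of predictions across training sets
    avg = sum(preds) / len(preds)
    return sum((p - avg) ** 2 for p in preds) / len(preds)

print(bias(mean_preds) > bias(nn_preds))          # mean predictor: more bias
print(variance(mean_preds) < variance(nn_preds))  # mean predictor: less variance
```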
Slide credit: L. Lazebnik
Overfitting Thriller! https://youtu.be/DQWI1kvmwRg
Generative models:
– Naïve Bayes classifier
– Bayesian network
Discriminative models:
– Logistic regression
– SVM
– Boosted decision trees
Slide credit: D. Hoiem
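The Naïve Bayes classifier above can be sketched in a few lines: model P(label) and P(word | label) from counts, then pick the label maximizing the (log) joint probability. The tiny bag-of-words training corpus below is invented for illustration, and real use needs a much larger vocabulary:

```python
import math
from collections import Counter

# Invented toy corpus of (label, text) pairs.
train = [
    ("spam",  "win money now"),
    ("spam",  "win prize money"),
    ("legit", "meeting agenda now"),
    ("legit", "project meeting notes"),
]

class_counts = Counter(label for label, _ in train)
word_counts = {label: Counter() for label in class_counts}
for label, text in train:
    word_counts[label].update(text.split())
vocab = {w for _, text in train for w in text.split()}

def score(label, text):
    # log P(label) + sum of log P(word | label), with add-one smoothing
    total = sum(word_counts[label].values())
    s = math.log(class_counts[label] / len(train))
    for w in text.split():
        s += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return s

def classify(text):
    return max(class_counts, key=lambda label: score(label, text))

print(classify("win money"))      # words seen mostly in spam messages
print(classify("meeting notes"))  # words seen mostly in legitimate messages
```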
Voronoi partitioning of feature space for two-category 2D and 3D data
from Duda et al.
Source: D. Lowe
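The Voronoi partitioning arises from the nearest-neighbour rule: each query point takes the label of its closest training point, so the boundaries are the cell walls of the training points' Voronoi diagram. A minimal 2D sketch with invented points and the two class labels "x" and "+":

```python
# 1-nearest-neighbour classification in 2D. Training points and
# labels are made up for illustration.

def nearest_neighbor(train, query):
    """train: list of ((x, y), label); returns the label of the closest point."""
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(train, key=lambda item: sq_dist(item[0], query))
    return label

train = [((0.0, 0.0), "x"), ((1.0, 0.0), "x"),
         ((4.0, 4.0), "+"), ((5.0, 4.0), "+")]
print(nearest_neighbor(train, (0.5, 0.5)))  # nearest to the "x" cluster
print(nearest_neighbor(train, (4.5, 3.5)))  # nearest to the "+" cluster
```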
[Figure: 2D feature space (axis x1) with two classes of points, “x” and “+”]