Lecture 13: Classification


  1. Lecture 13: Classification

  2. Announcements
     Reading
     ◦ Chapter 24
     ◦ Section 5.3.2 (list comprehension)
     Course evaluations
     ◦ Online evaluation now through noon on Friday, December 16

  3. Supervised Learning
     Regression
     ◦ Predict a real number associated with a feature vector
     ◦ E.g., use linear regression to fit a curve to data
     Classification
     ◦ Predict a discrete value (label) associated with a feature vector

  4. An Example (similar to earlier lecture); the first five columns are features, Reptile is the label
     Name             Egg-laying  Scales  Poisonous  Cold-blooded  Number of legs  Reptile
     Cobra            1           1       1          1             0               1
     Rattlesnake      1           1       1          1             0               1
     Boa constrictor  0           1       0          1             0               1
     Chicken          1           1       0          1             2               0
     Guppy            0           1       0          0             0               0
     Dart frog        1           0       1          0             4               0
     Zebra            0           0       0          0             4               0
     Python           1           1       0          1             0               1
     Alligator        1           1       0          1             4               1

  5. Distance Matrix (code for producing this table posted)
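     The posted code is not reproduced in this transcript. As a stand-in, here is a minimal sketch (hypothetical names, not the lecture's code) that computes pairwise Euclidean distances between feature vectors from the table above:

        # Minimal sketch: pairwise Euclidean distances between animal
        # feature vectors (egg-laying, scales, poisonous, cold-blooded, legs)
        animals = {
            'Cobra':       (1, 1, 1, 1, 0),
            'Rattlesnake': (1, 1, 1, 1, 0),
            'Dart frog':   (1, 0, 1, 0, 4),
            'Alligator':   (1, 1, 0, 1, 4),
        }

        def euclidean(v1, v2):
            # distance between two equal-length feature vectors
            return sum((a - b) ** 2 for a, b in zip(v1, v2)) ** 0.5

        for name1, v1 in animals.items():
            for name2, v2 in animals.items():
                print(f'{name1} -> {name2}: {euclidean(v1, v2):.2f}')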

  6. Using Distance Matrix for Classification
     Simplest approach is probably nearest neighbor
     ◦ Remember the training data
     ◦ When predicting the label of a new example, find the nearest example in the training data
     ◦ Predict the label associated with that example
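     A minimal sketch of this prediction rule (hypothetical names, not the lecture's posted code):

        def euclidean(v1, v2):
            # distance between two equal-length feature vectors
            return sum((a - b) ** 2 for a, b in zip(v1, v2)) ** 0.5

        def nearest_neighbor(training, new_vector):
            # training: list of (feature_vector, label) pairs.
            # Predict the label of the closest training example.
            return min(training, key=lambda ex: euclidean(ex[0], new_vector))[1]

        # Usage with rows from the earlier table:
        training = [((1, 1, 1, 1, 0), 1),   # Cobra: reptile
                    ((1, 1, 0, 1, 2), 0),   # Chicken: not a reptile
                    ((0, 0, 0, 0, 4), 0)]   # Zebra: not a reptile
        print(nearest_neighbor(training, (1, 0, 1, 0, 4)))  # Dart frog -> 0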

  7. Distance Matrix (rows labeled R for reptile, ~R for not reptile)

  8. An Example

  9. K-nearest Neighbors
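     The slide illustrates the generalization of nearest neighbor: find the k closest training examples and let them vote. A minimal sketch (hypothetical names, reusing the euclidean helper from the sketch above):

        from collections import Counter

        def knn(training, new_vector, k=3):
            # training: list of (feature_vector, label) pairs.
            # Majority vote among the k nearest training examples.
            nearest = sorted(training,
                             key=lambda ex: euclidean(ex[0], new_vector))[:k]
            return Counter(label for _, label in nearest).most_common(1)[0][0]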

  10. An Example

  11. Advantages and Disadvantages of KNN
     Advantages
     ◦ Learning is fast, no explicit training
     ◦ No theory required
     ◦ Easy to explain method and results
     Disadvantages
     ◦ Memory intensive, and predictions can take a long time
     ◦ There are better algorithms than brute force
     ◦ No model to shed light on the process that generated the data

  12. The Titanic Disaster
     RMS Titanic sank in the North Atlantic on the morning of 15 April 1912, after colliding with an iceberg. Of the 1,300 passengers aboard, 812 died. (703 of 918 crew members died.)
     Database of 1,046 passengers
     ◦ Cabin class: 1st, 2nd, 3rd
     ◦ Age
     ◦ Gender

  13. Is Accuracy Enough?
     If we always predict "died", accuracy will be >62% for passengers and >76% for crew members
     Consider a disease that occurs in 0.1% of the population
     ◦ Predicting disease-free has an accuracy of 0.999

  14. Other Metrics
     sensitivity = recall
     specificity = precision
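     The slide's formulas did not survive extraction. As a stand-in, here are the standard confusion-matrix definitions, which are consistent with the results reported later in the lecture (the function name is hypothetical):

        def metrics(tp, fp, tn, fn):
            # tp, fp, tn, fn: counts of true/false positives/negatives
            accuracy     = (tp + tn) / (tp + fp + tn + fn)
            sensitivity  = tp / (tp + fn)   # a.k.a. recall
            specificity  = tn / (tn + fp)
            pos_pred_val = tp / (tp + fp)   # positive predictive value
            return accuracy, sensitivity, specificity, pos_pred_val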

  15. Testing Methodology Matters
     ◦ Leave-one-out
     ◦ Repeated random subsampling

  16. Leave-one-out
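     The original slide showed code; a minimal sketch of the idea (hypothetical names; predict could be the knn sketch above):

        def leave_one_out(examples, predict):
            # examples: list of (feature_vector, label) pairs.
            # Train on all but one example, test on that one, repeat.
            correct = 0
            for i, (vec, label) in enumerate(examples):
                training = examples[:i] + examples[i + 1:]
                if predict(training, vec) == label:
                    correct += 1
            return correct / len(examples)   # fraction predicted correctly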

  17. Repeated Random Subsampling

  18. Repeated Random Subsampling (continued)
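     Slides 17 and 18 showed code for this method; a minimal sketch (hypothetical names) of averaging accuracy over repeated 80/20 train/test splits:

        import random

        def random_splits(examples, predict, num_splits=10, frac_test=0.2):
            # Average accuracy over num_splits random train/test splits
            total = 0.0
            for _ in range(num_splits):
                shuffled = random.sample(examples, len(examples))
                cut = int(len(shuffled) * frac_test)
                test, training = shuffled[:cut], shuffled[cut:]
                correct = sum(predict(training, vec) == label
                              for vec, label in test)
                total += correct / len(test)
            return total / num_splits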

  19. Let's Try KNN

  20. Results
     Average of 10 80/20 splits using KNN (k=3)
     ◦ Accuracy = 0.766
     ◦ Sensitivity = 0.67
     ◦ Specificity = 0.836
     ◦ Pos. Pred. Val. = 0.747
     Average of LOO testing using KNN (k=3)
     ◦ Accuracy = 0.769
     ◦ Sensitivity = 0.663
     ◦ Specificity = 0.842
     ◦ Pos. Pred. Val. = 0.743
     Considerably better than 62%; not much difference between experiments

  21. Logistic Regression
     Analogous to linear regression
     Designed explicitly for predicting the probability of an event
     ◦ Dependent variable can only take on a finite set of values, usually 0 or 1
     Finds weights for each feature
     ◦ Positive weight implies the variable is positively correlated with the outcome
     ◦ Negative weight implies the variable is negatively correlated with the outcome
     ◦ Absolute magnitude is related to the strength of the correlation
     The optimization problem is a bit complex; the key is the use of a log function (we won't make you look at it)

  22. LogisticRegression Class
     fit(sequence of feature vectors, sequence of labels)
     ◦ Returns an object of type LogisticRegression
     coef_
     ◦ Returns weights of features
     predict_proba(feature vector)
     ◦ Returns probabilities of labels
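     This is the interface of sklearn.linear_model.LogisticRegression. A minimal sketch of the workflow the slide describes (the toy data is invented for illustration):

        from sklearn.linear_model import LogisticRegression

        # Toy data, invented for illustration: 2 features, labels 0 or 1
        feature_vecs = [[0, 0], [1, 0], [0, 1], [1, 1]]
        labels = [0, 0, 1, 1]

        model = LogisticRegression()
        model.fit(feature_vecs, labels)         # returns the fitted model
        print(model.coef_)                      # one weight per feature
        print(model.predict_proba([[1, 0.5]]))  # [P(label 0), P(label 1)]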

  23. Building a Model

  24. Applying Model

  25. List Comprehension
     [expr for id in L]
     ◦ Creates a list by evaluating expr len(L) times, with id in expr replaced by each element of L
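     A small example of the form above:

        L = [1, 2, 3, 4]
        squares = [x ** 2 for x in L]  # evaluates x ** 2 for each element of L
        print(squares)                 # [1, 4, 9, 16]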

  26. Applying Model

  27. Putting It Together

  28. Results
     Average of 10 80/20 splits using LR
     ◦ Accuracy = 0.804
     ◦ Sensitivity = 0.719
     ◦ Specificity = 0.859
     ◦ Pos. Pred. Val. = 0.767
     Average of LOO testing using LR
     ◦ Accuracy = 0.786
     ◦ Sensitivity = 0.705
     ◦ Specificity = 0.842
     ◦ Pos. Pred. Val. = 0.754

  29. Compare to KNN Results
     Average of 10 80/20 splits     LR       KNN (k=3)
     ◦ Accuracy                     0.804    0.744
     ◦ Sensitivity                  0.719    0.629
     ◦ Specificity                  0.859    0.829
     ◦ Pos. Pred. Val.              0.767    0.728
     Average of LOO testing         LR       KNN (k=3)
     ◦ Accuracy                     0.786    0.769
     ◦ Sensitivity                  0.705    0.663
     ◦ Specificity                  0.842    0.842
     ◦ Pos. Pred. Val.              0.754    0.743
     Performance is not much different; logistic regression is slightly better, and it also provides insight about the variables

  30. Looking at Feature Weights
     model.classes_ = ['Died' 'Survived']
     For label Survived:
     ◦ C1 = 1.66761946545
     ◦ C2 = 0.460354552452
     ◦ C3 = -0.50338282535
     ◦ age = -0.0314481062387
     ◦ male gender = -2.39514860929
     Be wary of reading too much into the weights; features are often correlated

  31. Changing the Cutoff
     Try p = 0.1
     ◦ Accuracy = 0.493
     ◦ Sensitivity = 0.976
     ◦ Specificity = 0.161
     ◦ Pos. Pred. Val. = 0.444
     Try p = 0.9
     ◦ Accuracy = 0.656
     ◦ Sensitivity = 0.176
     ◦ Specificity = 0.984
     ◦ Pos. Pred. Val. = 0.882
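     A minimal sketch (hypothetical function name) of applying a cutoff other than the default 0.5 to the probabilities from predict_proba:

        def predict_with_cutoff(model, feature_vecs, p=0.5):
            # Label an example 1 only if P(label 1) exceeds the cutoff p
            return [1 if probs[1] > p else 0
                    for probs in model.predict_proba(feature_vecs)]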

  32. ROC (Receiver Operating Characteristic)
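     The slide showed an ROC curve: sensitivity plotted against 1 - specificity as the cutoff varies, with the area under the curve (AUROC) summarizing performance across all cutoffs. A minimal sketch using sklearn (hypothetical function name; true_labels and probs would come from a fitted model as above):

        from sklearn.metrics import auc, roc_curve
        import matplotlib.pyplot as plt

        def plot_roc(true_labels, probs):
            # probs: P(label 1) per example, e.g. model.predict_proba(X)[:, 1]
            fpr, tpr, thresholds = roc_curve(true_labels, probs)
            plt.plot(fpr, tpr, label=f'AUROC = {auc(fpr, tpr):.3f}')
            plt.xlabel('1 - specificity')
            plt.ylabel('Sensitivity')
            plt.legend()
            plt.show()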

  33. Output

  34. MIT OpenCourseWare (https://ocw.mit.edu)
     6.0002 Introduction to Computational Thinking and Data Science, Fall 2016
     For information about citing these materials or our Terms of Use, visit https://ocw.mit.edu/terms
