

1. Lecture 2: Nearest Neighbour Classifier. Aykut Erdem, September 2017, Hacettepe University

2. Your 1st Classifier: Nearest Neighbor Classifier

3. Concept Learning
• Definition: Acquire an operational definition of a general category of objects given positive and negative training examples.
• Also called binary classification or binary supervised learning.
(slide by Thorsten Joachims)

4. Concept Learning Example

   example  correct                        color      original   presentation               binder     A+
            (complete, partial, guessing)  (yes, no)  (yes, no)  (clear, unclear, cryptic)  (yes, no)  (yes, no)
   1        complete                       yes        yes        clear                      no         yes
   2        complete                       no         yes        clear                      no         yes
   3        partial                        yes        no         unclear                    no         no
   4        complete                       yes        yes        clear                      yes        yes

• Instance Space X: Set of all possible objects describable by attributes (often called features).
• Concept c: Subset of objects from X (c is unknown).
• Target Function f: Characteristic function indicating membership in c based on attributes (i.e. the label) (f is unknown).
• Training Data S: Set of instances labeled with the target function.
(slide by Thorsten Joachims)
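To make the definitions above concrete, here is one way the training set S from this table could be written down in Python (the tuple encoding is my choice, not something from the slide):

```python
# Training data S: each instance x is a tuple of attribute values
# (correct, color, original, presentation, binder) from the instance
# space X; the label y = f(x) indicates membership in the concept (A+).
S = [
    (("complete", "yes", "yes", "clear",   "no"),  "yes"),
    (("complete", "no",  "yes", "clear",   "no"),  "yes"),
    (("partial",  "yes", "no",  "unclear", "no"),  "no"),
    (("complete", "yes", "yes", "clear",   "yes"), "yes"),
]
```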

5. Concept Learning as Learning a Binary Function
• Task
  – Learn (to imitate) a function f : X → {+1, −1}
• Training Examples
  – The learning algorithm is given the correct value of the function for particular inputs → training examples
  – An example is a pair (x, y), where x is the input and y = f(x) is the output of the target function applied to x.
• Goal
  – Find a function h : X → {+1, −1} that approximates f : X → {+1, −1} as well as possible.
(slide by Thorsten Joachims)

6. Supervised Learning
• Task
  – Learn (to imitate) a function f : X → Y
• Training Examples
  – The learning algorithm is given the correct value of the function for particular inputs → training examples
  – An example is a pair (x, f(x)), where x is the input and y = f(x) is the output of the target function applied to x.
• Goal
  – Find a function h : X → Y that approximates f : X → Y as well as possible.
(slide by Thorsten Joachims)

7. Supervised / Inductive Learning
• Given: examples of a function (x, f(x))
• Predict the function f(x) for new examples x
  – Discrete f(x): Classification
  – Continuous f(x): Regression
  – f(x) = Probability(x): Probability estimation
(slide by Thorsten Joachims)

8. (image-only slide)

9. Image Classification: a core task in Computer Vision (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  10. The problem: semantic gap (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  11. Challenges: Viewpoint Variation (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  12. Challenges: Illumination (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  13. Challenges: Deformation (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  14. Challenges: Occlusion (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  15. Challenges: Background clutter (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  16. Challenges: Intraclass variation (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

17. An image classifier. Unlike, e.g., sorting a list of numbers, there is no obvious way to hard-code an algorithm for recognizing a cat or other classes. (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

18. Attempts have been made (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

19. Data-driven approach:
1. Collect a dataset of images and labels
2. Use Machine Learning to train an image classifier
3. Evaluate the classifier on a withheld set of test images
(slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

20. First classifier: Nearest Neighbor Classifier
• Remember all training images and their labels
• Predict the label of the most similar training image
(slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

21. (image-only slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

  22. (image-only slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

23. How do we compare the images? What is the distance metric? (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)
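(As the following slides make explicit, the metric used here is the L1, or Manhattan, distance: d1(I1, I2) = Σ_p |I1^p − I2^p|, the sum of absolute pixel-wise differences between the two images.)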

24. Nearest Neighbor classifier (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

25. Nearest Neighbor classifier: remember the training data

26. Nearest Neighbor classifier: for every test image,
  – find the nearest training image with the L1 distance
  – predict the label of that nearest training image
(slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)
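The deck steps through Python along these lines; here is a minimal runnable sketch of the classifier the slides describe, assuming numpy arrays with one flattened image per row:

```python
import numpy as np

class NearestNeighbor:
    """1-NN classifier with the L1 (Manhattan) distance."""

    def train(self, X, y):
        # Nearest neighbor "training" just memorizes all the data.
        # X: N x D array (one flattened image per row); y: N labels.
        self.Xtr = X
        self.ytr = y

    def predict(self, X):
        # Predict, for each test row, the label of the closest training row.
        num_test = X.shape[0]
        Ypred = np.zeros(num_test, dtype=self.ytr.dtype)
        for i in range(num_test):
            # L1 distance from test image i to every training image
            distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
            Ypred[i] = self.ytr[np.argmin(distances)]
        return Ypred
```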

27. Nearest Neighbor classifier. Q: how does the classification speed depend on the size of the training data? (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

28. Nearest Neighbor classifier. Q: how does the classification speed depend on the size of the training data? A: linearly :( (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)
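A hypothetical usage sketch, reusing the NearestNeighbor class above, that ties back to the pipeline of slide 19: train on one set, evaluate accuracy on a withheld test set. Each prediction scans all N training rows, which is the linear dependence the slide points out:

```python
import numpy as np

# Hypothetical toy data standing in for flattened 32x32x3 images.
Xtr = np.random.rand(5000, 3072)
ytr = np.random.randint(0, 10, size=5000)
Xte = np.random.rand(50, 3072)
yte = np.random.randint(0, 10, size=50)

nn = NearestNeighbor()
nn.train(Xtr, ytr)                  # instant: just memorizes the data
Ypred = nn.predict(Xte)             # slow: one full scan of Xtr per test image
print("accuracy:", np.mean(Ypred == yte))
```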

29. Aside: Approximate Nearest Neighbor: find approximate nearest neighbors quickly (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)
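One standard way to get fast approximate queries (a sketch of the general idea, not necessarily the library the deck shows) is a space-partitioning tree. SciPy's cKDTree supports approximate search through its eps parameter:

```python
import numpy as np
from scipy.spatial import cKDTree

# Build a k-d tree over the training vectors once. This pays off for
# low-dimensional data; in very high dimensions the tree degrades
# toward a linear scan.
Xtr = np.random.rand(100_000, 16)
tree = cKDTree(Xtr)

# With eps > 0 the query may return an approximate neighbor, guaranteed
# to be within a factor (1 + eps) of the true nearest distance.
dist, idx = tree.query(np.random.rand(16), k=1, eps=0.5)
```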

30. (image-only slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

31. k-Nearest Neighbor: find the k nearest images, have them vote on the label (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

32. K-Nearest Neighbor (kNN)
• Given: Training data {(x1, y1), …, (xn, yn)}
  – Attribute vectors: xi ∈ X
  – Labels: yi ∈ Y
• Parameter:
  – Similarity function: K : X × X → ℝ
  – Number of nearest neighbors to consider: k
• Prediction rule
  – New example x′
  – K-nearest neighbors: the k training examples with the largest K(xi, x′)
(slide by Thorsten Joachims)
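A minimal sketch of this prediction rule (the choice of negative L1 distance as the similarity function K is an assumption; the slide leaves K abstract):

```python
import numpy as np
from collections import Counter

def knn_predict(Xtr, ytr, x_new, k=4):
    # Similarity K(x_i, x') = negative L1 distance (assumed choice).
    sims = -np.sum(np.abs(Xtr - x_new), axis=1)
    # Indices of the k training examples with the largest similarity.
    nearest = np.argsort(sims)[-k:]
    # Majority vote over the neighbors' labels.
    return Counter(ytr[i] for i in nearest).most_common(1)[0][0]
```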

33. 1-Nearest Neighbor (slide by Thorsten Joachims)

34. 4-Nearest Neighbors (slide by Thorsten Joachims)

35. 4-Nearest Neighbors (Sign) (slide by Thorsten Joachims)

36. (image-only slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

37. We will talk about this later! (slide by Fei-Fei Li & Andrej Karpathy & Justin Johnson)

38. If we get more data
• 1-Nearest Neighbor
  – Converges to the perfect solution if there is clear separation
  – Reaches twice the minimal error rate, 2p(1−p), for noisy problems
• k-Nearest Neighbor
  – Converges to the perfect solution if there is clear separation (but needs more data)
  – Converges to the minimal error min(p, 1−p) for noisy problems if k increases
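A quick way to see the 2p(1−p) figure: with enough data, the nearest neighbor sits essentially on top of the query point, so the query's observed label and its neighbor's stored label are two independent draws with noise rate p. The 1-NN prediction is wrong exactly when one of the two is flipped and the other is not, which happens with probability p(1−p) + (1−p)p = 2p(1−p). This is the classic Cover and Hart asymptotic argument; since min(p, 1−p) is the best any classifier can achieve, 1-NN is asymptotically at most a factor of two worse than optimal.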

39. Demo

40. Weighted K-Nearest Neighbor
• Given: Training data {(x1, y1), …, (xn, yn)}
  – Attribute vectors: xi ∈ X
  – Target attribute: yi ∈ Y
• Parameter:
  – Similarity function: K : X × X → ℝ
  – Number of nearest neighbors to consider: k
• Prediction rule
  – New example x′
  – K-nearest neighbors: the k training examples with the largest K(xi, x′)

41. More Nearest Neighbors in Visual Data

42. Where in the World? [Hays & Efros, CVPR 2008]: a nearest neighbor recognition example (slide by James Hays)

43. Where in the World? [Hays & Efros, CVPR 2008] (slide by James Hays)

  44. Where in the World? [Hays & Efros, CVPR 2008] (slide by James Hays)

45. 6+ million geotagged photos by 109,788 photographers, annotated by Flickr users (slide by James Hays)

  46. 6+ million geotagged photos by 109,788 photographers, annotated by Flickr users (slide by James Hays)

47. (image-only slide by James Hays)

  48. Scene Matches (slide by James Hays)

  49. (image-only slide by James Hays)

  50. Scene Matches (slide by James Hays)

  51. (image-only slide by James Hays)

  52. Scene Matches (slide by James Hays)

  53. (image-only slide by James Hays)

54. The Importance of Data (slide by James Hays)

55. Scene Completion [Hays & Efros, SIGGRAPH 2007] (slide by James Hays)

56. … 200 total (Hays and Efros, SIGGRAPH 2007; slide by James Hays)

57. Context Matching (Hays and Efros, SIGGRAPH 2007; slide by James Hays)

58. Graph cut + Poisson blending (Hays and Efros, SIGGRAPH 2007; slide by James Hays)

59. (image-only slide; Hays and Efros, SIGGRAPH 2007; slide by James Hays)

  60. (image-only slide; Hays and Efros, SIGGRAPH 2007; slide by James Hays)
