K-Nearest Neighbors
Jia-Bin Huang Virginia Tech
Spring 2019
ECE-5424G / CS-5824
Administrative
- Check out review materials: probability, linear algebra, Python and NumPy
- Start your HW 0
- On your local machine: install …
Tuesday 11:00 AM - 12:00 PM, Location: Whittemore Hall 457B
Thursday 11:00 AM - 12:00 PM, Location: Whittemore Hall 457B
All are welcome.
More info: https://github.com/vt-vl-lab/reading_group
Output type   Supervised Learning (labeled dataset)   Unsupervised Learning
Discrete      Classification                          Clustering
Continuous    Regression                              Dimensionality reduction
Slide credit: Dhruv Batra
Regression: the hypothesis maps the size of a house to an estimated price (a continuous output).
Classification: the hypothesis maps an unseen image to a predicted object class (a discrete output).
Image credit: CS231n @ Stanford
Training: feature extraction → learning
Testing: feature extraction → apply function, evaluate error
Slide credit: Dhruv Batra
Training data: (x^(1), y^(1)), (x^(2), y^(2)), …, (x^(N), y^(N))
Training: do nothing (just store the data).
Prediction (1-NN): h(x) = y^(i), where i = argmin_i D(x, x^(i))
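As a concrete sketch of the 1-NN rule (function and variable names are mine, and squared Euclidean distance is assumed for D):

```python
import numpy as np

def nn_predict(X_train, y_train, x):
    """1-NN: return the label of the training point closest to x.
    Training is just storing (X_train, y_train); all work happens at prediction time."""
    dists = np.sum((X_train - x) ** 2, axis=1)  # squared Euclidean distance to each row
    return y_train[np.argmin(dists)]            # label of the nearest neighbor

# Toy data: two points of class 0, one of class 1
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array([0, 0, 1])
nn_predict(X_train, y_train, np.array([4.0, 4.5]))  # nearest is [5, 5] -> class 1
```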
Image credit: MegaFace
https://www.youtube.com/watch?v=TKNNOMddkNc
http://record-player.glitch.me/auth
(C) Dhruv Batra
[Hays & Efros, SIGGRAPH 2007]
Hays and Efros, SIGGRAPH 2007
β¦ 200 total
Graph cut + Poisson blending
Slide credit: Dhruv Batra
Slide credit: Carlos Guestrin
Recall 1-NN: given training data (x^(1), y^(1)), …, (x^(N), y^(N)), training does nothing, and prediction is h(x) = y^(i), where i = argmin_i D(x, x^(i)). How should we choose the distance D?
Common choices of distance metric:
- Squared Euclidean: D(x, x′) = Σ_i (x_i - x′_i)²
- Maximum norm: D(x, x′) = max_i |x_i - x′_i|
- Scaled Euclidean: D(x, x′) = Σ_i σ_i² (x_i - x′_i)²
- Mahalanobis: D(x, x′) = (x - x′)^T A (x - x′)
- Histogram intersection: histint(x, x′) = 1 - Σ_i min(x_i, x′_i)
- Chi-squared: χ²(x, x′) = (1/2) Σ_i (x_i - x′_i)² / (x_i + x′_i)
- Earth mover's distance [Rubner et al. IJCV 2000]
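A few of these distance functions, sketched in NumPy (function names are mine; vectors are assumed equal length, and the histogram-based measures assume nonnegative, L1-normalized inputs):

```python
import numpy as np

def euclidean_sq(x, xp):
    """Squared Euclidean: sum_i (x_i - x'_i)^2"""
    return np.sum((x - xp) ** 2)

def max_norm(x, xp):
    """Maximum norm: max_i |x_i - x'_i|"""
    return np.max(np.abs(x - xp))

def mahalanobis(x, xp, A):
    """(x - x')^T A (x - x'), for a positive semidefinite matrix A"""
    d = x - xp
    return d @ A @ d

def hist_intersection(x, xp):
    """1 - sum_i min(x_i, x'_i), for L1-normalized histograms"""
    return 1.0 - np.sum(np.minimum(x, xp))

def chi_squared(x, xp):
    """(1/2) sum_i (x_i - x'_i)^2 / (x_i + x'_i)"""
    return 0.5 * np.sum((x - xp) ** 2 / (x + xp))

# Two toy histograms (nonnegative entries summing to 1)
x  = np.array([0.5, 0.3, 0.2])
xp = np.array([0.4, 0.4, 0.2])
```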
Large margin nearest neighbor (LMNN)
Slide credit: Carlos Guestrin
(Figure: kNN classification with k = 3 vs. k = 5)
Image credit: Wikipedia
Image credit: CS231n @ Stanford
Slide credit: Carlos Guestrin
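A minimal sketch of kNN classification with majority voting, showing how the prediction can change with k (names are my choices; squared Euclidean distance assumed):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """k-NN: majority vote among the k nearest training points."""
    dists = np.sum((X_train - x) ** 2, axis=1)   # squared Euclidean distances
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Three points of class 0, two of class 1; the query sits near the class-1 pair
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [3.0, 3.0], [3.0, 4.0]])
y_train = np.array([0, 0, 0, 1, 1])
query = np.array([2.5, 2.5])
knn_predict(X_train, y_train, query, k=3)  # -> 1 (two of the three neighbors are class 1)
knn_predict(X_train, y_train, query, k=5)  # -> 0 (class 0 wins the full vote)
```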
Weight nearby points strongly, far points weakly.
w(i) = exp(-d(x^(i), query)² / σ²)
Slide credit: Carlos Guestrin
(Figure: 1-NN regression on x-y data; the closest datapoint determines the prediction)
Figure credit: Carlos Guestrin
Figure credit: Andrew Moore
Weight: w(i) = exp(-d(x^(i), query)² / σ²)
Prediction (uses all the data): y = Σ_i w(i) y^(i) / Σ_i w(i)
(Our examples use a Gaussian kernel.)
Slide credit: Carlos Guestrin
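The kernel regression prediction above can be sketched with a Gaussian kernel as follows (function name is mine):

```python
import numpy as np

def kernel_regression(X_train, y_train, x, sigma):
    """Predict y = sum_i w(i) y^(i) / sum_i w(i), with Gaussian weights
    w(i) = exp(-d(x^(i), x)^2 / sigma^2). Every training point contributes."""
    d2 = np.sum((X_train - x) ** 2, axis=1)  # squared distances d(x^(i), x)^2
    w = np.exp(-d2 / sigma ** 2)             # Gaussian weights
    return np.sum(w * y_train) / np.sum(w)

X_train = np.array([[0.0], [2.0]])
y_train = np.array([0.0, 10.0])
kernel_regression(X_train, y_train, np.array([1.0]), sigma=1.0)  # -> 5.0 (equal weights)
```

A small σ makes the fit track nearby points closely; a large σ averages over almost the whole dataset.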
Slide credit: Ben Taskar
Kernel regression
Slide credit: Dhruv Batra
(Figure: kernel regression fits for different bandwidths σ)
Evaluate on the test set only a single time, at the very end.
Slide credit: CS231n @ Stanford
Tune hyperparameters by evaluating models on the validation set.
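One way to set up that protocol: a single shuffled train/validation/test split (the function name and fractions are my choices, not from the slides). Tune hyperparameters such as k and σ on the validation split, and evaluate on the test split once, at the very end.

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle once, then carve off test and validation subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
(X_tr, y_tr), (X_val, y_val), (X_te, y_te) = train_val_test_split(X, y)
```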
(Figure: estimated house price, $ in 1000s, vs. size in feet²)