Kanade-Lucas-Tomasi (KLT) Tracker


  1. Kanade-Lucas-Tomasi (KLT) Tracker 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University

  2. https://www.youtube.com/watch?v=rwIjkECpY0M

  3. Feature-based tracking. Up to now, we've been aligning entire images, but we can also track small image regions. How should we select features? How should we track them from frame to frame?

  4. History of the Kanade-Lucas-Tomasi (KLT) Tracker: Lucas & Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," 1981; Tomasi & Kanade, "Detection and Tracking of Feature Points," 1991 (the original KLT algorithm); Shi & Tomasi, "Good Features to Track," 1994.

  5. Kanade-Lucas-Tomasi. How should we select features? Tomasi-Kanade: a method for choosing the best features (image patches) for tracking. How should we track them from frame to frame? Lucas-Kanade: a method for aligning (tracking) an image patch.

  6. What are good features for tracking?

  7. What are good features for tracking? Intuitively, we want to avoid smooth regions and edges. But is there a more principled way to define good features?

  8. What are good features for tracking? Can be derived from the tracking algorithm

  9. What are good features for tracking? Can be derived from the tracking algorithm ‘A feature is good if it can be tracked well’

  10. Recall the Lucas-Kanade image alignment method. Error function (SSD): $\sum_x \left[ I(W(x; p)) - T(x) \right]^2$. Incremental update: $\sum_x \left[ I(W(x; p + \Delta p)) - T(x) \right]^2$.

  11. (cont.) Linearize: $\sum_x \left[ I(W(x; p)) + \nabla I \, \frac{\partial W}{\partial p} \Delta p - T(x) \right]^2$.

  12. (cont.) Gradient update: $\Delta p = H^{-1} \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^\top \left[ T(x) - I(W(x; p)) \right]$, where $H = \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^\top \left[ \nabla I \, \frac{\partial W}{\partial p} \right]$.

  13. (cont.) Update: $p \leftarrow p + \Delta p$.
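To make slides 10-13 concrete, here is a minimal NumPy sketch of one Lucas-Kanade iteration for the pure-translation warp used later on slide 18 (so $\partial W / \partial p$ is the 2x2 identity and the steepest-descent images are just $[I_x\ I_y]$). The function name and the use of `scipy.ndimage.map_coordinates` for bilinear sampling are my choices, not from the slides.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # bilinear sampling at warped coords

def lk_translation_step(I, T, p, ys, xs):
    """One Lucas-Kanade update for a translation warp W(x; p) = x + p.

    I, T   : float images (current frame, template frame)
    p      : current translation estimate, shape (2,) as (dx, dy)
    ys, xs : pixel coordinates of the template patch (1-D arrays)
    Returns the updated parameters p + dp.
    """
    # Sample I(W(x; p)) and its gradients at the warped patch coordinates.
    wy, wx = ys + p[1], xs + p[0]
    Iw = map_coordinates(I, [wy, wx], order=1)
    Iy, Ix = np.gradient(I)                        # axis 0 is y, axis 1 is x
    Ixw = map_coordinates(Ix, [wy, wx], order=1)
    Iyw = map_coordinates(Iy, [wy, wx], order=1)

    # Error image: T(x) - I(W(x; p)).
    err = map_coordinates(T, [ys, xs], order=1) - Iw

    # Steepest-descent images: grad(I) * dW/dp = [Ix, Iy] for translation.
    J = np.stack([Ixw, Iyw], axis=1)               # (N, 2)
    H = J.T @ J                                    # 2x2 Hessian approximation
    dp = np.linalg.solve(H, J.T @ err)             # dp = H^{-1} sum J^T err
    return p + dp
```

In practice this step is iterated until $\Delta p$ is small, exactly the $p \leftarrow p + \Delta p$ loop of slide 13.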

  14. Stability of the gradient descent iterations depends on ... $\Delta p = H^{-1} \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^\top \left[ T(x) - I(W(x; p)) \right]$

  15. (cont.) ... inverting the Hessian $H = \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^\top \left[ \nabla I \, \frac{\partial W}{\partial p} \right]$. When does the inversion fail?

  16. (cont.) The inversion fails when $H$ is singular. But what does that mean?

  17. Above the noise level: $\lambda_1 \gg 0$ and $\lambda_2 \gg 0$, i.e. both eigenvalues are large. Well-conditioned: both eigenvalues have similar magnitude.

  18. Concrete example: consider the translation model $W(x; p) = \begin{pmatrix} x + p_1 \\ y + p_2 \end{pmatrix}$, so $\frac{\partial W}{\partial p} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. The Hessian becomes $H = \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^\top \left[ \nabla I \, \frac{\partial W}{\partial p} \right] = \sum_x \begin{pmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{pmatrix}$. How are the eigenvalues related to image content?
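For this translation model the Hessian is just the 2x2 structure tensor of image gradients summed over the patch, so its eigenvalues are easy to compute directly. A small sketch (the helper name is mine, not from the slides):

```python
import numpy as np

def structure_tensor_eigenvalues(patch):
    """Compute H = sum_x [[Ix*Ix, Ix*Iy], [Ix*Iy, Iy*Iy]] over a patch
    and return its eigenvalues as (lambda1, lambda2) with lambda1 >= lambda2."""
    Iy, Ix = np.gradient(patch.astype(float))
    H = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    lam = np.linalg.eigvalsh(H)        # ascending order for symmetric H
    return lam[1], lam[0]
```

If either eigenvalue is near zero, $H$ is (close to) singular and the update $\Delta p = H^{-1}(\cdot)$ from slide 14 is unstable.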

  19. Interpreting eigenvalues. [Diagram of the $(\lambda_1, \lambda_2)$ plane with regions $\lambda_2 \gg \lambda_1$, $\lambda_1 \gg \lambda_2$, $\lambda_1 \sim 0$, and $\lambda_2 \sim 0$.] What kind of image patch does each region represent?

  20. Interpreting eigenvalues. [Diagram of the $(\lambda_1, \lambda_2)$ plane:] flat ($\lambda_1$ and $\lambda_2$ both small), vertical edge ($\lambda_1 \gg \lambda_2$), horizontal edge ($\lambda_2 \gg \lambda_1$), corner ($\lambda_1 \sim \lambda_2$, both large).

  21. (Same diagram as slide 20.)
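As a rough illustration of the diagram, eigenvalues can be mapped back to patch types. This is a sketch only; the noise-level threshold `tau` is an arbitrary placeholder, not a value from the slides.

```python
def classify_patch(lam1, lam2, tau=1e3):
    """Map structure-tensor eigenvalues (lam1 >= lam2) to the diagram regions.

    tau is an illustrative noise-level threshold; distinguishing horizontal
    from vertical edges would additionally need the eigenvectors."""
    if lam2 > tau:
        return "corner"          # lambda1 ~ lambda2, both large
    if lam1 > tau:
        return "edge"            # one eigenvalue dominates the other
    return "flat"                # both below the noise level
```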

  22. What are good features for tracking?

  23. What are good features for tracking? $\min(\lambda_1, \lambda_2) > \lambda$
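A hypothetical selection loop applying this criterion to non-overlapping windows; the function name, window size, and threshold are illustrative choices, not part of the lecture.

```python
import numpy as np

def select_good_features(image, lam_thresh, win=15):
    """Keep window centers whose smaller structure-tensor eigenvalue
    exceeds lam_thresh, i.e. the min(lambda1, lambda2) > lambda test."""
    good = []
    rows, cols = image.shape
    Iy, Ix = np.gradient(image.astype(float))
    for r in range(0, rows - win, win):
        for c in range(0, cols - win, win):
            ix = Ix[r:r + win, c:c + win]
            iy = Iy[r:r + win, c:c + win]
            H = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                          [np.sum(ix * iy), np.sum(iy * iy)]])
            if np.linalg.eigvalsh(H)[0] > lam_thresh:  # smaller eigenvalue
                good.append((r + win // 2, c + win // 2))
    return good
```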

  24. KLT algorithm:
      1. Find corners satisfying $\min(\lambda_1, \lambda_2) > \lambda$.
      2. For each corner, compute the displacement to the next frame using the Lucas-Kanade method.
      3. Store the displacement of each corner and update the corner position.
      4. (optional) Add more corner points every M frames using step 1.
      5. Repeat steps 2 to 3 (and 4).
      6. Return long trajectories for each corner point.
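This pipeline maps directly onto OpenCV: `cv2.goodFeaturesToTrack` implements the min-eigenvalue (Shi-Tomasi) corner test of step 1, and `cv2.calcOpticalFlowPyrLK` is a pyramidal Lucas-Kanade tracker for steps 2-3. A minimal sketch; the file name and parameter values are placeholders, and error handling plus the optional re-detection of step 4 are omitted.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")          # placeholder input file
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 1: find corners with min(lambda1, lambda2) > lambda (Shi-Tomasi).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)
tracks = [[p] for p in pts.reshape(-1, 2)]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Steps 2-3: Lucas-Kanade displacement to the next frame, update positions.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    keep = status.ravel() == 1
    tracks = [t + [p] for t, p, s in
              zip(tracks, new_pts.reshape(-1, 2), keep) if s]
    pts = new_pts[keep].reshape(-1, 1, 2)
    prev_gray = gray

# Step 6: `tracks` now holds one long trajectory per surviving corner.
```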
