Efficient & Robust LK for Mobile Vision
Instructor: Simon Lucey
16-623 - Designing Computer Vision Apps


  1. Efficient & Robust LK for Mobile Vision Instructor - Simon Lucey 16-623 - Designing Computer Vision Apps

  2. Direct Method (ours) vs. Indirect Method (ORB+RANSAC). H. Alismail, B. Browning, S. Lucey, “Bit-Planes: Dense Subpixel Alignment of Binary Descriptors.” ACCV 2016.

  5. Today • Review - LK Algorithm • Inverse Composition • Robust Extensions

  6. Review - LK Algorithm • Lucas & Kanade (1981) realized this and proposed a method for estimating warp displacement using the principles of gradients and spatial coherence. • The technique applies a Taylor series approximation to any spatially coherent area governed by the warp $\mathcal{W}(\mathbf{x};\mathbf{p})$: $$I(\mathbf{p} + \Delta\mathbf{p}) \approx I(\mathbf{p}) + \frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}} \Delta\mathbf{p}$$ “We consider this image to always be static....”

  10. Review - LK Algorithm • By the chain rule, the Jacobian decomposes into a block-diagonal matrix of image gradients times the stacked warp Jacobians: $$\frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}} = \begin{bmatrix} \frac{\partial I(\mathbf{x}'_{1})}{\partial \mathbf{x}'^{T}_{1}} & \cdots & \mathbf{0}^{T} \\ \vdots & \ddots & \vdots \\ \mathbf{0}^{T} & \cdots & \frac{\partial I(\mathbf{x}'_{N})}{\partial \mathbf{x}'^{T}_{N}} \end{bmatrix} \begin{bmatrix} \frac{\partial \mathcal{W}(\mathbf{x}_{1};\mathbf{p})}{\partial \mathbf{p}^{T}} \\ \vdots \\ \frac{\partial \mathcal{W}(\mathbf{x}_{N};\mathbf{p})}{\partial \mathbf{p}^{T}} \end{bmatrix}, \quad \mathbf{x}' = \mathcal{W}(\mathbf{x};\mathbf{p})$$
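The first-order Taylor approximation on slide 6 can be checked numerically in one dimension. This is an illustrative sketch, not from the slides: the signal, the `warp` helper, and the step sizes are all assumptions, with the warp reduced to a pure translation.

```python
import numpy as np

# First-order Taylor check for a 1-D translation: I(x + p + dp) should be
# well approximated by I(x + p) + dI(p)/dp * dp when dp is small.
x = np.arange(200, dtype=float)
signal = np.exp(-0.5 * ((x - 100.0) / 8.0) ** 2)   # a smooth test bump

def warp(p):
    # Resample the signal at x + p (linear interpolation).
    return np.interp(x + p, x, signal)

p, dp = 2.0, 0.25
exact = warp(p + dp)                               # I(p + dp)
linear = warp(p) + np.gradient(warp(p)) * dp       # I(p) + dI(p)/dp * dp
err = np.max(np.abs(exact - linear))               # small for smooth I, small dp
```

The error shrinks quadratically in `dp`, which is exactly why LK needs the initial estimate to be close: the linearization is only trusted locally.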

  11. Reminder - Template Coordinates: $\mathcal{W}(\mathbf{x}_1;\mathbf{p})$ maps into the “Source Image” $I$; $\mathcal{W}(\mathbf{x}_1;\mathbf{0})$ stays in the “Template” $T$.

  12. Reminder - Template Coordinates: equivalently, $\mathbf{x}'_1 = \mathcal{W}(\mathbf{x}_1;\mathbf{p})$ lies in the “Source Image” $I$, while $\mathbf{x}_1$ lies in the “Template” $T$.

  13. Review - LK Algorithm: the block-diagonal factor of the Jacobian is built from the warped image gradients $\nabla_x I$ and $\nabla_y I$: $$\frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}} = \begin{bmatrix} \frac{\partial I(\mathbf{x}'_{1})}{\partial \mathbf{x}'^{T}_{1}} & \cdots & \mathbf{0}^{T} \\ \vdots & \ddots & \vdots \\ \mathbf{0}^{T} & \cdots & \frac{\partial I(\mathbf{x}'_{N})}{\partial \mathbf{x}'^{T}_{N}} \end{bmatrix} \begin{bmatrix} \frac{\partial \mathcal{W}(\mathbf{x}_{1};\mathbf{p})}{\partial \mathbf{p}^{T}} \\ \vdots \\ \frac{\partial \mathcal{W}(\mathbf{x}_{N};\mathbf{p})}{\partial \mathbf{p}^{T}} \end{bmatrix}$$

  14. Which Way? Option A: warp first. Step 1: warp the image, giving $I(\mathbf{p})$. Step 2: estimate gradients of the warped image: “Horizontal” $\nabla_x I(\mathbf{p})$ and “Vertical” $\nabla_y I(\mathbf{p})$. Note that this yields $\frac{\partial I(\mathbf{x}')}{\partial \mathbf{x}}$, a gradient taken in the template's coordinate frame.

  17. Which Way? Option B: differentiate first. Step 1: estimate gradients on the original image: “Horizontal” $\nabla_x I$ and “Vertical” $\nabla_y I$. Step 2: warp the gradients, giving $\nabla_x I(\mathbf{p})$ and $\nabla_y I(\mathbf{p})$, i.e. $\frac{\partial I(\mathbf{x}')}{\partial \mathbf{x}'}$, which is exactly the quantity the LK Jacobian requires.

  19. Review - LK Algorithm: $$\frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}} = \begin{bmatrix} \frac{\partial I(\mathbf{x}'_{1})}{\partial \mathbf{x}'^{T}_{1}} & \cdots & \mathbf{0}^{T} \\ \vdots & \ddots & \vdots \\ \mathbf{0}^{T} & \cdots & \frac{\partial I(\mathbf{x}'_{N})}{\partial \mathbf{x}'^{T}_{N}} \end{bmatrix} \begin{bmatrix} \frac{\partial \mathcal{W}(\mathbf{x}_{1};\mathbf{p})}{\partial \mathbf{p}^{T}} \\ \vdots \\ \frac{\partial \mathcal{W}(\mathbf{x}_{N};\mathbf{p})}{\partial \mathbf{p}^{T}} \end{bmatrix}$$

  20. Deriving $\frac{\partial \mathcal{W}(\mathbf{x};\mathbf{p})}{\partial \mathbf{p}^{T}}$ • For an affine warp, $$\mathcal{W}(\mathbf{x};\mathbf{p}) = \begin{bmatrix} 1 + p_1 & p_2 & p_3 \\ p_4 & 1 + p_5 & p_6 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad \frac{\partial \mathcal{W}(\mathbf{x};\mathbf{p})}{\partial \mathbf{p}^{T}} = \begin{bmatrix} x & y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & y & 1 \end{bmatrix}$$
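The affine Jacobian above can be verified with finite differences. This is a minimal sketch; the row-major parameter ordering `p1..p6` is one common choice (the slide's ordering may differ), and the function names are illustrative.

```python
import numpy as np

def warp_affine(xy, p):
    """Affine warp W(x; p) = [[1+p1, p2, p3], [p4, 1+p5, p6]] [x, y, 1]^T."""
    x, y = xy
    p1, p2, p3, p4, p5, p6 = p
    return np.array([(1 + p1) * x + p2 * y + p3,
                     p4 * x + (1 + p5) * y + p6])

def warp_jacobian(xy):
    # Analytic dW/dp^T for the parameterization above (2 x 6).
    # Note it depends only on (x, y), not on p: the warp is linear in p.
    x, y = xy
    return np.array([[x, y, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, x, y, 1.0]])

# Central-difference check of the analytic Jacobian at a random point.
rng = np.random.default_rng(0)
xy = rng.normal(size=2)
p = rng.normal(scale=0.1, size=6)
eps = 1e-6
J_fd = np.stack(
    [(warp_affine(xy, p + eps * e) - warp_affine(xy, p - eps * e)) / (2 * eps)
     for e in np.eye(6)], axis=1)
```

Because the affine warp is linear in `p`, the finite-difference Jacobian matches the analytic one up to floating-point rounding.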

  21. Deriving $\frac{\partial \mathcal{W}(\mathbf{x};\mathbf{p})}{\partial \mathbf{p}^{T}}$ • For a homography warp, $$\mathcal{W}(\mathbf{x};\mathbf{p}) = \frac{1}{p_7 x + p_8 y + p_9} \begin{bmatrix} 1 + p_1 & p_2 & p_3 \\ p_4 & 1 + p_5 & p_6 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$ The Jacobian follows by applying the quotient rule to each component.

  22. Review - LK Algorithm • Lucas & Kanade (1981) realized this and proposed a method for estimating warp displacement using the principles of gradients and spatial coherence. • The technique applies a Taylor series approximation to any spatially coherent area governed by the warp $\mathcal{W}(\mathbf{x};\mathbf{p})$: $$\underbrace{I(\mathbf{p} + \Delta\mathbf{p})}_{N \times 1} \approx \underbrace{I(\mathbf{p})}_{N \times 1} + \underbrace{\frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}}}_{N \times P} \Delta\mathbf{p}$$ where $N$ = number of pixels and $P$ = number of warp parameters.

  23. Review - LK Algorithm • We often refer to $J_I = \frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}}$ as the “Jacobian” matrix. • We also refer to $H_I = J_I^{T} J_I$ as the “pseudo-Hessian”. • Finally, we can refer to $T(\mathbf{0}) = I(\mathbf{p} + \Delta\mathbf{p})$ as the “template”.

  24. LK Algorithm • The actual algorithm is just the repeated application of the following steps. Step 1: $\Delta\mathbf{p} = H_I^{-1} J_I^{T} [T(\mathbf{0}) - I(\mathbf{p})]$. Step 2: $\mathbf{p} \leftarrow \mathbf{p} + \Delta\mathbf{p}$. Keep applying the steps until $\Delta\mathbf{p}$ converges.
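The two steps above can be sketched in one dimension. This is a minimal illustration, assuming a pure-translation warp; the function name and the test signal are not from the slides.

```python
import numpy as np

def lk_translation_1d(template, image, n_iters=50, tol=1e-8):
    """Steps from slide 24 for a 1-D translation warp:
    Step 1: dp = H^{-1} J^T [T(0) - I(p)];  Step 2: p <- p + dp."""
    x = np.arange(len(template), dtype=float)
    p = 0.0
    for _ in range(n_iters):
        warped = np.interp(x + p, x, image)          # I(p): image resampled at x + p
        J = np.gradient(warped)                      # Jacobian dI(p)/dp (one column)
        dp = (J @ (template - warped)) / (J @ J)     # H = J^T J is a scalar here
        p += dp
        if abs(dp) < tol:                            # stop once dp converges
            break
    return p

# Recover a known sub-pixel translation of a smooth bump.
x = np.arange(100, dtype=float)
template = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)
image = np.exp(-0.5 * ((x - 0.7 - 50.0) / 5.0) ** 2)  # template shifted by 0.7
p_est = lk_translation_1d(template, image)
```

For a single translation parameter the pseudo-Hessian collapses to the scalar $J^T J$, so the update is just a projection of the residual onto the gradient.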

  25. Examples of LK Alignment

  27. Gauss-Newton Algorithm • LK is essentially an application of the Gauss-Newton algorithm: $$\arg\min_{\mathbf{x}} \; ||\mathbf{y} - F(\mathbf{x})||_2^2 \quad \text{s.t.} \quad F : \mathbb{R}^N \to \mathbb{R}^M$$ Step 1: $\arg\min_{\Delta\mathbf{x}} \; ||\mathbf{y} - F(\mathbf{x}) - \frac{\partial F(\mathbf{x})}{\partial \mathbf{x}^{T}} \Delta\mathbf{x}||_2^2$. Step 2: $\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}$. Keep applying the steps until $\Delta\mathbf{x}$ converges.

  28. Optimization Interpretation • The optimization employed within the LK algorithm can be interpreted as Gauss-Newton optimization. • Other non-linear least-squares optimization strategies have been investigated (Baker et al. 2003): • Levenberg-Marquardt • Newton • Steepest-Descent • In empirical evaluations, Gauss-Newton has appeared to be the most robust (Baker et al.).

  29. LK Event Horizon • The initial warp estimate has to be suitably close to the ground-truth for gradient-search methods to work. • It is kind of like a black hole's event horizon: • you have to be inside it to get pulled in!

  30. Expanding the Event Horizon • The best strategy is to expand the neighborhood $\mathcal{N}$ across which gradients are estimated: $\frac{\partial I(\mathbf{x})}{\partial \mathbf{x}} \approx I(\mathbf{x} + \Delta\mathbf{x}) - I(\mathbf{x}), \; \mathbf{x} \in \mathcal{N}$. • The simplest way to do this in practice is with a blur. • We often apply what is known as “coarse-to-fine” alignment.
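Coarse-to-fine alignment can be sketched in one dimension: blur and decimate both signals into a pyramid, estimate the shift at the coarsest level, then double and refine the estimate at each finer level. All names, the 3-tap blur kernel, and the test signal are illustrative assumptions, with the warp again reduced to a pure translation.

```python
import numpy as np

def lk_shift(template, image, p0=0.0, n_iters=30):
    # Single-level LK for a 1-D translation: image(x + p) ~= template(x).
    x = np.arange(len(template), dtype=float)
    p = p0
    for _ in range(n_iters):
        warped = np.interp(x + p, x, image)
        J = np.gradient(warped)
        p += (J @ (template - warped)) / (J @ J)
    return p

def coarse_to_fine_shift(template, image, n_levels=3):
    """Estimate a large translation by aligning blurred, decimated copies
    first, then refining the estimate at progressively finer levels."""
    kernel = np.array([0.25, 0.5, 0.25])      # cheap blur before decimation
    pyr_t, pyr_i = [template], [image]
    for _ in range(n_levels - 1):
        pyr_t.append(np.convolve(pyr_t[-1], kernel, mode="same")[::2])
        pyr_i.append(np.convolve(pyr_i[-1], kernel, mode="same")[::2])
    p = 0.0
    for level in reversed(range(n_levels)):   # coarsest -> finest
        p = lk_shift(pyr_t[level], pyr_i[level], p0=p)
        if level > 0:
            p *= 2.0   # a shift at this level is twice as big one level finer
    return p

# A bump translated by 6 pixels, comparable to its width: the coarse levels
# supply the initialization that the finest level then refines.
x = np.arange(256, dtype=float)
template = np.exp(-0.5 * ((x - 128.0) / 6.0) ** 2)
image = np.exp(-0.5 * ((x - 6.0 - 128.0) / 6.0) ** 2)
p_est = coarse_to_fine_shift(template, image)
```

Decimation shrinks the displacement in pixel units, which is exactly the “expanded event horizon” the slide describes: at the coarsest level the shift falls well inside the linearization's basin of convergence.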

  31. Questions • Why is LK sensitive to the initial guess? • Why can’t you perform the optimization in a single shot?

  32. Today • Review - LK Algorithm • Inverse Composition • Robust Extensions

  33. Efficient search is essential on mobile and desktop!

  34. Computation Concerns • Unfortunately, the LK algorithm can be computationally expensive. • It requires re-computation of the Jacobian matrix at each iteration: $$J_I = \begin{bmatrix} \frac{\partial I(\mathbf{x}'_{1})}{\partial \mathbf{x}'^{T}_{1}} & \cdots & \mathbf{0}^{T} \\ \vdots & \ddots & \vdots \\ \mathbf{0}^{T} & \cdots & \frac{\partial I(\mathbf{x}'_{N})}{\partial \mathbf{x}'^{T}_{N}} \end{bmatrix} \begin{bmatrix} \frac{\partial \mathcal{W}(\mathbf{x}_{1};\mathbf{p})}{\partial \mathbf{p}^{T}} \\ \vdots \\ \frac{\partial \mathcal{W}(\mathbf{x}_{N};\mathbf{p})}{\partial \mathbf{p}^{T}} \end{bmatrix}$$ • With the additional inversion of the pseudo-Hessian $H_I = J_I^{T} J_I$ at each iteration. “Is there any way we can pre-compute any of this?”

  35. Linearizing the Template? $$I(\mathbf{p} + \Delta\mathbf{p}) \approx I(\mathbf{p}) + \frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}} \Delta\mathbf{p}$$

  36. Linearizing the Template? Writing the left-hand side as the “template”: $$T(\mathbf{0}) \approx I(\mathbf{p}) + \frac{\partial I(\mathbf{p})}{\partial \mathbf{p}^{T}} \Delta\mathbf{p}$$

  37. Linearizing the Template? Linearizing around the template instead: $$T(\mathbf{0}) + \frac{\partial T(\mathbf{0})}{\partial \mathbf{p}^{T}} \Delta\mathbf{p} \approx I(\mathbf{p})$$ “Why is this useful if the template must be static?”

  38. Simple Example $$\arg\min_{\Delta\mathbf{p}} \sum_{n=1}^{N} \left\| \mathcal{W}(\mathbf{x}_n;\mathbf{p}) + \frac{\partial \mathcal{W}(\mathbf{x}_n;\mathbf{p})}{\partial \mathbf{p}^{T}} \Delta\mathbf{p} - \mathcal{W}(\mathbf{x}_n;\mathbf{0}) \right\|_2^2$$ where $\mathbf{x}' = \mathcal{W}(\mathbf{x};\mathbf{p})$ and $\mathbf{x} = \mathcal{W}(\mathbf{x};\mathbf{0})$. [Figure: points $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ map to $\mathbf{x}'_1, \mathbf{x}'_2, \mathbf{x}'_3$; the increment is applied on the $\mathbf{p} + \Delta\mathbf{p}$ side.]

  40. Simple Example $$\arg\min_{\Delta\mathbf{p}^*} \sum_{n=1}^{N} \left\| \mathcal{W}(\mathbf{x}_n;\mathbf{p}) - \mathcal{W}(\mathbf{x}_n;\mathbf{0}) - \frac{\partial \mathcal{W}(\mathbf{x}_n;\mathbf{0})}{\partial \mathbf{p}^{T}} \Delta\mathbf{p}^* \right\|_2^2$$ where $\mathbf{x}' = \mathcal{W}(\mathbf{x};\mathbf{p})$ and $\mathbf{x} = \mathcal{W}(\mathbf{x};\mathbf{0})$. [Figure: the increment $\mathbf{0} + \Delta\mathbf{p}^*$ is now applied on the “static” template side.]
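Moving the linearization onto the static template, as in the objective above, is what makes precomputation possible: the Jacobian and pseudo-Hessian no longer change across iterations. A minimal 1-D sketch of this inverse-compositional idea, assuming a pure-translation warp (for which composing with the inverted increment is just a subtraction); all names are illustrative.

```python
import numpy as np

def ic_lk_shift(template, image, n_iters=50, tol=1e-8):
    """Inverse-compositional LK for a 1-D translation: the Jacobian and
    pseudo-Hessian depend only on the static template, so both are
    computed once, outside the iteration loop."""
    x = np.arange(len(template), dtype=float)
    J = np.gradient(template)       # precomputed: gradient of the template
    H = J @ J                       # precomputed: scalar pseudo-Hessian
    p = 0.0
    for _ in range(n_iters):
        warped = np.interp(x + p, x, image)       # I(p)
        dp = (J @ (warped - template)) / H        # template-side update dp*
        p -= dp   # compose with the inverted increment; for translation: p - dp
        if abs(dp) < tol:
            break
    return p

# Same sub-pixel recovery as the forward algorithm, now with a fixed Jacobian.
x = np.arange(100, dtype=float)
template = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)
image = np.exp(-0.5 * ((x - 0.7 - 50.0) / 5.0) ** 2)  # template shifted by 0.7
p_est = ic_lk_shift(template, image)
```

Per iteration, only the warp and one matrix-vector product remain; the expensive Jacobian construction and Hessian inversion from slide 34 have moved outside the loop.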
