Efficient & Robust LK for Mobile Vision
Instructor: Simon Lucey
16-623 - Designing Computer Vision Apps
Direct Method (ours) vs. Indirect Method (ORB+RANSAC)
H. Alismail, B. Browning, S. Lucey, "Bit-Planes: Dense Subpixel Alignment of Binary Descriptors," ACCV 2016.
Today
• Review - LK Algorithm
• Inverse Composition
• Robust Extensions
Review - LK Algorithm
• Lucas & Kanade (1981) realized this and proposed a method for estimating warp displacement using the principles of gradients and spatial coherence.
• The technique applies a Taylor series approximation to any spatially coherent area governed by the warp W(x; p):

I(p + \Delta p) \approx I(p) + \frac{\partial I(p)}{\partial p^T} \Delta p

"We consider this image to always be static...."

• The Jacobian factors into image gradients and warp derivatives:

\frac{\partial I(p)}{\partial p^T} =
\begin{bmatrix}
\frac{\partial I(x'_1)}{\partial x'^T_1} & \cdots & 0^T \\
\vdots & \ddots & \vdots \\
0^T & \cdots & \frac{\partial I(x'_N)}{\partial x'^T_N}
\end{bmatrix}
\begin{bmatrix}
\frac{\partial W(x_1; p)}{\partial p^T} \\
\vdots \\
\frac{\partial W(x_N; p)}{\partial p^T}
\end{bmatrix},
\quad x' = W(x; p)
Reminder - Template Coordinates
x'_1 = W(x_1; p) lies in I, the "Source Image"; x_1 = W(x_1; 0) lies in T, the "Template".
Review - LK Algorithm

\frac{\partial I(p)}{\partial p^T} =
\begin{bmatrix}
\frac{\partial I(x'_1)}{\partial x'^T_1} & \cdots & 0^T \\
\vdots & \ddots & \vdots \\
0^T & \cdots & \frac{\partial I(x'_N)}{\partial x'^T_N}
\end{bmatrix}
\begin{bmatrix}
\frac{\partial W(x_1; p)}{\partial p^T} \\
\vdots \\
\frac{\partial W(x_N; p)}{\partial p^T}
\end{bmatrix}

The left, block-diagonal factor holds the image gradients \nabla_x I and \nabla_y I evaluated at the warped coordinates.
Which Way?
Option 1: warp first.
Step 1: Warp the image to obtain I(p).
Step 2: Estimate gradients on the warped image: \nabla_x I(p) ("Horizontal") and \nabla_y I(p) ("Vertical").
Which Way?
Option 2: differentiate first.
Step 1: Estimate gradients of the original image: \nabla_x I ("Horizontal") and \nabla_y I ("Vertical").
Step 2: Warp the gradient images, giving \frac{\partial I(x')}{\partial x'^T} evaluated at x' = W(x; p).
Review - LK Algorithm

\frac{\partial I(p)}{\partial p^T} =
\begin{bmatrix}
\frac{\partial I(x'_1)}{\partial x'^T_1} & \cdots & 0^T \\
\vdots & \ddots & \vdots \\
0^T & \cdots & \frac{\partial I(x'_N)}{\partial x'^T_N}
\end{bmatrix}
\begin{bmatrix}
\frac{\partial W(x_1; p)}{\partial p^T} \\
\vdots \\
\frac{\partial W(x_N; p)}{\partial p^T}
\end{bmatrix}
Deriving \frac{\partial W(x; p)}{\partial p^T}
• For an affine warp,

W(x; p) =
\begin{bmatrix}
1 + p_1 & p_2 & p_3 \\
p_4 & 1 + p_5 & p_6
\end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

\frac{\partial W(x; p)}{\partial p^T} =
\begin{bmatrix}
x & y & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & x & y & 1
\end{bmatrix}
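A minimal numeric sketch of this warp Jacobian, assuming the parameter ordering p = (p1, ..., p6) with W(x; p) = [(1+p1)x + p2 y + p3, p4 x + (1+p5) y + p6]; the function name is an invention of this sketch, not course code.

```python
import numpy as np

def affine_warp_jacobian(x, y):
    """2x6 Jacobian dW/dp^T of the affine warp at pixel (x, y).
    Note it depends only on the pixel location, not on p."""
    return np.array([[x, y, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, x, y, 1.0]])

# At p = 0 the warp is the identity, so a small step dp moves
# the pixel by J @ dp.
J = affine_warp_jacobian(2.0, 3.0)
dp = np.array([0.0, 0.0, 0.5, 0.0, 0.0, -0.25])  # a pure-translation step
step = J @ dp                                    # equals [0.5, -0.25]
```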
Deriving \frac{\partial W(x; p)}{\partial p^T}
• For a homography warp,

W(x; p) = \frac{1}{p_7 x + p_8 y + p_9}
\begin{bmatrix}
1 + p_1 & p_2 & p_3 \\
p_4 & 1 + p_5 & p_6
\end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

(p_9 is typically fixed to 1 to remove the projective scale ambiguity). Writing (u, v) = W(x; p) and D = p_7 x + p_8 y + 1, the quotient rule gives

\frac{\partial W(x; p)}{\partial p^T} = \frac{1}{D}
\begin{bmatrix}
x & y & 1 & 0 & 0 & 0 & -ux & -uy \\
0 & 0 & 0 & x & y & 1 & -vx & -vy
\end{bmatrix}
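The homography Jacobian is easy to get wrong, so here is a sketch that checks an analytic 2x8 Jacobian against central finite differences. It assumes the 8-parameter form with p9 fixed to 1; all function names are hypothetical.

```python
import numpy as np

def homography_warp(x, p):
    """W(x;p) for the 8-parameter homography, p9 fixed to 1."""
    num = np.array([(1 + p[0]) * x[0] + p[1] * x[1] + p[2],
                    p[3] * x[0] + (1 + p[4]) * x[1] + p[5]])
    den = p[6] * x[0] + p[7] * x[1] + 1.0
    return num / den

def homography_jacobian(x, p):
    """Analytic 2x8 Jacobian dW/dp^T via the quotient rule."""
    u, v = homography_warp(x, p)
    den = p[6] * x[0] + p[7] * x[1] + 1.0
    return np.array([[x[0], x[1], 1, 0, 0, 0, -u * x[0], -u * x[1]],
                     [0, 0, 0, x[0], x[1], 1, -v * x[0], -v * x[1]]]) / den

# Sanity check against central finite differences.
p = np.array([0.05, -0.02, 0.1, 0.03, -0.04, 0.2, 0.01, -0.015])
x = np.array([0.4, -0.3])
eps = 1e-6
J_num = np.zeros((2, 8))
for i in range(8):
    d = np.zeros(8)
    d[i] = eps
    J_num[:, i] = (homography_warp(x, p + d) - homography_warp(x, p - d)) / (2 * eps)
J_ana = homography_jacobian(x, p)
```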
Review - LK Algorithm

I(p + \Delta p) \approx I(p) + \frac{\partial I(p)}{\partial p^T} \Delta p

where I(p + \Delta p) and I(p) are N \times 1 and \frac{\partial I(p)}{\partial p^T} is N \times P
(N = number of pixels, P = number of warp parameters).
Review - LK Algorithm
• We often refer to J_I = \frac{\partial I(p)}{\partial p^T} as the "Jacobian" matrix.
• We also refer to H_I = J_I^T J_I as the "pseudo-Hessian".
• Finally, we refer to T(0) = I(p + \Delta p) as the "template".
LK Algorithm
• The actual algorithm is just the application of the following steps:
Step 1: \Delta p = H_I^{-1} J_I^T [T(0) - I(p)]
Step 2: p \leftarrow p + \Delta p
Keep applying the steps until \Delta p converges.
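The two steps above can be sketched for the simplest case, a pure-translation warp W(x; p) = x + p. This is a minimal illustration under stated assumptions, not the course implementation: the bilinear sampler, the Gaussian test images, and all names are inventions of this sketch. It follows the "estimate gradients, then warp" ordering.

```python
import numpy as np

def lk_translation(T, I, n_iters=50, tol=1e-4):
    """Forward-additive LK for a pure-translation warp W(x; p) = x + p.
    T and I are same-sized float images; returns the estimated p."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    def bilinear(img, xq, yq):
        # Bilinear sampling with edge clamping.
        x0 = np.clip(np.floor(xq).astype(int), 0, w - 2)
        y0 = np.clip(np.floor(yq).astype(int), 0, h - 2)
        ax = np.clip(xq - x0, 0.0, 1.0)
        ay = np.clip(yq - y0, 0.0, 1.0)
        return ((1 - ay) * ((1 - ax) * img[y0, x0] + ax * img[y0, x0 + 1])
                + ay * ((1 - ax) * img[y0 + 1, x0] + ax * img[y0 + 1, x0 + 1]))

    gy, gx = np.gradient(I)      # estimate gradients once, then warp them
    p = np.zeros(2)
    for _ in range(n_iters):
        xw, yw = xs + p[0], ys + p[1]
        Iw = bilinear(I, xw, yw)                     # I(p)
        J = np.stack([bilinear(gx, xw, yw).ravel(),  # N x 2 Jacobian J_I
                      bilinear(gy, xw, yw).ravel()], axis=1)
        r = (T - Iw).ravel()                         # residual T(0) - I(p)
        dp = np.linalg.solve(J.T @ J, J.T @ r)       # Step 1
        p = p + dp                                   # Step 2
        if np.linalg.norm(dp) < tol:
            break
    return p

# Recover a known (2, 1)-pixel shift of a smooth Gaussian blob.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w].astype(float)
I_img = np.exp(-((xs - 32) ** 2 + (ys - 32) ** 2) / 200.0)
T_img = np.exp(-((xs + 2 - 32) ** 2 + (ys + 1 - 32) ** 2) / 200.0)  # I at x + (2, 1)
p_est = lk_translation(T_img, I_img)
```

For translation the warp Jacobian is the identity, so J_I is just the warped image gradients; richer warps multiply in \partial W / \partial p^T per pixel.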
Examples of LK Alignment
Gauss-Newton Algorithm
• LK is essentially an application of the Gauss-Newton algorithm:

\arg\min_{x} \| y - F(x) \|_2^2 \quad \text{s.t.} \quad F : R^N \rightarrow R^M

Step 1: \arg\min_{\Delta x} \| y - F(x) - \frac{\partial F(x)}{\partial x^T} \Delta x \|_2^2
Step 2: x \leftarrow x + \Delta x
Keep applying the steps until \Delta x converges.
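The two Gauss-Newton steps can be sketched generically, decoupled from images. A minimal sketch assuming a user-supplied residual function and Jacobian; the exponential-fit toy problem and all names are assumptions of this example.

```python
import numpy as np

def gauss_newton(F, Jfun, y, x0, n_iters=50, tol=1e-12):
    """Minimize ||y - F(x)||^2 by repeated linearization (Gauss-Newton)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        r = y - F(x)                               # current residual
        J = Jfun(x)                                # M x N Jacobian dF/dx^T
        dx = np.linalg.lstsq(J, r, rcond=None)[0]  # Step 1: linearized LS
        x = x + dx                                 # Step 2: additive update
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy problem: fit y = a * exp(b * t) to noiseless samples, x = [a, b].
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
F = lambda x: x[0] * np.exp(x[1] * t)
Jfun = lambda x: np.stack([np.exp(x[1] * t),
                           x[0] * t * np.exp(x[1] * t)], axis=1)
x_hat = gauss_newton(F, Jfun, y, x0=[1.0, -1.0])
```

In LK terms: y is the template, F(x) is the warped image, and J is the image Jacobian J_I.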
Optimization Interpretation
• The optimization employed in the LK algorithm can be interpreted as Gauss-Newton optimization.
• Other non-linear least-squares optimization strategies have been investigated (Baker et al., 2003):
• Levenberg-Marquardt
• Newton
• Steepest descent
• Gauss-Newton has appeared to be the most robust in empirical evaluations (Baker et al.).
LK Event Horizon
• The initial warp estimate has to be suitably close to the ground truth for gradient-search methods to work.
• Kind of like a black hole's event horizon.
• You have to be inside it to be sucked in!
Expanding the Event Horizon
• The best strategy is to expand the neighborhood \mathcal{N} across which gradients are estimated.
• Simplest to do this in practice with a blur.
• Often we apply what is known as "coarse-to-fine" alignment.

\frac{\partial I(x)}{\partial x} \approx I(x + \Delta x) - I(x), \quad x \in \mathcal{N}
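A quick 1-D numerical illustration of why blur expands the event horizon: on a blurrier edge, the same displacement is much better explained by the first-order Taylor model that LK relies on. The tanh edge model and the function name are assumptions of this sketch.

```python
import numpy as np

# 1-D illustration: blurring widens the range of displacements dx over
# which the linearization I(x + dx) ≈ I(x) + dI/dx * dx is accurate.
x = np.linspace(-10.0, 10.0, 2001)

def taylor_error(sigma, dx):
    """Fraction of a true shift left unexplained by the first-order model,
    for a step edge of width sigma (larger sigma = more blur)."""
    I = np.tanh(x / sigma)
    g = np.gradient(I, x)                 # finite-difference gradient
    I_shift = np.tanh((x + dx) / sigma)   # ground-truth shifted signal
    I_lin = I + g * dx                    # first-order (Taylor) prediction
    return np.linalg.norm(I_shift - I_lin) / np.linalg.norm(I_shift - I)

# The same 2-pixel shift is far better linearized on the blurrier edge.
err_sharp = taylor_error(sigma=0.5, dx=2.0)
err_blur = taylor_error(sigma=4.0, dx=2.0)
```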
Questions • Why is LK sensitive to the initial guess? • Why can’t you perform the optimization in a single shot?
Today
• Review - LK Algorithm
• Inverse Composition
• Robust Extensions
Efficient search is essential on mobile and desktop!
Computation Concerns
• Unfortunately, the LK algorithm can be computationally expensive.
• It requires re-computation of the Jacobian matrix at each iteration:

J_I =
\begin{bmatrix}
\frac{\partial I(x'_1)}{\partial x'^T_1} & \cdots & 0^T \\
\vdots & \ddots & \vdots \\
0^T & \cdots & \frac{\partial I(x'_N)}{\partial x'^T_N}
\end{bmatrix}
\begin{bmatrix}
\frac{\partial W(x_1; p)}{\partial p^T} \\
\vdots \\
\frac{\partial W(x_N; p)}{\partial p^T}
\end{bmatrix}

• With the additional inversion of the pseudo-Hessian H_I = J_I^T J_I at each iteration.
"Is there any way we can pre-compute any of this?"
Linearizing the Template?

I(p + \Delta p) \approx I(p) + \frac{\partial I(p)}{\partial p^T} \Delta p

T(0) \approx I(p) + \frac{\partial I(p)}{\partial p^T} \Delta p \quad \text{("template")}

T(0) + \frac{\partial T(0)}{\partial p^T} \Delta p \approx I(p)

"Why is this useful if the template must be static?"
Simple Example

\arg\min_{\Delta p} \sum_{n=1}^{N} \| W(x_n; p) + \frac{\partial W(x_n; p)}{\partial p^T} \Delta p - W(x_n; 0) \|_2^2

where x' = W(x; p) and x = W(x; 0); the linearization is taken about the current warp, p + \Delta p.
Simple Example

\arg\min_{\Delta p^*} \sum_{n=1}^{N} \| W(x_n; p) - W(x_n; 0) - \frac{\partial W(x_n; 0)}{\partial p^T} \Delta p^* \|_2^2

where x' = W(x; p) and x = W(x; 0); now the linearization is taken about the identity warp, 0 + \Delta p^*. The expansion point is "static", so the Jacobian never changes across iterations.
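The point-alignment objective above can be checked numerically. A minimal sketch assuming the affine parameterization W(x; p) = [(1+p1)x + p2 y + p3, p4 x + (1+p5) y + p6] (so W(x; 0) = x); the random points and all names are inventions of this sketch. Because this warp is linear in p, the Jacobian at the identity is a constant matrix per point and the least-squares solution recovers \Delta p^* = p exactly.

```python
import numpy as np

def warp(x, p):
    """Affine warp W(x; p); W(x; 0) is the identity."""
    A = np.array([[1 + p[0], p[1], p[2]],
                  [p[3], 1 + p[4], p[5]]])
    return A @ np.array([x[0], x[1], 1.0])

def jac(x):
    """dW(x; 0)/dp^T: constant per point, so it can be pre-computed."""
    return np.array([[x[0], x[1], 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, x[0], x[1], 1.0]])

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(10, 2))          # template points x_n
p = np.array([0.1, -0.05, 0.3, 0.02, 0.08, -0.2])   # ground-truth warp

J = np.vstack([jac(x) for x in pts])                # 2N x 6, p-independent
r = np.concatenate([warp(x, p) - x for x in pts])   # W(x_n; p) - W(x_n; 0)
dp_star = np.linalg.lstsq(J, r, rcond=None)[0]      # solves the objective above
```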