Image Segmentation
Philipp Krähenbühl, Stanford University
April 24, 2013

Goal: identify groups of pixels that go together.


  1. Images as Graphs

(Fully-connected) graph:
◮ A node (vertex) for every pixel
◮ A link between (every) pair of pixels (p, q)
◮ An affinity weight w_pq for each link (edge)
⋆ w_pq measures similarity
⋆ Inversely proportional to distance (difference in color and position)

Slide Credit: Steve Seitz (Lecture 8, Fei-Fei Li)

  2. Segmentation by Graph Cuts

Break the graph into segments:
◮ Delete links that cross between segments
◮ Easiest to break links that have low similarity (low affinity weight)
⋆ Similar pixels should be in the same segment
⋆ Dissimilar pixels should be in different segments

Slide Credit: Steve Seitz

  3. Measuring Affinity

Distance:  $\exp\left(-\frac{1}{2\sigma^2}\|x - y\|^2\right)$
Intensity: $\exp\left(-\frac{1}{2\sigma^2}\|I(x) - I(y)\|^2\right)$
Color:     $\exp\left(-\frac{1}{2\sigma^2}\,\mathrm{dist}(c(x), c(y))^2\right)$  (for a suitable color distance)
Texture:   $\exp\left(-\frac{1}{2\sigma^2}\|f(x) - f(y)\|^2\right)$  (f: filter outputs)

Source: Forsyth & Ponce

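These Gaussian affinities can be sketched in a few lines of NumPy; `gaussian_affinity` is a hypothetical helper name, and the feature vectors stand in for any of the cues above (position, intensity, color, or filter-bank outputs):

```python
import numpy as np

def gaussian_affinity(features, sigma=1.0):
    """Pairwise affinity w_pq = exp(-||f_p - f_q||^2 / (2 sigma^2)).

    features: (N, d) array with one feature vector per pixel.
    """
    # Squared Euclidean distance between every pair of feature vectors.
    diff = features[:, None, :] - features[None, :, :]
    sq_dist = (diff ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Three 1-D "pixels": nearby points get affinity near 1, distant ones near 0.
W = gaussian_affinity(np.array([[0.0], [0.1], [5.0]]), sigma=1.0)
```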

  7. Scale Affects Affinity

Small σ: group only nearby points. Large σ: group far-away points.
(Plots: affinity vs. squared distance for small, medium, and large σ.)

Slide Credit: Svetlana Lazebnik

  8. Graph Cut: Using Eigenvalues

Given the affinity matrix W, extract a single good cluster $v_n$:
◮ $v_n(i)$: probability of point i belonging to the cluster
◮ Elements should have high affinity with each other: maximize $v_n^\top W v_n$
◮ Constraint $v_n^\top v_n = 1$
⋆ Prevents $\|v_n\| \to \infty$

Constrained objective: $v_n^\top W v_n + \lambda (1 - v_n^\top v_n)$

Setting the gradient to zero reduces this to an eigenvalue problem: $W v_n = \lambda v_n$

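The relaxed problem above is just a leading-eigenvector computation; a minimal NumPy sketch (`leading_cluster` is a hypothetical helper, and the affinity matrix below is illustrative):

```python
import numpy as np

def leading_cluster(W):
    """Maximize v^T W v subject to v^T v = 1: the top eigenvector of W."""
    vals, vecs = np.linalg.eigh(W)   # W is symmetric; eigenvalues ascending
    v = vecs[:, -1]                  # eigenvector of the largest eigenvalue
    return np.abs(v)                 # sign is arbitrary, so use magnitudes

# Affinity matrix with two well-separated groups: {0, 1} and {2}.
W = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
v = leading_cluster(W)               # large weights on points 0 and 1
```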

  16. Graph Cut: Using Eigenvalues

(Plots: the point set, and the eigenvectors of the 4 largest eigenvalues, each indicating one cluster.)

  17. Clustering by Graph Eigenvectors

1 Construct an affinity matrix.
2 Compute the eigenvalues and eigenvectors of the affinity matrix.
3 Until there are sufficient clusters:
◮ Take the eigenvector corresponding to the largest unprocessed eigenvalue.
◮ Zero all components corresponding to elements that have already been clustered.
◮ Threshold the remaining components to determine which elements belong to this cluster.
⋆ Choose a threshold by clustering the components, or use a threshold fixed in advance.
◮ If all elements have been accounted for, there are sufficient clusters: end.

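Assuming a threshold fixed in advance (the procedure above leaves the choice open), the loop can be sketched as:

```python
import numpy as np

def cluster_by_eigenvectors(W, threshold=0.1):
    """Greedy clustering from the eigenvectors of the affinity matrix W."""
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(vals)[::-1]           # largest eigenvalue first
    unassigned = np.ones(W.shape[0], bool)
    labels = np.full(W.shape[0], -1)
    for cluster_id, k in enumerate(order):
        if not unassigned.any():
            break                            # all elements accounted for
        v = np.abs(vecs[:, k]).copy()
        v[~unassigned] = 0.0                 # zero already-clustered components
        members = v > threshold              # threshold the remaining ones
        labels[members] = cluster_id
        unassigned &= ~members
    return labels

# Same two-group affinity matrix as before.
W = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
labels = cluster_by_eigenvectors(W)
```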

  25. Graph Cut: Using Eigenvalues

Effect of the scaling: (plots of eigenvector components for small, medium, and large σ).

  26. Graph Cut

Find a set of edges whose removal makes the graph disconnected.

Cost of a cut: the sum of the weights of the cut edges,
$\mathrm{cut}(A, B) = \sum_{p \in A,\, q \in B} w_{pq}$

A graph cut gives us a segmentation.
◮ What is a "good" graph cut, and how do we find one?

Slide Credit: Steve Seitz

  27. Graph Cut

Here, the cut is nicely defined by the block-diagonal structure of the affinity matrix.
⇒ How can this be generalized?

Image Source: Forsyth & Ponce

  28. Minimum Cut

We can do segmentation by finding the minimum cut in a graph:
◮ A minimum cut of a graph is a cut whose cutset has the smallest total affinity.
◮ Efficient algorithms exist for finding it (max-flow).

Drawback:
◮ The weight of a cut is proportional to the number of edges it crosses.
◮ The minimum cut therefore tends to cut off very small, isolated components (cuts with less weight than the ideal cut).

Slide Credit: Khurram Hassan-Shafique

  29. Normalized Cut (NCut)

A minimum cut penalizes large segments. This can be fixed by normalizing for the size of the segments. The normalized cut cost is

$\mathrm{Ncut}(A, B) = \frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(A, V)} + \frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(B, V)} = \mathrm{cut}(A, B)\left(\frac{1}{\sum_{p \in A,\, q} w_{pq}} + \frac{1}{\sum_{q \in B,\, p} w_{pq}}\right)$

where assoc(A, V) is the sum of the weights of all edges in V that touch A.

The exact solution is NP-hard, but an approximation can be computed by solving a generalized eigenvalue problem.

J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI 2000.
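The Ncut cost of a candidate binary partition follows directly from this definition; a small sketch with an illustrative affinity matrix (`ncut_cost` is a hypothetical helper):

```python
import numpy as np

def ncut_cost(W, labels):
    """Ncut(A,B) = cut/assoc(A,V) + cut/assoc(B,V) for a binary labeling."""
    A = labels.astype(bool)
    B = ~A
    cut = W[np.ix_(A, B)].sum()   # total weight crossing the partition
    assoc_A = W[A, :].sum()       # weight of all edges touching A
    assoc_B = W[B, :].sum()
    return cut / assoc_A + cut / assoc_B

# Strong edge 0-1, weak edge 1-2: cutting the weak edge is cheap.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.1],
              [0.0, 0.1, 0.0]])
cost = ncut_cost(W, np.array([1, 1, 0]))
```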

  30. Interpretation as a Dynamical System

Treat the links as springs and shake the system:
◮ Elasticity proportional to cost
◮ Vibration "modes" correspond to segments
⋆ These can be computed by solving a generalized eigenvector problem.

Slide Credit: Steve Seitz

  31. NCuts as a Generalized Eigenvalue Problem

Definitions:
◮ W: the affinity matrix, $W(i, j) = w_{ij}$
◮ D: the diagonal degree matrix, $D_{ii} = \sum_j W_{ij}$
◮ x: an indicator vector in $\{-1, 1\}^N$, with $x_i = 1 \Leftrightarrow i \in A$

Rewriting the normalized cut in matrix form:
$\mathrm{Ncut}(A, B) = \frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(A, V)} + \frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(B, V)} = \ldots$

Slide Credit: Jitendra Malik

  32. Some more math...

(Intermediate derivation steps; see Shi & Malik.)

Slide Credit: Jitendra Malik

  33. NCuts as a Generalized Eigenvalue Problem

After simplifications, we get

$\mathrm{Ncut}(A, B) = \frac{y^\top (D - W) y}{y^\top D y}$, with $y_i \in \{-1, b\}$ and $y^\top D \mathbf{1} = 0$.

This is a Rayleigh quotient. It is hard as a discrete problem, but a continuous approximation is given by the generalized eigenvalue problem

$(D - W) y = \lambda D y$

Subtleties:
◮ The optimal solution is the second smallest eigenvector.
◮ It gives a continuous result, which must be converted into discrete values of y.

Slide Credit: Jitendra Malik

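Solving $(D - W)y = \lambda D y$ for the second-smallest eigenvector is a one-liner with SciPy; a sketch on an illustrative 4-node chain (`ncut_vector` is a hypothetical helper):

```python
import numpy as np
from scipy.linalg import eigh

def ncut_vector(W):
    """Second-smallest generalized eigenvector of (D - W) y = lambda * D y."""
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)   # generalized symmetric eigenproblem
    return vecs[:, 1]             # skip the trivial constant eigenvector

# A 4-node chain with one weak link between nodes 1 and 2.
W = np.array([[0.0, 0.9, 0.0, 0.0],
              [0.9, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 0.9],
              [0.0, 0.0, 0.9, 0.0]])
y = ncut_vector(W)
labels = y > 0                    # binarize by thresholding at zero
```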

  36. NCuts Example

(Smallest eigenvectors and the resulting NCuts segments.)

Image Source: Shi & Malik

  37. NCuts Example

Problem: eigenvectors take on continuous values.
◮ How do we choose the splitting point to binarize the image?

Possible procedures:
◮ Pick a constant value (0, or 0.5).
◮ Pick the median value as the splitting point.
◮ Look for the splitting point that has the minimum NCut value:
1 Choose n possible splitting points.
2 Compute the NCut value for each.
3 Pick the minimum.

  38. NCuts: Overall Procedure

1 Construct a weighted graph G = (V, E) from an image.
2 Connect each pair of pixels, and assign graph edge weights: W_ij = probability that i and j belong to the same region.
3 Solve (D − W)y = λDy for the smallest few eigenvectors. This yields a continuous solution.
4 Threshold the eigenvectors to get a discrete cut.
◮ This is where the approximation is made (we're not solving the NP-hard problem).
5 Recursively subdivide if the NCut value is below a pre-specified value.

NCuts Matlab code is available at http://www.cis.upenn.edu/~jshi/software/

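Steps 3-5 of this procedure can be combined into a small recursive sketch; the stopping threshold `max_ncut` is an assumed parameter, and the eigenvector is binarized at zero for simplicity:

```python
import numpy as np
from scipy.linalg import eigh

def recursive_ncut(W, ids=None, max_ncut=0.5):
    """Recursively bipartition with the second generalized eigenvector."""
    if ids is None:
        ids = np.arange(W.shape[0])
    if len(ids) < 2:
        return [ids]
    D = np.diag(W.sum(axis=1))
    _, vecs = eigh(D - W, D)
    A = vecs[:, 1] > 0                       # threshold the eigenvector at 0
    if A.all() or not A.any():
        return [ids]
    B = ~A
    cut = W[np.ix_(A, B)].sum()
    ncut = cut / W[A, :].sum() + cut / W[B, :].sum()
    if ncut > max_ncut:                      # stop if the split is too costly
        return [ids]
    return (recursive_ncut(W[np.ix_(A, A)], ids[A], max_ncut)
            + recursive_ncut(W[np.ix_(B, B)], ids[B], max_ncut))

W = np.array([[0.0, 0.9, 0.0, 0.0],
              [0.9, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 0.9],
              [0.0, 0.0, 0.9, 0.0]])
segments = recursive_ncut(W)     # splits the chain at the weak link
```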

  44. NCuts Results

Image Source: Shi & Malik

  45. Using Texture Features for Segmentation

The texture descriptor is a vector of filter bank outputs.

J. Malik, S. Belongie, T. Leung and J. Shi. "Contour and Texture Analysis for Image Segmentation". IJCV 43(1), 7-27, 2001.

  46. Using Texture Features for Segmentation

The texture descriptor is a vector of filter bank outputs. Textons are found by clustering.
◮ Bag of words

Slide Credit: Svetlana Lazebnik

  47. Using Texture Features for Segmentation

The texture descriptor is a vector of filter bank outputs. Textons are found by clustering.
◮ Bag of words

Affinities are given by the similarity of texton histograms over windows given by the "local scale" of the texture.

Slide Credit: Svetlana Lazebnik

  48. Results with Color and Texture

  49. Summary: Normalized Cuts

Pros:
◮ Generic framework, flexible to the choice of function that computes the weights ("affinities") between nodes
◮ Does not require any model of the data distribution

Cons:
◮ Time and memory complexity can be high
⋆ Dense, highly connected graphs → many affinity computations
⋆ Solving an eigenvalue problem for each cut
◮ Preference for balanced partitions
⋆ If a region is uniform, NCuts will find the modes of vibration of the image dimensions

Slide Credit: Kristen Grauman

  50. What will we learn today?

Graph theoretic segmentation
◮ Normalized Cuts
◮ Using texture features

Segmentation as energy minimization
◮ Markov Random Fields (MRF) / Conditional Random Fields (CRF)
◮ Graph cuts for image segmentation
◮ Applications


  52. Markov Random Fields

MRFs allow rich probabilistic models for images, built in a local, modular way:
◮ Learn/model local effects, get global effects out.

Components: observed evidence, hidden "true states", and neighborhood relations.

Slide Credit: William Freeman

  53. MRF Nodes as Pixels

(Original image, degraded image, and a reconstruction from an MRF modeling pixel neighborhood statistics.)

Slide Credit: Bastian Leibe

  54. MRF Nodes as Patches

(Image patches connect to scene patches through compatibility functions $\Phi(x_i, y_i)$; neighboring scene patches are linked by $\Psi(x_i, x_j)$.)

Slide Credit: William Freeman

  55. Network Joint Probability

$P(x, y) = \prod_i \Phi(x_i, y_i) \prod_{i,j} \Psi(x_i, x_j)$

◮ $\Phi(x_i, y_i)$: image-scene compatibility function between local observations and scene nodes
◮ $\Psi(x_i, x_j)$: scene-scene compatibility function between neighboring scene nodes

Slide Credit: William Freeman

  56. Energy Formulation

Joint probability:
$P(x, y) = \frac{1}{Z} \prod_i \Phi(x_i, y_i) \prod_{ij} \Psi(x_i, x_j)$

Taking the negative log turns this into an energy minimization:
$E(x, y) = \sum_i \varphi(x_i, y_i) + \sum_{ij} \psi(x_i, x_j)$

This is similar to free-energy problems in statistical mechanics (spin glass theory). We therefore draw the analogy and call E an energy function; ϕ and ψ are called potentials.

  57. Energy Formulation

Energy function:
$E(x, y) = \underbrace{\sum_i \varphi(x_i, y_i)}_{\text{unary term}} + \underbrace{\sum_{ij} \psi(x_i, x_j)}_{\text{pairwise term}}$

Unary potential ϕ:
◮ Encodes local information about the given pixel/patch.
◮ How likely is a pixel/patch to belong to a certain class (e.g. foreground/background)?

Pairwise potential ψ:
◮ Encodes neighborhood information.
◮ How different is a pixel/patch's label from that of its neighbor? (e.g. based on intensity/color/texture differences, edges)

Slide Credit: Bastian Leibe
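Evaluating this energy for a candidate labeling can be sketched directly; the 4-connected grid and the Potts-style pairwise term below are common choices, assumed here for concreteness:

```python
import numpy as np

def energy(x, unary, pairwise_weight=1.0):
    """E = sum_i phi(x_i, y_i) + sum_ij psi(x_i, x_j) on a 4-connected grid.

    x:     (H, W) integer label image (0/1)
    unary: (H, W, 2) cost phi of assigning label 0 or 1 at each pixel
    """
    # Unary term: cost of the chosen label at every pixel.
    u = np.take_along_axis(unary, x[..., None], axis=-1).sum()
    # Pairwise (Potts) term: count label changes between grid neighbors.
    p = (x[:, 1:] != x[:, :-1]).sum() + (x[1:, :] != x[:-1, :]).sum()
    return u + pairwise_weight * p

# A 2x2 labeling with two label discontinuities and zero unary cost.
x = np.array([[0, 1], [0, 0]])
E = energy(x, np.zeros((2, 2, 2)))
```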

  58. Segmentation using MRFs/CRFs

Boykov and Jolly (2001):
$E(x, y) = \sum_i \varphi(x_i, y_i) + \sum_{ij} \psi(x_i, x_j)$

Variables:
◮ $x_i$: binary variable (foreground/background)
◮ $y_i$: annotation (foreground/background/empty)

Unary term:
◮ $\varphi(x_i, y_i) = K\,[x_i \neq y_i]$
◮ Pay a penalty for disregarding the annotation.

Pairwise term:
◮ $\psi(x_i, x_j) = [x_i \neq x_j]\, w_{ij}$
◮ Encourages smooth annotations.
◮ $w_{ij}$: affinity between pixels i and j.

  59. Efficient solutions

Grid-structured random fields:
◮ Efficient solution using max-flow/min-cut
◮ Optimal solution for binary labeling
◮ Boykov & Kolmogorov, "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision", PAMI 26(9): 1124-1137 (2004)

Fully connected models:
◮ Efficient approximate solution using convolution-based mean-field inference
◮ Krähenbühl and Koltun, "Efficient Inference in Fully-Connected CRFs with Gaussian Edge Potentials", NIPS 2011

  60. GrabCut: Interactive Foreground Extraction

Slides Credit: Carsten Rother

  61. What GrabCut Does

Comparison of interactive tools (user input and result shown for each):
◮ Magic Wand (Adobe, 2002): regions
◮ Intelligent Scissors, Mortensen and Barrett (1995): boundary
◮ GrabCut: regions & boundary

  62. GrabCut

Energy function:
$E(x, k, \theta \mid I) = \sum_i \varphi(x_i, k_i, \theta \mid z_i) + \sum_{ij} \psi(x_i, x_j \mid z_i, z_j)$

Variables:
◮ $x_i \in \{0, 1\}$: foreground/background label
◮ $k_i \in \{1, \ldots, K\}$: Gaussian mixture component
◮ θ: model parameters (GMM parameters)
◮ $I = \{z_1, \ldots, z_N\}$: RGB image

Unary term $\varphi(x_i, k_i, \theta \mid z_i)$:
◮ Gaussian mixture model (negative log of a GMM component)

Pairwise term:
$\psi(x_i, x_j \mid z_i, z_j) = [x_i \neq x_j] \exp(-\beta \|z_i - z_j\|^2)$

  63. GrabCut

(Example results.)


  65. GrabCut - Unary term

Gaussian mixture model:
$P(z_i \mid x_i, \theta) = \sum_k \pi(x_i, k)\, p(z_i \mid k, \theta)$
◮ Hard to optimize (the sum over k sits inside the log).

Tractable solution:
◮ Assign each variable $x_i$ a single mixture component $k_i$:
$P(z_i \mid x_i, k_i, \theta) = \pi(x_i, k_i)\, p(z_i \mid k_i, \theta)$
◮ Optimize over $k_i$.

Unary term:
$\varphi(x_i, k_i, \theta \mid z_i) = -\log \pi(x_i, k_i) - \log p(z_i \mid k_i, \theta) = -\log \pi(x_i, k_i) + \tfrac{1}{2} \log |\Sigma(k_i)| + \tfrac{1}{2} (z_i - \mu(k_i))^\top \Sigma(k_i)^{-1} (z_i - \mu(k_i))$

  66. GrabCut - Unary term

Unary term:
$\varphi(x_i, k_i, \theta \mid z_i) = -\log \pi(x_i, k_i) + \tfrac{1}{2} \log |\Sigma(k_i)| + \tfrac{1}{2} (z_i - \mu(k_i))^\top \Sigma(k_i)^{-1} (z_i - \mu(k_i))$

Model parameters:
$\theta = \{\underbrace{\pi(x_i, k_i)}_{\text{mixture weights}}, \underbrace{\mu(k_i), \Sigma(k_i)}_{\text{means and covariances}}\}$
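The unary term can be evaluated for a single pixel as follows; the mixture parameters below are illustrative, not fitted, and `grabcut_unary` is a hypothetical helper name:

```python
import numpy as np

def grabcut_unary(z, pi_k, mu_k, sigma_k):
    """phi = -log(pi) + 0.5*log|Sigma| + 0.5*(z-mu)^T Sigma^{-1} (z-mu)."""
    d = z - mu_k
    return (-np.log(pi_k)
            + 0.5 * np.log(np.linalg.det(sigma_k))
            + 0.5 * d @ np.linalg.solve(sigma_k, d))

# One RGB pixel scored against one mixture component (at its mean).
z = np.array([0.5, 0.5, 0.5])
phi = grabcut_unary(z, pi_k=0.25, mu_k=np.array([0.5, 0.5, 0.5]),
                    sigma_k=0.1 * np.eye(3))
```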

  67. GrabCut - Iterative optimization

1 Initialize the mixture models (from the foreground/background annotation).
2 Assign GMM components: $k_i = \arg\min_k \varphi(x_i, k, \theta \mid z_i)$
3 Learn the GMM parameters: $\theta = \arg\min_\theta \sum_i \varphi(x_i, k_i, \theta \mid z_i)$
4 Estimate the segmentation using min-cut: $x = \arg\min_x E(x, k, \theta \mid I)$
5 Repeat from step 2 until convergence.

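Steps 2 and 3 alternate a hard component assignment with a parameter re-estimate, much like hard-assignment EM. A minimal sketch for the pixels of one label (step 4's min-cut is omitted; a real implementation would use a max-flow solver, and `assign_and_refit` is a hypothetical helper):

```python
import numpy as np

def assign_and_refit(pixels, theta):
    """One round of steps 2-3 for the pixels carrying a single label x.

    theta: list of (pi, mu, Sigma) tuples, one per mixture component.
    """
    # Step 2: assign each pixel the component minimizing its unary cost.
    costs = np.stack([
        -np.log(pi) + 0.5 * np.log(np.linalg.det(S))
        + 0.5 * np.einsum('nd,nd->n', pixels - mu,
                          np.linalg.solve(S, (pixels - mu).T).T)
        for pi, mu, S in theta])
    k = costs.argmin(axis=0)
    # Step 3: re-estimate weights, means, covariances from hard assignments.
    new_theta = []
    d = pixels.shape[1]
    for j in range(len(theta)):
        zj = pixels[k == j] if (k == j).sum() > 1 else pixels
        new_theta.append((max((k == j).mean(), 1e-6),
                          zj.mean(axis=0),
                          np.cov(zj, rowvar=False) + 1e-6 * np.eye(d)))
    return k, new_theta

# Two obvious color clusters and two initial components.
pix = np.vstack([np.zeros((10, 3)), np.ones((10, 3))])
init = [(0.5, np.full(3, 0.1), np.eye(3)), (0.5, np.full(3, 0.9), np.eye(3))]
k, theta = assign_and_refit(pix, init)
```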

  72. GrabCut - Iterative optimization

(Result after iterations 1-4, and the energy after each iteration.)

  73. GrabCut - Further editing

(Automatic segmentation results, before further user editing.)
