
  1. Edge Detection CS/BIOEN 4640: Image Processing Basics February 9, 2012

  2. Gaussian Blurring for Derivatives
  ◮ We have seen the Prewitt and Sobel derivative operators
  ◮ They average in the direction orthogonal to the derivative
  ◮ Why not average in both directions?
  ◮ How about using Gaussian blurring for the averaging?
  ◮ How do we know what width (σ value) to use?
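The smoothing built into these operators is easy to see if we factor them; a minimal NumPy sketch (the standard 3×3 kernels, nothing from the slides is changed) showing that Prewitt and Sobel are each an outer product of a 1D smoothing profile and a 1D central difference:

```python
import numpy as np

# Prewitt and Sobel x-derivative kernels factor into a 1D derivative
# [1, 0, -1] times a 1D smoothing column -- Sobel just weights the
# center row more heavily, roughly approximating a Gaussian profile.
deriv = np.array([1, 0, -1])

prewitt_smooth = np.array([1, 1, 1])   # uniform averaging
sobel_smooth = np.array([1, 2, 1])     # binomial weights ~ sampled Gaussian

prewitt_x = np.outer(prewitt_smooth, deriv)
sobel_x = np.outer(sobel_smooth, deriv)

print(sobel_x)
# [[ 1  0 -1]
#  [ 2  0 -2]
#  [ 1  0 -1]]
```

Replacing the smoothing column with a longer sampled Gaussian, and smoothing in both directions, is exactly the DOG-kernel idea developed on the following slides.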

  3. Gaussian-Blurred Edges
  ◮ We’ve seen in 1D that if we blur a step edge with a Gaussian, we get a smooth S-shaped profile [plot: blurred step edge, x from −3 to 3]
  ◮ This is the error function: erf(x) = ∫_{−∞}^{x} g_σ(t) dt
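This relationship can be checked numerically; a sketch (σ, kernel radius, and grid size are illustrative choices) comparing a discretely blurred step with the erf model, where the half-pixel shift accounts for the discrete sum approximating the continuous integral:

```python
import math
import numpy as np

# Blur a unit step with a sampled, normalized Gaussian and compare the
# result with the error-function model from the slide.
sigma = 1.0
k = np.arange(-6, 7)
g = np.exp(-k**2 / (2 * sigma**2))
g /= g.sum()                                  # discrete Gaussian, sums to 1

t = np.arange(-20, 21)
step = (t >= 0).astype(float)                 # ideal step edge
blurred = np.convolve(step, g, mode="same")   # Gaussian-blurred edge

# Continuous model: integral of g_sigma from -inf to t (+0.5 half-pixel shift)
model = np.array([0.5 * (1 + math.erf((ti + 0.5) / (sigma * math.sqrt(2))))
                  for ti in t])

# Compare away from the borders, where "same"-mode zero padding distorts
# the convolution; the interior matches the erf model closely.
err = np.max(np.abs(blurred[6:-6] - model[6:-6]))
```

The agreement degrades near the array ends only because `np.convolve` pads with zeros there, not because the model is wrong.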

  4. Derivatives of Edges
  ◮ Using the error function as our model for an edge [plot: erf-shaped edge profile]

  5. Derivatives of Edges
  ◮ Using the error function as our model for an edge
  ◮ The derivative of an edge is a Gaussian [plot: Gaussian bump]

  6. Derivatives of Edges
  ◮ Using the error function as our model for an edge
  ◮ The derivative of an edge is a Gaussian
  ◮ The second derivative of an edge is the first derivative of a Gaussian [plot: first derivative of a Gaussian]

  7. Derivatives and Convolution
  ◮ Let D = [0.5 0 −0.5] be our central difference kernel, and G a Gaussian kernel
  ◮ Remember, blurring and derivatives can be done in either order: (I ∗ D) ∗ G = (I ∗ G) ∗ D
  ◮ Also, the blurring kernel and derivative kernel can be combined first, then applied to the image: (I ∗ D) ∗ G = I ∗ (D ∗ G)
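Both identities follow from the commutativity and associativity of convolution, which a short NumPy check makes concrete (the signal length and σ here are arbitrary choices; a random 1D array stands in for an image row):

```python
import numpy as np

# Verify (I*D)*G = (I*G)*D = I*(D*G) numerically.
rng = np.random.default_rng(0)
I = rng.random(32)                          # arbitrary 1D "image"

D = np.array([0.5, 0.0, -0.5])              # central difference kernel
x = np.arange(-3, 4)
G = np.exp(-x**2 / 2.0)
G /= G.sum()                                # Gaussian kernel, sigma = 1

lhs = np.convolve(np.convolve(I, D), G)     # (I * D) * G
mid = np.convolve(np.convolve(I, G), D)     # (I * G) * D
rhs = np.convolve(I, np.convolve(D, G))     # I * (D * G)
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)
```

The last form is the practically useful one: `np.convolve(D, G)` is the DOG kernel, computed once and then applied to every image.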

  8. DOGs
  ◮ Image derivatives are computed by convolution with a Derivative of Gaussian (DOG) kernel: D ∗ G [plot: DOG kernel]
  ◮ Second derivatives can also be computed as convolution with a second derivative of Gaussian: D ∗ D ∗ G [plot: second derivative of a Gaussian]

  9. Thresholding Edges
  ◮ The gradient magnitude image has floating-point values
  ◮ High values where there are strong edges
  ◮ Low values where there are weak or no edges
  ◮ Thresholding can remove the weak edges and leave just the ones we want
  ◮ Converts the image into a binary edge image
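The thresholding step itself is a one-liner; a tiny sketch (the 3×3 values and the threshold 0.5 are purely illustrative):

```python
import numpy as np

# Threshold a floating-point gradient-magnitude image into a binary
# edge image: 1 where the edge response is strong, 0 elsewhere.
grad_mag = np.array([[0.02, 0.9, 0.05],
                     [0.10, 0.8, 0.03],
                     [0.01, 0.7, 0.04]])

threshold = 0.5
edges = (grad_mag > threshold).astype(np.uint8)
print(edges)
# [[0 1 0]
#  [0 1 0]
#  [0 1 0]]
```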

  10. Thresholding Edges
  ◮ At an edge, gradient magnitude looks like a Gaussian [plot: Gaussian-shaped gradient magnitude profile]

  11. Thresholding Edges
  ◮ At an edge, gradient magnitude looks like a Gaussian
  ◮ Threshold at some value [plot: Gaussian profile with threshold line]

  12. Thresholding Edges
  ◮ At an edge, gradient magnitude looks like a Gaussian
  ◮ Threshold at some value
  ◮ Leaves behind a “fat” edge [plot: thresholded profile several pixels wide]

  13. Zero-Crossings of the Second Derivative
  ◮ We would prefer to choose just the peak edge response
  ◮ So, we want a local maximum [plot: Gaussian profile with peak marked]

  14. Zero-Crossings of the Second Derivative
  ◮ We would prefer to choose just the peak edge response
  ◮ So, we want a local maximum
  ◮ This is where the image second derivative is zero [plot: first derivative of a Gaussian crossing zero]
  ◮ The second derivative of an edge looks like the 1st derivative of a Gaussian

  15. That’s Great, But What About 2D?
  ◮ In 2D we have an x and a y derivative
  ◮ The gradient of the image ∇I is computed as before, but now with DOG kernels for x and y
  ◮ Zero-crossings of the second derivative now look at the Laplacian of the image: ΔI(x, y) = ∂²I/∂x² + ∂²I/∂y²

  16. 2D Gaussian
  [plot: 2D Gaussian surface]
  G_σ(x, y) = 1 / (2πσ²) · exp( −(x² + y²) / (2σ²) )
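As a numerical sanity check of this formula (the grid spacing and σ = 1 are illustrative), the 2D Gaussian separates into a product of 1D Gaussians and integrates to approximately 1 — the separability is what makes the 2D kernels on these slides cheap to apply:

```python
import math
import numpy as np

# Evaluate G_sigma on a grid and check separability and normalization.
sigma = 1.0
xs = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(xs, xs)
G = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * math.pi * sigma**2)

g1 = np.exp(-xs**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)
assert np.allclose(G, np.outer(g1, g1))     # separable: G(x, y) = g(x) g(y)

dx = xs[1] - xs[0]
assert abs(G.sum() * dx * dx - 1.0) < 1e-3  # Riemann sum of the integral
```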

  17. 2D Gaussian X-Derivative
  [plot: 2D Gaussian x-derivative surface]
  ∂G_σ/∂x (x, y) = −x / (2πσ⁴) · exp( −(x² + y²) / (2σ²) )

  18. 2D Gaussian Y-Derivative
  [plot: 2D Gaussian y-derivative surface]
  ∂G_σ/∂y (x, y) = −y / (2πσ⁴) · exp( −(x² + y²) / (2σ²) )

  19. 2D Gaussian Second X-Derivative
  [plot: 2D Gaussian second x-derivative surface]
  ∂²G_σ/∂x² (x, y) = (x² − σ²) / (2πσ⁶) · exp( −(x² + y²) / (2σ²) )

  20. 2D Gaussian Second Y-Derivative
  [plot: 2D Gaussian second y-derivative surface]
  ∂²G_σ/∂y² (x, y) = (y² − σ²) / (2πσ⁶) · exp( −(x² + y²) / (2σ²) )

  21. 2D Edge Detection Algorithm with Laplacian
  1. Compute the gradient magnitude image using DOG kernels
  2. Threshold this image to include only edges with high magnitude → binary image
  3. Compute the Laplacian image using second-DOG kernels
  4. Find zero crossings of the Laplacian → binary image (1 at a crossing, 0 elsewhere)
  5. AND the thresholded gradient magnitude with the Laplacian zero crossings
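The five steps above can be sketched end-to-end in NumPy. This is a toy sketch, not a robust implementation: σ, the kernel radius, the threshold, and the synthetic step-edge test image are all illustrative choices, border handling is naive zero padding, and the zero-crossing test is a simplified version of the per-pixel rule given on slide 25:

```python
import numpy as np

def conv_sep(img, h, v):
    # separable "same"-size convolution: kernel h along rows, v along columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, v, mode="same"), 0, tmp)

sigma = 1.0
k = np.arange(-3, 4)
g = np.exp(-k**2 / (2 * sigma**2))
g /= g.sum()                                          # 1D Gaussian kernel
dog = np.convolve(g, [0.5, 0.0, -0.5], mode="same")   # DOG kernel
dog2 = np.convolve(g, [1.0, -2.0, 1.0], mode="same")  # second-DOG kernel

img = np.zeros((16, 16))
img[:, 8:] = 1.0                                      # vertical step edge

# 1. gradient magnitude from DOG kernels
grad = np.hypot(conv_sep(img, dog, g), conv_sep(img, g, dog))
# 2. threshold -> binary strong-edge mask
strong = grad > 0.1
# 3. Laplacian from second-DOG kernels
lap = conv_sep(img, dog2, g) + conv_sep(img, g, dog2)
# 4. zero crossings of the Laplacian: sign change between horizontal or
#    vertical neighbors, marked at the pixel with the smaller |value|
zc = np.zeros_like(strong)
zc[:, :-1] |= (lap[:, :-1] * lap[:, 1:] < 0) & (np.abs(lap[:, :-1]) <= np.abs(lap[:, 1:]))
zc[:-1, :] |= (lap[:-1, :] * lap[1:, :] < 0) & (np.abs(lap[:-1, :]) <= np.abs(lap[1:, :]))
# 5. AND the strong-edge mask with the zero-crossing mask
edges = strong & zc
```

On this synthetic image the result is a one-pixel-wide edge along the step, rather than the "fat" edge thresholding alone would give.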

  22. More Details on DOG Kernels
  Let D = [0.5 0 −0.5] be our central difference kernel, and G_σ a Gaussian kernel. Then we can compute our DOG kernel H = G_σ ∗ D as follows:
  H(k) = (1/2) ( g_σ(k + 1) − g_σ(k − 1) )
  Here k goes from −R to R, and g_σ is the 1D Gaussian function.
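A quick check of this closed form against direct convolution (σ and the radius R are illustrative). Note that a "same"-size discrete convolution truncates the Gaussian at the array ends, so the two agree exactly only away from the boundary taps:

```python
import numpy as np

# Compare H(k) = (1/2)(g_sigma(k+1) - g_sigma(k-1)) with the convolution
# of a sampled Gaussian and the central difference kernel D.
sigma, R = 1.0, 4
g = lambda t: np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

k = np.arange(-R, R + 1)
H_formula = 0.5 * (g(k + 1) - g(k - 1))
H_conv = np.convolve(g(k), [0.5, 0.0, -0.5], mode="same")

# interior taps match; the two end taps differ by the truncated tail
assert np.allclose(H_formula[1:-1], H_conv[1:-1])
```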

  23. Second Derivative Finite Difference Operator
  A second derivative finite difference looks like this:
  δ²f(x) = f(x − 1) − 2 f(x) + f(x + 1)
  This can be computed as a convolution with the kernel D2 = [1 −2 1]
  Be careful! δ²f ≠ δ(δf)
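The warning is worth seeing numerically: applying the central difference D = [0.5 0 −0.5] twice does not reproduce D2 = [1 −2 1] — it yields a five-tap kernel that measures the second difference at a spacing of two pixels:

```python
import numpy as np

# Composing the central difference with itself gives a different
# (wider) operator than the three-tap second difference D2.
D = np.array([0.5, 0.0, -0.5])
D2 = np.array([1.0, -2.0, 1.0])

DD = np.convolve(D, D)   # five-tap kernel [0.25, 0, -0.5, 0, 0.25]
print(DD)
```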

  24. Second Derivative of Gaussian Kernels
  Now let D2 = [1 −2 1] be our second derivative kernel, and G_σ the Gaussian kernel. Then we can compute the second-DOG kernel H2 = G_σ ∗ D2 as follows:
  H2(k) = g_σ(k − 1) − 2 g_σ(k) + g_σ(k + 1)

  25. How To Compute Zero Crossings
  An important step in the Laplacian edge detector. Do the following for each pixel I(u, v):
  1. Look at your four neighbors: left, right, up, and down
  2. If they all have the same sign as you, then you are not a zero crossing
  3. Else, if you have the smallest absolute value compared to your neighbors with opposite sign, then you are a zero crossing
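The per-pixel rule above can be sketched directly (the 3×3 Laplacian values are made up for illustration, and border pixels are assumed to be skipped by the caller):

```python
import numpy as np

def is_zero_crossing(L, u, v):
    # L is a Laplacian image; (u, v) must be an interior pixel.
    center = L[u, v]
    neighbors = [L[u, v - 1], L[u, v + 1], L[u - 1, v], L[u + 1, v]]
    opposite = [n for n in neighbors if n * center < 0]
    if not opposite:                  # all neighbors share our sign
        return False
    # zero crossing if our |value| is smallest among opposite-sign neighbors
    return abs(center) <= min(abs(n) for n in opposite)

L = np.array([[ 0.3,  0.2,  0.1],
              [ 0.2,  0.1, -0.4],
              [ 0.1, -0.3, -0.5]])
print(is_zero_crossing(L, 1, 1))   # True: |0.1| < |-0.4| and |-0.3|
```

Marking the smaller-magnitude pixel of each sign-changing pair keeps the detected edge one pixel wide.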

  26. FeatureJ
  This is a nice package for computing derivatives, edge detection, and more (ImageJ plugin): www.imagescience.org/meijering/software/featurej/
