Edge Detection
CS/BIOEN 4640: Image Processing Basics
February 9, 2012



SLIDE 1

Edge Detection

CS/BIOEN 4640: Image Processing Basics February 9, 2012

SLIDE 2

Gaussian Blurring for Derivatives

◮ We have seen Prewitt and Sobel derivative operators
◮ They use averaging in the direction orthogonal to the derivative
◮ Why not average in both directions?
◮ How about using Gaussian blurring for averaging?
◮ How do we know what width (σ value) to use?

SLIDE 3

Gaussian-Blurred Edges

We have seen in 1D that if we blur a step edge with a Gaussian, we get this function:

[Plot: step edge blurred into a smooth curve rising from 0 to 1]

This is the error function:

erf(x) = ∫_{−∞}^{x} gσ(t) dt
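This can be illustrated numerically. The sketch below (array length, edge position, and σ are arbitrary choices) blurs a discrete step edge with a sampled Gaussian and produces the smooth, erf-like profile described above:

```python
import numpy as np

t = np.arange(-6, 7, dtype=float)
g = np.exp(-t**2 / 2.0)
g /= g.sum()                                   # sampled, normalized Gaussian (sigma = 1)

step = (np.arange(40) >= 20).astype(float)     # step edge at index 20
blurred = np.convolve(step, g, mode="same")    # smooth, erf-like profile

print(blurred[16:25].round(3))                 # ramps smoothly from ~0 to ~1
```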

SLIDE 4

Derivatives of Edges

◮ Using the error function as our model for an edge

[Plot: the error-function edge model]

SLIDE 5

Derivatives of Edges

◮ Using the error function as our model for an edge
◮ The derivative of an edge is a Gaussian

[Plot: a Gaussian, the derivative of the edge model]

SLIDE 6

Derivatives of Edges

◮ Using the error function as our model for an edge
◮ The derivative of an edge is a Gaussian
◮ The second derivative of an edge is the first derivative of a Gaussian

[Plot: the first derivative of a Gaussian, the second derivative of the edge model]

SLIDE 7

Derivatives and Convolution

◮ Let D = [0.5 0 −0.5] be our central difference kernel, and G a Gaussian kernel
◮ Remember, blurring and derivatives can be done in either order:

(I ∗ D) ∗ G = (I ∗ G) ∗ D

◮ Also, our blurring kernel and derivative kernel can be combined first, then applied to the image:

(I ∗ D) ∗ G = I ∗ (D ∗ G)
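Both identities are easy to sanity-check in NumPy; the random signal and the kernel sizes below are illustrative:

```python
import numpy as np

# Illustrative 1D signal and kernels (sizes and sigma chosen arbitrarily)
rng = np.random.default_rng(0)
I = rng.random(32)                         # a 1D "image"
D = np.array([0.5, 0.0, -0.5])             # central difference kernel
t = np.arange(-3, 4, dtype=float)
G = np.exp(-t**2 / 2.0)
G /= G.sum()                               # normalized Gaussian kernel, sigma = 1

lhs = np.convolve(np.convolve(I, D), G)    # derivative, then blur
swapped = np.convolve(np.convolve(I, G), D)  # blur, then derivative
rhs = np.convolve(I, np.convolve(D, G))    # combined kernel applied once

print(np.allclose(lhs, swapped), np.allclose(lhs, rhs))
```

Convolution is commutative and associative, which is exactly what makes both orderings and the precombined kernel equivalent.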

SLIDE 8

DOGs

[Plot: derivative-of-Gaussian kernel]

Image derivatives are computed by convolution with a Derivative of Gaussian (DOG) kernel:

D ∗ G

[Plot: second derivative of a Gaussian]

Second derivatives can also be computed as convolution with the second derivative of a Gaussian:

D ∗ D ∗ G

SLIDE 9

Thresholding Edges

◮ The gradient magnitude image has floating-point values
◮ High values where there are strong edges
◮ Low values where there are weak or no edges
◮ Thresholding can remove the weak edges and leave just the ones we want
◮ This converts the image into a binary edge image
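A minimal sketch of this step; the gradient-magnitude values and the threshold are invented for illustration:

```python
import numpy as np

# Hypothetical floating-point gradient-magnitude image
grad_mag = np.array([[0.05, 0.80, 0.10],
                     [0.02, 0.95, 0.07],
                     [0.01, 0.60, 0.03]])

threshold = 0.5                                    # illustrative choice
edges = (grad_mag > threshold).astype(np.uint8)    # binary edge image
print(edges)                                       # only the middle column survives
```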

SLIDE 10

Thresholding Edges

◮ At an edge, the gradient magnitude looks like a Gaussian

[Plot: Gaussian-shaped gradient magnitude profile]

SLIDE 11

Thresholding Edges

◮ At an edge, the gradient magnitude looks like a Gaussian
◮ Threshold at some value

[Plot: gradient magnitude profile with a threshold level]

SLIDE 12

Thresholding Edges

◮ At an edge, the gradient magnitude looks like a Gaussian
◮ Threshold at some value
◮ Leaves behind a “fat” edge

[Plot: thresholded profile marking a wide band around the true edge]

SLIDE 13

Zero-Crossings of the Second Derivative

◮ We would prefer to choose just the peak edge response
◮ So, we want a local maximum

[Plot: Gaussian-shaped gradient magnitude profile with its peak at the edge]

SLIDE 14

Zero-Crossings of the Second Derivative

◮ We would prefer to choose just the peak edge response
◮ So, we want a local maximum
◮ This is where the image second derivative is zero
◮ The second derivative of an edge looks like the first derivative of a Gaussian

[Plot: first derivative of a Gaussian, crossing zero at the edge location]

SLIDE 15

That’s Great, But What About 2D?

◮ In 2D we have an x and a y derivative
◮ The gradient of the image ∇I is computed as before, but now with DOG kernels for x and y
◮ Zero-crossings of the second derivative now look at the Laplacian of the image:

∆I(x, y) = ∂²/∂x² I(x, y) + ∂²/∂y² I(x, y)
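As a quick check on the formula, the Laplacian can be approximated by summing second finite differences in x and y; on the quadratic surface x² + y², where the continuous Laplacian is exactly 4, the discrete version reproduces that constant:

```python
import numpy as np

def laplacian(img):
    # Sum of second finite differences in x and y (interior pixels only)
    d2x = img[1:-1, :-2] - 2 * img[1:-1, 1:-1] + img[1:-1, 2:]
    d2y = img[:-2, 1:-1] - 2 * img[1:-1, 1:-1] + img[2:, 1:-1]
    return d2x + d2y

# On x^2 + y^2 the second difference is exact in each direction (2 + 2 = 4)
y, x = np.mgrid[0:6, 0:6].astype(float)
result = laplacian(x**2 + y**2)
print(result)
```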

SLIDE 16

2D Gaussian

[Plot: 2D Gaussian surface]

Gσ(x, y) = 1/(2πσ²) · exp(−(x² + y²) / (2σ²))
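Sampling this function on a grid gives a discrete 2D Gaussian kernel; the σ and radius below are illustrative choices:

```python
import numpy as np

def gaussian_2d(sigma, radius):
    # Sample G_sigma(x, y) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2) / (2*sigma^2))
    ax = np.arange(-radius, radius + 1, dtype=float)
    x, y = np.meshgrid(ax, ax)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

G = gaussian_2d(sigma=1.0, radius=3)
print(G.shape)    # (7, 7), with its peak of 1/(2*pi) at the center
```

Since Gσ(x, y) = gσ(x) · gσ(y), the kernel is separable, which is why 2D Gaussian blurring is usually implemented as two 1D passes.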

SLIDE 17

2D Gaussian X-Derivative

[Plot: 2D Gaussian x-derivative surface]

∂/∂x Gσ(x, y) = −x/(2πσ⁴) · exp(−(x² + y²) / (2σ²))

SLIDE 18

2D Gaussian Y-Derivative

[Plot: 2D Gaussian y-derivative surface]

∂/∂y Gσ(x, y) = −y/(2πσ⁴) · exp(−(x² + y²) / (2σ²))

SLIDE 19

2D Gaussian Second X-Derivative

[Plot: 2D Gaussian second x-derivative surface]

∂²/∂x² Gσ(x, y) = (x² − σ²)/(2πσ⁶) · exp(−(x² + y²) / (2σ²))

SLIDE 20

2D Gaussian Second Y-Derivative

[Plot: 2D Gaussian second y-derivative surface]

∂²/∂y² Gσ(x, y) = (y² − σ²)/(2πσ⁶) · exp(−(x² + y²) / (2σ²))

SLIDE 21

2D Edge Detection Algorithm with Laplacian

1. Compute the gradient magnitude image using DOG kernels
2. Threshold this image to include only edges with high magnitude (binary image)
3. Compute the Laplacian image using second-DOG kernels
4. Find zero crossings of the Laplacian (binary image: 1 at a crossing, 0 elsewhere)
5. AND the thresholded gradient magnitude with the Laplacian zero crossings
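The five steps can be sketched end to end in NumPy. Everything here is an illustrative choice (σ, kernel radius, the 0.1 threshold, the toy step-edge image), and step 4 uses a simplified left-neighbor sign test rather than the full four-neighbor rule:

```python
import numpy as np

def conv_rows(img, k):
    # Convolve every row with 1D kernel k (the x direction)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def conv_cols(img, k):
    # Convolve every column with 1D kernel k (the y direction)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

sigma, R = 1.0, 3
t = np.arange(-R, R + 1, dtype=float)
g = np.exp(-t**2 / (2 * sigma**2))
g /= g.sum()                                   # 1D Gaussian kernel
dog = np.convolve(g, [0.5, 0.0, -0.5])         # DOG kernel
dog2 = np.convolve(g, [1.0, -2.0, 1.0])        # second-DOG kernel

img = np.zeros((16, 16))
img[:, 8:] = 1.0                               # toy image: vertical step edge

# 1. Gradient magnitude using DOG kernels (blur in the orthogonal direction)
Ix = conv_cols(conv_rows(img, dog), g)
Iy = conv_rows(conv_cols(img, dog), g)
grad_mag = np.hypot(Ix, Iy)

# 2. Threshold -> binary image of strong edges
strong = grad_mag > 0.1

# 3. Laplacian using second-DOG kernels
lap = conv_cols(conv_rows(img, dog2), g) + conv_rows(conv_cols(img, dog2), g)

# 4. Zero crossings of the Laplacian (simplified: sign change vs. left neighbor)
zc = np.zeros_like(strong)
zc[:, 1:] = np.signbit(lap[:, 1:]) != np.signbit(lap[:, :-1])

# 5. AND the two binary images
edges = strong & zc
print(edges[8].astype(int))                    # a single 1 at the step edge
```

The AND in step 5 matters: far from the edge the Laplacian is numerical noise around zero, so spurious sign changes occur, but they are discarded because the gradient magnitude there is below threshold.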

SLIDE 22

More Details on DOG Kernels

Let D = [0.5 0 −0.5] be our central difference kernel, and Gσ a Gaussian kernel. Then we can compute our DOG kernel H = Gσ ∗ D as follows:

H(k) = ½ (gσ(k + 1) − gσ(k − 1))

Here k goes from −R to R, and gσ is the 1D Gaussian function.
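A direct transcription of this formula (σ and R below are arbitrary choices); the resulting kernel is antisymmetric with H(0) = 0, as a derivative kernel should be:

```python
import numpy as np

def g1d(t, sigma):
    # The 1D Gaussian function g_sigma(t)
    return np.exp(-t**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def dog_kernel(sigma, R):
    # H(k) = 0.5 * (g(k+1) - g(k-1)) for k = -R..R
    k = np.arange(-R, R + 1, dtype=float)
    return 0.5 * (g1d(k + 1, sigma) - g1d(k - 1, sigma))

H = dog_kernel(sigma=1.0, R=4)
print(H)   # antisymmetric, zero at the center
```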

SLIDE 23

Second Derivative Finite Difference Operator

A second derivative finite difference looks like this:

δ²f(x) = f(x − 1) − 2f(x) + f(x + 1)

This can be computed as a convolution with the kernel

D2 = [1 −2 1]

Be careful! δ²f ≠ δ(δf): applying the central difference kernel twice gives a different (wider) kernel than D2.
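The warning can be verified directly: convolving the central difference kernel with itself yields a wider kernel that is not D2, while D2 itself computes exact second differences (on f(x) = x², the answer is exactly 2 everywhere):

```python
import numpy as np

D = np.array([0.5, 0.0, -0.5])     # central difference kernel
D2 = np.array([1.0, -2.0, 1.0])    # second-difference kernel

# Applying the central difference twice is NOT the same operator as D2
DD = np.convolve(D, D)
print(DD)                          # [ 0.25  0.   -0.5   0.    0.25]

# D2 really computes f(x-1) - 2 f(x) + f(x+1): on f(x) = x^2 it gives exactly 2
f = np.arange(10, dtype=float) ** 2
print(np.convolve(f, D2, mode="valid"))
```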

SLIDE 24

Second Derivative of Gaussian Kernels

Now let D2 = [1 −2 1] be our second derivative kernel, and Gσ be the Gaussian kernel. Then we can compute the second-DOG kernel H2 = Gσ ∗ D2 as follows:

H2(k) = gσ(k − 1) − 2gσ(k) + gσ(k + 1)
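Again a direct transcription (σ and R are arbitrary choices); the kernel is symmetric and negative at the center:

```python
import numpy as np

def g1d(t, sigma):
    # The 1D Gaussian function g_sigma(t)
    return np.exp(-t**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def dog2_kernel(sigma, R):
    # H2(k) = g(k-1) - 2*g(k) + g(k+1) for k = -R..R
    k = np.arange(-R, R + 1, dtype=float)
    return g1d(k - 1, sigma) - 2 * g1d(k, sigma) + g1d(k + 1, sigma)

H2 = dog2_kernel(sigma=1.0, R=4)
print(H2)  # symmetric, with a negative dip at the center
```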

SLIDE 25

How To Compute Zero Crossings

Important step in the Laplacian edge detector. Do the following for each pixel I(u, v):

1. Look at your four neighbors: left, right, up, and down
2. If they all have the same sign as you, then you are not a zero crossing
3. Else, if you have the smallest absolute value compared to your neighbors with opposite sign, then you are a zero crossing
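These steps translate almost line for line into code. How to treat exact zeros and ties is a choice the slide leaves open (zeros count as positive here), and the toy Laplacian values are invented:

```python
import numpy as np

def zero_crossings(L):
    # Slide's rule: a pixel is a zero crossing if some 4-neighbor has the
    # opposite sign AND the pixel's |value| is smallest among those neighbors.
    H, W = L.shape
    out = np.zeros((H, W), dtype=bool)
    for v in range(1, H - 1):
        for u in range(1, W - 1):
            c = L[v, u]
            opposite = [n for n in (L[v, u - 1], L[v, u + 1], L[v - 1, u], L[v + 1, u])
                        if (n < 0) != (c < 0)]       # neighbors with opposite sign
            if opposite and abs(c) <= min(abs(n) for n in opposite):
                out[v, u] = True
    return out

# Toy Laplacian: the sign flips between columns 1 and 2
L = np.array([[3.0, 1.0, -2.0, -4.0],
              [3.0, 1.0, -2.0, -4.0],
              [3.0, 1.0, -2.0, -4.0]])
marks = zero_crossings(L)
print(marks.astype(int))   # only the interior pixel with value 1.0 is marked
```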

SLIDE 26

FeatureJ

This is a nice package for computing derivatives, edge-detection, and more (ImageJ plugin):

www.imagescience.org/meijering/software/featurej/