Computer Vision: Advanced Edge Detectors. Prof. Flávio Cardeal - PowerPoint PPT Presentation



SLIDE 1

Advanced Edge Detectors

Computer Vision

  • Prof. Flávio Cardeal – DECOM / CEFET-MG
    cardeal@decom.cefetmg.br

SLIDE 2

Abstract

  • This lecture discusses advanced edge detectors that combine multiple approaches into a single algorithm.

SLIDE 3

LoG and DoG

  • The Laplacian of Gaussian (LoG) and the difference of Gaussians (DoG) are very important basic image transforms.
  • They are applied in several different domains in computer vision.

SLIDE 4

LoG Edge Detector

  • Applying the Laplacian to a Gauss-filtered image can be done in one convolution step, based on the theorem:

∇²(Gσ ∗ I) = I ∗ ∇²Gσ

Note that for calculating the Laplacian of a Gauss-filtered image, we only have to perform one convolution with ∇²Gσ.
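As a minimal numeric sketch of this commutativity (the test image and σ below are arbitrary choices, not from the slides), we can compare "smooth, then Laplacian" against "Laplacian, then smooth" with SciPy's discrete operators:

```python
import numpy as np
from scipy import ndimage

# Arbitrary test image: a bright square on a dark background.
I = np.zeros((64, 64), dtype=float)
I[20:44, 20:44] = 1.0
sigma = 2.0

# Laplacian of the Gauss-filtered image ...
lhs = ndimage.laplace(ndimage.gaussian_filter(I, sigma, mode="constant"),
                      mode="constant")
# ... equals the Gauss filter of the Laplacian-filtered image,
# because convolution commutes: ∇²(Gσ ∗ I) = I ∗ ∇²Gσ.
rhs = ndimage.gaussian_filter(ndimage.laplace(I, mode="constant"),
                              sigma, mode="constant")
```

Away from the image border (where zero padding departs from the continuous model) the two results agree to floating-point precision; in practice one therefore performs a single convolution with a sampled ∇²Gσ kernel.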

SLIDE 5

LoG Edge Detector

  • The filter kernel for ∇²Gσ is not limited to a 3 × 3 kernel.
  • In fact, because the Gauss function is continuous, we can calculate its exact Laplacian. How?

SLIDE 6

LoG Edge Detector

  • For the first partial derivative with respect to x, we obtain:

∂Gσ/∂x (x, y) = −(x / (2πσ⁴)) · e^(−(x²+y²)/2σ²)

where Gσ is the centered Gauss function

Gσ(x, y) = (1 / (2πσ²)) · exp(−(x²+y²)/2σ²) = (1 / (2πσ²)) · e^(−x²/2σ²) · e^(−y²/2σ²)

SLIDE 7

LoG Edge Detector

  • We then repeat the derivative for x and y and obtain the LoG as follows:

∇²Gσ(x, y) = ((x² + y² − 2σ²) / (2πσ⁶)) · e^(−(x²+y²)/2σ²)

  • The LoG is also known as the Mexican hat function. In fact, it is an “inverted Mexican hat”. The zero-crossings define the edges.
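The formula above can be sampled directly on a pixel grid. The following is a sketch (the function name and parameters are my own, not from the slides):

```python
import numpy as np

def log_kernel(sigma, k):
    """Sample ∇²Gσ (the inverted Mexican hat) on a (2k+1) x (2k+1) grid."""
    ax = np.arange(-k, k + 1, dtype=float)
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    # (x² + y² − 2σ²) / (2πσ⁶) · exp(−(x²+y²)/2σ²)
    return (r2 - 2 * sigma ** 2) / (2 * np.pi * sigma ** 6) \
        * np.exp(-r2 / (2 * sigma ** 2))
```

The kernel is negative at the center (the "inverted hat") and turns positive beyond the zero-crossing circle x² + y² = 2σ².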
SLIDE 8

LoG Edge Detector

(Figure: the LoG kernel / Mexican hat function. Source: R. Klette)

SLIDE 9

Sampling the LoG Kernel

  • Let’s sample this Laplacian into a (2k+1) × (2k+1) filter kernel for an appropriate value of k.
  • But what is an appropriate value for k?
  • We start with estimating the standard deviation σ for the given class of input images, and an appropriate value of k follows from this.

SLIDE 10

Sampling the LoG Kernel

  • By setting y = 0 and solving ∇²Gσ(x, y) = 0, we obtain the two zero-crossings on the x axis as the roots of ∇²Gσ, that is:

x₁ = −√2·σ and x₂ = +√2·σ

  • Consider a parameter w given by:

w = x₂ − x₁ = 2√2·σ

SLIDE 11

Parameter w

(Figure illustrating the zero-crossing distance w. Source: R. Klette)

SLIDE 12

Sampling the LoG Kernel

  • For representing the Mexican hat function properly by samples, it is proposed to use a window of size 3w × 3w = 6√2·σ × 6√2·σ.
  • In conclusion, we have that:

(2k+1) × (2k+1) = ceil(6√2·σ) × ceil(6√2·σ)

where ceil denotes the smallest integer equal to or larger than the argument.

SLIDE 13

Sampling the LoG Kernel

  • The value of σ needs to be estimated for the given image data.
  • Smoothing an image with a very “narrow” (i.e. σ < 1) Gauss function does not make much sense.
  • So, let us consider σ ≥ 1. The smallest kernel (σ = 1, thus 3w = 6√2 ≈ 8.485) will be of size 9 × 9 (i.e., k = 4).
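The window-size rule can be written as a small helper. This is a sketch under one extra assumption not stated on the slides: when ceil(6√2·σ) comes out even, we round up once more so the kernel has a center pixel.

```python
import math

def log_kernel_side(sigma):
    """Side length 2k+1 = ceil(6*sqrt(2)*sigma) of the sampled LoG kernel,
    rounded up to the next odd integer (assumption: a center pixel is needed)."""
    side = math.ceil(6 * math.sqrt(2) * sigma)
    return side if side % 2 == 1 else side + 1
```

For σ = 1 this gives 9, matching the 9 × 9 kernel (k = 4) above.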

SLIDE 14

LoG Scale Space

Consider again this Gaussian scale space with six layers: σ = 0.5, 1, 2, 4, 8, 16.

Source: R. Klette

SLIDE 15

LoG Scale Space

Here we have the resulting images after computing the Laplacians of those six layers (σ = 0.5, 1, 2, 4, 8, 16).

Source: R. Klette

SLIDE 16

LoG Scale Space

  • This is an example of a LoG scale space.
  • As in a Gaussian scale space, each layer is defined by the scale σ, the standard deviation used in the Gauss function.
  • We can generate subsequent layers when starting at an initial scale σ and using subsequent scales aⁿ·σ, for a > 1 and n = 0, 1, ..., m.

SLIDE 17

Difference of Gaussians (DoG)

  • The difference of Gaussians (DoG) operator is a common approximation of the LoG operator, justified by reduced run time.
  • Consider again the equation below, which defines a centered (i.e. zero-mean) Gauss function Gσ:

Gσ(x, y) = (1 / (2πσ²)) · exp(−(x²+y²)/2σ²)

SLIDE 18

Difference of Gaussians (DoG)

  • The DoG is defined by an initial scale σ and a scaling factor a > 1 as follows:

Dσ,a(x, y) = L(x, y, σ) − L(x, y, aσ)

  • So, it is the difference between a blurred copy of image I and an even more blurred copy of I.
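The definition translates directly into two Gaussian blurs and a subtraction. A minimal sketch (function name is my own):

```python
import numpy as np
from scipy import ndimage

def dog(I, sigma, a=1.6):
    """Dσ,a = L(·, σ) − L(·, aσ): a blurred copy of I minus an even
    more blurred copy of I."""
    return (ndimage.gaussian_filter(I, sigma)
            - ndimage.gaussian_filter(I, a * sigma))
```

On a vertical step edge, the response changes sign across the edge, which is exactly where the zero-crossing (the detected edge) lies.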

SLIDE 19

Difference of Gaussians (DoG)

  • As for LoG, edges (following the step-edge model) are detected at zero-crossings.
  • Why is DoG an approximation of LoG? Because:

∇²Gσ(x, y) ≈ (Gaσ(x, y) − Gσ(x, y)) / ((a − 1)σ²)

with a = 1.6 as a recommended parameter.
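This approximation can be checked numerically: on a test image (the random smooth image below is an arbitrary assumption of mine), the LoG response and the scaled DoG response are strongly correlated.

```python
import numpy as np
from scipy import ndimage

# Arbitrary smooth test image.
rng = np.random.default_rng(0)
I = ndimage.gaussian_filter(rng.standard_normal((128, 128)), 2.0)
sigma, a = 2.0, 1.6

# LoG response: convolution with ∇²Gσ.
log_resp = ndimage.gaussian_laplace(I, sigma)
# (G_{aσ} − G_σ) ∗ I, divided by (a − 1)σ² as in the approximation above.
dog_scaled = (ndimage.gaussian_filter(I, a * sigma)
              - ndimage.gaussian_filter(I, sigma)) / ((a - 1) * sigma ** 2)

corr = np.corrcoef(log_resp.ravel(), dog_scaled.ravel())[0, 1]
```

The correlation comes out close to 1, which is why DoG can stand in for LoG at a fraction of the cost when the Gaussian pyramid layers are already available.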

SLIDE 20

DoG Scale Space

  • Different scales σ produce layers Dσ,a in the DoG scale space.
  • Let’s see the next figure for a comparison of three layers in the DoG scale space.
  • The scaling factor used is a = 1.6.

SLIDE 21

DoG Scale Space

(Figure: LoG at σ = 0.5 compared with DoG at σ = 0.5, a = 1.6. Source: R. Klette)

SLIDE 22

DoG Scale Space

(Figure: LoG at σ = 0.5 compared with DoG at σ = 0.5, a = (1.6)³. Source: R. Klette)

SLIDE 23

DoG Scale Space

(Figure: LoG at σ = 0.5 compared with DoG at σ = 0.5, a = (1.6)⁵. Source: R. Klette)

SLIDE 24

Embedded Confidence

  • A confidence measure is quantified information derived from calculated data, to be used for deciding about the existence of a particular feature.
  • If the calculated data match the underlying model of the feature detector reasonably well, then this should correspond to high values of the measure.

SLIDE 25

The Meer-Georgescu Algorithm

  • The Meer-Georgescu algorithm detects edges while applying a confidence measure based on the assumption of the validity of the step-edge model.
  • Four parameters are considered in this method.

SLIDE 26

The Meer-Georgescu Algorithm

  • Specifically, for a gradient vector g(p) = ∇I(x, y) at a pixel location p = (x, y), those parameters are:
  • 1. The estimated gradient magnitude ‖g(p)‖₂;
  • 2. The estimated gradient direction θ(p);
  • 3. An edge confidence value η(p);
  • 4. The percentile ρk of the cumulative gradient-magnitude distribution.

SLIDE 27

Computing the Gradient

  • Let A be a (2k+1) × (2k+1) matrix representation of a window centered at the current pixel location p in input image I.
  • Moreover, let W = s·dᵀ be a (2k+1) × (2k+1) matrix of weights, obtained as the product of the two vectors d = [d1, ..., d2k+1]ᵀ and s = [s1, ..., s2k+1]ᵀ.

SLIDE 28

Computing the Gradient

  • Vectors d and s meet the following requirements:
  • 1. Both are unit vectors in the L1-norm, i.e. ‖d‖₁ = 1 and ‖s‖₁ = 1;
  • 2. d is an antisymmetric vector, i.e. d1 = −d2k+1, d2 = −d2k, …, dk+1 = 0;
  • 3. s is a symmetric vector, i.e. s1 = s2k+1 ≤ s2 = s2k ≤ ... ≤ sk+1.

SLIDE 29

Computing the Gradient

  • For example, the vectors d and s below define a 5 × 5 matrix W:

d = [−0.125, −0.25, 0, 0.25, 0.125]ᵀ
s = [0.0625, 0.25, 0.375, 0.25, 0.0625]ᵀ

W = s·dᵀ =
  −0.0078  −0.0156  0  0.0156  0.0078
  −0.0312  −0.0625  0  0.0625  0.0312
  −0.0469  −0.0938  0  0.0938  0.0469
  −0.0312  −0.0625  0  0.0625  0.0312
  −0.0078  −0.0156  0  0.0156  0.0078
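The weight matrix is just an outer product, so the example can be reproduced in a couple of lines (a sketch using the slide's example vectors):

```python
import numpy as np

# The example vectors from the slide.
d = np.array([-0.125, -0.25, 0.0, 0.25, 0.125])
s = np.array([0.0625, 0.25, 0.375, 0.25, 0.0625])

W = np.outer(s, d)  # W = s dᵀ: entry W[i, j] = s[i] * d[j]
```

Each column of W carries the sign of the corresponding entry of d (antisymmetric left/right), and the largest magnitudes sit in the middle row because s peaks at its center entry.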

SLIDE 30

Computing the Gradient

  • Let aᵢ be the i-th row of matrix A. By using:

d1 = Tr(W·A) = Tr(s·dᵀ·A)
d2 = Tr(Wᵀ·A) = sᵀ·A·d = Σ_{i=1}^{2k+1} sᵢ (dᵀ·aᵢ)

we obtain the first two parameters used in the algorithm:

g(p) = √(d1² + d2²)
θ(p) = arctan(d1 / d2)
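These trace formulas are short enough to sketch directly (the function name is my own; `np.arctan2` is used as a quadrant-aware stand-in for arctan(d1/d2)):

```python
import numpy as np

def gradient_params(A, s, d):
    """Estimated gradient magnitude and direction from window A,
    following d1 = Tr(WA), d2 = Tr(WᵀA) = sᵀAd with W = s dᵀ."""
    W = np.outer(s, d)
    d1 = np.trace(W @ A)        # Tr(W A)  = dᵀ A s
    d2 = np.trace(W.T @ A)      # Tr(Wᵀ A) = sᵀ A d
    mag = np.hypot(d1, d2)      # sqrt(d1² + d2²)
    theta = np.arctan2(d1, d2)  # arctan(d1 / d2)
    return mag, theta
```

For a window containing an ideal step along the x axis, d1 vanishes and the direction comes out as θ = 0, as the antisymmetric/symmetric construction of d and s intends.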

SLIDE 31

Generating the ρη Diagram

  • Let Aideal be a (2k+1) × (2k+1) matrix representing a template of an ideal step edge having the gradient direction θ(p).
  • The value η(p) = Tr(Aidealᵀ·A) specifies the proposed confidence measure.

SLIDE 32

Generating the ρη Diagram

  • The values in A and Aideal are normalized such that 0 ≤ η(p) ≤ 1, with η(p) = 1 in case of a perfect match with the ideal step edge.
  • Let g1 < ... < gk < ... < gN be the ordered list of distinct (rounded) gradient magnitudes in image I, with cumulative distribution values (probabilities):

ρk = Prob(g ≤ gk), for 1 ≤ k ≤ N.

SLIDE 33

Generating the ρη Diagram

  • For a given pixel p in I, assume that gk is the closest real to its edge magnitude g(p); then we have the percentile ρ(p) = ρk.
  • Altogether, for each pixel p, we have a percentile ρ(p) and a confidence η(p), both between 0 and 1.

SLIDE 34

Generating the ρη Diagram

  • These values ρ(p) and η(p) for any pixel p in I define a 2D ρη diagram for image I.

We consider curves in the ρη space given in implicit form. For example, this can be just a vertical line passing through the square, or an elliptical arc.

Source: R. Klette

SLIDE 35

Non-Maxima Suppression

  • For the pixel p, determine neighbors q1 and q2 in the gradient direction, and their ρ and η values by interpolating values at adjacent pixel locations.

(Figure: a 3 × 3 neighborhood of pixel location p and virtual neighbors q1 and q2 in the estimated gradient direction.)

SLIDE 36

Non-Maxima Suppression

  • A pixel location p describes a maximum with respect to a curve X in ρη space if both virtual neighbors q1 and q2 have a negative sign for X.
  • We suppress non-maxima by using this selected curve X, and the remaining pixels are the candidates for the edge map.

SLIDE 37

Hysteresis Thresholding

  • Hysteresis thresholding is a general technique for making a decision in a process based on previously obtained results.
  • In the Meer-Georgescu algorithm, hysteresis thresholding is based on two curves L and H in the ρη space, called the two hysteresis thresholds.

SLIDE 38

Hysteresis Thresholding

  • Those curves are allowed to intersect.

Source: R. Klette

SLIDE 39

Hysteresis Thresholding

  • At pixel p we have the values ρ(p) and η(p). It stays in the edge map if:
  • 1. L(ρ, η) > 0 and H(ρ, η) ≥ 0, or
  • 2. It is adjacent to a pixel in the edge map and satisfies L(ρ, η)·H(ρ, η) < 0.
  • The second condition (2) describes the hysteresis thresholding process; it is applied recursively.
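The two conditions reduce to a small decision rule per pixel. This is a sketch of that rule only (the function and argument names are my own); the recursive propagation over neighbors, and the curves L and H themselves, are assumed to be supplied by the surrounding algorithm:

```python
def keep_in_edge_map(L_val, H_val, adjacent_to_edge_pixel):
    """L_val = L(ρ(p), η(p)) and H_val = H(ρ(p), η(p)) are the signed
    evaluations of the two hysteresis-threshold curves at pixel p."""
    # Condition 1: p is on the positive side of L and not below H.
    if L_val > 0 and H_val >= 0:
        return True
    # Condition 2 (applied recursively): p lies between the two curves
    # (opposite signs) and touches a pixel already in the edge map.
    return adjacent_to_edge_pixel and L_val * H_val < 0
```

A pixel between the curves is thus accepted only if it connects to a strong edge, which is what makes hysteresis close gaps without promoting isolated weak responses.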

SLIDE 40

The Meer-Georgescu Algorithm

(Figure: results with a larger filter kernel vs. a smaller filter kernel.)

SLIDE 41

Next Lecture

  • Basic Image Topology
    4- and 8-Adjacency for Binary Images. Topologically Sound Pixel Adjacency. Border Tracing.
  • Suggested reading
    Section 3.1 of the textbook.