The Gray Code Kernels - PowerPoint PPT Presentation


  1. The Gray Code Kernels. Gil Ben-Artzi (Bar-Ilan University), Hagit Hel-Or (Haifa University), Yacov Hel-Or (IDC).

  2. Motivation • Image filtering with a successive set of kernels is very common in many applications: – Pattern classification – Pattern matching – Texture analysis – Image denoising • In some applications, applying a large set of filter kernels is prohibitive due to time limitations.

  3. Example 1: Pattern Detection • Pattern detection: given a pattern subjected to some type of deformation, detect occurrences of this pattern in an image. • Detection should be: – Accurate (a small number of mis-detections/false alarms). – As fast as possible.

  4. Pattern Detection as a Classification Problem • Pattern detection requires a separation between two classes: a. the Target class; b. the Clutter class. • A classifier maps the extracted features z_1, z_2, ..., z_n to a label: C: {z_1, ..., z_n} → {+1, -1}. • The detection complexity is dominated by the feature extraction.

  5. Feature Selection • To optimize classification complexity, the feature set should be selected according to the following criteria: 1. Informative: high “separation” power. 2. Fast to apply.

  6. Example 2: Pattern Matching • A known pattern is sought in an image. • The pattern may appear at any location in the image. • A degenerate classification problem.

  7. The Euclidean Distance • For a 2D pattern P sought in image I, the distance at offset (u, v) is d_E(u, v) = Σ_{x,y ∈ N} [I(x - u, y - v) - P(x, y)]². • For the space-time (3D) case, d_E(u, v, w) = Σ_{x,y,t ∈ N} [I(x - u, y - v, t - w) - P(x, y, t)]².
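The 2D formula above can be sketched directly. This is a minimal illustration (not the authors' code); the names `I` and `P` follow the slide's notation, and the loop is the naive per-window sum of squared differences.

```python
import numpy as np

def ssd_distance_map(I, P):
    """d_E(u, v) = sum_{x,y} (I(u+x, v+y) - P(x, y))^2 for every top-left (u, v)."""
    k1, k2 = P.shape
    H, W = I.shape
    out = np.empty((H - k1 + 1, W - k2 + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            window = I[u:u + k1, v:v + k2]
            out[u, v] = np.sum((window - P) ** 2)
    return out

# Toy example: the pattern occurs exactly at offset (1, 1), so d[1, 1] == 0.
I = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
P = np.array([[5., 6.], [8., 9.]])
d = ssd_distance_map(I, P)
```

This is exactly the O(k²)-per-pixel "naive" cost quoted on the next slide, which motivates the projection-kernel bounds that follow.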

  8. Complexity (2D case)

  Method   Average # integer ops per pixel   Space   Integer arithmetic   Run time (1K×1K image, 32×32 pattern, PIII 1.8 GHz)
  Naive    +: 2k²,       *: k²               n²      Yes                  5.14 seconds
  Fourier  +: 36 log n,  *: 24 log n         n²      No                   4.3 seconds

  Far from real-time performance.

  9. Suggested Solution: Bound Distances using Projection Kernels (Hel-Or² 03) • Representing an image window q and the pattern p as points in R^{k×k}: d_E(p, q) = ||p - q||². • If p and q are projected onto a kernel u, it follows from the Cauchy-Schwarz inequality that d_E(p, q) ≥ |u|^{-2} d_E(pᵀu, qᵀu).

  10. Distance Measure in Sub-space (Cont.) • If q and p are projected onto a set of kernels [U] = {u_1, ..., u_r}, the per-kernel bounds accumulate: d_E(p, q) ≥ Σ_{k=1}^{r} |u_k|^{-2} d_E(pᵀu_k, qᵀu_k).
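A quick numeric check of the accumulated bound above (a sketch, not from the slides): projecting onto orthogonal kernels gives a running lower bound on the Euclidean distance that becomes exact once a full basis has been used. The Kronecker-built ±1 matrix here is an assumption for illustration; any orthogonal kernel set works.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = rng.standard_normal(16), rng.standard_normal(16)
d_true = np.sum((p - q) ** 2)

# Orthogonal projection kernels: rows of a 16x16 +/-1 matrix built by the
# Kronecker recursion H -> [[H, H], [H, -H]].
H = np.array([[1.]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])

lb = 0.0
for u in H:                        # accumulate the per-kernel contributions
    lb += (p @ u - q @ u) ** 2 / (u @ u)
    assert lb <= d_true + 1e-9     # a lower bound at every step
# After all 16 orthogonal kernels the bound is tight (Parseval): lb == d_true.
```

The practical point of the slide is that for natural images the bound gets close to d_E after only a few kernels, so most windows can be rejected early.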

  11. How can we Expedite the Distance Calculations? Two necessary requirements: 1. Choose informative projecting kernels [U], having a high probability of being parallel to the vector p - q. 2. Choose projecting kernels that are fast to apply. (Figure: distribution of natural-image windows and the first kernel u_1.)

  12. Our Goal • Design a set of filter kernels with the following properties: – “Informative” in some sense. – Efficient to apply successively to images. – Consists of a large variety of kernels. – Forms a basis, thus allowing approximation of any set of filter kernels.

  13. Fast Filter Kernels • Previous work (average/difference kernels): – Summed-area tables / Franklin [1984]. – Boxlets / Simard et al. [1999]. – Integral image / Viola & Jones [2001]. • Limitations: – A limited variety of filter kernels. – Approximation of large sets might be inefficient. – Do not form a basis, and are thus inefficient for composing other kernels.

  14. Our Work is Based Upon • Real-time projection kernels [Hel-Or² 03]: – A set of Walsh-Hadamard basis kernels. – Each window in a natural image is closely spanned by the first few kernel vectors. – Can be applied very fast in a recursive manner.

  15. The Walsh-Hadamard Kernels (Figure: the set of Walsh-Hadamard kernels.)
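As a sketch of what the figure shows (assuming the standard Sylvester/Kronecker construction, which the tree slides below are consistent with), the 2D Walsh-Hadamard kernels of size k×k are the outer products of the rows of the k×k Hadamard matrix; every entry is +1 or -1, and the kernels are mutually orthogonal.

```python
import numpy as np

def hadamard(k):
    """k x k Hadamard matrix (k a power of two) via the Kronecker recursion."""
    H = np.array([[1]])
    while H.shape[0] < k:
        H = np.block([[H, H], [H, -H]])
    return H

def wh_kernels_2d(k):
    """All k*k two-dimensional Walsh-Hadamard kernels of size k x k."""
    H = hadamard(k)
    return [np.outer(H[i], H[j]) for i in range(k) for j in range(k)]

kernels = wh_kernels_2d(4)
# 16 kernels of size 4x4; stacked flat, they satisfy U U^T = 16 I (orthogonality).
flat = np.array([u.ravel() for u in kernels])
```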

  16. Walsh-Hadamard vs. Standard Basis • The lower bound on the distance value (in %) vs. the number of projections, for the standard basis and for the Walsh-Hadamard basis, averaged over 100 pattern-image pairs of size 256×256.

  17. The Walsh-Hadamard Tree (1D case) (Figure: the binary tree of +/- kernel sign patterns, from the length-1 root down to the full set of Walsh-Hadamard kernels.)

  18. The Walsh-Hadamard Tree - Example (Figure: a numeric example of projecting the 1D signal 15, 6, 10, 8, 8, 5, 10, 1 down the Walsh-Hadamard tree.)

  19. Properties • Descending from a node to its child requires one addition operation per pixel. • The depth of the tree is log k, where k is the kernel's size. • Successive application of WH kernels requires between O(1) and O(log k) ops per kernel per pixel. • Requires n log k memory. • Linear scanning of the tree leaves.
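The one-addition-per-pixel descent can be sketched in 1D (a sketch under one common sign/ordering convention, not the authors' exact layout): each tree level doubles the kernel length, and a child's filtered signal is its parent's signal plus or minus a shifted copy of itself.

```python
import numpy as np

def wh_tree_1d(signal, k):
    """Correlate `signal` with all k Walsh-Hadamard kernels of length k
    (k a power of two) using log2(k) levels of shifted additions."""
    level = [np.asarray(signal, dtype=float)]    # root: kernel of length 1
    m = 1                                        # current kernel length
    while m < k:
        nxt = []
        for a in level:
            shifted = np.concatenate([np.zeros(m), a[:-m]])  # a(x - m)
            nxt.append(a + shifted)   # child with prefix [f  f]
            nxt.append(a - shifted)   # child with prefix [f -f]
        level = nxt
        m *= 2
    return level    # k signals: one addition per pixel per tree level

# The signal from the example slide above:
sig = np.array([15., 6., 10., 8., 8., 5., 10., 1.])
proj = wh_tree_1d(sig, 4)
# proj[0][x] is the correlation with the all-ones kernel [+ + + +] ending at x,
# e.g. proj[0][3] == 15 + 6 + 10 + 8 == 39.
```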

  20. Walsh-Hadamard Tree (2D) • For the 2D case, the projection is performed in a similar manner, where the tree depth is 2 log k. • The complexity is calculated accordingly. (Figure: construction tree for the 2×2 basis.)

  21. WH for Pattern Matching – Iteratively apply Walsh-Hadamard kernels to each window w_i in the image. – At each iteration, and for each w_i, calculate a lower bound LB_i for |p - w_i|². – If the lower bound LB_i is greater than a pre-defined threshold, reject the window w_i and ignore it in further projections.
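The rejection loop above can be sketched as follows (names and data layout are assumptions for illustration; windows are pre-flattened rather than filtered through the WH tree, so this shows the bounding logic, not the fast filtering):

```python
import numpy as np

def match_with_rejection(windows, pattern, kernels, threshold):
    """windows: (num_windows, d) flattened windows; pattern: (d,);
    kernels: rows of orthogonal projection kernels."""
    candidates = np.arange(windows.shape[0])
    lb = np.zeros(windows.shape[0])
    for u in kernels:
        # one more projection tightens each surviving window's lower bound
        lb[candidates] += (windows[candidates] @ u - pattern @ u) ** 2 / (u @ u)
        candidates = candidates[lb[candidates] <= threshold]
        if candidates.size == 0:
            break
    return candidates

# Toy usage: three flattened 2x2 "windows" and the four 2x2 WH kernels.
pattern = np.array([1., 2., 3., 4.])
windows = np.array([[1., 2., 3., 4.],
                    [4., 3., 2., 1.],
                    [0., 0., 0., 0.]])
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]], dtype=float)
survivors = match_with_rejection(windows, pattern, H, threshold=1e-9)
# Only window 0 matches the pattern exactly.
```

The example slides that follow show exactly this behavior on a real image: 65536 candidates drop to 563, 16, and finally 1 after three projections.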

  22. Example • Sought pattern. • Initial image: 65536 candidates.

  23. After the 1st projection: 563 candidates.

  24. After the 2nd projection: 16 candidates.

  25. After the 3rd projection: 1 candidate.

  26. Percentage of windows remaining following each projection, averaged over 100 pattern-image pairs. Image size = 256x256, pattern size = 16x16.

  27. Example with Noise • Original image, image at noise level = 40, and the detected patterns. • Number of projections required to find all patterns, as a function of noise level (threshold set to minimum).

  28. (Figure: percentage of windows remaining vs. projection number, at noise levels 0-4.) Percentage of windows remaining following each projection, at various noise levels. Image size = 256x256, pattern size = 16x16.

  29. DC-invariant Pattern Matching • Original image, image with an illumination gradient added, and the detected patterns. • Five projections are required to find all 10 patterns (threshold set to minimum).

  30. Complexity (2D case)

  Method   Average # integer ops per pixel   Space      Integer arithmetic   Run time (1K×1K image, 32×32 pattern, PIII 1.8 GHz)
  Naive    +: 2k²,       *: k²               n²         Yes                  4.86 seconds
  Fourier  +: 36 log n,  *: 24 log n         n²         No                   3.5 seconds
  New      +: 2 log k + ε                    n² log k   Yes                  78 msec

  31. Advantages – WH kernels can be applied very fast. – Projections are performed with additions/subtractions only (no multiplications). – Integer operations (3 times faster for additions). – Pattern matching can be performed at video rate. – Can easily be extended to higher dimensions.

  32. Limitations – Limited set: only the Walsh-Hadamard kernels. – Each kernel is applied in O(1)-O(d log k) operations. – Limited order of kernels. – Limited to dyadic-sized kernels. – Requires maintaining d log k images in memory.

  33. The Gray Code Kernels (GCK) • Allow convolution with a large set of kernels in O(1) per pixel: – Independent of the kernel size. – Independent of the kernel dimension. – Allows various computation orders of kernels. – Allows kernel sizes other than 2^n. – Requires maintaining only 2 images in memory.

  34. The Gray Code Kernels - Definitions (1D) • Input: 1. A seed vector s. 2. A set of coefficients α_1, α_2, ..., α_k ∈ {+1, -1}. • Output: a set of recursively built kernels v_0, v_1, v_2, v_3, ... (Figure: the recursive construction from s, branching by α_1, α_2, α_3.)

  35. GCK - Formal Definitions • V_s^(0) = {s}. • V_s^(k) = { [α_k v^(k-1), v^(k-1)] : v^(k-1) ∈ V_s^(k-1), α_k ∈ {+1, -1} }, i.e. each level-k kernel concatenates a ±1-weighted copy of a level-(k-1) kernel with that kernel itself.
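The recursive definition above can be sketched directly (a sketch for the 1D case; the function name and the choice of keeping the full set rather than one Gray-code-ordered path are illustration choices, not from the slides):

```python
import numpy as np

def gck_set(seed, t):
    """All Gray Code Kernels after t recursion steps from a 1D `seed`:
    each step replaces every kernel v by [alpha * v, v] for alpha = +1, -1,
    doubling both the number of kernels and their length."""
    kernels = [np.asarray(seed, dtype=float)]
    for _ in range(t):
        kernels = [np.concatenate([alpha * v, v])
                   for v in kernels for alpha in (+1, -1)]
    return kernels

# With seed [1], three steps yield 8 kernels of length 8 (the +/-1 kernels
# that, up to ordering and sign, coincide with the Walsh-Hadamard set).
ks = gck_set([1], 3)
```

The payoff, developed in the rest of the talk, is that consecutive kernels in a Gray-code ordering of this set differ in a single sign, which is what enables the O(1)-per-pixel update between successive filterings.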
