

SLIDE 1

Segmentation-level Fusion for Iris Recognition

Peter Wild1,3, Heinz Hofbauer2, James Ferryman1, Andreas Uhl2

1 School of Systems Engineering, University of Reading, Reading RG6 6AY, UK. 2 Dept. of Computer Sciences, University of Salzburg, 5020 Salzburg, Austria. 3 AIT Austrian Institute of Technology GmbH, 2444 Seibersdorf, Austria.

peter.wild@ait.ac.at, {hhofbaue, uhl}@cosy.sbg.ac.at, j.m.ferryman@reading.ac.uk

14th Int’l Conf. of the Biometrics Special Interest Group (BIOSIG)

September 2015

  • P. Wild et al.: Segmentation-level Fusion for Iris Recognition

1/20

SLIDE 2

Outline

1. Introduction
2. Multi-segmentation Fusion Methods
3. Experimental Study
4. Conclusion


SLIDE 3

Motivation

Challenge
- Existing: fusion methods at data/feature, score, and rank/decision level.
- Widely ignored: fusion at the normalisation/segmentation level, prior to feature extraction.
- Missing: any standard for the interchange of segmentation results.

Ambition
- Motivation 1: better accuracy under less invasive recording conditions?
- Motivation 2: a potentially faster alternative to multi-algorithm fusion?
- Motivation 3: improved understanding of the types of segmentation errors.

Impact

Investigate and suggest methods for effective multi-segmentation fusion, tested on public datasets with open source software.


SLIDE 4

Related Work

Super-resolution [Huang et al. BMVC’03]
- among the first data-level fusion approaches for iris;
- presents a Markov-network, learning-based fusion method to enhance the resolution of iris images.

Iris image fusion [Hollingsworth et al. TIFS’09]
- combines high-resolution images from multiple frames.

[Jillela et al. WACV’11]
- image-level fusion with the Principal Components Transform.

[Llano et al. ICB’15]
- PCA vs. Laplacian Pyramid and Exponential Mean fusion.

Segmentation fusion [Uhl et al. ICIAR’13]
- proof-of-concept with human (manual) ground-truth segmentation: 97.46%-97.64% GAR at 0.01% FAR;
- no automated algorithms.


SLIDE 5

Fusion Framework

Pipeline: input image I → segmentation algorithms 1, ..., k → fusion → rubbersheet transform → iris texture and noise mask. Each segmentation algorithm i yields boundaries and eyelid curves {P_i, L_i, E^U_i, E^L_i}; fusion produces a single set {P, L, E^U, E^L} and noise mask N.

Input: inner/outer boundaries P, L : [0, 2π) → [0, m] × [0, n].
Output 1: refined boundaries for the “rubbersheet” mapping R(θ, r) := (1 − r) · P(θ) + r · L(θ).
Output 2: texture and noise masks T, M : [0, 2π) × [0, 1] → C, where C is the target colour space, M = N ◦ R and T = I ◦ R for the original n × m image I and noise mask N.
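The rubbersheet mapping R(θ, r) := (1 − r) · P(θ) + r · L(θ) can be sketched as below; a minimal illustration assuming grey-level images, boundaries given as callables, and nearest-neighbour sampling (the function name and resolutions are illustrative, not taken from any toolkit mentioned in the slides):

```python
import numpy as np

def rubbersheet(image, pupil, limbus, angles=256, radii=32):
    """Daugman-style rubbersheet normalisation (sketch).

    pupil, limbus: callables mapping an angle theta in [0, 2*pi)
    to an (x, y) boundary point; image: 2-D grey-level array.
    Returns the unwrapped (radii x angles) texture strip."""
    texture = np.zeros((radii, angles))
    thetas = np.linspace(0.0, 2.0 * np.pi, angles, endpoint=False)
    for j, theta in enumerate(thetas):
        px, py = pupil(theta)   # inner boundary P(theta)
        lx, ly = limbus(theta)  # outer boundary L(theta)
        for i in range(radii):
            r = i / (radii - 1)
            # R(theta, r) = (1 - r) * P(theta) + r * L(theta)
            x = (1.0 - r) * px + r * lx
            y = (1.0 - r) * py + r * ly
            texture[i, j] = image[int(round(y)) % image.shape[0],
                                  int(round(x)) % image.shape[1]]
    return texture
```

The noise mask N is unwrapped with the same mapping (M = N ◦ R), so a fused segmentation only needs to supply one pair of boundaries.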


SLIDE 6

Investigated Questions

1. Does the combination of automated iris segmentation results yield more accurate results than each of the employed original segmentation algorithms?

2. How does the choice of database and segmentation algorithms impact iris segmentation fusion?

3. How do outliers impact overall recognition accuracy, and how do ground-truth-based vs. recognition-based evaluations relate to each other?

Contribution

Analysis of reference methods for iris segmentation-level fusion, considering both ground-truth-based and recognition-based assessment.


SLIDE 7

Error Measures

Ground-truth evaluation: assessing the segmentation noise mask, using the measures suggested by the Noisy Iris Challenge Evaluation - Part I (NICE.I) and the F-measure [Hofbauer et al. ICPR’14]:

E1 := (1/k) · Σ_{i=1..k} (fp_i + fn_i) / (m·n);
E2 := (1/2) · (1/k) · Σ_{i=1..k} fp_i / (fp_i + tn_i) + (1/2) · (1/k) · Σ_{i=1..k} fn_i / (fn_i + tp_i)   (1)

F-measure = F1 := (1/k) · Σ_{i=1..k} tp_i / (tp_i + (1/2)(fn_i + fp_i))   (2)

Recognition-based evaluation: accounts for feature-based tolerance of false segmentations. Uses the Equal Error Rate (EER) as the main performance indicator, plus the McNemar test [McNemar, Psy.’47].
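The per-image terms of E1, E2, and F1 can be computed directly from a predicted and a ground-truth binary mask; a minimal sketch (function name is illustrative; averaging over the k images of a dataset is then a plain mean of these per-image values):

```python
import numpy as np

def segmentation_errors(predicted, truth):
    """NICE.I-style errors E1, E2 and the F-measure for one mask pair.

    predicted, truth: boolean arrays of shape (m, n), True = iris pixel.
    Assumes both classes occur in `truth` (otherwise E2 divides by zero)."""
    fp = np.count_nonzero(predicted & ~truth)   # false positives
    fn = np.count_nonzero(~predicted & truth)   # false negatives
    tp = np.count_nonzero(predicted & truth)    # true positives
    tn = np.count_nonzero(~predicted & ~truth)  # true negatives
    e1 = (fp + fn) / predicted.size                    # pixel error over m*n
    e2 = 0.5 * fp / (fp + tn) + 0.5 * fn / (fn + tp)   # balanced fp/fn rates
    f1 = tp / (tp + 0.5 * (fn + fp))                   # F-measure
    return e1, e2, f1
```

Unlike E1, the balanced measure E2 is not dominated by the (large) non-iris background, which is why both are reported on the slides.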


SLIDE 8

Segmentation Fusion

Sum-Rule Interpolation: this fusion rule combines boundary points B_i(θ) of curves B_1, ..., B_k : [0, 2π) → [0, m] × [0, n] into a single boundary B, for pupillary and limbic boundaries, in analogy to the sum rule:

B(θ) := (1/k) · Σ_{i=1..k} B_i(θ)   (3)

Augmented-Model Interpolation: this model combines boundaries B_1, ..., B_k within a jointly applied parametrisation model ModelFit minimising the model error (e.g., Fitzgibbon’s ellipse fitting, or least-squares circular fitting), executed separately for the inner and outer iris boundaries. Models are combined, not only points:

B(θ) := ModelFit(∪_{i=1..k} B_i)(θ)   (4)
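The two rules can be sketched as follows. Eq. (3) is a pointwise average; for Eq. (4), a least-squares (Kåsa) circle fit over the pooled boundary points stands in for ModelFit (the slides also mention Fitzgibbon’s ellipse fit, which is not reproduced here; function names are illustrative):

```python
import numpy as np

def sum_rule(boundaries, theta):
    """Eq. (3): pointwise average of k boundary curves at angle theta.
    boundaries: list of callables theta -> (x, y)."""
    pts = np.array([b(theta) for b in boundaries])
    return pts.mean(axis=0)

def circle_fit(points):
    """Least-squares (Kasa) circle fit as a stand-in ModelFit for Eq. (4):
    one model fitted to the pooled points of all k boundaries.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in LS sense."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), radius
```

The distinction matters: the sum rule averages raw points (and thus averages errors in), while the augmented model lets the joint fit downweight locally inconsistent boundary points.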


SLIDE 9

Iris Scanning and Pruning Process

Overview of the iris scanning and pruning process.

[Figure: horizontal and vertical scan areas with radius bounds µ_r + 2.5σ_r and µ_r − 2.5σ_r around the centre of gravity C_r; outliers marked. (a) With outliers (b) With outliers pruned]

Input: segmentation masks N.
Method: augmented-model interpolation based on a mask scan; N equidistant scan lines are used to generate points. Outlier detection and removal use the centre of gravity C_r (|z-score| > 2.5).
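The pruning step can be sketched as below: compute each candidate point’s radius from the centre of gravity C_r and drop points whose radius deviates from the mean by more than 2.5 standard deviations (function name and the simple mean-based C_r are assumptions for illustration):

```python
import numpy as np

def prune_outliers(points, z_max=2.5):
    """Remove scan points whose radius from the centre of gravity C_r
    has |z-score| > z_max, as in the slide's pruning step (sketch)."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                    # centre of gravity C_r
    radii = np.linalg.norm(pts - center, axis=1)
    mu, sigma = radii.mean(), radii.std()
    if sigma == 0:                               # all points equidistant
        return pts
    keep = np.abs(radii - mu) <= z_max * sigma
    return pts[keep]
```

Note the pruning is radial, so a single grossly wrong boundary point (e.g. from a specular reflection) is removed before the model fit rather than being averaged into it.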


SLIDE 10

Iris Scanning and Pruning Process: Details

A high number of scan lines is desirable (N = 100). If the mask contains holes (noise), they should be closed by a dilate + erode (morphological closing) operation. The OSIRIS algorithm produces masks which extend over the actual boundaries, therefore a restriction step is introduced. The actual mask is generated by fitting an ellipse to the point clouds with a least-squares method.

(a) Original (b) Corrected boundaries (c) Without noise
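The mask-scan step itself can be sketched as below: sample N equidistant horizontal scan lines over a binary mask and keep the first and last iris pixel on each line as candidate boundary points (function name is illustrative; the hole-closing and vertical scans described above are omitted):

```python
import numpy as np

def scan_boundary_points(mask, n_lines=100):
    """Generate candidate boundary points from a binary segmentation mask
    via n_lines equidistant horizontal scan lines (mask-scan sketch)."""
    rows = np.linspace(0, mask.shape[0] - 1, n_lines).astype(int)
    points = []
    for y in np.unique(rows):
        cols = np.flatnonzero(mask[y])
        if cols.size:                      # line intersects the iris region
            points.append((cols[0], y))    # left boundary crossing
            points.append((cols[-1], y))   # right boundary crossing
    return np.array(points)
```

The resulting point cloud is what the pruning and least-squares ellipse fit of the previous slide operate on.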


SLIDE 11

Tested Segmentation Algorithms

CAHT - Contrast Adaptive Hough Trans. [Rathgeb et al. 2012]

traditional sequential (limbic-after-pupillary) method; based on circular HT and contrast-enhancement;

WAHET - Weighted Adaptive HT & ET [Uhl et al. BTAS’12]

two-stage adaptive multi-scale HT segmentation, elliptical;

OSIRIS - Open Source for Iris [Petrovska et al. 2007]

circular HT-based method with boundary refinement;

IFPP - Iterat. Fourier Pulling & Pushing [Uhl et al. ICIAR’12]

iterative Fourier-series approximation and “pulling and pushing”;


SLIDE 12

Impact on Recognition Accuracy

Equal error rate (EER) for combinations using USIT (University of Salzburg Iris Toolkit [Rathgeb et al. 2012]) algorithms Ma (wavelet zero-crossing based) and Masek (1D Log-Gabor):

Casia v4 Interval database

Equal-error rate [%] of Masek:
          CAHT   WAHET  OSIRIS   IFPP
CAHT      1.22    0.92    1.03   1.30
WAHET             1.89    1.02   1.41
OSIRIS                    1.04   1.44
IFPP                             8.10

Equal-error rate [%] of Ma:
          CAHT   WAHET  OSIRIS   IFPP
CAHT      0.99    0.64    0.84   1.17
WAHET             1.72    0.89   1.22
OSIRIS                    0.73   1.53
IFPP                             8.78

IIT Delhi database

Equal-error rate [%] of Masek:
          CAHT   WAHET  OSIRIS   IFPP
CAHT      1.85    3.60    1.65   1.38
WAHET             6.82    3.90   3.70
OSIRIS                    1.40   1.94
IFPP                             3.87

Equal-error rate [%] of Ma:
          CAHT   WAHET  OSIRIS   IFPP
CAHT      1.72    4.06    1.95   1.43
WAHET             7.43    4.86   4.23
OSIRIS                    1.21   2.40
IFPP                             4.36

(Diagonal entries: single-algorithm EER; off-diagonal entries: EER of the fused pair.)


SLIDE 13

Results of the McNemar test, reported as X² values

Casia v4 Interval database

X² statistic for Masek (row: fused with; column: single method):
                    CAHT   WAHET  OSIRIS    IFPP
fused with CAHT        -   24742       8  246149
fused with WAHET    2543       -      13  247450
fused with OSIRIS   1158   22002       -  243734
fused with IFPP      928    8110    3729       -

X² statistic for Ma:
                    CAHT   WAHET  OSIRIS    IFPP
fused with CAHT        -   28739     135  273347
fused with WAHET    3993       -    1649  276351
fused with OSIRIS   1620   15752       -  261445
fused with IFPP     1438    7076   10532       -

IIT Delhi database

X² statistic for Masek:
                    CAHT   WAHET  OSIRIS    IFPP
fused with CAHT        -   49180     169   35918
fused with WAHET   20317       -   42328      24
fused with OSIRIS   1746   27835       -   17116
fused with IFPP     3193   38721    3655       -

X² statistic for Ma:
                    CAHT   WAHET  OSIRIS    IFPP
fused with CAHT        -   21271    4614   61327
fused with WAHET   52945       -   78177      53
fused with OSIRIS    368   10149       -   26311
fused with IFPP     1145   21256   11669       -


SLIDE 14

Results of Segmentation-level Fusion (Recognition)

Segmentation fusion increased performance in 10 out of 24 combination scenarios. Only one setup, IFPP + WAHET, consistently increases performance; only one case, OSIRIS + CAHT using Ma on IIT Delhi, deteriorates the performance of both individual results. McNemar tests using the χ² approximation with the continuity correction proposed by Edwards reveal that the EERs are different (a critical value X² ≥ 6.64 indicates rejection of the null hypothesis with at least 99% significance).
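The test statistic with Edwards’ continuity correction depends only on the two discordant counts; a minimal sketch (variable names are illustrative):

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square with Edwards' continuity correction.

    b: comparisons only the first method decided correctly,
    c: comparisons only the second method decided correctly.
    X^2 >= 6.64 rejects equality at the 99% level (1 d.o.f.)."""
    if b + c == 0:
        return 0.0                       # no discordant pairs at all
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Concordant comparisons (both methods right or both wrong) drop out entirely, which is why the slide’s X² values can be huge: they are driven purely by how often exactly one of the two systems succeeds.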


SLIDE 15

Ground-truth Segmentation Accuracy

Good vs bad segmentation-fusion

Segmentation error [%]:
                       E1 Good   E1 Bad   E2 Good   E2 Bad
Casia v4 Interval database
  CAHT                    1.98     2.76      3.02     4.10
  WAHET (NIR)             2.30     6.05      3.54     8.90
  Fusion (Sum Rule)       1.87     3.85      2.87     5.61
IIT Delhi database
  CAHT                    2.61     5.00      3.48     8.33
  WAHET (NIR)             2.77    15.31      3.73    20.76
  Fusion (Sum Rule)       2.40     9.95      3.23    13.84

Sum Rule segmentation fusion performance on “good” versus “bad” segmentations (split by distance of centres/radii).
E1, E2 test: fusion accuracy on the “good” set improved, while averaging performance for the “bad” set.
F-measure test: fusion exhibits a closer conformity to the ground truth than each individual segmentation algorithm → reduction in outliers.

SLIDE 16

F-Measure Test (Casia-v4-Interval Ground-truth)

[Figure: per-subject F-measure (0.2 to 1.0) on the Casia-v4-Interval ground truth; three panels for IFPP, WAHET, and IFPP+WAHET fusion, plotted over subject ids S1001L01 through S1249R02.]


SLIDE 17

Possible Effects of Combining Masks

Positive Neutral

(a) Shape mismatch correction (b) Boundary mismatch correction (c) Discrepancy due to cut-off iris (d) Matching errors

Negative

(a) Detection flaw (b) Missed boundary (c) Pruning failure


SLIDE 18

Conclusion

Investigated topic

Multi-segmentation fusion using pairwise combinations of the CAHT, WAHET, IFPP and OSIRIS iris segmentation algorithms.

Results

In 10 of 24 cases: autocorrective behaviour (augmented-model fusion); best: 0.64% EER for WAHET+CAHT vs. 0.99% EER for CAHT alone. Correction works better if the iris is undershot rather than overshot; non-convex and misshaped masks can lead to fusion problems; ground-truth evaluations miss the corrective behaviour for outliers.

Next steps

Advanced, sequential approaches with more than two algorithms, taking processing time into account.


SLIDE 19

References

  • F. Alonso-Fernandez and J. Bigun. Quality factors affecting iris segmentation and matching. In Proc. Int’l Conf. on Biometrics (ICB), 2013.

  • H. Hofbauer, F. Alonso-Fernandez, P. Wild, J. Bigun, and A. Uhl. A Ground Truth for Iris Segmentation. In Proc. 22nd Int’l Conf. on Pattern Recognition (ICPR), 2014.

  • E. Llano, J. Vargas, M. García-Vázquez, L. Fuentes, and A. Ramírez-Acosta. Cross-sensor iris verification applying robust fused segmentation algorithms. In Proc. Int’l Conf. on Biometrics (ICB), pages 1–6, 2015.

  • Q. McNemar. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157, 1947.

  • C. Rathgeb, A. Uhl, and P. Wild. Iris Recognition: From Segmentation to Template Security, vol. 59 of Advances in Information Security. Springer, 2012.


SLIDE 20

Thank you for your attention!

Any questions?

This work has been supported by the FastPass project. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 312583. This publication reflects only the authors’ view, and the European Union is not liable for any use that may be made of the information contained therein. No document contained therein may be copied, reproduced or modified in whole or in part for any purpose without written permission from the FastPass Coordinator and acceptance by the Project Consortium.
