

  1. Evaluation of Segmentation Quality via Adaptive Composition of Reference Segmentations
Bo Peng 1, Lei Zhang 2, Xuanqin Mou 3, and Ming-Hsuan Yang 4
1 Southwest Jiaotong University, 2 Hong Kong Polytechnic University, 3 Xi'an Jiaotong University, 4 University of California at Merced

  2. Introduction
Ø What is image segmentation? Extracting “regions” or “boundaries” by labeling the image (illustrated with an example labeling of hard bone and soft tissue).

  3. Introduction
Ø Evaluation of image segmentation quality: reference-based segmentation evaluation compares a machine segmentation with a hand-labeled segmentation (the ground truth).

  4. Introduction
Ø Applications
Ø Performance evaluation of segmentation algorithms.
Ø Proper parameter values can be determined based on reliable quantitative evaluation of image segmentation.

  5. Related work
Ø Variation of Information (VOI): measures the distance between two segmentations in terms of their average conditional entropy:
VOI(S_1, S_2) = H(S_1 \mid S_2) + H(S_2 \mid S_1) = H(S_1) + H(S_2) - 2 I(S_1, S_2)
Ø Segmentation Covering (SC): measures the similarity between segmentations by a weighted average of the overlaps of regions in the two segmentations:
C(S_1 \to S_2) = \frac{1}{N} \sum_{R \in S_1} |R| \cdot \max_{R' \in S_2} \frac{|R \cap R'|}{|R \cup R'|}
1. M. Meila. Comparing clusterings: an axiomatic view. In International Conference on Machine Learning, pages 577-584, 2005.
2. P. Arbelaez, M. Maire, C. C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5):898-916, 2011.
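(Not from the paper.) A minimal Python sketch of the two metrics above, assuming the two segmentations are given as flat numpy arrays of integer region labels; function names and layout are my own.

```python
import numpy as np

def variation_of_information(s1, s2):
    """VOI(S1, S2) = H(S1|S2) + H(S2|S1), computed from the joint label histogram."""
    n = s1.size
    joint = {}
    for a, b in zip(s1, s2):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    p1, p2 = {}, {}
    for (a, b), c in joint.items():
        p1[a] = p1.get(a, 0) + c
        p2[b] = p2.get(b, 0) + c
    voi = 0.0
    for (a, b), c in joint.items():
        # p(a,b) * [ log(p(a)/p(a,b)) + log(p(b)/p(a,b)) ]
        voi += (c / n) * (np.log(p1[a] / c) + np.log(p2[b] / c))
    return voi

def segmentation_covering(s1, s2):
    """C(S1 -> S2): area-weighted best Jaccard overlap of each region of S1 with S2."""
    n = s1.size
    total = 0.0
    for r in np.unique(s1):
        mask_r = (s1 == r)
        best = 0.0
        for rp in np.unique(s2):
            mask_rp = (s2 == rp)
            inter = np.logical_and(mask_r, mask_rp).sum()
            union = np.logical_or(mask_r, mask_rp).sum()
            best = max(best, inter / union)
        total += mask_r.sum() * best
    return total / n
```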

  6. Related work
Ø Global Consistency Error (GCE): measures the degree to which the segmentations S_1 and S_2 agree with each other:
E(S_1, S_2, p_i) = \frac{|R(S_1, p_i) \setminus R(S_2, p_i)|}{|R(S_1, p_i)|}
GCE(S_1, S_2) = \frac{1}{N} \min\left\{ \sum_i E(S_1, S_2, p_i),\; \sum_i E(S_2, S_1, p_i) \right\}
Ø F-measure: a combination of precision and recall,
F = \frac{PR}{\alpha R + (1 - \alpha) P}
D. Martin. An Empirical Approach to Grouping and Segmentation. PhD thesis, EECS Department, University of California, Berkeley, 2002.
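(Not from the paper.) A sketch of these two definitions under the same flat-label-map assumption as above; the O(N^2) pixel loop is kept for clarity rather than speed.

```python
import numpy as np

def global_consistency_error(s1, s2):
    """GCE(S1, S2) = (1/N) * min{ sum_i E(S1,S2,p_i), sum_i E(S2,S1,p_i) }."""
    n = s1.size
    def local_errors(a, b):
        err = np.empty(n)
        for i in range(n):
            ra = (a == a[i])          # region of segmentation a containing pixel i
            rb = (b == b[i])          # region of segmentation b containing pixel i
            err[i] = np.logical_and(ra, ~rb).sum() / ra.sum()
        return err
    return min(local_errors(s1, s2).sum(), local_errors(s2, s1).sum()) / n

def f_measure(precision, recall, alpha=0.5):
    """F = P*R / (alpha*R + (1 - alpha)*P)."""
    return precision * recall / (alpha * recall + (1.0 - alpha) * precision)
```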

  7. Related work
Ø Global comparison strategy: elements (e.g., pixels) from one segmentation are fully compared with those of another segmentation (i.e., the ground truth).
Ø The human visual system (HVS) is highly adapted to extracting structural information from natural scenes.
Ø Human observers may pay different amounts of attention to different parts of an image.
Ø Ground truths of the same image therefore exhibit various granularities in the object parts, which makes them rarely identical in the global view yet highly consistent in the local structures.

  8. Motivation
An illustrative comparison between a machine segmentation and segmentations labeled by humans.

  9. Proposed evaluation framework

  10. Composing Reference Segmentations
Seek the labeling l that minimizes the energy:
E(l) = \sum_{g_j} D_{g_j}(l_{g_j}) + \lambda \sum_{\{g_j, g_{j'}\} \in M} u_{\{g_j, g_{j'}\}}\, T(l_{g_j} \neq l_{g_{j'}})
Each label in l corresponds to one reference segmentation, and the resulting labeling is used to compose G*.

  11. Composing Reference Segmentations
Seek the labeling l that minimizes the energy:
E(l) = \sum_{g_j} D_{g_j}(l_{g_j}) + \lambda \sum_{\{g_j, g_{j'}\} \in M} u_{\{g_j, g_{j'}\}}\, T(l_{g_j} \neq l_{g_{j'}})
D_{g_j}(l_{g_j}) = d(s_j, g_j)
T(l_{g_j} \neq l_{g_{j'}}) = 1 if l_{g_j} \neq l_{g_{j'}}, and 0 otherwise
u_{\{g_j, g_{j'}\}} = \min\{d_j, d_{j'}\}

  12. Composing Reference Segmentations
Seek the labeling that minimizes the energy:
E(l) = \sum_{g_j} D_{g_j}(l_{g_j}) + \lambda \sum_{\{g_j, g_{j'}\} \in M} u_{\{g_j, g_{j'}\}}\, T(l_{g_j} \neq l_{g_{j'}})
Minimized with multi-label graph cuts [Y. Boykov et al., 2001].
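(Illustration only, not the authors' implementation.) A sketch of evaluating E(l) for one candidate labeling; D, neighbors, u, and lam are hypothetical containers for the data terms, the adjacency set M, the pairwise weights, and the smoothness weight. The actual minimization over all labelings would use multi-label graph cuts (alpha-expansion), as cited above.

```python
def composition_energy(labels, D, neighbors, u, lam):
    """labels[j]  : index of the reference segmentation assigned to element j
       D[j][l]    : data term d(s_j, g_j) under label l
       neighbors  : set M of adjacent element pairs (j, j')
       u[(j, j')] : pairwise weight min{d_j, d_j'}
       lam        : smoothness weight (the lambda studied in the sensitivity analysis)."""
    data = sum(D[j][labels[j]] for j in range(len(labels)))
    smooth = sum(u[(j, jp)] for (j, jp) in neighbors if labels[j] != labels[jp])
    return data + lam * smooth
```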

  13. Composing Reference Segmentations
D_{g_j}(l_{g_j}) = d(s_j, g_j) accounts for localization errors from the human labeling process.
Structural similarity index: a pixel-based distance defined on complex Gabor transform coefficients,
H(c_x, c_y) = \frac{2\sum_{i=1}^{N}|c_{x,i}|\,|c_{y,i}|}{\sum_{i=1}^{N}|c_{x,i}|^2 + \sum_{i=1}^{N}|c_{y,i}|^2} \cdot \frac{2\left|\sum_{i=1}^{N} c_{x,i}\, c_{y,i}^{*}\right|}{2\sum_{i=1}^{N}|c_{x,i}\, c_{y,i}^{*}|}
d(c_x, c_y) = 1 - H(c_x, c_y)
where c_x and c_y are complex wavelet transform coefficients, as in CW-SSIM [M. Sampat, 2009].
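(Not the authors' code.) A numpy sketch of the similarity term and the distance d = 1 - H as written above, assuming cx and cy are the complex coefficient vectors of two corresponding patches; the small stabilizing constants of CW-SSIM are omitted for clarity.

```python
import numpy as np

def cw_ssim_distance(cx, cy):
    # magnitude term: 2*sum|cx||cy| / (sum|cx|^2 + sum|cy|^2)
    mag = 2.0 * np.sum(np.abs(cx) * np.abs(cy)) / (
        np.sum(np.abs(cx) ** 2) + np.sum(np.abs(cy) ** 2))
    # phase term: 2*|sum cx*conj(cy)| / (2*sum|cx*conj(cy)|)
    prod = cx * np.conj(cy)
    phase = 2.0 * np.abs(np.sum(prod)) / (2.0 * np.sum(np.abs(prod)))
    h = mag * phase
    return 1.0 - h
```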

  14. Measuring Segmentation Quality
Compute the similarity (or distance) between S and the composed reference G*:
Q_p(S, G^*) = \frac{1}{N}\sum_{j=1}^{K}\sum_{s_i \in s_j}\bigl(1 - 2\, d(s_j, g_j)\bigr)\, R_{s_j}
The similarity between S and the composite ground truth G*: d(s_j, g_j).
The empirical global confidence of G*: R_{s_j} = 1 - d_{s_j}.
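(Illustration only.) A sketch mirroring the score as reconstructed above, where the inner sum over pixels of segment s_j reduces to an area weight; d, size, R, and N are hypothetical inputs holding the per-segment distances, segment sizes, confidences, and pixel count.

```python
def quality_score(d, size, R, N):
    """Q_p = (1/N) * sum_j |s_j| * (1 - 2*d(s_j, g_j)) * R_{s_j}."""
    return sum(size[j] * (1.0 - 2.0 * d[j]) * R[j] for j in range(len(d))) / N
```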

  15. Examples of composed references
Q_{p_1} = 0.81, Q_{p_2} = 0.80, Q_1 = 0.73, Q_2 = 0.69
Q_{p_1} = 0.52, Q_{p_2} = 0.56, Q_1 = 0.49, Q_2 = 0.49

  16. Datasets
Ø Image segmentation dataset
Ø User interface of the developed image segmentation tool (with Livewire support).

  17. Datasets
Ø Image segmentation dataset

Database                 BSDS500           Our database
# images                 500               200
# ground truths/image    4-9               6-15
Image type               Natural images    Natural images
Software supported       yes               yes
# subjects               30                45
Time/segmentation        5-30 min          2-4 min

  18. Datasets
Ø Segmentation evaluation dataset
Ø Compare the performance of a pair of segmentations based on a segmentation dataset with human-labeled results.
Ø Contains 500 pairs of segmentations and the corresponding evaluation results by human subjects.

Seg. algorithm    Parameter values
EG                K = {600, 800, 1000, 1400, 1800}
MS                h_r = {7, 11, 15, 19, 23}, h_s = 7, min R = 150
CTM               ε = {0.1, 0.2, 0.3, 0.4, 0.5}
TBES              N_sp = 200, ε = {50, 100, 200, 300, 400}

  19. Datasets
Ø Segmentation evaluation dataset
Ø Compare the performance of a pair of segmentations based on a segmentation dataset with human-labeled results.
Ø Contains 500 pairs of segmentations and the corresponding evaluation results by human subjects.
Segmentation pairs: 10 human subjects identify the best 3 and the worst 3 segmentations of each image; one segmentation is then randomly selected from the good group and one from the bad group to form a pair.
Subjective evaluation: 70 subjects with little or no research experience in image segmentation; the 500 pairs of segmentations are evenly divided into 10 groups.

  20. Datasets
Ø Segmentation evaluation dataset
Distribution of confidence rates on the proposed segmentation evaluation dataset.

  21. Experimental Results
Ø Intel Core 2 Duo 3.00 GHz CPU and 4 GB memory.
Ø Run time: 24.6 ± 6.0 seconds for composing the reference G*; 10.7 ± 1.1 seconds for computing the score Q_p.

  22. Experimental Results
Ø Sensitivity analysis
Test the effects of the smoothness weight λ and of the initial labeling on the final evaluation score.
Alpha-expansion algorithm: breaks the multi-way cut computation into a sequence of binary s-t cuts.
E(l) = \sum_{g_j} D_{g_j}(l_{g_j}) + \lambda \sum_{\{g_j, g_{j'}\} \in M} u_{\{g_j, g_{j'}\}}\, T(l_{g_j} \neq l_{g_{j'}})

  23. Experimental Results
Ø Sensitivity analysis
λ is varied over [500, 1200] with an interval of 50, and the initial labeling of the graph cut is set randomly; the mean values and standard deviations of Q_p are then reported.

  24. Experimental Results
Ø Sensitivity analysis
Ø Fix λ to 800.
Ø Carry out the proposed algorithm 50 times with random initialization of the labeling.

  25. Evaluation with Meta-Measure
Ø The meta-measure compares:
Ø a human-labeled segmentation vs. a human-labeled segmentation of the same image;
Ø a human-labeled segmentation vs. a machine segmentation of a different image.
The principle is that the former pair should be judged more similar than the latter.

  26. Evaluation with Meta-Measure
Ø The meta-measure compares:
Ø a human-labeled segmentation vs. a human-labeled segmentation of the same image;
Ø a human-labeled segmentation vs. a machine segmentation of a different image.
Ø The percentage of comparisons that agree with this principle is taken as the meta-measure result.
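(Illustration only.) A sketch of how such a meta-measure result could be tallied, assuming a list of (reference, same-image human segmentation, different-image machine segmentation) triples and a similarity measure; the names are hypothetical, not from the paper.

```python
def meta_measure_result(triples, measure):
    """Fraction of comparisons where the same-image human segmentation
    is rated more similar to the reference than the mismatched machine one."""
    agree = sum(1 for ref, human_same, machine_other in triples
                if measure(human_same, ref) > measure(machine_other, ref))
    return agree / len(triples)
```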

  27. Evaluation with Meta-Measure
Evaluation results with the meta-measure for different measures:

Dataset            PRI     GCE     VOI     BDE     F-measure   SC(S->G)   SC(G->S)   Q_p
BSDS500            0.911   0.929   0.967   0.921   0.882       0.962      0.956      0.984
Proposed dataset   0.959   0.981   0.991   0.947   0.838       0.974      0.979      0.994

  28. Evaluation with proposed segmentation dataset

  29. Evaluation with proposed segmentation dataset
Evaluation results by different measures.

  30. Evaluation with proposed segmentation dataset
The false evaluation rates with respect to the confidence rate of human subjects.

  31. Further work
Composed exemplar reference image using a region-based distance:
d(A, B) = 1 - \frac{M(A \cap B)}{M(A \cup B)}
Improved measures Q_{PRI}, Q_{GCE}, Q_{VOI}:
Q = \frac{1}{N_R}\sum_{j=1}^{N_R} M_j R_j
[1] A. Tversky. Features of similarity. Psychological Review, 1977.
[2] B. Peng et al. Region based exemplar references for image segmentation evaluation. IEEE Signal Processing Letters, 2016.
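(Illustration only.) A sketch of the region-based distance above with M(.) taken simply as region area, which reduces it to a Jaccard-style distance; this is one possible choice of matching function, not necessarily the one used in [2]. A and B are boolean region masks.

```python
import numpy as np

def region_distance(A, B):
    """d(A, B) = 1 - M(A ∩ B) / M(A ∪ B), with M(.) = pixel count of the region."""
    inter = np.logical_and(A, B).sum()
    union = np.logical_or(A, B).sum()
    return 1.0 - inter / union
```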
