
DIGITAL – Institute for Information and Communication Technologies



  1. DIGITAL – Institute for Information and Communication Technologies. JOANNEUM RESEARCH and Vienna University of Technology at the INS Task. Werner Bailer. TRECVID Workshop, Nov. 2010

  2. Outline • Approach • Subsystems and features • Fusion strategies • Results • Conclusion

  3. Approach • fully automatic • a set of independent subsystems, using different features • each sample of a topic is queried independently • each subsystem returns a ranked result list for each sample • research focus: fusion strategies (a sketch of the per-sample query flow follows below)
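
The per-sample query flow described on this slide fits in a few lines. A minimal Python sketch, assuming hypothetical names (`Subsystem`, `run_topic`) that are not part of the presented system; fusing the resulting lists is shown in the later sketches:

```python
from typing import Callable, Dict, List, Tuple

# One subsystem maps a single query sample (an image region) to a ranked
# list of (shot_id, score) pairs over the whole database.
Subsystem = Callable[[object], List[Tuple[str, float]]]

def run_topic(samples: List[object],
              subsystems: Dict[str, Subsystem]
              ) -> Dict[Tuple[str, int], List[Tuple[str, float]]]:
    """Query every sample of a topic independently with every subsystem;
    return one ranked result list per (feature name, sample index) pair."""
    results = {}
    for name, subsystem in subsystems.items():
        for i, sample in enumerate(samples):
            ranked = subsystem(sample)                     # [(shot_id, score), ...]
            ranked.sort(key=lambda r: r[1], reverse=True)  # best score first
            results[(name, i)] = ranked
    return results
```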

  4. System Overview

  5. Subsystems (1) • Gabor feature: perform face detection (Viola-Jones); if a face is detected, extract a Gabor wavelet descriptor from the face region; match against the descriptors of all face regions in the database (k-NN search) • Histogram of gradients (HoG): not used for person/character topics; descriptor with 36 bins (9 orientations, 4 cells); cell layout adapted to the aspect ratio of the query object (2x2 or 1x4 cells); search window shifted by ¼ cell size; 3 scales: 1x, 1.5x and 2x the initial size (a descriptor sketch follows below)
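
A minimal sketch of the 36-bin descriptor from this slide (9 orientation bins in each of 4 cells; here the 2x2 layout, a 1x4 layout would split the window along x instead). The sliding window and the 3 scales are left out, and the L2 normalization at the end is an assumption:

```python
import numpy as np

def hog_descriptor(gray: np.ndarray, n_orient: int = 9) -> np.ndarray:
    """9 orientation bins x 4 cells = 36-dimensional HoG for one window."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    h, w = gray.shape
    desc = []
    for cy in (slice(0, h // 2), slice(h // 2, h)):  # 2x2 cell layout
        for cx in (slice(0, w // 2), slice(w // 2, w)):
            hist = np.bincount(bins[cy, cx].ravel(),
                               weights=mag[cy, cx].ravel(),
                               minlength=n_orient)
            desc.append(hist)
    desc = np.concatenate(desc)                      # 4 cells x 9 bins = 36 values
    return desc / (np.linalg.norm(desc) + 1e-9)      # normalization is an assumption
```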

  6. Subsystems (2) • Region covariance: covariance of a rectangular region (can be computed efficiently using integral images; see the sketch below); from RGB and first-order derivatives of intensity; same cell sizes/scales as for HoG • SIFT: from DoG points; matching by voting in a position histogram (1/10 of the image size), reporting a match for bins with 5+ votes • Bag of visual features (BoF): SIFT descriptors from DoG points and global; codebook sizes 100 and 1000 for both
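
The slide notes that region covariance can be computed efficiently with integral images. A minimal sketch of that standard trick (function names are hypothetical; `feat` would stack R, G, B and the first-order intensity derivatives, as on the slide):

```python
import numpy as np

def integral_covariance(feat: np.ndarray):
    """Precompute integral images so the covariance of any rectangle
    can be read off in O(1). feat: H x W x D float array of per-pixel
    features."""
    P = np.cumsum(np.cumsum(feat, axis=0), axis=1)            # sums of f
    outer = feat[:, :, :, None] * feat[:, :, None, :]
    Q = np.cumsum(np.cumsum(outer, axis=0), axis=1)           # sums of f f^T

    def rect_sum(I, y0, x0, y1, x1):
        s = I[y1, x1].copy()
        if y0 > 0: s -= I[y0 - 1, x1]
        if x0 > 0: s -= I[y1, x0 - 1]
        if y0 > 0 and x0 > 0: s += I[y0 - 1, x0 - 1]
        return s

    def cov(y0, x0, y1, x1):
        """Sample covariance over the inclusive rectangle [y0..y1, x0..x1]."""
        n = (y1 - y0 + 1) * (x1 - x0 + 1)
        p = rect_sum(P, y0, x0, y1, x1)                       # D vector
        q = rect_sum(Q, y0, x0, y1, x1)                       # D x D matrix
        return (q - np.outer(p, p) / n) / (n - 1)

    return cov
```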

  7. Pre-computed features • Pre-computed for the database: face detection + Gabor descriptor; global SIFT extraction; BoF codebook generation • At query time: interest point detection + SIFT extraction; HoG; region covariance

  8. Fusion strategies (1) • Two simple methods, not making use of the query samples • Max-max: for each shot in the results, take the maximum score over all samples and features • Top-k: for each feature, take for each shot the maximum over all samples; rerank per feature; take the top k per feature (k = 1000 / no. of features used) (a sketch of both methods follows below)
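
A compact sketch of the two simple strategies, reusing the `results` dictionary from the flow sketch above. The slide does not say how the per-feature top-k lists are merged, so the max-merge below is an assumption:

```python
from collections import defaultdict

def fuse_max_max(results):
    """Max-max: for each shot, keep the maximum score over all
    (feature, sample) result lists (scores assumed non-negative)."""
    fused = defaultdict(float)
    for ranked in results.values():
        for shot, score in ranked:
            fused[shot] = max(fused[shot], score)
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

def fuse_top_k(results, n_features, list_len=1000):
    """Top-k: per feature, take the per-shot maximum over all samples,
    rerank, keep the top k = list_len / no. of features, then merge
    (merging by maximum score is an assumption)."""
    k = list_len // n_features
    per_feature = defaultdict(dict)
    for (feature, _sample), ranked in results.items():
        best = per_feature[feature]
        for shot, score in ranked:
            best[shot] = max(best.get(shot, 0.0), score)
    fused = {}
    for best in per_feature.values():
        for shot, score in sorted(best.items(), key=lambda x: x[1],
                                  reverse=True)[:k]:
            fused[shot] = max(fused.get(shot, 0.0), score)
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)
```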

  9. Fusion strategies (2) • Two methods using the query samples • Idea: weight features by their relative performance • For each sample, determine where the other samples would be ranked in the result list if they were in the database • Best rank: determine the mean best rank over all samples for each feature and calculate the feature weight from it • Top 100: determine how many samples are in the top 100 results and calculate the feature weight from that count (a sketch with stand-in weight formulas follows below)
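
The weight formulas on this slide are not preserved in the transcript, so the sketch below uses clearly labelled stand-ins (1 / mean best rank, and a normalized top-100 hit count) that only preserve the described idea. `rank_of(ranked, sample)` is a hypothetical helper that returns the rank a sample would receive if inserted into a result list:

```python
def feature_weights_best_rank(results, samples, rank_of):
    """Best-rank weighting sketch: for each feature and each query sample,
    find the best rank any *other* sample would get in that sample's
    result list, average over samples, and derive a weight from it."""
    weights = {}
    for feature in {f for f, _ in results}:
        best_ranks = []
        for i in range(len(samples)):
            ranked = results[(feature, i)]
            ranks = [rank_of(ranked, s)
                     for j, s in enumerate(samples) if j != i]
            best_ranks.append(min(ranks))        # best rank of the other samples
        mean_best = sum(best_ranks) / len(best_ranks)
        weights[feature] = 1.0 / mean_best       # stand-in for the slide's formula
    return weights

def feature_weights_top100(results, samples, rank_of):
    """Top-100 variant: count how many of the other samples land in the
    top 100 of each result list (normalized count as a stand-in weight)."""
    weights = {}
    for feature in {f for f, _ in results}:
        hits = 0
        for i in range(len(samples)):
            ranked = results[(feature, i)]
            hits += sum(1 for j, s in enumerate(samples)
                        if j != i and rank_of(ranked, s) <= 100)
        weights[feature] = hits / (len(samples) * max(len(samples) - 1, 1))
    return weights
```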

  10. Results per topic/type [chart: score per topic/type for the four runs JRS rank max_max, JRS rank topK, JRS rank w_bestR and JRS rank w_t100; y-axis from 0 to 0.1]

  11. Results per feature [chart: mean scores for all/person/character/object/location topics per feature (BOF100G, BOF100L, BOF1000G, BOF1000L, Gabor, HoG, RegCov, SIFT); y-axis from 0 to 0.025]

  12. Conclusion (1) • The task is difficult; results for the automatic system are poor: different sizes, lighting, perspectives, …; "needle in a haystack": very few relevant results in a large set with many similar objects (e.g. pedestrian crossings, blinds) • Features: as expected, our features perform best for object queries; better results might be possible for some of the features, but would make the matching process more costly

  13. Conclusion (2) • Fusion methods: overall, the fusion methods using information from the query samples perform better; only a slight difference for object queries • To fuse or not to fuse? For person and object queries, a single feature outperforms the best fused results; there are few topics for the other query types, so it is difficult to say whether fusion is actually useful in these cases

  14. The research leading to these results has received funding from the European Union's Seventh Framework Programme under grant agreements no. FP7-215475, "2020 3D Media – Spatial Sound and Vision" (http://www.20203dmedia.eu/) and no. FP7-248138, "FascinatE – Format-Agnostic Script-based INterAcTive Experience" (http://www.fascinate-project.eu/), as well as from the Austrian FIT-IT project "IV-ART – Intelligent Video Annotation and Retrieval Techniques".
