
Evaluating the Impact of 374 Visual-based LSCOM Concept Detectors on Automatic Search
Shih-Fu Chang, Winston Hsu, Wei Jiang, Lyndon Kennedy, Dong Xu, Akira Yanagawa, and Eric Zavesky
Digital Video and Multimedia Lab, Columbia University
NIST TRECVID Workshop, November 14, 2006


  1. Evaluating the Impact of 374 Visual-based LSCOM Concept Detectors on Automatic Search
     Shih-Fu Chang, Winston Hsu, Wei Jiang, Lyndon Kennedy, Dong Xu, Akira Yanagawa, and Eric Zavesky
     Digital Video and Multimedia Lab, Columbia University
     NIST TRECVID Workshop, November 14, 2006

  2. Video / Image Search
     [Diagram: multimodal queries (e.g., "Find shots of tennis players on the court - both players visible at same time," "Find shots of Condoleezza Rice.") run against a video / image index]
     • Objective: Semantic access to visual content
     • Stop-gap solutions
       • Text Search
         • Not always useful
         • Text not available in all situations
       • Query-by-Example
         • Lacking in semantic meaning
         • Example images not readily available
     • Concept Search: exciting new direction
       • Visual indexing with concept detection: high semantic meaning
       • Simple text keyword search

  3. Concept Search Framework
     [Diagram: text queries ("Find shots of snow.", "Find shots of soccer matches.", "Find shots of buildings.") mapped to concept detectors (Anchor, Snow, Soccer, Building, Outdoor) run over an image database]

  4. Concept Search
     • Text-based queries against visual content
     • Index video shots using visual content only
       • run many concept detectors over images
       • treat scores as likelihood of containing the concept
     • Allow queries using text keywords (no examples)
       • map keywords to concepts
       • use a fixed list of synonyms for each concept
     • Many concepts available
       • LSCOM 449 / MediaMill 101
       • TRECVID 2006: first opportunity to put large lexicons to the test
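To make the keyword-to-concept mapping concrete, here is a minimal sketch of the idea, assuming a synonym table and per-shot detector scores; the table entries, function names, and the averaging rule are illustrative assumptions, not the actual Columbia implementation.

```python
import numpy as np

# Hypothetical synonym table: each pre-trained concept detector is reachable
# through a fixed list of text keywords (illustrative entries only).
CONCEPT_SYNONYMS = {
    "snow":     ["snow", "snowy", "blizzard"],
    "soccer":   ["soccer", "football", "goalpost"],
    "building": ["building", "buildings", "skyscraper"],
}

def map_keywords_to_concepts(keywords):
    """Return the concepts whose synonym list contains any query keyword."""
    hits = []
    for concept, synonyms in CONCEPT_SYNONYMS.items():
        if any(k.lower() in synonyms for k in keywords):
            hits.append(concept)
    return hits

def concept_search(keywords, detector_scores):
    """Rank shots by averaging the scores of all matched concept detectors.

    detector_scores: dict concept -> array of per-shot detection scores,
    treated as the likelihood that each shot contains the concept.
    """
    matched = map_keywords_to_concepts(keywords)
    if not matched:
        return None  # no matching concept: fall back to text / example search
    stacked = np.stack([detector_scores[c] for c in matched])
    return stacked.mean(axis=0)  # one relevance score per shot

# Toy usage: 5 shots, detector scores drawn at random for illustration.
rng = np.random.default_rng(0)
scores = {c: rng.random(5) for c in CONCEPT_SYNONYMS}
print(concept_search(["snow", "scenes"], scores))
```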

  5. Concept Search: TRECVID Perspective
     • Concept search is powerful and attractive
       • But unable to handle every type of query
     • Text and Query-by-Example still very powerful
       • Want to exploit any/all query/index information available
     • Impact of methods varies from query to query
       • Text: named persons
       • Query-by-Example: consistent low-level appearance
       • Concept: existence of matching concept
     • Propose: Query-Class-Dependent model

  6. Query-Class-Dependent Search
     [System diagram: a multimodal query (text + example images) goes through keyword extraction and query expansion; text search, concept search, and image search scores are combined as a linearly weighted sum, with query-class-dependent weights, followed by multimodal re-ranking to produce the search result]
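The fusion box in this diagram is a linearly weighted sum of per-method scores with weights chosen by query class. Below is a small sketch under that assumption; the weight values and the score normalization are placeholders, not the tuned settings of the submitted runs.

```python
import numpy as np

# Placeholder per-class fusion weights over (text, concept, image example) search.
# The real system chose weights per query class; these values are illustrative.
CLASS_WEIGHTS = {
    "named_person":         {"text": 1.0, "concept": 0.0, "image": 0.0},
    "sports":               {"text": 0.0, "concept": 0.0, "image": 1.0},
    "concept":              {"text": 0.0, "concept": 1.0, "image": 0.0},
    "named_person_concept": {"text": 0.5, "concept": 0.5, "image": 0.0},
    "general":              {"text": 0.5, "concept": 0.0, "image": 0.5},
}

def fuse_scores(query_class, scores):
    """Linearly weighted sum of min-max normalized per-method scores.

    scores: dict method name -> array of per-shot scores.
    """
    weights = CLASS_WEIGHTS[query_class]
    fused = np.zeros_like(next(iter(scores.values())), dtype=float)
    for method, s in scores.items():
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        s = (s - s.min()) / span if span > 0 else np.zeros_like(s)  # rescale to [0, 1]
        fused += weights.get(method, 0.0) * s
    return fused

# Toy usage: a "concept"-class query relies entirely on the concept search scores.
fused = fuse_scores("concept", {"text":    [0.1, 0.9, 0.2],
                                "concept": [0.8, 0.1, 0.4],
                                "image":   [0.3, 0.3, 0.3]})
print(fused)
```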

  7. Query Classes
     • Named Person
       • if named entity detected in query. Rely on text search.
     • Sports
       • if sports keyword detected in query. Rely on visual examples.
     • Concept
       • if keyword maps to pre-trained concept detector. Rely on concept search.
     • Named Person + Concept
       • if both named entity and concept detected. Combine text and concept search equally.
     • General
       • for all other queries. Combine text and visual examples equally.

  8. Query Class Distribution (number of query topics)
                                2006    2005
     Named Person                  6       4
     Sports                        3       1
     Concept                      12      16
     Named Entity + Concept        2       1
     General                               3

  9. Query Processing / Classification
     • Incoming query topic (natural language statement) → keyword extraction (part-of-speech tagging, named entity detection) → query classification (named entity? matching concept? sports word?)
     • Examples:
       • "Find shots of Condoleezza Rice." → keywords: condoleezza rice → class: Named Entity
       • "Find shots of scenes with snow." → keywords: scenes, snow → class: Matching Concept
       • "Find shots of one or more soccer goalposts." → keywords: soccer, goalposts → class: Sports
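A rule-based sketch of the classification logic on this slide. The capitalization heuristic standing in for part-of-speech tagging and named entity detection, and the sports/concept word lists, are toy assumptions rather than the real pipeline.

```python
# Illustrative word lists; the real system used a POS tagger, a named entity
# detector, and the full concept synonym table.
SPORTS_WORDS = {"soccer", "basketball", "tennis", "hockey", "goalposts"}
CONCEPT_WORDS = {"snow", "building", "boat", "helicopter", "protest"}

def looks_like_named_entity(query):
    """Toy stand-in for NER: any capitalized word after the 'Find shots of' stub."""
    words = query.rstrip(".").split()
    return any(w[0].isupper() for w in words[3:])

def classify_query(query):
    keywords = {w.lower().strip(".,") for w in query.split()}
    has_entity = looks_like_named_entity(query)
    has_concept = bool(keywords & CONCEPT_WORDS)
    if keywords & SPORTS_WORDS:
        return "sports"
    if has_entity and has_concept:
        return "named_person_concept"
    if has_entity:
        return "named_person"
    if has_concept:
        return "concept"
    return "general"

for topic in ["Find shots of Condoleezza Rice.",
              "Find shots of scenes with snow.",
              "Find shots of one or more soccer goalposts."]:
    print(topic, "->", classify_query(topic))
```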

  10. Text Search
     • Extract named entities or nouns as keywords
     • Keyword-based search over ASR/MT transcripts
       • use story segmentation
     • Most powerful individual tool
       • Named persons: “Dick Cheney,” “Condoleezza Rice,” “Hussein”
       • Others: “demonstration/protest,” “snow,” “soccer”
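A bare-bones sketch of keyword search over ASR/MT transcripts at the story level: score each story by keyword frequency, then let every shot inherit its story's score. The data layout and the plain term-frequency scoring are simplifying assumptions.

```python
from collections import Counter

def score_story(transcript, keywords):
    """Simple term-frequency score of one story transcript against the keywords."""
    counts = Counter(transcript.lower().split())
    return sum(counts[k.lower()] for k in keywords)

def text_search(stories, keywords):
    """stories: list of (transcript, [shot_ids]) pairs from story segmentation.

    Returns {shot_id: score}; every shot inherits its story's score.
    """
    shot_scores = {}
    for transcript, shot_ids in stories:
        s = score_story(transcript, keywords)
        for shot in shot_ids:
            shot_scores[shot] = max(s, shot_scores.get(shot, 0))
    return shot_scores

# Toy usage with two made-up stories.
stories = [
    ("heavy snow fell across the region overnight", ["shot_12", "shot_13"]),
    ("the secretary of state condoleezza rice met with officials", ["shot_44"]),
]
print(text_search(stories, ["snow"]))
```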

  11. Story Segmentation
     • Automatically detect story boundaries
       • low-level features: color moments, gabor texture
       • IB framework: discover meaningful mid-level feature clusters (cue clusters)
       • high-performing in TV2004
       • results shared with community in 2005 + 2006
     • Stories for text search
       • typically 25% improvement
       • TV2006: 10% improvement
     [Diagram: cue cluster discovery, classifier training, and testing phases, built on IB cue cluster construction and cue cluster projection]
     [Hsu, CIVR 2005]

  12. Named Entity Query Expansion
     • Method: detect named entities from internal and external sources; run a secondary search with the discovered entities in external text; search with both keywords and entities
     • Example: "Find shots of a tennis court with both players visible." → stemmed keywords: shots, tennis, court, players
     • Internal text: stories from ASR / CC; external text: single source of target documents
     • Joint work with AT&T Labs: Miracle multimedia platform [Liu, Gibbon, Zavesky, Shahraray, Haffner, TV2006]
     [Diagram: query processing feeds an internal query search over stories and an external text search; named entities detected in both are fed back into the final result]
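A rough sketch of the expansion loop described here: search an external text source with the original keywords, pull named entities from the top documents, and search again with keywords plus entities. The regex entity extractor and the document scoring are toy stand-ins, not the AT&T Miracle platform.

```python
import re
from collections import Counter

def extract_entities(text):
    """Toy named-entity stand-in: runs of consecutive capitalized words."""
    return re.findall(r"(?:[A-Z][a-z]+ )+[A-Z][a-z]+", text)

def expand_with_entities(keywords, external_docs, top_k=3, max_entities=5):
    """Rank external documents by keyword overlap, then add the most frequent
    named entities from the top documents to the query."""
    ranked = sorted(external_docs,
                    key=lambda d: sum(k.lower() in d.lower() for k in keywords),
                    reverse=True)[:top_k]
    entity_counts = Counter(e for doc in ranked for e in extract_entities(doc))
    entities = [e for e, _ in entity_counts.most_common(max_entities)]
    return list(keywords) + entities

# Toy usage with two made-up external documents.
docs = [
    "Tennis star Andre Agassi and Roger Federer met on the court in New York.",
    "The championship match drew record crowds.",
]
print(expand_with_entities(["tennis", "court", "players"], docs))
```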

  13. Information Bottleneck Reranking
     [Figure: (a) search topic “Find shots of Tony Blair” and search examples; (b) text search results; (c) IB reranked results + text search]

  14. Information Bottleneck Reranking
     • Re-order text results using the Information Bottleneck principle
       • make visually similar clusters
       • preserve mutual information with estimated search relevance (Y = search relevance)
     • Improve 10% over text alone
       • lower than past years
       • text baseline is (too) low
     [Figure: clusters over low-level features, automatically discovered via the Information Bottleneck principle and Kernel Density Estimation]
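A loose sketch of the reranking idea: cluster the returned shots by low-level visual features, score each cluster by the average text relevance of its members, and reorder cluster by cluster. Plain k-means is used here as a stand-in for the Information Bottleneck clustering and kernel density estimation of the actual system.

```python
import numpy as np
from sklearn.cluster import KMeans

def rerank(text_scores, visual_features, n_clusters=5):
    """Reorder shots so visually coherent clusters with high average text
    relevance come first.

    text_scores:     (n_shots,) relevance scores from text search
    visual_features: (n_shots, d) low-level features (e.g., color moments)
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(visual_features)
    # Estimated relevance of each cluster = mean text score of its members.
    cluster_relevance = {c: text_scores[labels == c].mean() for c in range(n_clusters)}
    # Rank shots primarily by their cluster's relevance, then by their own score.
    order = sorted(range(len(text_scores)),
                   key=lambda i: (cluster_relevance[labels[i]], text_scores[i]),
                   reverse=True)
    return order

# Toy usage with random scores and features.
rng = np.random.default_rng(0)
scores = rng.random(50)
features = rng.random((50, 10))
print(rerank(scores, features)[:10])
```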

  15. Visual Example Search
     • Fusion of many image matching and SVM-based searches [IBM, TV2005]
     • Feature spaces:
       • 5x5 grid color moments, gabor texture, edge direction histogram
     • Image matching:
       • euclidean distance between examples and search set in each dimension
     • SVM-based:
       • Take examples as positives (~5), randomly sample 50 negatives
       • Learn SVM, repeat 10 times, average resulting scores
       • Independent in each feature space
     • Average scores from 3 image matching and 3 SVM-based models
     • Least-powerful method
       • best for “soccer”
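A sketch of the SVM-based component as described above: treat the few query example images as positives, randomly sample 50 shots as pseudo-negatives, train an SVM, repeat 10 times, and average the decision scores. Feature extraction is omitted and the feature dimensionality is only illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def svm_example_search(example_feats, database_feats, n_runs=10, n_neg=50, seed=0):
    """Score every database shot by its similarity to the query examples.

    example_feats:  (n_examples, d) features of the query example images (~5)
    database_feats: (n_shots, d)    features of the search set
    """
    rng = np.random.default_rng(seed)
    n_shots = database_feats.shape[0]
    scores = np.zeros(n_shots)
    for _ in range(n_runs):
        neg_idx = rng.choice(n_shots, size=n_neg, replace=False)  # pseudo-negatives
        X = np.vstack([example_feats, database_feats[neg_idx]])
        y = np.concatenate([np.ones(len(example_feats)), np.zeros(n_neg)])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        scores += clf.decision_function(database_feats)
    return scores / n_runs  # average over the repeated random samplings

# Toy usage: 5 example images, 1000 database shots, random features.
rng = np.random.default_rng(1)
examples = rng.random((5, 225))        # e.g., 5x5 grid color moments
database = rng.random((1000, 225))
ranking = np.argsort(-svm_example_search(examples, database))
print(ranking[:10])
```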

  16. Reminder: Concept Search Framework
     [Diagram: text queries ("Find shots of snow.", "Find shots of soccer matches.", "Find shots of buildings.") mapped to concept detectors (Anchor, Snow, Soccer, Building, Outdoor) run over an image database]

  17. Concept Ontologies
     • LSCOM-Lite
       • 39 Concepts (used for TRECVID 2006 High-level Features)
     • LSCOM
       • 449 Concepts
       • Labeled over TRECVID 2005 development set
       • 30+ Annotators at CMU and Columbia
       • 33 million judgments collected
       • Free to download (110+ downloads so far)
       • http://www.ee.columbia.edu/dvmm/lscom/
       • revisions for “event/activity” (motion) concepts coming soon!

  18. Lexicon Size Impact
     • 10-fold increase in number of concepts
     • Possible effects on search? Depends on:
       • How many queries have matching concepts?
       • How frequent are the concepts?
       • How good are the detection results?

  19. Concept Search Performance: Increasing Size of Lexicon
     [Chart: per-topic average precision (0 to 0.5) for LSCOM-lite (39 concepts) vs. full LSCOM (374 concepts) on TRECVID 2005 and TRECVID 2006; highlighted topics include Sports, Helicopters, Newspaper, Boats, Soldiers+Weapon+Prisoner, Protest+Building, and Smokestacks]

  20. Increasing Lexicon Size
                 39 Concepts     374 Concepts
     TV 2005     MAP: 0.0353     MAP: 0.0743
     TV 2006     MAP: 0.0191     MAP: 0.0244
     • Large increase in number of concepts, moderate increase in search performance
       • 10x as many concepts in lexicon
       • Search MAP increases by 30% - 100%
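For reference, a small sketch of how (mean) average precision figures like these are computed from ranked result lists and ground-truth relevance sets; the official TRECVID evaluation additionally truncates each list (e.g., at 1000 shots), which is omitted here.

```python
def average_precision(ranked_shots, relevant):
    """Average precision of one ranked result list against a relevant-shot set."""
    hits, precision_sum = 0, 0.0
    for rank, shot in enumerate(ranked_shots, start=1):
        if shot in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant shot
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(results, ground_truth):
    """Mean of per-query average precision over all queries."""
    return sum(average_precision(results[q], ground_truth[q]) for q in results) / len(results)

# Toy usage with two queries.
results = {"q1": ["s3", "s1", "s7", "s2"], "q2": ["s5", "s9"]}
truth   = {"q1": {"s1", "s2"},             "q2": {"s9", "s4"}}
print(mean_average_precision(results, truth))
```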

  21. Concept / Query Coverage
                 39 Concepts               374 Concepts
     TV 2005     11 query matches,         17 query matches,
                 1.1 concepts/query        1.3 concepts/query
     TV 2006     12 query matches,         17 query matches,
                 1.8 concepts/query        2.5 concepts/query
     • Large increase in number of concepts, small increase in coverage
       • 10x as many concepts in lexicon
       • 1.5x as many queries covered
       • 1.2x - 1.4x as many concepts per covered query

  22. Concept Frequencies
     • Examples per concept: LSCOM 1200, LSCOM-Lite 5000
     • “Prisoner” more frequent than most LSCOM concepts!
     [Chart: concept frequency (log scale) vs. concept rank for LSCOM and LSCOM-Lite]

  23. Concept Detection Performance
     • Internal evaluation: 2005 validation data
     • Mean Average Precision: LSCOM 0.39, LSCOM-Lite 0.26
     [Chart: per-concept average precision (0 to 1.0) vs. concept rank]
