INEX 2012 Overview
Shlomo Geva Jaap Kamps Ralf Schenkel
10 years! 2002-2012
Search changed a lot in 10 years!
INEX teams up with CLEF in 2012
So INEX’12 ran for only nine months, which was hard. Apologies to the CLEF folks for running a bit late. INEX’11 Workshop was on Dec 13-15, 2011
Huize Bergen, Vught, The Netherlands, Dec 13-15, 2010

Five tracks:
- Snippet Retrieval
- Relevance Feedback
- Tweet Contextualization
- Linked Data
- Social Book Search
Social Book Search Track
Topics: title, group name, narrative, recommended books
300 topics + recommendations from the LT forum; also crowdsourcing recommendation/relevance
Detailed results discussed in the INEX sessions
Task 2: “Prove it” task against scanned books
“Structure Extraction” task @ ICDAR
Extensive use of crowdsourcing (topics, judgments)
Linked Data Track
Corpus: DBpedia/YAGO + Wikipedia; investigate textual and highly structured data
Tasks:
- Ad hoc search: keyword search for entities
- Faceted search: facet values obtained from the RDF data that help the user narrow down to the relevant article in a large result set
- Jeopardy!: Jeopardy! clues which are manually translated into SPARQL queries extended with keyword conditions
Results:
- Ad hoc: structure helped the best run(s)
- Faceted: evaluation is ongoing...
- Jeopardy!: SPARQL effective, but text better
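To illustrate the Jeopardy! task, the sketch below shows what a clue translated into a SPARQL query "extended with keyword conditions" might look like. The clue, the DBpedia-style property names, and the CONTAINS-based keyword filter are assumptions for illustration only, not material from the actual topic set.

```python
# Illustrative sketch of the Jeopardy! task: a natural-language clue is
# manually translated into a SPARQL query, then extended with keyword
# conditions on a textual field. Property names and the SPARQL 1.1
# CONTAINS filter are hypothetical, not from the INEX topics.

def extend_with_keywords(sparql, var, keywords):
    """Append one SPARQL 1.1 FILTER clause per keyword, matching in ?var."""
    filters = "\n  ".join(
        f'FILTER(CONTAINS(LCASE(STR(?{var})), "{kw.lower()}"))' for kw in keywords
    )
    # Splice the filters in just before the closing brace of the WHERE clause.
    return sparql.rstrip().rstrip("}") + "  " + filters + "\n}"

# Hypothetical clue: "This American photographer, born 1958, is known
# for black-and-white self-portraits."
base_query = """SELECT ?person WHERE {
  ?person a dbo:Photographer ;
          dbo:birthYear ?y ;
          dbo:abstract ?abstract .
  FILTER(?y = 1958)
}"""

query = extend_with_keywords(base_query, "abstract", ["self-portrait", "black-and-white"])
print(query)
```

The structured part (type and birth year) narrows the candidate set, while the keyword conditions capture the free-text part of the clue that has no clean RDF counterpart.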
Tweet Contextualization Track
Task description
Example tweet: “On #InternationalWomensDay, we are proud to present our new Francesca Woodman exhibition opening Mar 16: http://t.co/AyuRH1OF” (From: Guggenheim Museum)
Implicit questions a contextualization should answer:
- What is International Women's Day?
- Who is Francesca Woodman?
- What kind of art is it?
- Where is this exhibition?
- What is the Guggenheim Museum?
- ...
Task description
Given a tweet and its metadata, use Wikipedia to provide a contextualization of the tweet
→ multi-document summarization / answer aggregation

Evaluation
1000 tweets, manually or automatically collected
Evaluation of 33 submitted runs + 1 organizer baseline
13 teams (Canada, Chile, France, Germany, India, Ireland, Mexico, Russia, Spain, USA)
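Since the task is framed as multi-document summarization, a naive extractive baseline makes the idea concrete: score candidate sentences (e.g. drawn from related Wikipedia articles) by term overlap with the tweet and keep the top-scoring ones. This is a toy sketch for illustration, not the method of any submitted run.

```python
# Naive sketch of tweet contextualization as extractive summarization:
# rank candidate Wikipedia sentences by word overlap with the tweet.
# Purely illustrative; actual INEX runs used far richer models.
import re

def tokenize(text):
    return set(re.findall(r"[a-z0-9#]+", text.lower()))

def contextualize(tweet, sentences, k=2):
    tweet_terms = tokenize(tweet)
    # Stable sort: ties keep their original (document) order.
    scored = sorted(
        sentences,
        key=lambda s: len(tokenize(s) & tweet_terms),
        reverse=True,
    )
    return scored[:k]

tweet = "We are proud to present our new Francesca Woodman exhibition"
candidates = [
    "Francesca Woodman was an American photographer.",
    "The museum opened in 1959.",
    "Woodman's exhibition featured black-and-white photographs.",
]
print(contextualize(tweet, candidates))
```

A real system would also need answer aggregation, redundancy removal, and readability-aware ordering, which this overlap heuristic ignores.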
Snippet Retrieval Track
Snippet-based assessment:
Assessor reads through the 20 snippets for each topic, and judges each as relevant/irrelevant.
Document-based assessment:
Each document is reassessed by the same assessor using the full document text, providing the ‘ground truth’.
Evaluation is based on comparing these two sets of judgments.
Round 1: running (very) late...
Round 2: submissions due Oct 19; assessment Oct 29 - Nov 24; results released Dec 3
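The comparison between the two sets of judgments can be sketched as follows: snippet-based judgments are scored against the document-based ground truth for the same topic. Agreement and recall are common summary statistics for such paired judgments; the exact metrics used by the track may differ.

```python
# Illustrative comparison of snippet-based judgments against the
# document-based ground truth. The two statistics shown (raw agreement
# and recall of relevant documents) are assumptions for illustration;
# the track's official metrics may be defined differently.

def compare_judgments(snippet, document):
    """Both arguments are parallel lists of booleans, one entry per document."""
    assert len(snippet) == len(document)
    pairs = list(zip(snippet, document))
    tp = sum(1 for s, d in pairs if s and d)          # both say relevant
    tn = sum(1 for s, d in pairs if not s and not d)  # both say irrelevant
    rel = sum(document)
    return {
        "agreement": (tp + tn) / len(pairs),
        # Fraction of truly relevant docs whose snippet was judged relevant:
        "recall": tp / rel if rel else 0.0,
    }

# Toy example with 5 documents (the track used 20 snippets per topic).
snip = [True, True, False, False, True]
doc = [True, False, False, True, True]
print(compare_judgments(snip, doc))
```

A snippet pipeline with high recall here is one whose snippets rarely cause the assessor to dismiss a document that is actually relevant.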
Relevance Feedback Track (Open Source Retrieval workshop at SIGIR)
+ Evaluation Platform
Track participants are provided with the complete document collection in advance (2,666,192 docs, INEX Wikipedia 2009)
The evaluation platform provides the relevance feedback modules with topics
It simulates a user in the loop, interacting with the search system
[Diagram: Evaluation Platform - Document Collection, Assessments, Relevance Feedback Algorithm]
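The simulated user-in-the-loop protocol can be sketched as below: the platform reveals the relevance assessment for one retrieved document at a time, and the feedback algorithm may re-rank the remaining documents after each judgment. The interface names are assumptions for illustration; the actual platform API differs.

```python
# Sketch of the simulated user in the loop: documents are shown one at a
# time, the "user" (the platform's assessments) judges each, and the
# relevance feedback algorithm re-ranks the remainder. Function and
# parameter names are hypothetical, not the real platform API.

def feedback_loop(ranking, qrels, rerank):
    """Return the order in which documents are shown to the simulated user."""
    shown = []
    feedback = {}
    remaining = list(ranking)
    while remaining:
        doc = remaining.pop(0)                   # show current top document
        shown.append(doc)
        feedback[doc] = qrels.get(doc, False)    # simulated user judges it
        remaining = rerank(remaining, feedback)  # algorithm adapts ranking
    return shown

# Trivial baseline algorithm: ignore feedback, keep the original order.
no_rf = lambda remaining, feedback: remaining
print(feedback_loop(["d1", "d2", "d3"], {"d1": True, "d3": True}, no_rf))
```

A real feedback module would use the accumulating `feedback` dictionary (e.g. via query expansion from judged-relevant documents) to push likely-relevant documents up in `remaining`.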
2012 (preliminary) results
[Plot: Exact Precision at cutoffs @5 to @1000 - best non-RF and best RF submissions from each participant; runs shown: BASE-IND, RRMRF-300D-L05, TOPSIG-2048, TOPSIG-RF4]
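The plot reports Exact Precision at rank cutoffs (@5 through @1000). As a reference point, here is a minimal precision-at-k sketch assuming binary relevance judgments; the track's "Exact Precision" may carry additional conventions beyond this simple form.

```python
# Minimal precision@k under binary relevance: the fraction of the top-k
# ranked documents that are relevant. A simplifying sketch; the track's
# Exact Precision measure may be defined with further conventions.

def precision_at_k(ranking, relevant, k):
    top = ranking[:k]
    return sum(1 for d in top if d in relevant) / k

run = ["d3", "d1", "d7", "d2", "d9"]
rels = {"d1", "d2", "d5"}
print(precision_at_k(run, rels, 5))  # 2 relevant in top 5 -> 0.4
```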
Geva, Kamps, Schenkel, Trotman (Eds.): Comparative Evaluation of Focused Retrieval. 9th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2010, Vught, The Netherlands, December 2010, Revised Selected Papers. LNCS 6932, Springer. ISSN 0302-9743
INEX’11 LNCS coming in Sep/Oct; the 2012 volume is coming as well.
Plans for INEX 2013 are under discussion