
Evaluation of Rich and Explicit Feedback for Exploratory Search



  1. Evaluation of Rich and Explicit Feedback for Exploratory Search • Esben Sørig 1, Nicolas Collignon 2, Rebecca Fiebrink 1, and Noriko Kando 3 • 1 Goldsmiths, University of London; 2 University of Edinburgh; 3 National Institute of Informatics, Japan

  2. Annotation feedback for exploratory IR • Current search systems offer limited support for exploratory searchers • Document annotation is a well-known strategy for active reading • We investigate the use of annotations as an explicit feedback signal for IR systems (Image: Alighieri, Dante. 1993. Convivio. Ed. Giorgio Inglese. Milano: Rizzoli)
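
As a concrete illustration of how free-form annotations could be turned into an explicit feedback signal, the sketch below extracts content words from the passages a searcher highlighted and weights them by frequency. This is a minimal sketch under our own assumptions: the stopword list, term-frequency weighting, and `top_k` cutoff are illustrative and are not prescribed by the paper.

```python
import re
from collections import Counter

# Illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "as"}

def annotation_feedback_terms(annotations, top_k=10):
    """Turn highlighted/annotated text spans into weighted expansion terms:
    count content words across all annotated passages and keep the most
    frequent ones as the feedback signal."""
    counts = Counter()
    for passage in annotations:
        for token in re.findall(r"[a-z]+", passage.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return dict(counts.most_common(top_k))

# Toy example: two passages the searcher highlighted while reading.
highlights = [
    "annotation is a well-known strategy for active reading",
    "annotations can serve as an explicit feedback signal",
]
print(annotation_feedback_terms(highlights))
```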

  3. Previous work • Relevance feedback is a well-studied interactive, explicit feedback mechanism: the user marks documents as relevant or irrelevant • Variants of relevance feedback exist, such as interactive query expansion • Golovchinsky et al. [1] studied the effect of annotation feedback on retrieval performance • Significant improvement in retrieval performance from a single iteration of feedback • Fixed query, fixed search results, non-interactive, and no evaluation of user experience [1] G. Golovchinsky, M. N. Price, and B. N. Schilit. 1999. From reading to retrieval: freeform ink annotations as queries. In Proceedings of ACM SIGIR '99. ACM, 19-25.
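
The classic relevance-feedback mechanism summarised above is often implemented as a Rocchio-style query update; the sketch below is a minimal illustration over term-weight dictionaries. The weights alpha, beta, gamma and the toy documents are assumptions for illustration, not values from the cited work.

```python
from collections import Counter

def rocchio_update(query, relevant_docs, irrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style query expansion: move the query vector toward documents
    the user marked relevant and away from those marked irrelevant.
    Vectors are plain term -> weight dictionaries."""
    updated = Counter({t: alpha * w for t, w in query.items()})
    for doc in relevant_docs:
        for t, w in doc.items():
            updated[t] += beta * w / max(len(relevant_docs), 1)
    for doc in irrelevant_docs:
        for t, w in doc.items():
            updated[t] -= gamma * w / max(len(irrelevant_docs), 1)
    # Negative weights are usually clipped to zero.
    return {t: w for t, w in updated.items() if w > 0}

# Toy example: the user marked one result relevant and one irrelevant.
query = {"exploratory": 1.0, "search": 1.0}
relevant = [{"exploratory": 0.5, "annotation": 0.8, "search": 0.3}]
irrelevant = [{"shopping": 0.9, "search": 0.4}]
print(rocchio_update(query, relevant, irrelevant))
```

In annotation feedback, the relevant-document vectors could be replaced or supplemented by vectors built from the annotated passages themselves.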

  4. Simulation retrieval performance • TREC Dynamic Domain 2017 user simulator • New York Times Annotated Corpus • Performance averaged over 60 search tasks

  5. Simulation retrieval performance (continued)
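
A minimal sketch of how a simulated feedback evaluation of this kind can be structured: an iterative retrieve-feedback-update loop per task, with a retrieval metric averaged over all tasks. The `retrieve`, `simulate_feedback`, `update_query`, and `metric` callables are hypothetical stand-ins, not the TREC Dynamic Domain simulator's actual API.

```python
def run_simulation(tasks, retrieve, simulate_feedback, update_query,
                   iterations=5, metric=lambda results, task: 0.0):
    """Run an iterative feedback loop for each search task and average a
    retrieval metric over all tasks. The callables stand in for the
    retrieval system, the user simulator, the feedback method (relevance
    or annotation feedback), and the evaluation measure."""
    scores = []
    for task in tasks:
        query = task["initial_query"]
        for _ in range(iterations):
            results = retrieve(query)
            feedback = simulate_feedback(task, results)  # simulated user judgments
            query = update_query(query, feedback)
        scores.append(metric(results, task))
    return sum(scores) / len(scores) if scores else 0.0
```

The loop mirrors the structure of an iterative feedback evaluation: retrieve, obtain simulated judgments, update the query, and score at the end; averaging over tasks corresponds to the 60-task average reported on the slide.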

  6. Evaluation platform for annotation feedback

  7. Experiment with human subjects • Goal: measure the retrieval performance and user experience of annotation feedback with real search users • Two conditions: 1) relevance feedback only; 2) annotation feedback only • Between-subjects design • Amazon Mechanical Turk participants • Tasks chosen from the user simulator • Pre- and post-task questionnaires to measure user experience
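
A minimal sketch of the between-subjects setup described above: each participant is randomly assigned to exactly one of the two conditions, and per-condition scores are then averaged for comparison. The condition labels, seed, and scores here are illustrative assumptions, not taken from the study.

```python
import random
from statistics import mean

CONDITIONS = ("relevance_feedback_only", "annotation_feedback_only")

def assign_conditions(participant_ids, seed=0):
    """Randomly assign each participant to exactly one of the two
    conditions (between-subjects design)."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

def mean_scores_by_condition(assignments, scores):
    """Average per-participant scores (e.g. questionnaire responses or a
    retrieval metric) within each condition for comparison."""
    grouped = {cond: [] for cond in CONDITIONS}
    for pid, cond in assignments.items():
        grouped[cond].append(scores[pid])
    return {cond: mean(vals) for cond, vals in grouped.items() if vals}

# Toy example with made-up participant IDs and scores.
assignments = assign_conditions(["p1", "p2", "p3", "p4"])
scores = {"p1": 3.2, "p2": 4.1, "p3": 3.8, "p4": 4.4}
print(assignments)
print(mean_scores_by_condition(assignments, scores))
```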

  8. Discussion and questions • Is evaluation on a TREC dataset representative of multi-session exploratory search contexts? • Are questionnaires sufficient to measure the hypothesized benefit to user exploration, understanding, and engagement? • Questions?

