Evaluation of Rich and Explicit Feedback for Exploratory Search



SLIDE 1

Evaluation of Rich and Explicit Feedback for Exploratory Search

Esben Sørig¹, Nicolas Collignon², Rebecca Fiebrink¹, and Noriko Kando³

¹ Goldsmiths, University of London; ² University of Edinburgh; ³ National Institute of Informatics, Japan

SLIDE 2

Annotation feedback for exploratory IR

  • Current search systems have limited support for exploratory searchers
  • Document annotation is a well-known strategy for active reading
  • We investigate the use of annotations as an explicit feedback signal for IR systems (a minimal sketch of this idea follows below).

(Slide image: example of an annotated page from Alighieri, Dante. 1993. Convivio. Ed. Giorgio Inglese. Milano: Rizzoli.)
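One way to read the last bullet: terms from the passages a user has annotated can be folded back into the query as an explicit feedback signal. The sketch below only illustrates that idea and is not the system described in this talk; `expand_query`, the stopword list, and the frequency-based term weighting are all assumptions.

```python
from collections import Counter
import re

# Illustrative stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "can"}

def terms(text):
    """Tokenise and drop stopwords; a stand-in for real query processing."""
    return [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]

def expand_query(query, annotated_spans, top_k=5):
    """Append the most frequent terms from the annotated passages to the query."""
    counts = Counter()
    for span in annotated_spans:
        counts.update(terms(span))
    new_terms = [t for t, _ in counts.most_common() if t not in terms(query)][:top_k]
    return query + " " + " ".join(new_terms)

print(expand_query(
    "exploratory search",
    ["annotations support active reading and sense-making",
     "explicit feedback can improve retrieval of relevant passages"]))
```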

SLIDE 3

Previous work

  • Relevance feedback is a well-studied interactive, explicit feedback mechanism: the user marks documents as relevant or irrelevant (a Rocchio-style sketch follows below)
  • Variants of relevance feedback, such as interactive query expansion
  • Golovchinsky et al.¹ studied the effect of annotation feedback on retrieval performance
  • Significant improvement in retrieval performance in a single iteration of feedback
  • Fixed query, fixed search results, non-interactive, no user experience evaluation

¹ G. Golovchinsky, M. N. Price, and B. N. Schilit. 1999. From reading to retrieval: freeform ink annotations as queries. In Proceedings of ACM SIGIR '99. ACM, 19-25.
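Classical relevance feedback is commonly implemented with a Rocchio-style update, which moves the query vector toward documents the user marked relevant and away from those marked irrelevant. The minimal NumPy sketch below shows that textbook update; it is not the authors' implementation, and the alpha/beta/gamma values are the usual defaults rather than parameters from this work.

```python
import numpy as np

def rocchio(query_vec, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of Rocchio relevance feedback on TF-IDF query/document vectors."""
    q = alpha * np.asarray(query_vec, dtype=float)
    if relevant:                       # pull the query toward relevant documents
        q = q + beta * np.mean(relevant, axis=0)
    if irrelevant:                     # push it away from irrelevant documents
        q = q - gamma * np.mean(irrelevant, axis=0)
    return np.clip(q, 0.0, None)       # keep term weights non-negative

# Toy usage: 4-term vocabulary, one relevant and one irrelevant document.
q0 = [1.0, 0.0, 0.0, 0.0]
rel = [np.array([0.8, 0.6, 0.0, 0.0])]
irr = [np.array([0.0, 0.0, 0.9, 0.4])]
print(rocchio(q0, rel, irr))
```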

SLIDE 4

Simulation of retrieval performance

  • TREC Dynamic Domain 2017 user simulator
  • New York Times Annotated Corpus
  • Performance averaged over 60 search tasks
SLIDE 5

Simulation of retrieval performance

  • TREC Dynamic Domain 2017 user simulator
  • New York Times Annotated Corpus
  • Performance averaged over 60 search tasks
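The simulator-based evaluation can be pictured as a loop: for each search task the simulated user issues a query, returns feedback on the retrieved documents, the system is re-scored after every iteration, and scores are averaged across the 60 tasks. The schematic below assumes hypothetical `search`, `simulate_feedback`, and `score` components standing in for the retrieval system, the TREC DD user simulator, and whatever metric is reported (e.g. the Cube Test used in TREC DD); it is a sketch of the loop, not the actual evaluation code.

```python
def evaluate(tasks, search, simulate_feedback, score, iterations=10):
    """Average per-iteration retrieval scores over a set of simulated search tasks."""
    per_task_scores = []
    for task in tasks:                               # e.g. 60 TREC DD search tasks
        feedback = None
        scores = []
        for _ in range(iterations):
            results = search(task.initial_query, feedback)    # retrieve with current feedback
            feedback = simulate_feedback(task, results)        # simulated annotations/relevance
            scores.append(score(task, results))
        per_task_scores.append(scores)
    # average each feedback iteration's score across all tasks
    return [sum(s[i] for s in per_task_scores) / len(per_task_scores)
            for i in range(iterations)]
```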
SLIDE 6

Evaluation platform for annotation feedback

SLIDE 7

Experiment with human subjects

  • Goal: measure the retrieval performance and user experience of annotation feedback with real search users
  • Two conditions: 1) relevance feedback only; 2) annotation feedback only

  • Between-subjects design
  • Amazon Mechanical Turk participants
  • Tasks chosen from user simulator
  • Pre- and post-task questionnaires to measure user experience
SLIDE 8

Discussion and questions

  • Is evaluation on a TREC dataset representative of multi-session exploratory search contexts?
  • Are questionnaires sufficient to measure the hypothesized benefit to user exploration, understanding, and engagement?

  • Questions?