SLIDE 1

Overview of the 2019 Open-Source IR Replicability Challenge (OSIRRC 2019)

Ryan Clancy, Nicola Ferro, Claudia Hauff, Jimmy Lin, Tetsuya Sakai, Ze Zhong Wu

SLIDE 2

Vision

Source: saveur.com

SLIDE 3

Vision

The ultimate candy store for information retrieval researchers!

Source: Wikipedia (Candy)

SLIDE 4

Vision

The ultimate candy store for information retrieval researchers! See a result you like? Click a button to recreate those results!

Really, any result?

(not quite… let’s start with batch ad hoc retrieval experiments on standard test collections)

What is this, really?

SLIDE 5

Repeatability: you can recreate your own results again (we get this "for free")
Reproducibility: others can recreate your results, with code they rewrite (a stepping stone…)
Replicability: others can recreate your results, with your code (our focus)

ACM Artifact Review and Badging Guidelines

SLIDE 6

Why is this important?

Good science. Sustained, cumulative progress.

Armstrong et al. (CIKM 2009): little empirical progress was made from 1998 to 2009. Why? Researchers compare against weak baselines.
Yang et al. (SIGIR 2019): researchers still compare against weak baselines.

SLIDE 7

How do we get there?

Open-Source Code!

A good start, but far from enough…

TREC 2015 "Open Runs"

Voorhees et al. Promoting Repeatability Through Open Runs. EVIA 2016.

79 submitted runs…

SLIDE 8

Voorhees et al. Promoting Repeatability Through Open Runs. EVIA 2016.

[Chart: number of runs successfully replicated]

SLIDE 9

How do we get there?

Open-Source Code!

A good start, but far from enough…

Ask developers to show us how!

Open-Source IR Reproducibility Challenge (OSIRRC), at the SIGIR 2015 Workshop on Reproducibility, Inexplicability, and Generalizability of Results (RIGOR)

Participants contributed end-to-end scripts for replicating ad hoc retrieval experiments

Lin et al. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.

SLIDE 10

7 participating systems, GOV2 collection

[Bar chart: System Effectiveness. MAP (0.00–0.75) by system/model, covering BM25, QL, SDM, DPH, and quantized/impact variants across Terrier, Galago, JASS, Indri, MG4J, ATIRE, and Lucene]

SLIDE 11

7 participating systems, GOV2 collection

[Bar chart: System Efficiency. Search time in ms (log scale, 1 to 100,000) by system/model, from the fastest (JASS, MG4J) to the slowest (Galago: SDM, Indri: SDM)]

SLIDE 12

7 participating systems, GOV2 collection

[Scatter plot: Effectiveness/Efficiency Tradeoff. MAP (≈0.28–0.34) vs. search time in ms (100–10,000) for the 17 system/model combinations: ATIRE (BM25, Quant. BM25), Galago (QL, SDM), Indri (QL, SDM), JASS (1B P, 2.5M P), Lucene (BM25 Count, BM25 Pos.), MG4J (B, B+, BM25), and Terrier (BM25, DPH, DPH+Bo1 QE, DPH+Prox SD)]

SLIDE 13

How do we get there?

Open-Source Code!

A good start, but far from enough…

Ask developers to show us how!

It worked, but…

SLIDE 14

What worked well?

We actually pulled it off!

What didn't work well?

Technical infrastructure was brittle
Replication scripts were too under-constrained

SLIDE 15

Infrastructure

Source: Wikipedia (Burj Khalifa)

SLIDE 16

VMs

[Diagram: a physical machine running a hypervisor, which hosts multiple VMs, each bundling its own guest OS and app]

SLIDE 17

Containers

[Diagram: a physical machine running a single shared OS and a container engine, which hosts multiple containers, each packaging just an app]
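
To make the contrast concrete, here is a minimal sketch (assuming Docker is installed and the tiny alpine image is available) showing that starting a container is roughly as cheap as starting a process, since containers share the host kernel rather than booting a guest OS:

```python
import subprocess
import time

# Containers share the host OS kernel, so launching one is close to
# launching an ordinary process; a VM would first have to boot a guest OS.
# Assumes Docker is installed and the alpine image has been pulled.
start = time.time()
subprocess.run(
    ["docker", "run", "--rm", "alpine", "echo", "hello from a container"],
    check=True,
)
print(f"container started, ran, and exited in {time.time() - start:.2f}s")
```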

SLIDE 18

Infrastructure

Source: Wikipedia (Burj Khalifa)

SLIDE 19

Workshop Goals

1. Develop a common Docker specification for capturing ad hoc retrieval experiments – the "jig".
2. Build a library of curated images that work with the jig.
3. Take over the world! (encourage adoption, broaden to other tasks, etc.)

SLIDE 20

[Diagram: how the jig drives a Docker image]

Prepare phase: the user specifies an <image>:<tag>; the jig starts the image, triggers the init and index hooks, and creates a snapshot of the indexed state.

Search phase: the jig triggers the search hook with the snapshot <image>:<tag>, which produces run files that are then scored with trec_eval.
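
The following is not the jig's actual code, just a minimal Python sketch of the two-phase flow described above. The hook names (init, index, search) come from the slide; the image tags, mount paths, file names, and the shell-chaining of hooks are all hypothetical:

```python
import subprocess

IMAGE = "osirrc2019/example:latest"      # hypothetical <image>:<tag> the user specifies
SNAPSHOT = "osirrc2019/example:indexed"  # hypothetical tag for the post-indexing snapshot

def sh(*cmd: str) -> str:
    """Run a command, fail loudly on error, and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

# -- prepare phase: trigger the init and index hooks, then snapshot --
cid = sh("docker", "run", "-d",
         "-v", "/path/to/collections:/input:ro",  # hypothetical collection mount
         IMAGE, "sh", "-c", "init && index")      # assumes hooks are executables in the image
sh("docker", "wait", cid)                         # block until indexing finishes
sh("docker", "commit", cid, SNAPSHOT)             # snapshot captures the built index

# -- search phase: start from the snapshot and trigger the search hook --
sh("docker", "run", "--rm",
   "-v", "/path/to/output:/output",               # run files are written here
   SNAPSHOT, "search", "robust04")

# -- evaluate the run files with trec_eval --
print(sh("trec_eval", "-m", "map", "qrels.robust04.txt", "/path/to/output/run.txt"))
```

Snapshotting after indexing is what lets the expensive prepare phase run once while the search phase can be re-run cheaply.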

SLIDE 21

Source: Flickr (https://www.flickr.com/photos/m00k/15789986125/)

SLIDE 22

17 images from 13 different teams

Focus on newswire collections: Robust04, Core17, Core18

Official runs on Microsoft Azure (thanks, Microsoft, for the free credits!)

SLIDE 23

Anserini (University of Waterloo)
Anserini-bm25prf (Waseda University)
ATIRE (University of Otago)
Birch (University of Waterloo)
Elastirini (University of Waterloo)
EntityRetrieval (Ryerson University)
Galago (University of Massachusetts)
ielab (University of Queensland)
Indri (TU Delft)
IRC-CENTRE2019 (Technische Hochschule Köln)
JASS (University of Otago)
JASSv2 (University of Otago)
NVSM (University of Padua)
OldDog (Radboud University)
PISA (New York University and RMIT University)
Solrini (University of Waterloo)
Terrier (TU Delft and University of Glasgow)

SLIDE 24

Robust04

49 runs from 13 images

Images captured diverse models:
query expansion and relevance feedback
conjunctive and efficiency-oriented query processing
neural ranking models

SLIDE 25

Core17

12 runs from 6 images

SLIDE 26

Core18

19 runs from 4 images

SLIDE 27

Robust04

49 runs from 13 images

SLIDE 28

Who won?

Source: Time Magazine

SLIDE 29

But it’s not a competition!

Source: Washington Post

SLIDE 30

Reference points: TREC best – 0.333; TREC median (title queries) – 0.258
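
To check a replicated run against these reference points, one could score it with trec_eval; a minimal sketch, where the qrels and run file names are hypothetical:

```python
import subprocess

# Score a run with trec_eval and compare its MAP against the TREC
# reference points above (file names are hypothetical).
out = subprocess.run(
    ["trec_eval", "-m", "map", "qrels.robust04.txt", "my_run.txt"],
    check=True, capture_output=True, text=True,
).stdout
map_score = float(out.split()[-1])  # output looks like: "map  all  0.2874"
print(f"MAP = {map_score:.4f}")
print("above TREC median (0.258)?", map_score > 0.258)
print("above TREC best   (0.333)?", map_score > 0.333)
```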

SLIDE 31

SLIDE 32

Workshop Goals

1. Develop a common Docker specification for capturing ad hoc retrieval experiments – the "jig". ✓
2. Build a library of curated images that work with the jig. ✓
3. Take over the world! (encourage adoption, broaden to other tasks, etc.) ?

SLIDE 33

Source: flickr (https://www.flickr.com/photos/39414578@N03/16042029002)

What’s next?