FXPAL at TRECvid 2007: Collaborative Exploratory Search


SLIDE 1

FXPAL at TRECvid 2007

SLIDE 2

Collaborative Exploratory Search

SLIDE 3

6 November 2007 TRECvid 2007 workshop

“Collaborative” search is overloaded

Explicit vs. implicit, synchronous vs. asynchronous:

  • Real-time awareness and continual-update context systems (e.g. Nokia, Imity)
  • Collaborative Exploratory Search (FXPAL)
  • Chi et al., “Search Trails” (Xerox PARC)
  • Web 2.0 / Wisdom of Crowds
  • Collaborative Filtering
  • Personalization

SLIDE 4

“Collaborative” search is overloaded

Collaborative Exploratory Search (FXPAL): explicit and synchronous

  • Fischlar-DiamondTouch: Collaborative Video Searching on a Table (Smeaton et al., 2005)
  • Interfaces for Collaborative Exploratory Web Search: Motivations and Directions for Multi-User Designs (M. Morris, 2007)

Algorithmically-mediated intelligent interfaces only

SLIDE 5

Collaborative Exploratory Search

  • Synchronous
    – Collaborating users use the system at the same time
  • Explicitly shared goals
    – Collaborating users share the information need
  • Algorithmically mediated
    – System combines users’ inputs in various ways
      • Not just keyword pooling
    – System generates results based on users’ roles
      • Terms, ranked lists, etc.
SLIDE 6

[Architecture diagram: User 1 and User 2 feed the Input Coordinator; the Algorithmic Collaboration Logic Unit processes their combined input; the Output Coordinator returns results to both users]

SLIDE 7

SLIDE 8

SLIDE 9

SLIDE 10

System overview

[System diagram: MediaMagic, RSVP, and Shared Display interfaces connected through the Input Coordinator, Algorithmic Collaboration Module, and Output Coordinator]

SLIDE 11

SLIDE 12

SLIDE 13

SLIDE 14

SLIDE 15

Prospector / Miner

SLIDE 16

RSVP Queue Priority

score_{doc,q} = N_{retrieved,q} - rank_{doc,q}

rank_{doc} = w_{seen,q} \cdot w_{rel,q} \cdot score_{doc,q}

Freshness:  w_{seen,q} = N_{unseen,q} / N_{seen,q}

Relevance:  w_{rel,q} = N_{rel,q} / N_{nonrel,q}

Weighted Borda Count fusion
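The queue-priority computation on this slide can be sketched in Python. This is a minimal sketch: the data layout and the summation over queries as the Borda fusion step are assumptions for illustration, not FXPAL's actual code.

```python
def query_weights(n_seen, n_unseen, n_rel, n_nonrel):
    """Freshness and relevance weights for one query q."""
    w_seen = n_unseen / n_seen   # freshness: many unseen shots -> fresher query
    w_rel = n_rel / n_nonrel     # relevance: more relevant feedback -> better query
    return w_seen, w_rel

def rsvp_priority(doc, queries):
    """Weighted Borda-count fusion of a document's ranks over all queries.

    `queries` (assumed layout) maps a query id to a dict with the per-doc
    ranks, the number of retrieved shots, and seen/unseen/rel/nonrel counts.
    """
    total = 0.0
    for q in queries.values():
        if doc not in q["ranks"]:
            continue
        # Borda score: top-ranked documents get the highest score
        score = q["n_retrieved"] - q["ranks"][doc]
        w_seen, w_rel = query_weights(q["n_seen"], q["n_unseen"],
                                      q["n_rel"], q["n_nonrel"])
        total += w_seen * w_rel * score
    return total
```

A fresh, relevant query (many unseen shots, mostly relevant feedback) thus dominates the RSVP queue ordering.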

SLIDE 17

Shared Display: Suggested Query Terms

score_{term,q} = TF_{term,retrieved_q}

rank_{term} = w_{seen,q} \cdot w_{rel,q} \cdot score_{term,q}

Freshness:  w_{seen,q} = N_{unseen,q} / N_{seen,q}

Relevance:  w_{rel,q} = N_{rel,q} / N_{nonrel,q}

Weighted frequency fusion
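The term-suggestion scoring can be sketched the same way. This sketch assumes whitespace tokenization and a simple dict layout; both are illustrative choices, not details from the FXPAL system.

```python
from collections import Counter

def suggest_terms(queries, top_k=5):
    """Rank candidate query terms for the shared display (weighted frequency fusion).

    `queries` (assumed layout) maps a query id to a dict with the texts of its
    retrieved shots and the seen/unseen/rel/nonrel counts for that query.
    """
    fused = Counter()
    for q in queries.values():
        w_seen = q["n_unseen"] / q["n_seen"]   # freshness weight
        w_rel = q["n_rel"] / q["n_nonrel"]     # relevance weight
        # score_{term,q}: term frequency within this query's retrieved set
        tf = Counter(word for text in q["retrieved_texts"]
                     for word in text.split())
        for term, score in tf.items():
            fused[term] += w_seen * w_rel * score
    return [term for term, _ in fused.most_common(top_k)]
```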

SLIDE 18

Example

SLIDE 19

TRECvid Experiments

  • 3½ systems, 4 users
    a. MMA: single MediaMagic user (full capabilities)
    b. MMV: single MediaMagic user (no text)
    c. MMA+V: post hoc simulated MMA+MMV combination
       – Duplicates (both rel and nonrel) removed
    d. COLL: collaborative search
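The post hoc MMA+V combination (c) can be sketched as a simple merge that drops duplicate shots, relevant or not. The list layout and tie handling (first occurrence wins) are assumptions for illustration.

```python
def combine_runs(run_a, run_b):
    """Post hoc MMA+V: concatenate two users' shot lists, dropping duplicates.

    Each run is an ordered list of (shot_id, judged_relevant) pairs; a shot
    already judged in run_a is removed from run_b whether rel or nonrel.
    """
    seen = set()
    combined = []
    for shot, rel in run_a + run_b:
        if shot in seen:          # duplicate (rel or nonrel): drop it
            continue
        seen.add(shot)
        combined.append((shot, rel))
    return combined
```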
SLIDE 20

TRECvid Experiments

  • Problem: Learning effect?

– All COLL runs done first – All MMA runs done second – All MMV runs done third

SLIDE 21

Results: Mean Average Precision

[Bar chart: MAP for runs FXPAL_CO15, FXPAL_CO, FXPAL_MMA, FXPAL_CO11, FXPAL_MMV, FXPAL_CO07 (y-axis 0.00–0.40). Legend: collaborative search; collaborative search, 7 minutes; single user, text; single user, video only]

SLIDE 22

Additional Metrics

  • Examine Recall and Precision separately
  • Examine the manually-selected shot set

– What actually happened during the run?

SLIDE 23

Precision

  • COLL is:
    – 1.47% relative improvement over MMA
    – 3.42% relative improvement over MMV
    – 15.4% relative improvement over MMA+V

Precision = TP / (TP + FP)
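The precision and recall definitions used here can be checked with a few lines of Python; the shot ids below are made up for illustration.

```python
def precision_recall(selected, relevant):
    """Precision = TP/(TP+FP); Recall = TP/#totalrel over a selected shot set."""
    tp = len(selected & relevant)   # true positives: selected and relevant
    fp = len(selected - relevant)   # false positives: selected, not relevant
    precision = tp / (tp + fp) if selected else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical shot ids: 4 selected, 3 relevant in the collection, 2 overlap
p, r = precision_recall({"s1", "s2", "s3", "s4"}, {"s2", "s3", "s5"})
# p = 2/4 = 0.5, r = 2/3
```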

SLIDE 24

Recall

  • COLL is:

101.1% relative improvement over MMA 43.3% relative improvement over MMV

  • 10.7% relative improvement over MMA+V

totalrel TP #

SLIDE 25

  • COLL outperforms MMA and MMV
  • COLL is about the same as MMA+V
    – What does this suggest?
    – Why bother working collaboratively?
    – Let’s examine closer

SLIDE 26

% improvement in precision

[Chart: per-topic (1–18) % improvement in precision, COLL over MMA and COLL over MMV]

SLIDE 27

% improvement in recall

[Chart: per-topic (1–18) % improvement in recall, COLL over MMA and COLL over MMV]

SLIDE 28

% improvement, COLL over MMA+V

[Chart: per-topic (1–18) % improvement in precision and recall, COLL over MMA+V, with kernel density smoothing]

SLIDE 29

Tentative Conclusion:

Collaborative search (at least in our current implementation) offers its best improvements when there are fewer relevant documents to be found.

SLIDE 30

Normalizing by Shots Viewed

  • Our RSVP system needed another design iteration (missed opportunity)
  • Average number of shots viewed:
    – MMA: 2,123
    – MMV: 2,601
    – MMA+V: 4,184
    – COLL: 2,614
  • Work smarter, not harder?

SLIDE 31

Precision

Precision, with counts normalized by the number of seen shots, does not change:

Precision = #TP_seen / (#TP_seen + #FP_seen) = TP / (TP + FP)

SLIDE 32

Recall

  • COLL is:
    – 73.9% relative improvement over MMA (unnormalized: 101.1%)
    – 38.5% relative improvement over MMV (unnormalized: 43.3%)
    – 44.1% relative improvement over MMA+V (unnormalized: −10.7%)

Recall = #TP_seen / #totalrel

SLIDE 33

% improvement in recall

[Chart: per-topic (1–18) % improvement in recall, normalized by shots viewed, COLL over MMA and COLL over MMV]

SLIDE 34

COLL over MMA+V

[Chart: per-topic (1–18) % improvement in recall, COLL over MMA+V, with kernel density smoothing]

SLIDE 35

Future Work

  • Still like the idea of miner vs. prospector
    – But need to give the miner more ability to “steer”
    – And achieve higher throughput
  • Also investigate other collaboration roles
  • Also investigate types of queries for which different roles work better. Can we know this a priori?