SLIDE 1

Optimizing Result Prefetching in Web Search Engines with Segmented Indices

Ronny Lempel Shlomo Moran

Technion, Israel Institute of Technology

SLIDE 2

VLDB 2002

Web Search Engines

  • Index billions of documents.
  • Make use of distributed storage.
  • Their main task: answer users' queries – millions per day.
  • Issue addressed: optimizing the prefetching policy for answering Active Queries.

SLIDE 3

Talk Outline

  • 1. Defining Active Queries and Active Search Sessions.
  • 2. Modeling the cost of serving Active Queries and Active Search Sessions.
  • 3. Optimizing the cost of an Active Search Session.
  • 4. Possible implementations.
SLIDE 4

Part 1: Active Queries and Active Search Sessions

SLIDE 5

Query execution: Parties involved (in typical large search engines)

  • User – submits queries.
  • QI (Query Integrator) – controls the execution.
  • Fast cache – stores results for fast access.
  • Segmented index – documents are assigned to segments randomly, uniformly and independently (local inverted index organization).

SLIDE 6

Search Session: User’s point of view

The user submits an initial query to the Query Integrator (QI), then waits… eventually receiving back a result page (usually, the top 10 documents for the query).

SLIDE 7

Search Session (cont.)

The user may then submit a follow-up query to the QI, requesting additional results, and receive the following result page – and so on, until no further follow-up query is issued.

SLIDE 8

Search Session: QI’s point of view

When an initial query is received, the QI checks the query result cache: are the results stored there?

Case 1: the 1st result page (top 10 results) is cached – return it to the user.

Case 2: the 1st result page is not cached. This is an Active Query, and it needs more work.

SLIDE 9

QI’s work on Active Query

The QI asks the segmented index for the top n results (n to be determined).

SLIDE 10

QI’s work on Active Query (cont.)

Each of the m segments returns its top z results (z = z(m,n), to be discussed later).

SLIDE 11

QI’s work on Active Query (cont.)

The QI selects the top n (out of the mz) results and prepares r = n/10 result pages. It returns the 1st page (10 results) to the user and stores the other r-1 pages (if r > 1) in the cache.
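
The QI's merge-and-select step can be sketched as follows – a minimal illustration, assuming each segment returns its hits already sorted by descending score (the names and scores below are hypothetical):

```python
import heapq

def merge_segment_results(segment_results, n):
    """Merge the per-segment top-z lists (each sorted by descending
    score) and keep the overall top-n results, as the QI does."""
    merged = heapq.merge(*segment_results, key=lambda hit: -hit[0])
    return [hit for _, hit in zip(range(n), merged)]

# m = 2 segments, each returning its top z = 3 (score, doc_id) pairs.
seg1 = [(0.9, "d1"), (0.7, "d4"), (0.3, "d7")]
seg2 = [(0.8, "d2"), (0.6, "d3"), (0.5, "d5")]
top4 = merge_segment_results([seg1, seg2], n=4)
# top4 = [(0.9, 'd1'), (0.8, 'd2'), (0.7, 'd4'), (0.6, 'd3')]
```

In the paper's setting n = 10r, so the merged list is cut into r pages of 10 results each; the QI returns page 1 and caches pages 2..r.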

SLIDE 12

Goal: Optimize the QI's policy for serving a "typical" Active Search Session. An Active Search Session is a (suffix of a) Search Session which starts with an Active Query.

SLIDE 13

Part 2: Modeling the cost of an Active Search Session (of a "typical" user)

SLIDE 14

Cost of Active Search Session depends on:

  • A. Architectural constraints and query-specific parameters (not controlled by the QI):
  • 1. C – the number of relevant documents held in each segment (could be in the millions).
  • 2. m – the number of index segments.
  • 3. Other properties of the search engine.
SLIDE 15

Cost of Active Search Session depends on:

  • B. Implementation-dependent parameters (controlled by the QI):
  • 1. n = 10r, the number of results the QI prepares.
  • 2. z = zq(n,m) – the number of results each of the m segments returns, so that with probability q, these mz results contain the top n results.

SLIDE 16

The QI needs to decide n – the Prefetching Dilemma:

1. Preparing just the single required result page is cheap, but follow-up queries will require additional Active Query executions.

2. Prefetching several result pages is expensive, but saves Active Query executions should the user request the prefetched pages while they are still cached.

We therefore need to model user behaviour.

SLIDE 17

Search engine users - some statistics:

A typical user views relatively few result pages (based on 3 published analyses of search engine query logs):

  – 1st pages (containing the top-10 results of queries) account for at least 58% of all result pages viewed by users.
  – Pages in ranks 1-3 (containing the top-30 results) account for at least 88% of all page views.

SLIDE 18

Search Engine users: Views of Result Pages 1-6

[Chart: views per result page for pages 2-6 of the AltaVista log]

Analysis of an AltaVista query log containing ~7.55 million result page views from September 2001: about 4.8 million views (63.5%) are for the first page of results (the top-10 results).

SLIDE 19

Search Engine users: Views of Result Pages 6-20

[Chart: views per result page for pages 6-20 of the AltaVista log]

SLIDE 20

Modeling Search Sessions

  • Result pages are always viewed in their natural order.
  • The number of result pages viewed per session is a geometric random variable with parameter p: the probability that a user views precisely result pages 1,…,k of any query in a session is (1-p)p^(k-1).
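
The geometric model makes the relevant probabilities one-liners (a small sketch; p = 0.5 is an illustrative value):

```python
def p_views_exactly(k, p):
    """P(the user views precisely result pages 1..k): (1-p) * p**(k-1)."""
    return (1 - p) * p ** (k - 1)

def p_views_beyond(k, p):
    """P(the user requests more than k pages) = p**k – the chance that
    k prefetched pages do not cover the whole session."""
    return p ** k

# With p = 0.5, half of all sessions end after the first result page:
assert p_views_exactly(1, 0.5) == 0.5
assert p_views_beyond(2, 0.5) == 0.25
```

The term 1/(1 - p^r) that appears later in the work function W(r) is exactly the expected number of Active Query executions when r pages are prefetched per execution.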

SLIDE 21

Executing an Active Query: Work at segments

  • Each of the m segments selects its top z results.
  • Assuming documents are evenly distributed, all segments perform roughly the same amount of work, which depends on:
    – the number of documents relevant to the query (the query's breadth);
    – the number z of results that each segment returns to the QI.

SLIDE 22

Executing an Active Query: Work at QI

  • The QI's work is dominated by receiving the mz results from the segments, and merging them to get the top n = 10r results.
  • The cache space required depends on n = 10r, the number of results that are prepared.
  • A single round of communication between the QI and the segments is performed.

SLIDE 23

W(r): The expected cost of an Active Search Session

W = Σ_{i=1}^{∞} Pr(executing the i-th Active Query) × (cost of executing each Active Query)

When preparing r pages per execution, this cost is:

W(r) = ar + (b + c·zq(r,m) + d·r) / (1 - p^r)

Note: For each topic, there is a fixed optimal value ropt which optimizes the expected work.
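
Given concrete constants, W(r) can be evaluated and minimized by direct search (a sketch: the constants a, b, c, d, p and the zq values below are illustrative placeholders, not the paper's measurements):

```python
def W(r, z, a=10.0, b=300.0, c=25.0, d=40.0, p=0.5):
    """W(r) = a*r + (b + c*z + d*r) / (1 - p**r); the per-execution cost
    is paid an expected 1/(1 - p**r) times per Active Search Session."""
    return a * r + (b + c * z + d * r) / (1 - p ** r)

# Illustrative zq(r, m) values for r = 1..8 (placeholders, not computed):
zq_table = {1: 6, 2: 10, 3: 14, 4: 17, 5: 21, 6: 24, 7: 27, 8: 31}

r_opt = min(zq_table, key=lambda r: W(r, zq_table[r]))
# With these placeholder constants the minimum falls at r_opt = 2.
```

Prefetching more pages raises the per-execution cost (the numerator) but lowers the expected number of executions (the 1/(1 - p^r) factor); ropt balances the two.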

SLIDE 24

Part 3: Optimizing the cost of a typical Active Search Session

SLIDE 25

Optimizing W(r): calculating ropt

  • The paper presents a polynomial-time algorithm that, given a query topic T, determines ropt, the number of result pages to prepare per query execution so as to minimize the work function W(r).
  • In addition, a (simpler) polynomial-time algorithm which, given a parameter ε, outputs a value rε for which W(ropt) / W(rε) ≥ 1 - ε, and is applicable to any (monotonic) cost function, assuming the number of viewed pages is a geometric RV.

SLIDE 26

Optimizing W(r): Calculating zq(r,m)

Problem demonstration: m = 5, r = 1 (hence n = 10). How many results should the QI retrieve from each segment?

[Diagram: the top-10 results spread over the five segments.] Fetching the top three results from each of the five segments fails to collect all top-10 results!

SLIDE 27

Calculating zq(r,m)

  • Notation:
    1. m – number of segments.
    2. n – number of requested results (n = 10r).
  • In order to collect the top-n results with probability 1, the QI must retrieve n results from each segment.
  • In order to have any chance of collecting the top-n results, the QI must retrieve at least n/m results from each segment.

SLIDE 28

Calculating zq(r,m)

  • The paper presents a polynomial-time DP algorithm to calculate zq(r,m), the minimal number of results that the QI can retrieve from each segment and still obtain the top-n (n = 10r) results with probability q.
  • Applicable to any search engine that uses an m-way locally segmented index where documents are distributed uniformly and independently.
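
The probability behind zq can be computed with a small dynamic program (an illustration of the idea, not the paper's exact algorithm): since the top-n documents fall into the m segments uniformly and independently, fetching z results per segment succeeds exactly when no segment holds more than z of the top n.

```python
from math import comb

def prob_max_load_le(z, n, m):
    """P(no segment holds more than z of the top-n documents), when each
    document lands in one of m segments uniformly and independently."""
    # ways[t] = number of assignments of t of the n labeled documents to
    # the segments processed so far, with at most z documents per segment
    ways = [1] + [0] * n
    for _ in range(m):
        new = [0] * (n + 1)
        for t in range(n + 1):
            if ways[t]:
                for k in range(min(z, n - t) + 1):
                    new[t + k] += ways[t] * comb(n - t, k)
        ways = new
    return ways[n] / m ** n

def zq(n, m, q=0.99):
    """Minimal z such that the m*z fetched results contain the top n
    with probability at least q."""
    z = (n + m - 1) // m          # lower bound: ceil(n/m)
    while prob_max_load_le(z, n, m) < q:
        z += 1
    return z
```

For example, zq(10, 5) computes the single-page (r = 1) value for five segments; zq(n, m, 1.0) is always n, matching the slide's probability-1 bound.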

SLIDE 29

Sample values of zq(r,m), q = 0.99

[Chart: zq(r,m) vs. the number of result pages r (10 results per page), for 5, 25 and 50 segments, together with the lower bound n/m for 5 segments.]

SLIDE 30

Behavior of W(r)

W(r) = ar + (b + c·zq(r,m) + d·r) / (1 - p^r)

[Chart: W(r) vs. r for q = 0.99, m = 25, p = 0.5 and a query breadth of 213 results per segment; the minimum is attained at ropt = 6.]

SLIDE 31

Part 4: Possible implementations

SLIDE 32

Implementation of prefetching policy

  • 1. A preprocessing stage: calculate and store the relevant data.
  • 2. Query execution stage: determine the parameters n, z from the stored data and the query topic.

SLIDE 33

Prefetching Framework: Preprocessing Stage

  • Determine the work function W(r) that describes the resources that the engine consumes during a Search Session.
  • For a wide range of query breadths, calculate ropt and the corresponding values of zq(ropt,m).
  • Load tables with the values of ropt and zq(ropt,m) into the QI and each of the index segments.
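
The loaded tables then reduce query-time policy selection to a lookup keyed by query breadth (a sketch; the breadth buckets and (ropt, z) values are made-up placeholders that would in practice come from the paper's optimization and DP algorithms):

```python
import bisect

# Hypothetical precomputed policy table: breadth-bucket upper bounds,
# each paired with (r_opt, zq(r_opt, m)) for that range of breadths.
BREADTH_BOUNDS = [1_000, 10_000, 100_000, 1_000_000]
POLICIES = [(1, 7), (2, 12), (3, 16), (6, 24)]

def lookup_policy(breadth):
    """Return (r_opt, z) for a query's estimated breadth; queries broader
    than the last bound fall into the last bucket."""
    i = min(bisect.bisect_left(BREADTH_BOUNDS, breadth), len(POLICIES) - 1)
    return POLICIES[i]

r_opt, z = lookup_policy(5_000)    # a mid-breadth query -> (2, 12)
```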

SLIDE 34

Prefetching Framework: During Query Execution

  • Centralized approach:
    – The QI estimates the query's breadth using global term statistics.
    – It looks up the corresponding values of ropt and zq(ropt,m).
    – It queries each segment for zq(ropt,m) results and prepares ropt result pages.

SLIDE 35

Prefetching Framework: During Query Execution

  • Distributed approach:
    – Each segment estimates the query's breadth independently by considering the number of query matches it contains, looks up the corresponding zq(ropt,m), and returns this number of results to the QI.

SLIDE 36

Prefetching Framework: During Query Execution

  • Distributed approach (cont.):
    – The QI (conservatively) estimates the query's breadth from the number of returned results, and prepares the required number of result pages.

SLIDE 37

Summary and Future Work

  • This work aims at minimizing the expected computational load of an Active Search Session.
  • Some limitations: the model is insensitive to the popularity of queries, and the replacement policy of the query result cache is not considered.
  • Future work: integrate result prefetching with cache replacement policies; implement (or simulate) prefetching policies on actual search engines.