

SLIDE 1

INFO 4300 / CS4300 Information Retrieval, slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/

IR 2: The term vocabulary and postings lists

Paul Ginsparg

Cornell University, Ithaca, NY

30 Aug 2011

1 / 55
SLIDE 2

Administrativa (tentative)

Course Webpage: http://www.infosci.cornell.edu/Courses/info4300/2011fa/
Lectures: Tuesday and Thursday 11:40-12:55, Kimball B11
Instructor: Paul Ginsparg, ginsparg@..., 255-7371, Physical Sciences Building 452
Instructor's Office Hours: Wed 1-2pm, Fri 2-3pm, or e-mail the instructor to schedule an appointment
Teaching Assistant: Saeed Abdullah; use cs4300-l@lists.cs.cornell.edu
Course text at http://informationretrieval.org/

Introduction to Information Retrieval, C. Manning, P. Raghavan, H. Schütze

see also

Information Retrieval, S. Büttcher, C. Clarke, G. Cormack

http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12307

SLIDE 3

Overview

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 4

Outline

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 5

Major Steps

  • 1. Collect documents
  • 2. Tokenize text
  • 3. Linguistic preprocessing
  • 4. Index documents
SLIDE 6

Inverted index

For each term t, we store a list of all documents that contain t:
Brutus → 1, 2, 4, 11, 31, 45, 173, 174
Caesar → 1, 2, 4, 5, 6, 16, 57, 132, ...
Calpurnia → 2, 31, 54, 101, ...
The terms on the left constitute the dictionary; the docID lists on the right are the postings.
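The dictionary-plus-postings structure can be sketched in a few lines of Python; the terms and docIDs are the Shakespeare examples from the slide, and the lowercased keys are an assumption:

```python
# Inverted index sketch: a dictionary maps each term to its postings
# list, a sorted list of the docIDs that contain the term.
index = {
    "brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "calpurnia": [2, 31, 54, 101],
}

def postings(term):
    """Look up a term's postings list; unknown terms match no documents."""
    return index.get(term.lower(), [])
```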

SLIDE 7

Intersecting two postings lists

Brutus → 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174
Calpurnia → 2 → 31 → 54 → 101
Intersection ⇒ 2 → 31
The merge is linear in the sum of the lengths of the postings lists.
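A minimal sketch of the linear merge: walk both sorted lists with two pointers, always advancing the side with the smaller docID.

```python
def intersect(p1, p2):
    """Intersect two sorted postings lists in O(len(p1) + len(p2))."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:      # docID present in both lists: keep it
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:     # advance the pointer at the smaller docID
            i += 1
        else:
            j += 1
    return answer
```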

SLIDE 8

Constructing the inverted index: Sort postings

Unsorted (term, docID) pairs, in order of occurrence:
I 1, did 1, enact 1, julius 1, caesar 1, I 1, was 1, killed 1, i' 1, the 1, capitol 1, brutus 1, killed 1, me 1, so 2, let 2, it 2, be 2, with 2, caesar 2, the 2, noble 2, brutus 2, hath 2, told 2, you 2, caesar 2, was 2, ambitious 2

⇒ sorted alphabetically by term, then by docID:
ambitious 2, be 2, brutus 1, brutus 2, capitol 1, caesar 1, caesar 2, caesar 2, did 1, enact 1, hath 2, I 1, I 1, i' 1, it 2, julius 1, killed 1, killed 1, let 2, me 1, noble 2, so 2, the 1, the 2, told 2, you 2, was 1, was 2, with 2
SLIDE 9

Westlaw: Example queries

Information need: Information on the legal theories involved in preventing the disclosure of trade secrets by employees formerly employed by a competing company
Query: “trade secret” /s disclos! /s prevent /s employe!
Information need: Requirements for disabled people to be able to access a workplace
Query: disab! /p access! /s work-site work-place (employment /3 place)
Information need: Cases about a host's responsibility for drunk guests
Query: host! /p (responsib! liab!) /p (intoxicat! drunk!) /p guest

SLIDE 10

Outline

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 11

Query optimization

Consider a query that is an AND of n terms, n > 2.
For each of the terms, get its postings list, then AND them together.
Example query: Brutus AND Calpurnia AND Caesar
What is the best order for processing this query?

SLIDE 12

Query optimization

Example query: Brutus AND Calpurnia AND Caesar
Simple and effective optimization: process in order of increasing frequency.
Start with the shortest postings list, then keep cutting further.
In this example: first Caesar, then Calpurnia, then Brutus.
Brutus → 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174
Calpurnia → 2 → 31 → 54 → 101
Caesar → 5 → 31

SLIDE 13

Optimized intersection algorithm for conjunctive queries

Intersect(t1, ..., tn)
  terms ← SortByIncreasingFrequency(t1, ..., tn)
  result ← postings(first(terms))
  terms ← rest(terms)
  while terms ≠ nil and result ≠ nil
    do result ← Intersect(result, postings(first(terms)))
       terms ← rest(terms)
  return result
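A runnable Python rendering of this algorithm; SortByIncreasingFrequency is approximated by sorting the postings lists by length, which assumes a term's document frequency equals its postings-list length:

```python
def merge(p1, p2):
    """Linear merge intersection of two sorted postings lists."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

def intersect_query(terms, index):
    """AND together the postings of all terms, rarest term first."""
    plists = sorted((index.get(t, []) for t in terms), key=len)
    result = plists[0]
    for p in plists[1:]:
        if not result:          # empty intermediate result: stop early
            break
        result = merge(result, p)
    return result
```

On the slide's example index, the query Brutus AND Calpurnia AND Caesar starts from Caesar's two-element list and returns only document 31.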

SLIDE 14

More general optimization

Example query: (madding OR crowd) AND (ignoble OR strife)
Get the frequencies for all terms.
Estimate the size of each OR by the sum of its frequencies (conservative).
Process in increasing order of OR sizes.
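A sketch of this estimate, with made-up document frequencies for the four terms:

```python
def estimated_or_size(group, df):
    """Conservative (upper-bound) size of an OR-group: the sum of the
    document frequencies of its terms."""
    return sum(df[t] for t in group)

def processing_order(groups, df):
    """Order the OR-groups by increasing estimated result size."""
    return sorted(groups, key=lambda g: estimated_or_size(g, df))
```

With hypothetical counts df = {"madding": 10, "crowd": 500, "ignoble": 5, "strife": 50}, the (ignoble, strife) group (estimate 55) is processed before (madding, crowd) (estimate 510).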

SLIDE 15

In addition

  • Determine the set of terms in the dictionary, and provide retrieval tolerant to spelling mistakes and inconsistent choice of words
  • Search for compounds or phrases that denote a concept, such as “operating system”, or proximity queries such as Gates NEAR Microsoft: augment the index to capture proximities of terms in documents
  • The Boolean model only records term presence or absence, but perhaps give more weight to documents that have a term several times? Need term frequency information (number of times a term occurs in a document) in the postings lists
  • Boolean queries retrieve a set of matching documents, but we need an effective method to order (or “rank”) returned results: requires a mechanism for determining a document score measuring goodness of match to the query

SLIDE 16

Summary

Ad hoc searching over documents has recently conquered the world, powering not only web search engines but the kind of unstructured search that lies behind large eCommerce websites. But web search engines have added at least partial implementations of some of the most popular operators from extended Boolean models: phrase search, Boolean operators

SLIDE 17

Outline

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 18

Discussion 1, Thu 1 Sep

The course uses the Computer Science Course Management System (CMS) to manage assignments.
Between now and class on Thursday, log in using your NetID and password at http://cms.csuglab.cornell.edu/. Go to CS 4300 and follow the instructions for “assignment 0”. If you do not see CS 4300, contact cs4300-l@lists.cs.cornell.edu.
In preparation, explore three information retrieval systems and compare them:
Bing, a Web search engine (http://bing.com/)
The Library of Congress catalog, a very large bibliographic catalog (http://catalog.loc.gov/)
PubMed, an indexing and abstracting service for medicine and related fields (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi)

SLIDE 19

Use each service separately for the following information discovery task: What is the medical evidence that vaccines can cause autism?
Evaluate each search service. What do you consider the strengths and weaknesses of each service? When would you use them?
(a) Does the service search full text or surrogates? What is the underlying corpus? What effect does this have on your results?
(b) Is fielded searching offered? What Boolean operators are supported? What regular expressions? How does it handle non-Roman character sets? What is the stop list? How are results ranked? Are they sorted, and if so, in what order?
(c) From a usability viewpoint: what style of user interface(s) is provided? What training or help services? If there are basic and advanced user interfaces, what does each offer?

N.B.: Use these questions to guide your thoughts; it is not necessary to write detailed answers to them in a write-up. Just give the “best URL”, explaining in a sentence or two why you found that particular resource definitive, comprehensive, or authoritative.

SLIDE 20

Outline

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 21

Major Steps

Recall the major steps in inverted index construction:

  • 1. Collect the documents to be indexed
  • 2. Tokenize the text
  • 3. Do linguistic preprocessing of tokens
  • 4. Index the documents in which each term occurs
SLIDE 22

Terms and documents

Last lecture: simple Boolean retrieval system.
Our assumptions were:

We know what a document is. We know what a term is.

Both issues can be complex in reality. We’ll look a little bit at what a document is. But mostly at terms: How do we define and process the vocabulary of terms of a collection?

SLIDE 23

Term Statistics

Next lecture: statistical properties of term occurrences
Heaps' law, M = kT^b: empirical growth of vocabulary size with the size of the collection
Zipf's law, cf_i ∝ 1/i: empirical distribution of term usage
Both are power laws. But first...

SLIDE 24

Parsing a document

Before we can even start worrying about terms...
...we need to deal with the format and language of each document.
What format is it in? (pdf, word, excel, html, etc.)
What language is it in? What character set is in use?
Each of these is a classification problem, which we will study later in this course.
Alternative: use heuristics.

SLIDE 25

Format/Language: Complications

A single index usually contains terms of several languages.

Sometimes a document or its components contain multiple languages/formats. French email with Spanish pdf attachment

What is the document unit for indexing? A file? An email? An email with 5 attachments? A group of files (ppt or latex in HTML)? Issues with books (again precision vs. recall)

SLIDE 26

What is a document?

Take-away: potentially non-trivial; in many cases requires some design decisions.

SLIDE 27

Outline

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 28

Definitions

Word – a delimited string of characters as it appears in the text.
Term – a “normalized” word (case, morphology, spelling, etc.); an equivalence class of words.
Token – an instance of a word or term occurring in a document.
Type – the same as a term in most cases: an equivalence class of tokens.

SLIDE 29

Type/token distinction: Example

In June, the dog likes to chase the cat in the barn. (How many word tokens? How many word types?)
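Counting for the sentence above can be sketched directly; lowercasing the tokens means “In” and “in” collapse to one type:

```python
import re

def count_tokens_and_types(text):
    """Tokenize on runs of letters, lowercase, and count tokens vs. types."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return len(tokens), len(set(tokens))

sentence = "In June, the dog likes to chase the cat in the barn."
n_tokens, n_types = count_tokens_and_types(sentence)  # 12 tokens, 9 types
```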

SLIDE 30

Recall: Inverted index construction

Input: Friends, Romans, countrymen. So let it be with Caesar...
Output: friend roman countryman so...
Each token is a candidate for a postings entry.
What are valid tokens to emit?

SLIDE 31

Why tokenization is difficult – even in English

Example: Mr. O'Neill thinks that the boys' stories about Chile's capital aren't amusing.
Tokenize this sentence.
(neill, oneill, o'neill, o' neill, o neill)
(aren't, arent, are n't, aren t)
The choices determine which Boolean queries will match.
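Two of the apostrophe policies listed above, sketched as regex tokenizers; the same query term matches under one policy but not the other:

```python
import re

def tokenize_drop_apostrophe(text):
    """Delete apostrophes, then split: O'Neill -> oneill, aren't -> arent."""
    cleaned = text.lower().replace("\u2019", "").replace("'", "")
    return re.findall(r"[a-z]+", cleaned)

def tokenize_split_apostrophe(text):
    """Treat apostrophes as delimiters: O'Neill -> o, neill."""
    return re.findall(r"[a-z]+", text.lower())
```

A query tokenized as "oneill" matches an index built with the first policy but not the second, and vice versa, which is why index and query must use the same tokenizer.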

SLIDE 32

One word or two? (or several)

Hewlett-Packard
State-of-the-art
co-education
the hold-him-back-and-drag-him-away maneuver
data base
San Francisco
Los Angeles-based company
cheap San Francisco-Los Angeles fares
York University vs. New York University

SLIDE 33

Numbers

3/20/91   20/3/91   Mar 20, 1991
100.2.86.144
(800) 234-2333   800.234.2333
Older IR systems may not index numbers...
...but generally it's a useful feature.

SLIDE 34

Etc

C++ C# B-52 M*A*S*H jblack@mail.yahoo.com http://stuff.big.com/new/specials.html 1Z9999W99845399981

SLIDE 35

Other languages

English is dominant on the WWW: approximately 60% of web pages are in English (Gerrand 2007). But 40% of the web is still non-English, and this share is expected to grow over time (less than one third of Internet users and less than 10% of the world's population primarily speak English). Signs of change: Sifry (2007) finds only about one third of blog posts are in English. 30 Aug 2011: a Google search for “percentage of web pages in english” brings up (page dated 26 Feb 2011)

http://www.quora.com/ How-many-websites-percentage-or-absolute-numbers-are-not-in-English:

“. . . best guess would be about 40% of webpages today are in English, and 60% (or about 170 billion websites) are non-English.”

SLIDE 36

Chinese: No whitespace

莎拉波娃现在居住在美国东南部的佛罗里达。今年4月9日，莎拉波娃在美国第一大城市度过了18岁生日。生日派对上，莎拉波娃露出了甜美的微笑。
(“Sharapova now lives in Florida, in the southeastern United States. On April 9 this year, she celebrated her 18th birthday in the biggest city in the US. At the birthday party, Sharapova wore a sweet smile.”)

SLIDE 37

Ambiguous segmentation in Chinese

和尚

The two characters can be treated as one word meaning ‘monk’ or as a sequence of two words meaning ‘and’ and ‘still’.

SLIDE 38

Other cases of “no whitespace”

Compounds in Dutch and German:
Computerlinguistik → Computer + Linguistik
Lebensversicherungsgesellschaftsangestellter → leben + versicherung + gesellschaft + angestellter (“life insurance company employee”)
Inuit: tusaatsiarunnanngittualuujunga (“I can't hear very well.”)
Swedish, Finnish, Greek, Urdu, and many other languages
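Compound splitting is usually lexicon-driven; a greedy longest-prefix sketch follows. The two-word lexicon is hypothetical, and real decompounders also handle linking elements such as the German -s-:

```python
def split_compound(word, lexicon):
    """Greedily split a compound into the longest lexicon prefixes.

    Returns the list of parts, or None when no complete split exists.
    """
    word = word.lower()
    parts = []
    while word:
        # Try the longest possible prefix first.
        for end in range(len(word), 0, -1):
            if word[:end] in lexicon:
                parts.append(word[:end])
                word = word[end:]
                break
        else:
            return None  # no lexicon word matches any prefix
    return parts
```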

SLIDE 39

Japanese

(Japanese example passage mixing kanji, hiragana, katakana, and Latin characters.)

4 different “alphabets”: Chinese characters (kanji), the hiragana syllabary for inflectional endings and function words, the katakana syllabary for transcription of foreign words and other uses, and Latin. No spaces (as in Chinese). The end user can express a query entirely in hiragana!

SLIDE 40

Arabic script

(Arabic script example: the letters ك ت ا ب plus vowel diacritics, read right to left as /kitābun/, ‘a book’.)

SLIDE 41

Arabic script: Bidirectionality

(Arabic sentence containing the Latin-digit numerals 1962 and 132; the reading order alternates ← → ← → ←, with START at the right.)

‘Algeria achieved its independence in 1962 after 132 years of French occupation.’

Bidirectionality is not a problem if text is coded in Unicode.

SLIDE 42

Accents and diacritics

Accents: résumé vs. resume (simple omission of the accent)
Umlauts: Universität vs. Universitaet (substitution with the special letter sequence “ae”)
Most important criterion: how are users likely to write their queries for these words?
Even in languages that standardly have accents, users often do not type them. (Polish?)
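The simple-omission strategy (not the German “ae” substitution) can be sketched with Unicode decomposition: decompose each character, then drop the combining marks.

```python
import unicodedata

def strip_accents(s):
    """Decompose characters (NFKD) and drop the combining marks,
    so 'résumé' -> 'resume' and 'Universität' -> 'Universitat'."""
    decomposed = unicodedata.normalize("NFKD", s)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))
```

Applying the same function to both indexed text and queries means a user who types "resume" still matches documents containing "résumé".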

SLIDE 43

Outline

1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English

SLIDE 44

Normalization

Need to “normalize” terms in indexed text as well as query terms into the same form.
Example: we want to match U.S.A. and USA.
We most commonly implicitly define equivalence classes of terms.
Alternatively: do asymmetric expansion

window → window, windows
windows → Windows, windows
Windows → Windows (no expansion)

More powerful, but less efficient.
Why don't you want to put window, Window, windows, and Windows in the same equivalence class?
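Asymmetric expansion can be sketched as a per-term lookup table; the table below is hypothetical:

```python
# Each query term maps to the set of index terms it should match.
# Asymmetric: lowercase "windows" matches more than capitalized "Windows".
EXPANSIONS = {
    "window":  {"window", "windows"},
    "windows": {"Windows", "windows"},
    "Windows": {"Windows"},
}

def expand(query_term):
    """Terms to look up in the index; unknown terms match only themselves."""
    return EXPANSIONS.get(query_term, {query_term})
```

This is more powerful than equivalence classing because the mapping need not be symmetric, but each query term now triggers several index lookups.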

SLIDE 45

Normalization: Other languages

Normalization and language detection interact.
PETER WILL NICHT MIT. → MIT = mit (German “mit” = “with”)
He got his PhD from MIT. → MIT ≠ mit

SLIDE 46

Case folding

Reduce all letters to lower case.
Possible exceptions: capitalized words in mid-sentence
MIT vs. mit
Fed vs. fed
Windows vs. windows
It's often best to lowercase everything, since users will use lowercase regardless of correct capitalization.

SLIDE 47

Stop words

Stop words = extremely common words which would appear to be of little value in helping select documents matching a user need.
Examples: a, an, and, are, as, at, be, by, for, from, has, he, in, is, it, its, of, on, that, the, to, was, were, will, with
Stop word elimination used to be standard in older IR systems.
But you need stop words for phrase queries, e.g. “King of Denmark”, “flights to London”, “As we may think”, “To be or not to be”, “Let It Be”, “I don't want to be”

Most web search engines index stop words.

SLIDE 48

More equivalence classing

Soundex: phonetic equivalence (Muller = Mueller)
Thesauri: semantic equivalence (car = automobile)

SLIDE 49

Lemmatization

Reduce inflectional/variant forms to the base form.
Example: am, are, is → be
Example: car, cars, car's, cars' → car
Example: the boy's cars are different colors → the boy car be different color
Lemmatization implies doing “proper” reduction to the dictionary headword form (the lemma).
Inflectional morphology (cutting → cut) vs. derivational morphology (destruction → destroy)

SLIDE 50

Stemming

Definition of stemming: a crude heuristic process that chops off the ends of words in the hope of achieving what “principled” lemmatization attempts to do with a lot of linguistic knowledge.
Language dependent.
Often both inflectional and derivational.
Example (derivational): automate, automatic, automation all reduce to automat

SLIDE 51

Porter algorithm

Most common algorithm for stemming English.
Results suggest that it is at least as good as other stemming options.
Conventions + 5 phases of reductions.
Phases are applied sequentially.
Each phase consists of a set of commands.

Sample command: Delete final ement if what remains is longer than 1 character
replacement → replac
cement → cement

Sample convention: Of the rules in a compound command, select the one that applies to the longest suffix.

SLIDE 52

Porter stemmer: A few rules

Rule           Example
SSES → SS      caresses → caress
IES  → I       ponies → poni
SS   → SS      caress → caress
S    →         cats → cat
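The four rules above, applied with the longest-suffix convention, can be sketched as follows; this is only Porter's first rule group, not the full stemmer:

```python
# Rules ordered longest suffix first, per the Porter convention that
# the rule matching the longest suffix wins.
STEP_1A = [("sses", "ss"), ("ies", "i"), ("ss", "ss"), ("s", "")]

def step1a(word):
    """Apply the first matching plural rule; otherwise leave the word alone."""
    for suffix, replacement in STEP_1A:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word
```

Note why the no-op SS → SS rule exists: without it, "caress" would fall through to the S rule and lose its final s.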

SLIDE 53

Three stemmers: A comparison

Sample text: Such an analysis can reveal features that are not easily visible from the variations in the individual genes and can lead to a picture of expression that is more biologically transparent and accessible to interpretation
Porter stemmer: such an analysi can reveal featur that ar not easili visibl from the variat in the individu gene and can lead to a pictur of express that is more biolog transpar and access to interpret
Lovins stemmer: such an analys can reve featur that ar not eas vis from th vari in th individu gen and can lead to a pictur of expres that is mor biolog transpar and acces to interpres
Paice stemmer: such an analys can rev feat that are not easy vis from the vary in the individ gen and can lead to a pict of express that is mor biolog transp and access to interpret
http://www.tartarus.org/~martin/PorterStemmer/
http://www.cs.waikato.ac.nz/~eibe/stemmers/
http://www.comp.lancs.ac.uk/computing/research/stemming/

SLIDE 54

Does stemming improve effectiveness?

In general, stemming increases effectiveness for some queries and decreases it for others (it increases recall at the expense of precision).
The Porter stemmer equivalence class oper contains all of operate, operating, operates, operation, operative, operatives, operational.
Queries where stemming hurts: “operational AND research”, “operating AND system”, “operative AND dentistry”

SLIDE 55

What does Google do?

Stop words
Normalization
Tokenization
Lowercasing
Stemming
Non-Latin alphabets
Umlauts
Compounds
Numbers
