INFO 4300 / CS4300 Information Retrieval slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/
IR 2: The term vocabulary and postings lists
Paul Ginsparg
Cornell University, Ithaca, NY, 30 Aug 2011
Administrativa (tentative)
Course Webpage: http://www.infosci.cornell.edu/Courses/info4300/2011fa/
Lectures: Tuesday and Thursday 11:40-12:55, Kimball B11
Instructor: Paul Ginsparg, ginsparg@..., 255-7371, Physical Sciences Building 452
Instructor's Office Hours: Wed 1-2pm, Fri 2-3pm, or e-mail instructor to schedule an appointment
Teaching Assistant: Saeed Abdullah, use cs4300-l@lists.cs.cornell.edu
Course text at: http://informationretrieval.org/
Introduction to Information Retrieval, C. Manning, P. Raghavan, H. Schütze
see also
Information Retrieval, S. Büttcher, C. Clarke, G. Cormack
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12307
Overview
1. Recap
2. Query optimization
3. Discussion Section (Thu 1 Sep)
4. The term vocabulary: General + Non-English; English
Major Steps
Inverted index
For each term t, we store a list of all documents that contain t.
Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132 ...
Calpurnia → 2 31 54 101 ...
postings
Intersecting two postings lists
Brutus    → 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174
Calpurnia → 2 → 31 → 54 → 101
Intersection ⇒ 2 → 31
Linear in the length of the postings lists.
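The linear merge above can be sketched in Python (a minimal illustration; the function and variable names are mine, not from the slides):

```python
# Merge-style intersection of two sorted postings lists.
# Runs in O(len(p1) + len(p2)): linear in the combined list length.
def intersect(p1, p2):
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:          # docID in both lists: keep it
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:         # advance the pointer on the smaller docID
            i += 1
        else:
            j += 1
    return answer

brutus = [1, 2, 4, 11, 31, 45, 173, 174]
calpurnia = [2, 31, 54, 101]
print(intersect(brutus, calpurnia))  # [2, 31]
```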
Constructing the inverted index: Sort postings
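The sort-then-collapse construction worked through on this slide can be sketched in Python (a toy illustration over the two example documents; the variable names are mine, not from the slides):

```python
from collections import defaultdict

# Collect (term, docID) pairs, sort them, then collapse duplicates
# into per-term postings lists.
docs = {1: "i did enact julius caesar i was killed i' the capitol brutus killed me",
        2: "so let it be with caesar the noble brutus hath told you caesar was ambitious"}

pairs = sorted((term, doc_id) for doc_id, text in docs.items()
               for term in text.split())

index = defaultdict(list)
for term, doc_id in pairs:
    # Duplicates are adjacent after sorting, so only check the last posting.
    if not index[term] or index[term][-1] != doc_id:
        index[term].append(doc_id)

print(index["brutus"])   # [1, 2]
print(index["caesar"])   # [1, 2]
```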
term-docID pairs in order of appearance:
I 1, did 1, enact 1, julius 1, caesar 1, I 1, was 1, killed 1, i' 1, the 1, capitol 1, brutus 1, killed 1, me 1, so 2, let 2, it 2, be 2, with 2, caesar 2, the 2, noble 2, brutus 2, hath 2, told 2, you 2, caesar 2, was 2, ambitious 2

⇒ sorted by term, then by docID:
ambitious 2, be 2, brutus 1, brutus 2, capitol 1, caesar 1, caesar 2, caesar 2, did 1, enact 1, hath 2, I 1, I 1, i' 1, it 2, julius 1, killed 1, killed 1, let 2, me 1, noble 2, so 2, the 1, the 2, told 2, you 2, was 1, was 2, with 2

Westlaw: Example queries
Information need: Information on the legal theories involved in preventing the disclosure of trade secrets by employees formerly employed by a competing company
Query: "trade secret" /s disclos! /s prevent /s employe!
Information need: Requirements for disabled people to be able to access a workplace
Query: disab! /p access! /s work-site work-place (employment /3 place)
Information need: Cases about a host's responsibility for drunk guests
Query: host! /p (responsib! liab!) /p (intoxicat! drunk!) /p guest
Query optimization
Consider a query that is an AND of n terms, n > 2.
For each of the terms, get its postings list, then AND them together.
Example query: Brutus AND Calpurnia AND Caesar
What is the best order for processing this query?
Query optimization
Example query: Brutus AND Calpurnia AND Caesar
Simple and effective optimization: process in order of increasing frequency.
Start with the shortest postings list, then keep cutting further.
In this example: first Caesar, then Calpurnia, then Brutus.
Brutus    → 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174
Calpurnia → 2 → 31 → 54 → 101
Caesar    → 5 → 31
Optimized intersection algorithm for conjunctive queries
Intersect(⟨t1, . . . , tn⟩)
1  terms ← SortByIncreasingFrequency(⟨t1, . . . , tn⟩)
2  result ← postings(first(terms))
3  terms ← rest(terms)
4  while terms ≠ nil and result ≠ nil
5  do result ← Intersect(result, postings(first(terms)))
6     terms ← rest(terms)
7  return result
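The frequency-ordered intersection might look like this in Python (a sketch: it uses set intersection in place of the linear merge, and the example postings are from the previous slide):

```python
def intersect_query(postings):
    """postings: dict mapping each query term to its sorted postings list."""
    # Process terms in order of increasing postings-list length
    # (document frequency), so the running result shrinks fastest.
    terms = sorted(postings, key=lambda t: len(postings[t]))
    result = set(postings[terms[0]])
    for t in terms[1:]:
        if not result:          # early exit once the intersection is empty
            break
        result &= set(postings[t])
    return sorted(result)

postings = {"Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
            "Calpurnia": [2, 31, 54, 101],
            "Caesar":    [5, 31]}
print(intersect_query(postings))  # [31]
```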
More general optimization
Example query: (madding OR crowd) AND (ignoble OR strife)
Get frequencies for all terms.
Estimate the size of each OR by the sum of its frequencies (conservative).
Process in increasing order of OR sizes.
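A sketch of that ordering heuristic; the document frequencies below are hypothetical, purely for illustration:

```python
# Estimate each OR-group's result size by the sum of its terms' document
# frequencies (an upper bound), then process groups smallest-estimate first.
df = {"madding": 10, "crowd": 60, "ignoble": 5, "strife": 20}  # made-up df values
groups = [("madding", "crowd"), ("ignoble", "strife")]

ordered = sorted(groups, key=lambda g: sum(df[t] for t in g))
print(ordered)  # [('ignoble', 'strife'), ('madding', 'crowd')]
```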
In addition
Determine the set of terms in the dictionary and provide retrieval tolerant to spelling mistakes and inconsistent choice of words.
Search for compounds or phrases that denote a concept such as "operating system", or proximity queries such as Gates NEAR Microsoft: augment the index to capture proximities of terms in documents.
The Boolean model only records term presence or absence, but perhaps give more weight to documents that have a term several times? Need term frequency information (the number of times a term occurs in a document) in postings lists.
Boolean queries retrieve a set of matching documents, but we need an effective method to order (or "rank") returned results: this requires a mechanism for determining a document score measuring goodness of match to the query.
Summary
Ad hoc searching over documents has recently conquered the world, powering not only web search engines but the kind of unstructured search that lies behind large eCommerce websites. But web search engines have added at least partial implementations of some of the most popular operators from extended Boolean models: phrase search, Boolean operators
Discussion 1, Thu 1 Sep
The course uses the Computer Science Course Management System (CMS) to manage assignments. Between now and class on Thu, log in using your NetID and password at http://cms.csuglab.cornell.edu/. Go to CS 4300 and follow the instructions for "assignment 0". If you do not see CS 4300, contact cs4300-l@lists.cs.cornell.edu.
In preparation, explore three information retrieval systems and compare them:
Bing, a Web search engine (http://bing.com/)
The Library of Congress catalog, a very large bibliographic catalog (http://catalog.loc.gov/)
PubMed, an indexing and abstracting service for medicine and related fields (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi)
Use each service separately for the following information discovery task: What is the medical evidence that vaccines can cause autism?
Evaluate each search service. What do you consider the strengths and weaknesses of each service? When would you use them?
(a) Does the service search full text or surrogates? What is the underlying corpus? What effect does this have on your results?
(b) Is fielded searching offered? What Boolean operators are supported? What regular expressions? How does it handle non-Roman character sets? What is the stop list? How are results ranked? Are they sorted, and if so in what order?
(c) From a usability viewpoint: What style of user interface(s) is provided? What training or help services? If there are basic and advanced user interfaces, what does each offer?
N.B.: Use these questions to guide your thoughts; it is not necessary to write detailed answers to them in a write-up. Just give the "best URL", explaining in a sentence or two why you found that particular resource definitive, comprehensive, or authoritative.
Major Steps
Recall the major steps in inverted index construction:
Terms and documents
Last lecture: Simple Boolean retrieval system Our assumptions were:
We know what a document is. We know what a term is.
Both issues can be complex in reality. We’ll look a little bit at what a document is. But mostly at terms: How do we define and process the vocabulary of terms of a collection?
Term Statistics
Next lecture: statistical properties of term occurrences.
Heaps' law M = kT^b: empirical growth of the vocabulary with the size of the collection.
Zipf's law cf_i ∝ 1/i: empirical distribution of term usage.
Both are power laws. But first...
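As a quick illustration of Heaps' law M = kT^b, using parameter values often quoted for English text (k ≈ 44, b ≈ 0.49; these values are an assumption, not from the slides):

```python
# Heaps' law: vocabulary size M as a function of the number of tokens T.
# k and b below are commonly cited fits for English collections (assumed).
k, b = 44, 0.49
for T in (10**6, 10**8):
    print(f"T = {T:>9}: M ~ {k * T**b:,.0f}")
```

Note the sublinear growth: multiplying the collection size by 100 roughly multiplies the vocabulary by 10, since b is close to 1/2.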
Parsing a document
Before we can even start worrying about terms, we need to deal with the format and language of each document.
What format is it in? (pdf, word, excel, html, etc.)
What language is it in?
What character set is in use?
Each of these is a classification problem, which we will study later in this course.
Alternative: use heuristics.
Format/Language: Complications
A single index usually contains terms of several languages.
Sometimes a document or its components contain multiple languages/formats. French email with Spanish pdf attachment
What is the document unit for indexing? A file? An email? An email with 5 attachments? A group of files (ppt or latex in HTML)? Issues with books (again precision vs. recall)
What is a document?
Take-away: potentially non-trivial; in many cases requires some design decisions.
Definitions
Word: a delimited string of characters as it appears in the text.
Term: a "normalized" word (case, morphology, spelling, etc.); an equivalence class of words.
Token: an instance of a word or term occurring in a document.
Type: the same as a term in most cases: an equivalence class of tokens.
Type/token distinction: Example
In June, the dog likes to chase the cat in the barn. (How many word tokens? How many word types?)
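Counting the example sentence in Python (punctuation stripped, everything lowercased):

```python
# Word tokens vs. word types for the slide's example sentence.
sentence = "In June, the dog likes to chase the cat in the barn."
tokens = sentence.lower().replace(",", "").replace(".", "").split()
types = set(tokens)
print(len(tokens), len(types))  # 12 9
```

So: 12 tokens, 9 types. Without the lowercasing, "In" and "in" would count as distinct types, giving 10.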
Recall: Inverted index construction
Input: Friends, Romans, countrymen. So let it be with Caesar...
Output: friend roman countryman so...
Each token is a candidate for a postings entry.
What are valid tokens to emit?
Why tokenization is difficult, even in English
Example: Mr. O'Neill thinks that the boys' stories about Chile's capital aren't amusing.
Tokenize this sentence:
(neill, oneill, o'neill, o' neill, o neill)
(aren't, arent, are n't, aren t)
The choices determine which Boolean queries will match.
One word or two? (or several)
Hewlett-Packard
State-of-the-art
co-education
the hold-him-back-and-drag-him-away maneuver
data base
San Francisco
Los Angeles-based company
cheap San Francisco-Los Angeles fares
York University vs. New York University
Numbers
3/20/91   20/3/91   Mar 20, 1991
100.2.86.144
(800) 234-2333   800.234.2333
Older IR systems may not index numbers... but generally it's a useful feature.
Etc.
C++
C#
B-52
M*A*S*H
jblack@mail.yahoo.com
http://stuff.big.com/new/specials.html
1Z9999W99845399981
Other languages
English is dominant on the WWW: approximately 60% of web pages are in English (Gerrand 2007). But still 40% of the web is non-English, and that share is expected to grow over time (less than one third of Internet users, and less than 10% of the world's population, primarily speak English).
Signs of change: Sifry (2007) finds only about one third of blog posts are in English.
30 Aug 2011: a Google search for "percentage of web pages in english" brings up (page dated 26 Feb 2011)
http://www.quora.com/How-many-websites-percentage-or-absolute-numbers-are-not-in-English:
". . . best guess would be about 40% of webpages today are in English, and 60% (or about 170 billion websites) are non-English."
Chinese: No whitespace
[Chinese example text, partly lost in extraction: a sentence written without whitespace between words, reporting that Sharapova lives in Florida in the southern US, celebrated her 18th birthday on 9 April in America's largest city, and smiled sweetly at her birthday party.]
Ambiguous segmentation in Chinese
The two characters can be treated as one word meaning ‘monk’ or as a sequence of two words meaning ‘and’ and ‘still’.
Other cases of "no whitespace"
Compounds in Dutch and German:
Computerlinguistik → Computer + Linguistik
Lebensversicherungsgesellschaftsangestellter ("life insurance company employee") → leben + versicherung + gesellschaft + angestellter
Inuit: tusaatsiarunnanngittualuujunga ("I can't hear very well.")
Swedish, Finnish, Greek, Urdu, and many other languages
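Compound splitting can be sketched as a greedy recursive dictionary lookup. This is a toy with a two-word lexicon of my own; real splitters use large lexicons and must also handle linking elements such as the "s" in Lebensversicherung:

```python
# Toy lexicon: real systems use a full German wordlist.
lexicon = {"computer", "linguistik"}

def split_compound(word, lexicon):
    """Return a list of lexicon words covering `word`, or None."""
    word = word.lower()
    for i in range(1, len(word)):
        head, tail = word[:i], word[i:]
        if head in lexicon:
            if tail in lexicon:            # exact two-way split
                return [head, tail]
            rest = split_compound(tail, lexicon)
            if rest:                       # recurse on the remainder
                return [head] + rest
    return None

print(split_compound("Computerlinguistik", lexicon))  # ['computer', 'linguistik']
```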
Japanese
[Japanese example text lost in extraction.]
4 different "alphabets": Chinese characters, the hiragana syllabary for inflectional endings and function words, the katakana syllabary for transcription of foreign words and other uses, and Latin. No spaces (as in Chinese). The end user can express a query entirely in hiragana!
Arabic script
[Arabic example, partly garbled in extraction: the letters ك ت ا ب with diacritics, read right to left as /kitābun/, 'a book'.]
Arabic script: Bidirectionality
[Arabic example sentence lost in extraction; it mixes right-to-left Arabic text with the left-to-right numerals 1962 and 132.]
'Algeria achieved its independence in 1962 after 132 years of French occupation.'
Bidirectionality is not a problem if text is coded in Unicode.
Accents and diacritics
Accents: résumé vs. resume (simple omission of accent)
Umlauts: Universität vs. Universitaet (substitution with the letter sequence "ae")
Most important criterion: How are users likely to write their queries for these words?
Even in languages that standardly have accents, users often do not type them. (Polish?)
Normalization
Need to "normalize" terms in indexed text as well as query terms into the same form.
Example: We want to match U.S.A. and USA.
We most commonly implicitly define equivalence classes of terms.
Alternatively: do asymmetric expansion
window  → window, windows
windows → Windows, windows
Windows → Windows (no expansion)
More powerful, but less efficient.
Why don't you want to put window, Window, windows, and Windows in the same equivalence class?
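The asymmetric-expansion table can be sketched as a lookup applied at query time. The dictionary literal mirrors the slide's example; in practice such a table would be derived from the collection rather than hand-written:

```python
# Asymmetric expansion: what a query term expands to depends on the exact
# form the user typed (here, its capitalization).
expansions = {
    "window":  ["window", "windows"],
    "windows": ["Windows", "windows"],
    "Windows": ["Windows"],          # no expansion
}

def expand(query_term):
    return expansions.get(query_term, [query_term])

print(expand("window"))   # ['window', 'windows']
print(expand("Windows"))  # ['Windows']
```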
Normalization: Other languages
Normalization and language detection interact.
German: PETER WILL NICHT MIT. ("Peter doesn't want to come along.") → MIT = mit
English: He got his PhD from MIT. → MIT ≠ mit
Case folding
Reduce all letters to lower case.
Possible exceptions: capitalized words in mid-sentence
MIT vs. mit
Fed vs. fed
Windows vs. windows
It's often best to lowercase everything, since users will use lowercase regardless of correct capitalization.
Stop words
Stop words = extremely common words which would appear to be of little value in helping select documents matching a user need.
Examples: a, an, and, are, as, at, be, by, for, from, has, he, in, is, it, its, of, on, that, the, to, was, were, will, with
Stop word elimination used to be standard in older IR systems.
But you need stop words for phrase queries, e.g. "King of Denmark", "flights to London", "As we may think", "To be or not to be".
Most web search engines index stop words.
More equivalence classing
Soundex: phonetic equivalence (Muller = Mueller)
Thesauri: semantic equivalence (car = automobile)
Lemmatization
Reduce inflectional/variant forms to base form.
Example: am, are, is → be
Example: car, cars, car's, cars' → car
Example: the boy's cars are different colors → the boy car be different color
Lemmatization implies doing "proper" reduction to dictionary headword form (the lemma).
Inflectional morphology (cutting → cut) vs. derivational morphology (destruction → destroy)
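A lookup-based lemmatizer covering only the slide's examples might look like this (the table is a toy of my own; real lemmatizers combine a dictionary with morphological analysis):

```python
# Minimal lookup-based lemmatization; unknown words are just lowercased.
lemmas = {"am": "be", "are": "be", "is": "be",
          "cars": "car", "car's": "car", "cars'": "car",
          "boy's": "boy", "colors": "color"}

def lemmatize(word):
    return lemmas.get(word.lower(), word.lower())

out = " ".join(lemmatize(w) for w in "the boy's cars are different colors".split())
print(out)  # the boy car be different color
```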
Stemming
Definition of stemming: a crude heuristic process that chops off the ends of words in the hope of achieving what "principled" lemmatization attempts to do with a lot of linguistic knowledge.
Language dependent.
Often covers both inflectional and derivational morphology.
Example (derivational): automate, automatic, automation all reduce to automat.
Porter algorithm
Most common algorithm for stemming English. Results suggest that it is at least as good as other stemming algorithms.
Conventions + 5 phases of reductions Phases are applied sequentially Each phase consists of a set of commands.
Sample command: Delete final "ement" if what remains is longer than 1 character.
replacement → replac
cement → cement
Sample convention: Of the rules in a compound command, select the one that applies to the longest suffix.
Porter stemmer: A few rules
Rule        Example
SSES → SS   caresses → caress
IES  → I    ponies → poni
SS   → SS   caress → caress
S    →      cats → cat
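The four sample rules can be written directly in Python; they are tried longest suffix first, per the convention on the previous slide. This is only a fragment of the full Porter algorithm:

```python
# Porter step 1a: plural-stripping rules, longest matching suffix wins.
def step1a(word):
    if word.endswith("sses"):
        return word[:-4] + "ss"   # caresses -> caress
    if word.endswith("ies"):
        return word[:-3] + "i"    # ponies -> poni
    if word.endswith("ss"):
        return word               # caress -> caress (unchanged)
    if word.endswith("s"):
        return word[:-1]          # cats -> cat
    return word

for w in ("caresses", "ponies", "caress", "cats"):
    print(w, "->", step1a(w))
```

Note that the SS → SS rule does real work: without it, the final S rule would wrongly strip caress to cares.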
Three stemmers: A comparison
Sample text: Such an analysis can reveal features that are not easily visible from the variations in the individual genes and can lead to a picture of expression that is more biologically transparent and accessible to interpretation

Porter stemmer: such an analysi can reveal featur that ar not easili visibl from the variat in the individu gene and can lead to a pictur of express that is more biolog transpar and access to interpret

Lovins stemmer: such an analys can reve featur that ar not eas vis from th vari in th individu gen and can lead to a pictur of expres that is mor biolog transpar and acces to interpres

Paice stemmer: such an analys can rev feat that are not easy vis from the vary in the individ gen and can lead to a pict of express that is mor biolog transp and access to interpret

http://www.tartarus.org/~martin/PorterStemmer/
http://www.cs.waikato.ac.nz/~eibe/stemmers/
http://www.comp.lancs.ac.uk/computing/research/stemming/
Does stemming improve effectiveness?
In general, stemming increases effectiveness for some queries and decreases effectiveness for others (it increases recall at the expense of precision).
The Porter stemmer equivalence class oper contains all of operate, operating, operates, operation, operative, operatives, operational.
Queries where stemming hurts: "operational AND research", "operating AND system", "operative AND dentistry"
What does Google do?
Stop words
Normalization
Tokenization
Lowercasing
Stemming
Non-Latin alphabets
Umlauts
Compounds
Numbers