

1. INFO 4300 / CS4300 Information Retrieval. Slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/ IR 24/25: Text Classification and Naive Bayes. Paul Ginsparg, Cornell University, Ithaca, NY. 29 Nov 2011.

2. Administrativa. Assignment 4 due Fri 2 Dec (extended to Sun 4 Dec). Final examination: Wed, 14 Dec, 7:00-9:30 p.m., in Upson B17.

3. Overview: (1) Recap of text classification; (2) Naive Bayes; (3) Discussion.

4. Outline: (1) Recap of text classification; (2) Naive Bayes; (3) Discussion.

5. Relevance feedback. In relevance feedback, the user marks documents as relevant/nonrelevant. Relevant/nonrelevant can be viewed as classes or categories. For each document, the user decides which of these two classes is correct. The IR system then uses these class assignments to build a better query (“model”) of the information need . . . and returns better documents. Relevance feedback is a form of text classification. The notion of text classification (TC) is very general and has many applications within and beyond information retrieval.

6. Another TC task: spam filtering. An example message:

From: ‘‘’’ <takworlld@hotmail.com>
Subject: real estate is the only way... gem oalvgkay

Anyone can buy real estate with no money down. Stop paying rent TODAY! There is no need to spend hundreds or even thousands for similar courses. I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook. Change your life NOW!

=================================================
Click Below to order: http://www.wholesaledaily.com/sales/nmd.htm
=================================================

How would you write a program that would automatically detect and delete this type of message?

7. Formal definition of TC — summary. Training. Given: a document space X (documents are represented in some high-dimensional space); a fixed set of classes C = {c_1, c_2, . . . , c_J}, human-defined for the needs of the application (e.g., relevant vs. nonrelevant); and a training set D of labeled documents ⟨d, c⟩ ∈ X × C. Using a learning method or learning algorithm, we then wish to learn a classifier γ that maps documents to classes: γ : X → C. Application/Testing. Given: a description d ∈ X of a document. Determine: γ(d) ∈ C, i.e., the class most appropriate for d.
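In code, the training/application split amounts to a function that consumes labeled pairs and returns a classifier. A minimal hypothetical Python sketch (all names are illustrative, and the majority-class body is just a placeholder for a real learning method):

```python
from collections import Counter
from typing import Callable, Hashable, Sequence, Tuple

Document = Sequence[str]   # one concrete choice for the document space X: token sequences
Class = Hashable           # a label from the fixed class set C = {c_1, ..., c_J}

def learn(D: Sequence[Tuple[Document, Class]]) -> Callable[[Document], Class]:
    """Learn a classifier gamma: X -> C from the labeled training set D.
    The majority-class body is a trivial placeholder; Rocchio, kNN,
    or Naive Bayes would replace it."""
    majority = Counter(c for _, c in D).most_common(1)[0][0]
    return lambda d: majority   # gamma(d): the class assigned to document d
```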

8. Topic classification: γ(d′) = China. [Figure: a class hierarchy with groups “regions” (UK, China), “industries” (poultry, coffee), and “subject areas” (elections, sports); each class has a small training set of characteristic terms (e.g., “Beijing”, “Great Wall”, “Mao”, “communist” for China), and a test document d′ containing “first”, “private”, “Chinese”, “airline” is assigned to class China.]

9. Examples of how search engines use classification: standing queries (e.g., Google Alerts); language identification (classes: English vs. French, etc.); automatic detection of spam pages (spam vs. nonspam); automatic detection of sexually explicit content (sexually explicit vs. not); sentiment detection: is a movie or product review positive or negative (positive vs. negative); topic-specific or vertical search — restrict search to a “vertical” like “related to health” (relevant to vertical vs. not).

10. Classification methods: 1. Manual. Manual classification was used by Yahoo in the early days of the web. Also: ODP, PubMed. Very accurate if the job is done by experts. Consistent when the problem size and team are small. Scaling manual classification is difficult and expensive. → We need automatic methods for classification.

11. Classification methods: 2. Rule-based. Our Google Alerts example was rule-based classification. There are “IDE”-style development environments for writing very complex rules efficiently (e.g., the Verity integrated development environment). Often: Boolean combinations (as in Google Alerts). Accuracy is very high if a rule has been carefully refined over time by a subject expert. Building and maintaining rule-based classification systems is expensive.
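For illustration only, one such hand-written Boolean rule might look like the following sketch, aimed at the spam message on slide 6 (the predicate name and phrases are invented; real rule languages like Verity's are far richer):

```python
def real_estate_spam_rule(message: str) -> bool:
    """One hand-written Boolean rule: flag a message as spam if it pitches
    real estate together with a no-money-down claim or an order link."""
    t = message.lower()
    return ("real estate" in t
            and ("no money down" in t or "click below to order" in t))

# Fires on the slide-6 message, which contains both trigger phrases:
print(real_estate_spam_rule("Anyone can buy real estate with no money down"))  # True
```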

12. Classification methods: 3. Statistical/Probabilistic. As per our definition of the classification problem — text classification as a learning problem: supervised learning of the classification function γ and its application to classifying new documents. We have looked at a couple of methods for doing this: Rocchio, kNN. Now: Naive Bayes. No free lunch: requires hand-classified training data. But this manual classification can be done by non-experts.

13. Classification methods — summary. 1. Manual (accurate if done by experts; consistent when problem size and team are small; difficult and expensive to scale). 2. Rule-based (accuracy very high if a rule has been carefully refined over time by a subject expert; building and maintaining is expensive). 3. Statistical/Probabilistic: text classification as a learning problem — supervised learning of the classification function γ and its application to classifying new documents. We have looked at a couple of methods for doing this: Rocchio, kNN. Now: Naive Bayes. No free lunch: requires hand-classified training data, but this manual classification can be done by non-experts.

14. Outline: (1) Recap of text classification; (2) Naive Bayes; (3) Discussion.

15. The Naive Bayes classifier. The Naive Bayes classifier is a probabilistic classifier. We compute the probability of a document d being in a class c as follows: $P(c \mid d) \propto P(c) \prod_{1 \le k \le n_d} P(t_k \mid c)$. Here n_d is the length of the document (number of tokens), and $P(t_k \mid c)$ is the conditional probability of term t_k occurring in a document of class c. We interpret $P(t_k \mid c)$ as a measure of how much evidence t_k contributes that c is the correct class. $P(c)$ is the prior probability of c. If a document's terms do not provide clear evidence for one class vs. another, we choose the c with the higher $P(c)$.

16. Maximum a posteriori class. Our goal is to find the “best” class. The best class in Naive Bayes classification is the most likely or maximum a posteriori (MAP) class $c_{\text{map}}$: $c_{\text{map}} = \arg\max_{c \in C} \hat{P}(c \mid d) = \arg\max_{c \in C} \hat{P}(c) \prod_{1 \le k \le n_d} \hat{P}(t_k \mid c)$. We write $\hat{P}$ for $P$ since these values are estimates from the training set.

17. Taking the log. Multiplying lots of small probabilities can result in floating point underflow. Since $\log(xy) = \log(x) + \log(y)$, we can sum log probabilities instead of multiplying probabilities. Since log is a monotonic function, the class with the highest score does not change. So what we usually compute in practice is: $c_{\text{map}} = \arg\max_{c \in C} \bigl[\log \hat{P}(c) + \sum_{1 \le k \le n_d} \log \hat{P}(t_k \mid c)\bigr]$.

18. Naive Bayes classifier. Classification rule: $c_{\text{map}} = \arg\max_{c \in C} \bigl[\log \hat{P}(c) + \sum_{1 \le k \le n_d} \log \hat{P}(t_k \mid c)\bigr]$. Simple interpretation: each conditional parameter $\log \hat{P}(t_k \mid c)$ is a weight that indicates how good an indicator t_k is for c. The prior $\log \hat{P}(c)$ is a weight that indicates the relative frequency of c. The sum of the log prior and the term weights is then a measure of how much evidence there is for the document being in the class. We select the class with the most evidence.
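A minimal Python sketch of this log-space rule, assuming the estimates have already been computed; the dictionary representation and all names here are illustrative, not the lecture's notation:

```python
import math

def nb_log_score(tokens, c, prior, cond_prob):
    """Evidence for class c: log P(c) + sum over tokens t_k of log P(t_k | c)."""
    score = math.log(prior[c])                 # prior weight: relative frequency of c
    for t in tokens:
        score += math.log(cond_prob[(t, c)])   # term weight: how good an indicator t is for c
    return score

def nb_classify(tokens, classes, prior, cond_prob):
    # log is monotonic, so this arg max picks the same class as the product form
    return max(classes, key=lambda c: nb_log_score(tokens, c, prior, cond_prob))
```

With estimated parameters plugged in, nb_classify returns exactly the MAP class of slide 16, just computed in log space to avoid underflow.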

19. Parameter estimation. How do we estimate the parameters $\hat{P}(c)$ and $\hat{P}(t_k \mid c)$ from training data? Prior: $\hat{P}(c) = N_c / N$, where $N_c$ is the number of docs in class c and $N$ is the total number of docs. Conditional probabilities: $\hat{P}(t \mid c) = T_{ct} / \sum_{t' \in V} T_{ct'}$, where $T_{ct}$ is the number of tokens of t in training documents from class c (including multiple occurrences). We've made a Naive Bayes independence assumption here: $\hat{P}(t_{k_1} \mid c) = \hat{P}(t_{k_2} \mid c)$ (i.e., position independence of terms).
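Both estimates can be read straight off corpus counts. A minimal sketch, assuming the training set is given as (token list, class) pairs (that representation, and the names, are assumptions):

```python
from collections import Counter

def estimate(D):
    """Maximum likelihood estimates from a training set D of (tokens, class) pairs:
    prior[c] = N_c / N and cond_prob[(t, c)] = T_ct / sum over t' of T_ct'."""
    N = len(D)
    N_c = Counter(c for _, c in D)                     # docs per class
    T = {c: Counter() for c in N_c}                    # token counts T_ct per class
    for tokens, c in D:
        T[c].update(tokens)                            # counts multiple occurrences of t
    prior = {c: N_c[c] / N for c in N_c}
    cond_prob = {(t, c): T_ct / sum(T[c].values())
                 for c in T for t, T_ct in T[c].items()}
    return prior, cond_prob
```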

20. The problem with maximum likelihood estimates: zeros. Example document d: C = China, $X_1$ = Beijing, $X_2$ = and, $X_3$ = Taipei, $X_4$ = join, $X_5$ = WTO. $P(\text{China} \mid d) \propto P(\text{China}) \cdot P(\text{Beijing} \mid \text{China}) \cdot P(\text{and} \mid \text{China}) \cdot P(\text{Taipei} \mid \text{China}) \cdot P(\text{join} \mid \text{China}) \cdot P(\text{WTO} \mid \text{China})$. If WTO never occurs in class China: $\hat{P}(\text{WTO} \mid \text{China}) = T_{\text{China},\text{WTO}} / \sum_{t' \in V} T_{\text{China},t'} = 0$.

21. The problem with maximum likelihood estimates: zeros (cont'd). If there were no occurrences of WTO in documents in class China, we'd get a zero estimate: $\hat{P}(\text{WTO} \mid \text{China}) = T_{\text{China},\text{WTO}} / \sum_{t' \in V} T_{\text{China},t'} = 0$. → We will get $P(\text{China} \mid d) = 0$ for any document that contains WTO! Zero probabilities cannot be conditioned away.
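Numerically, the failure mode looks like this (all probability values below are made up purely for illustration):

```python
# Strong evidence for China from five factors, wiped out by one unseen term:
p_china = 0.3 * 0.01 * 0.1 * 0.01 * 0.005   # P(China) and four seen-term factors
p_china *= 0.0                               # P(WTO | China) = 0 under MLE
print(p_china)                               # 0.0 -- all the other evidence is lost
```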

22. To avoid zeros: add-one smoothing. Add one to each count to avoid zeros: $\hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t' \in V}(T_{ct'} + 1)} = \frac{T_{ct} + 1}{(\sum_{t' \in V} T_{ct'}) + B}$, where B is the number of different words (in this case the size of the vocabulary: $|V| = M$).
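As a one-line sketch of the smoothed estimate (the counts in the usage line are invented):

```python
def add_one(T_ct: int, total_T_c: int, B: int) -> float:
    """Add-one smoothing: (T_ct + 1) / (sum over t' of T_ct' + B), where B = |V|."""
    return (T_ct + 1) / (total_T_c + B)

# A term with zero occurrences in class c now gets a small nonzero estimate:
print(add_one(0, 8, 6))   # 1/14, approximately 0.071, instead of 0
```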

23. Naive Bayes: summary. Estimate parameters from the training corpus using add-one smoothing. For a new document, for each class, compute the sum of (i) the log of the prior and (ii) the logs of the conditional probabilities of the terms. Assign the document to the class with the largest score.
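Putting the pieces together, a compact end-to-end sketch; the toy corpus is invented for illustration, and the representation choices are assumptions rather than the lecture's code:

```python
import math
from collections import Counter

def train_nb(D):
    """Train multinomial Naive Bayes with add-one smoothing.
    D: list of (token_list, class_label) pairs."""
    N = len(D)
    vocab = {t for tokens, _ in D for t in tokens}
    B = len(vocab)                                     # B = |V|
    N_c = Counter(c for _, c in D)
    T = {c: Counter() for c in N_c}
    for tokens, c in D:
        T[c].update(tokens)
    log_prior = {c: math.log(N_c[c] / N) for c in N_c}
    log_cond = {c: {t: math.log((T[c][t] + 1) / (sum(T[c].values()) + B))
                    for t in vocab} for c in N_c}
    return log_prior, log_cond, vocab

def apply_nb(tokens, log_prior, log_cond, vocab):
    """MAP class: arg max over c of log prior plus summed log conditionals."""
    def score(c):
        return log_prior[c] + sum(log_cond[c][t] for t in tokens if t in vocab)
    return max(log_prior, key=score)

# Toy corpus, invented for illustration:
D = [("Chinese Beijing Chinese".split(), "China"),
     ("Chinese Chinese Shanghai".split(), "China"),
     ("Chinese Macao".split(), "China"),
     ("Tokyo Japan Chinese".split(), "Japan")]
lp, lc, V = train_nb(D)
print(apply_nb("Chinese Chinese Chinese Tokyo Japan".split(), lp, lc, V))  # China
```

Test terms outside the training vocabulary are simply skipped here; that is one common design choice, since smoothing only assigns probabilities to terms seen somewhere in the training set.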
