SI425 : NLP, Set 5: Naïve Bayes Classification



SLIDE 1

SI425 : NLP

Set 5 Naïve Bayes Classification

slide-2
SLIDE 2

Motivation

  • We want to predict something.
  • We have some text related to this something.
  • something = target label Y
  • text = text features X

Given X, what is the most probable Y?

SLIDE 3

Motivation: Author Detection

Alas the day! take heed of him; he stabbed me in mine own house, and that most beastly: in good faith, he cares not what mischief he does. If his weapon be out: he will foin like any devil; he will spare neither man, woman, nor child.

X = the passage above
Y ∈ { Charles Dickens, William Shakespeare, Herman Melville, Jane Austen, Homer, Leo Tolstoy }

$Y = \arg\max_{y_k} P(Y = y_k)\, P(X \mid Y = y_k)$

SLIDE 4

More Motivation

P(Y = spam | X = email)
P(Y = worthy | X = review sentence)

SLIDE 5


The Naïve Bayes Classifier

  • Recall Bayes rule:

$P(Y_i \mid X_j) = \frac{P(X_j \mid Y_i)\, P(Y_i)}{P(X_j)}$

  • Which is short for:

$P(Y = y_i \mid X = x_j) = \frac{P(X = x_j \mid Y = y_i)\, P(Y = y_i)}{P(X = x_j)}$

  • We can re-write this as:

$P(Y = y_i \mid X = x_j) = \frac{P(X = x_j \mid Y = y_i)\, P(Y = y_i)}{\sum_k P(X = x_j \mid Y = y_k)\, P(Y = y_k)}$
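The re-written Bayes rule, with the denominator expanded as a sum over all labels, can be checked with a tiny worked example. The spam/ham numbers below are invented for illustration only:

```python
# Toy spam example (invented numbers): apply Bayes rule with the
# denominator expanded as a sum over all labels y_k.
p_y = {"spam": 0.3, "ham": 0.7}           # P(Y = y)
p_x_given_y = {"spam": 0.8, "ham": 0.1}   # P(X = x | Y = y) for one email x

denom = sum(p_y[y] * p_x_given_y[y] for y in p_y)          # P(X = x)
posterior_spam = p_y["spam"] * p_x_given_y["spam"] / denom  # P(spam | x)
print(round(posterior_spam, 4))  # 0.24 / 0.31 ≈ 0.7742
```

Note that the posteriors over all labels sum to 1 because the denominator is the same for every label.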

Remaining slides adapted from Tom Mitchell.

SLIDE 6

Deriving Naïve Bayes

  • Idea: use the training data to directly estimate $P(Y)$ and $P(X \mid Y)$.
  • We can use these values to estimate $P(Y \mid X)$ using Bayes rule.

$P(X \mid Y) = P(X_1, X_2, \ldots, X_n \mid Y)$

  • Remember: representing the full joint probability is not practical.

SLIDE 7

Deriving Naïve Bayes

  • However, if we make the assumption that the attributes are independent, estimation is easy!
  • In other words, we assume all attributes are conditionally independent given Y.
  • This assumption is violated in practice, but more on that later…

$P(X_1, \ldots, X_n \mid Y) = \prod_i P(X_i \mid Y)$
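Under the conditional-independence assumption, the likelihood of all the features together is just the product of the per-feature likelihoods. A minimal sketch, with invented per-feature probabilities:

```python
import math

# P(X_i = x_i | Y = y) for each of four features; toy values for illustration.
p_xi_given_y = [0.5, 0.2, 0.9, 0.4]

# Conditional independence: P(X_1..X_n | Y) = product of P(X_i | Y).
likelihood = math.prod(p_xi_given_y)
print(round(likelihood, 6))  # 0.036
```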

SLIDE 8

Deriving Naïve Bayes

  • Let $X = \langle X_1, \ldots, X_n \rangle$ and label Y be discrete.
  • Then, we can estimate $P(X_i \mid Y_i)$ and $P(Y_i)$ directly from the training data by counting!

Sky    Temp  Humid   Wind    Water  Forecast  Play?
sunny  warm  normal  strong  warm   same      yes
sunny  warm  high    strong  warm   same      yes
rainy  cold  high    strong  warm   change    no
sunny  warm  high    strong  cool   change    yes

P(Sky = sunny | Play = yes) = ?
P(Humid = high | Play = yes) = ?
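Estimating these probabilities really is just counting. A sketch that encodes the four training rows from the table and answers the two questions:

```python
# The training table from the slide, as (features, label) rows.
rows = [
    ({"Sky": "sunny", "Temp": "warm", "Humid": "normal", "Wind": "strong",
      "Water": "warm", "Forecast": "same"}, "yes"),
    ({"Sky": "sunny", "Temp": "warm", "Humid": "high", "Wind": "strong",
      "Water": "warm", "Forecast": "same"}, "yes"),
    ({"Sky": "rainy", "Temp": "cold", "Humid": "high", "Wind": "strong",
      "Water": "warm", "Forecast": "change"}, "no"),
    ({"Sky": "sunny", "Temp": "warm", "Humid": "high", "Wind": "strong",
      "Water": "cool", "Forecast": "change"}, "yes"),
]

def p(attr, value, label):
    """Estimate P(attr = value | Play = label) by counting matching rows."""
    matching = [feats for feats, y in rows if y == label]
    return sum(feats[attr] == value for feats in matching) / len(matching)

print(p("Sky", "sunny", "yes"))    # 3 of 3 "yes" rows are sunny -> 1.0
print(p("Humid", "high", "yes"))   # 2 of 3 "yes" rows are high  -> 2/3
```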

SLIDE 9

The Naïve Bayes Classifier

  • Now we have:

$P(Y = y_j \mid X_1, \ldots, X_n) = \frac{P(Y = y_j) \prod_i P(X_i \mid Y = y_j)}{\sum_k P(Y = y_k) \prod_i P(X_i \mid Y = y_k)}$

  • To classify a new observation $X^{new}$:

$Y^{new} = \arg\max_{y_k} P(Y = y_k) \prod_i P(X_i \mid Y = y_k)$
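The classification rule picks the label with the largest prior-times-likelihood score; the denominator can be dropped because it is the same for every label. A sketch with invented probabilities for a three-feature observation:

```python
import math

# Invented toy values: P(Y = y) and, per label, P(X_i = x_i | Y = y)
# for the three features of the new observation.
priors = {"spam": 0.3, "ham": 0.7}
feature_probs = {"spam": [0.8, 0.6, 0.3], "ham": [0.2, 0.4, 0.5]}

def score(label):
    # P(Y = y_k) * prod_i P(X_i | Y = y_k) -- the argmax quantity.
    return priors[label] * math.prod(feature_probs[label])

y_new = max(priors, key=score)
print(y_new)  # spam: 0.3*0.144 = 0.0432 beats ham: 0.7*0.04 = 0.028
```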

SLIDE 10

Represent LMs as NB prediction

Y1 = dickens, Y2 = twain

Compare $P(Y_1)\, P(X \mid Y_1)$ against $P(Y_2)\, P(X \mid Y_2)$, where $P(X \mid Y_1) = P_{Y_1}(X)$ and $P(X \mid Y_2) = P_{Y_2}(X)$.

Bigrams: $P_{Y_1}(X) = \prod_j P_{Y_1}(x_j \mid x_{j-1})$
Bigrams: $P_{Y_2}(X) = \prod_j P_{Y_2}(x_j \mid x_{j-1})$

X = your text!
P(X): your language model does this!
P(X | Y): still a language model, but trained on Y's data.
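Treating each author's language model as the class likelihood can be sketched as below. The bigram tables and priors are invented toy values, not trained models; log probabilities are used so the products don't underflow:

```python
import math

# Toy bigram tables P_Y(x_j | x_{j-1}), one per author (invented values).
bigram = {
    "dickens": {("<s>", "it"): 0.2, ("it", "was"): 0.3, ("was", "the"): 0.1},
    "twain":   {("<s>", "it"): 0.1, ("it", "was"): 0.1, ("was", "the"): 0.05},
}
priors = {"dickens": 0.5, "twain": 0.5}

def log_p_x_given_y(tokens, author):
    # log P_Y(X) = sum_j log P_Y(x_j | x_{j-1})
    return sum(math.log(bigram[author][(prev, cur)])
               for prev, cur in zip(tokens, tokens[1:]))

tokens = ["<s>", "it", "was", "the"]
best = max(priors,
           key=lambda y: math.log(priors[y]) + log_p_x_given_y(tokens, y))
print(best)  # dickens: 0.006 likelihood beats twain's 0.0005
```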

SLIDE 11

Naïve Bayes Applications

  • Text classification
  • Which e-mails are spam?
  • Which e-mails are meeting notices?
  • Which author wrote a document?
  • Which webpages are about current events?
  • Which blog contains angry writing?
  • What sentence in a document talks about company X?
  • etc.


SLIDE 12

Text and Features

  • What is Xi?
  • Could be unigrams, hopefully bigrams too.
  • It can be anything that is computed from the text X.
  • Yes, I really mean anything. Creativity and intuition about language are where the real gains come from in NLP.

  • Non n-gram examples:
  • X10 = “the sentence contains a conjunction (yes or no)”
  • X356 = “existence of a semi-colon in the paragraph”

$P(X_1, \ldots, X_n \mid Y) = \prod_i P(X_i \mid Y)$
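Non-n-gram features like X10 and X356 are just functions computed from the text. A minimal sketch of both features; the conjunction list here is a small illustrative subset, not an exhaustive one:

```python
# A few coordinating conjunctions, for the "contains a conjunction" feature.
CONJUNCTIONS = {"and", "but", "or", "nor", "for", "yet", "so"}

def extract_features(sentence: str) -> dict:
    """Compute two binary (yes/no) features from a sentence."""
    words = sentence.lower().split()
    return {
        "FEAT-CONJUNCTION": any(w.strip(".,;") in CONJUNCTIONS for w in words),
        "FEAT-SEMICOLON": ";" in sentence,
    }

print(extract_features("He will spare neither man, woman, nor child."))
# {'FEAT-CONJUNCTION': True, 'FEAT-SEMICOLON': False}
```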

SLIDE 13

Features

  • In machine learning, “features” are the attributes to

which you assign weights (probabilities in Naïve Bayes) that help in the final classification.

  • Up until now, your features have been n-grams. You

now want to consider other types of features.

  • You count features just like n-grams. How many did you see?
  • X = set of features
  • P(Y|X) = probability of a Y given a set of features
SLIDE 14

How do you count features?

  • Feature idea: “a semicolon exists in this sentence”
  • Count them:
  • Count(“FEAT-SEMICOLON”, 1)
  • Make up a unique name for the feature, then count!
  • Compute probability:
  • P(“FEAT-SEMICOLON” | author=“dickens”) =

CountDICKENS(“FEAT-SEMICOLON”) / (# dickens sentences)
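The estimate on the slide is a count divided by a count. A sketch with three invented stand-in sentences in place of real Dickens training text:

```python
# Invented stand-ins for real Dickens training sentences.
dickens_sentences = [
    "It was the best of times; it was the worst of times.",
    "The clocks were striking midnight.",
    "Darkness cheap; and London wanted it.",
]

# P("FEAT-SEMICOLON" | author = "dickens")
#   = CountDICKENS("FEAT-SEMICOLON") / (# dickens sentences)
count_semicolon = sum(";" in s for s in dickens_sentences)
p_feat = count_semicolon / len(dickens_sentences)
print(p_feat)  # 2 of the 3 toy sentences contain a semicolon
```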

SLIDE 15

Authorship Lab

  • 1. Figure out how to use your Language Models from

Lab 2. They can be your initial features.

  • HINT: can you train() a model on one author’s text?
  • 2. P(dickens | text) ≈ P(dickens) * P(text | dickens)

= P(dickens) * PBigramModel-DICKENS(text)

  • 3. Write new code for new features. Call your language

models, get their probabilities, and then multiply in the new feature probabilities.
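Step 3 might look like the sketch below. Everything here is assumed: `lm_logprob` stands in for whatever your Lab 2 model returns for log P(text | author), and the feature probabilities are invented; the point is that working in log space turns the multiplications into additions and avoids underflow:

```python
import math

# Stand-ins for your Lab 2 language models: log P_BigramModel(text | author).
lm_logprob = {"dickens": -120.0, "austen": -125.0}
prior = {"dickens": 0.5, "austen": 0.5}
# Invented P(feature | author) for features observed in the text.
feat_probs = {"dickens": [0.4, 0.7], "austen": [0.5, 0.2]}

def score(author):
    # log[ P(author) * P(text | author) * prod P(feature | author) ]
    return (math.log(prior[author]) + lm_logprob[author]
            + sum(math.log(p) for p in feat_probs[author]))

print(max(prior, key=score))  # dickens
```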