Parts-of-Speech (English)
Statistical NLP, Spring 2011
Lecture 6: POS Tagging / Phrase-Based MT
Dan Klein, UC Berkeley

- One basic kind of linguistic structure: syntactic word classes.
- Open class (lexical) words:
  - Nouns: proper (IBM, Italy), common (cat / cats, snow)
  - Verbs: main (see, registered)
  - Adjectives (yellow), adverbs (slowly)
  - Numbers (122,312, one)
  - ... more
- Closed class (functional) words:
  - Modals (can, had)
  - Determiners (the, some)
  - Prepositions (to, with)
  - Conjunctions (and, or)
  - Particles (off, up)
  - Pronouns (he, its)
  - ... more

The Penn Treebank tagset:

    CC    conjunction, coordinating                    and both but either or
    CD    numeral, cardinal                            mid-1890 nine-thirty 0.5 one
    DT    determiner                                   a all an every no that the
    EX    existential there                            there
    FW    foreign word                                 gemeinschaft hund ich jeux
    IN    preposition or conjunction, subordinating    among whether out on by if
    JJ    adjective or numeral, ordinal                third ill-mannered regrettable
    JJR   adjective, comparative                       braver cheaper taller
    JJS   adjective, superlative                       bravest cheapest tallest
    MD    modal auxiliary                              can may might will would
    NN    noun, common, singular or mass               cabbage thermostat investment subhumanity
    NNP   noun, proper, singular                       Motown Cougar Yvette Liverpool
    NNPS  noun, proper, plural                         Americans Materials States
    NNS   noun, common, plural                         undergraduates bric-a-brac averages
    POS   genitive marker                              ' 's
    PRP   pronoun, personal                            hers himself it we them
    PRP$  pronoun, possessive                          her his mine my our ours their thy your
    RB    adverb                                       occasionally maddeningly adventurously
    RBR   adverb, comparative                          further gloomier heavier less-perfectly
    RBS   adverb, superlative                          best biggest nearest worst
    RP    particle                                     aboard away back by on open through
    TO    "to" as preposition or infinitive marker     to
    UH    interjection                                 huh howdy uh whammo shucks heck
    VB    verb, base form                              ask bring fire see take
    VBD   verb, past tense                             pleaded swiped registered saw
    VBG   verb, present participle or gerund           stirring focusing approaching erasing
    VBN   verb, past participle                        dilapidated imitated reunified unsettled
    VBP   verb, present tense, not 3rd p. singular     twist appear comprise mold postpone
    VBZ   verb, present tense, 3rd p. singular         bases reconstructs marks uses
    WDT   WH-determiner                                that what whatever which whichever
    WP    WH-pronoun                                   that what whatever which who whom
    WP$   WH-pronoun, possessive                       whose
    WRB   WH-adverb                                    however whenever where why

Part-of-Speech Ambiguity

- Words can have multiple parts of speech:

    Fed:       NNP, VBN, VBD
    raises:    NNS, VBZ
    interest:  NN, VBP, VB
    rates:     NNS, VBZ
    0.5:       CD
    percent:   NN

    Fed raises interest rates 0.5 percent

- Two basic sources of constraint:
  - Grammatical environment
  - Identity of the current word
- Many more possible features: suffixes, capitalization, name databases (gazetteers), etc.

Why POS Tagging?

- Useful in and of itself (more than you'd think):
  - Text-to-speech: record, lead
  - Lemmatization: saw[v] → see, saw[n] → saw
  - Quick-and-dirty NP-chunk detection: grep {JJ | NN}* {NN | NNS}
- Useful as a pre-processing step for parsing:
  - Less tag ambiguity means fewer parses.
  - However, some tag choices are better decided by parsers:

    DT  NNP     NN     VBD VBN   RP/IN NN   NNS
    The Georgia branch had taken on    loan commitments ...

    DT  NN      IN NN        VBD/VBN NNS   VBD
    The average of interbank offered rates plummeted ...

Classic Solution: HMMs

- We want a model of tag sequences s and observations (words) w.

  [Figure: chain s_0 → s_1 → s_2 → ... → s_n, with each state s_i emitting word w_i]

- Assumptions:
  - States are tag n-grams.
  - Usually a dedicated start and end state / word.
  - The tag/state sequence is generated by a Markov model.
  - Words are chosen independently, conditioned only on the tag/state.
- These are totally broken assumptions: why? (A small scoring sketch follows these notes.)
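The generative story above can be made concrete with a small scoring function. This is a minimal sketch, not the lecture's code: the toy probability tables and the 1e-12 floor for unseen events are illustrative assumptions standing in for real estimated, smoothed distributions.

```python
import math

# Toy tables for illustration only; real taggers estimate these from data.
TRANS = {("<s>", "NNP"): 0.4, ("NNP", "VBZ"): 0.3,
         ("VBZ", "NN"): 0.2, ("NN", "</s>"): 0.5}
EMIT = {("NNP", "Fed"): 0.01, ("VBZ", "raises"): 0.02, ("NN", "interest"): 0.05}
FLOOR = 1e-12  # crude stand-in for smoothing of unseen events

def log_score(tags, words):
    """log P(tags, words) = sum of transition and emission log-probs."""
    logp = 0.0
    prev = "<s>"
    for tag, word in zip(tags, words):
        logp += math.log(TRANS.get((prev, tag), FLOOR))  # P(tag | previous tag)
        logp += math.log(EMIT.get((tag, word), FLOOR))   # P(word | tag)
        prev = tag
    return logp + math.log(TRANS.get((prev, "</s>"), FLOOR))

print(log_score(["NNP", "VBZ", "NN"], ["Fed", "raises", "interest"]))
```

Note how both broken assumptions are visible in the code: the next tag depends only on the previous tag, and each word depends only on its own tag.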

States

- States encode what is relevant about the past.
- Transitions P(s | s') encode well-formed tag sequences.
- In a bigram tagger, states = tags:

    <♦>  <t_1>  <t_2>  ...  <t_n>

- In a trigram tagger, states = tag pairs:

    <♦,♦>  <♦,t_1>  <t_1,t_2>  ...  <t_{n-1},t_n>

- One option: encode more into the state, e.g. whether the previous word was capitalized [Brants 00].
- BIG IDEA: the basic approach of state-splitting turns out to be very important in a range of tasks.

Estimating Transitions

- Use standard smoothing methods to estimate transitions, e.g. deleted interpolation:

    P(t_i \mid t_{i-1}, t_{i-2}) = \lambda_2 \hat{P}(t_i \mid t_{i-1}, t_{i-2}) + \lambda_1 \hat{P}(t_i \mid t_{i-1}) + (1 - \lambda_1 - \lambda_2) \hat{P}(t_i)

- Can get a lot fancier (e.g. KN smoothing) or use higher orders, but in this case it doesn't buy much. (A sketch of this estimate follows these notes.)

Estimating Emissions

- Emissions are trickier:
  - Words we've never seen before.
  - Words which occur with tags we've never seen them with.
- One option: break out the Good-Turing smoothing.
- Issue: unknown words aren't black boxes:

    343,127.23    11-year    Minteria    reintroducibly

- Basic solution: unknown word classes (affixes or shapes):

    D+,D+.D+    D+-x+    Xx+    x+-"ly"

- [Brants 00] used a suffix trie as its emission model.

Disambiguation (Inference)

- Problem: find the most likely (Viterbi) sequence under the model.
- Given model parameters, we can score any tag sequence:

    <♦,♦>  <♦,NNP>  <NNP,VBZ>  <VBZ,NN>  <NN,NNS>  <NNS,CD>  <CD,NN>  <STOP>
    NNP    VBZ      NN         NNS       CD        NN        .
    Fed    raises   interest   rates     0.5       percent   .

    P(NNP | <♦,♦>) · P(Fed | NNP) · P(VBZ | <♦,NNP>) · P(raises | VBZ) · P(NN | <NNP,VBZ>) · ...

- In principle, we're done: list all possible tag sequences, score each one, pick the best one (the Viterbi state sequence):

    NNP VBZ NN NNS CD NN    logP = -23
    NNP NNS NN NNS CD NN    logP = -29
    NNP VBZ VB NNS CD NN    logP = -27

Finding the Best Trajectory

- Too many trajectories (state sequences) to list.
- Option 1: beam search (sketched after these notes):
  - A beam is a set of partial hypotheses.
  - Start with just the single empty trajectory.
  - At each derivation step, consider all continuations of previous hypotheses, discard most, and keep the top k (or those within a factor of the best):

    <>  →  Fed:NNP | Fed:VBN | Fed:VBD  →  Fed:NNP raises:NNS | Fed:NNP raises:VBZ | Fed:VBN raises:NNS | Fed:VBN raises:VBZ | ...

- Beam search works OK in practice...
  - ... but sometimes you want the optimal answer,
  - ... you need optimal answers to validate your beam search,
  - ... and there's usually a better option than naive beams.

The State Lattice / Trellis

[Figure: a lattice with one column per position (START, Fed, raises, interest, rates, END) and one row per state (^, N, V, J, D, $); any left-to-right path is a possible tag sequence.]
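A sketch of the deleted-interpolation estimate above, assuming n-gram tag counts have already been collected from a tagged corpus. The count containers, λ values, and function name here are hypothetical; TnT estimates the λs from data rather than fixing them by hand.

```python
from collections import Counter

# Illustrative count tables; in practice they are filled from a tagged corpus.
unigrams = Counter()   # unigrams[t]
bigrams = Counter()    # bigrams[(t_prev, t)]
trigrams = Counter()   # trigrams[(t_prev2, t_prev, t)]
total = 0              # total number of tag tokens

def p_transition(t, t_prev, t_prev2, lam2=0.6, lam1=0.3):
    """Deleted-interpolation estimate of P(t | t_prev, t_prev2)."""
    p3 = (trigrams[(t_prev2, t_prev, t)] / bigrams[(t_prev2, t_prev)]
          if bigrams[(t_prev2, t_prev)] else 0.0)
    p2 = bigrams[(t_prev, t)] / unigrams[t_prev] if unigrams[t_prev] else 0.0
    p1 = unigrams[t] / total if total else 0.0
    # Mix the three relative-frequency estimates.
    return lam2 * p3 + lam1 * p2 + (1.0 - lam2 - lam1) * p1
```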

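A minimal beam-search sketch matching the recipe above: keep only the top-k partial trajectories at each position. The scoring callback is a stand-in for the HMM's transition-plus-emission log-score; all names are illustrative.

```python
def beam_search(words, tags, score_step, k=5):
    """Approximate search that keeps the k best partial tag sequences.

    score_step(prev_tag, tag, word) returns a log-score increment,
    e.g. log P(tag | prev_tag) + log P(word | tag) for a bigram HMM.
    """
    beam = [([], 0.0)]  # start with the single empty trajectory
    for word in words:
        candidates = []
        for seq, logp in beam:
            prev = seq[-1] if seq else "<s>"
            # Consider all continuations of previous hypotheses.
            for tag in tags:
                candidates.append((seq + [tag], logp + score_step(prev, tag, word)))
        # Discard most: keep the top k (one could instead keep all
        # hypotheses within a factor of the best).
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:k]
    return beam[0]  # highest-scoring surviving hypothesis
```

Because whole prefixes can fall off the beam, the returned sequence is not guaranteed optimal, which is exactly why the exact Viterbi algorithm on the next slide matters.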
The Viterbi Algorithm

[Figure: the same state lattice, with a best path traced from START to END.]

- Dynamic program for computing the score of a best path up to position i ending in state s:

    \delta_i(s) = \max_{s_0 \ldots s_{i-1}} P(s_0 \ldots s_{i-1}\, s,\ w_1 \ldots w_{i-1})

- Base case and recurrence:

    \delta_0(s) = 1 if s = <♦,♦>, else 0
    \delta_i(s) = \max_{s'} P(s \mid s')\, P(w_{i-1} \mid s')\, \delta_{i-1}(s')

- Also can store a backtrace (but no one does):

    \psi_i(s) = \arg\max_{s'} P(s \mid s')\, P(w_{i-1} \mid s')\, \delta_{i-1}(s')

- Memoized solution; iterative solution. (A runnable sketch follows at the end of these notes.)

So How Well Does It Work?

- Choose the most common tag:
  - 90.3% with a bad unknown word model
  - 93.7% with a good one
- TnT (Brants, 2000):
  - A carefully smoothed trigram tagger
  - Suffix trees for emissions
  - 96.7% on WSJ text (state of the art is ~97.5%)
- Noise in the data: many errors in the training and test corpora, e.g.

    DT  NN      IN NN        VBD     NNS   VBD
    The average of interbank offered rates plummeted ...

  (here the corpus tags "offered" VBD, though VBN is the better analysis)
- Probably about 2% of the error is guaranteed by noise (on this data).

Overview: Accuracies

- Roadmap of accuracies (known words / unknown words):

    Most freq tag:   ~90%  / ~50%
    Trigram HMM:     ~95%  / ~55%
    TnT (HMM++):     96.2% / 86.0%
    Maxent P(t|w):   93.7% / 82.6%
    MEMM tagger:     96.9% / 86.9%
    Cyclic tagger:   97.2% / 89.0%
    Upper bound:     ~98%

- Most errors are on unknown words.
- Annotation noise shows up directly: "chief executive officer" appears in the corpus tagged JJ JJ NN, NN JJ NN, JJ NN NN, and NN NN NN.

Common Errors

- Common errors [from Toutanova & Manning 00]:

    NN/JJ NN              VBD  RP/IN DT  NN      RB       VBD/VBN NNS
    official knowledge    made up    the story   recently sold    shares

Corpus-Based MT

- Modeling correspondences between languages.
- Sentence-aligned parallel corpus:

    Yo lo haré mañana   /  I will do it tomorrow
    Hasta pronto        /  See you soon
    Hasta pronto        /  See you around

- A machine translation system learns a model of translation from such data and applies it to new input:

    Yo lo haré pronto  →  I will do it soon
                          I will do it around
                          See you tomorrow

  (the odd candidates show how correspondences learned from the corpus can recombine)
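A runnable sketch of the Viterbi recurrence above. For simplicity it uses the plain bigram formulation in which each tag emits its own word, so the indexing differs slightly from the trigram-state version on the slide; the function and variable names are assumptions.

```python
import math

def viterbi(words, tags, log_t, log_e):
    """Exact highest-scoring tag sequence under a bigram HMM.

    log_t(tag, prev_tag) and log_e(word, tag) return log probabilities.
    """
    n = len(words)
    delta = [{"<s>": 0.0}]  # delta[i][s]: best log-score of a path ending in s
    psi = [{}]              # psi[i][s]: backtrace (best predecessor of s)
    for i, word in enumerate(words):
        delta.append({})
        psi.append({})
        for s in tags:
            best_prev, best = None, -math.inf
            for s_prev, score in delta[i].items():
                v = score + log_t(s, s_prev) + log_e(word, s)
                if v > best:
                    best_prev, best = s_prev, v
            delta[i + 1][s] = best
            psi[i + 1][s] = best_prev
    # Read off the best final state, then follow the backtrace.
    s = max(delta[n], key=delta[n].get)
    seq = [s]
    for i in range(n, 1, -1):
        s = psi[i][s]
        seq.append(s)
    return list(reversed(seq))
```

Unlike the beam sketch earlier, this runs in time linear in the sentence length and quadratic in the number of states, and it is guaranteed to return the optimal sequence.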

Phrase-Based Systems

- Pipeline: sentence-aligned corpus → word alignments → phrase table (translation model):

    cat ||| chat ||| 0.9
    the cat ||| le chat ||| 0.8
    dog ||| chien ||| 0.8
    house ||| maison ||| 0.6
    my house ||| ma maison ||| 0.9
    language ||| langue ||| 0.9
    ...

- Many slides and examples from Philipp Koehn or John DeNero.

Phrase-Based Decoding

- Example source sentence (Chinese): 这 7 人 中包括 来自 法国 和 俄罗斯 的 宇航 员 . ("These 7 people include astronauts coming from France and Russia.")
- Decoder design is important: [Koehn et al. 03]

The Pharaoh "Model"

- [Koehn et al. 03]: the model score decomposes into segmentation, translation, and distortion components.

Phrase Weights

- Where do we get these counts? (A phrase-table loading sketch follows below.)
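Phrase tables like the one shown are conventionally stored one entry per line with "|||" separators. Here is a minimal loader sketch, assuming each line carries exactly one score (real Moses-style tables carry several feature scores per entry); the file name is hypothetical.

```python
from collections import defaultdict

def load_phrase_table(path):
    """Map each source phrase to its candidate translations with scores."""
    table = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Each line looks like: source phrase ||| target phrase ||| score
            src, tgt, score = (field.strip() for field in line.split("|||"))
            table[src].append((tgt, float(score)))
    return table

# Hypothetical usage:
#   table = load_phrase_table("phrase-table.txt")
#   table["the cat"]  ->  [("le chat", 0.8)]
```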
