

  1. Statistics and the Scientific Study of Language: What do they have to do with each other? Mark Johnson, Brown University, ESSLLI 2005

  2. Outline ◮ Why Statistics? ◮ Learning probabilistic context-free grammars ◮ Factoring learning into simpler components ◮ The Janus-faced nature of computational linguistics ◮ Conclusion

  3. Statistical revolution in computational linguistics ◮ Speech recognition ◮ Syntactic parsing ◮ Machine translation
  [Plot: parse accuracy (0.84–0.92) by year (1994–2006)]

  4. Statistical models in computational linguistics ◮ Supervised learning: structure to be learned is visible ◮ speech transcripts, treebank, proposition bank, translation pairs ◮ more information than available to a child ◮ annotation requires (linguistic) knowledge ◮ a more practical method of making information available to a computer than writing a grammar by hand ◮ Unsupervised learning: structure to be learned is hidden ◮ alien radio, alien TV

  5. Chomsky’s “Three Questions” ◮ What constitutes knowledge of language? ◮ grammar (universal, language specific) ◮ How is knowledge of language acquired? ◮ language acquisition ◮ How is knowledge of language put to use? ◮ psycholinguistics (last two questions are about inference)

  6. The centrality of inference ◮ “poverty of the stimulus” ⇒ innate knowledge of language (universal grammar) ⇒ intricate grammar with rich deductive structure

  7. The centrality of inference ◮ “poverty of the stimulus” ⇒ innate knowledge of language (universal grammar) ⇒ intricate grammar with rich deductive structure ◮ Statistics is the theory of optimal inference in the presence of uncertainty ◮ We can define probability distributions over structured objects ⇒ no inherent contradiction between statistical inference and linguistic structure ◮ probabilistic models are declarative ◮ probabilistic models can be systematically combined: P(X, Y) = P(X) P(Y | X)

  8. Questions that statistical models might answer ◮ What information is required to learn language? ◮ How useful are different kinds of information to language learners? ◮ Bayesian inference can utilize prior knowledge ◮ Prior can encode “soft” markedness preferences and “hard” universal constraints ◮ Are there synergies between different information sources? ◮ Does knowledge of phonology or morphology make word segmentation easier? ◮ May provide hints about human language acquisition

  9. Outline ◮ Why Statistics? ◮ Learning probabilistic context-free grammars ◮ Factoring learning into simpler components ◮ The Janus-faced nature of computational linguistics ◮ Conclusion

  10. Probabilistic Context-Free Grammars
  Rules: 1.0 S → NP VP, 1.0 VP → V, 0.75 NP → George, 0.25 NP → Al, 0.6 V → barks, 0.4 V → snores
  [Two example parse trees: “George barks” with P = 1.0 × 1.0 × 0.75 × 0.6 = 0.45, and “Al snores” with P = 1.0 × 1.0 × 0.25 × 0.4 = 0.1]
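A PCFG assigns a tree the product of the probabilities of the rules used to build it. The following minimal Python sketch (the nested-tuple encoding of rules and trees is my own, not from the slides) reproduces the two tree probabilities above:

```python
# A PCFG as a dictionary mapping (lhs, rhs) -> probability.
pcfg = {
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V",)): 1.0,
    ("NP", ("George",)): 0.75,
    ("NP", ("Al",)): 0.25,
    ("V", ("barks",)): 0.6,
    ("V", ("snores",)): 0.4,
}

def tree_prob(tree, rules):
    """P(tree) = product of the probabilities of the rules it uses.
    A tree is (label, children), where each child is a subtree or a terminal string."""
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = rules[(label, rhs)]
    for c in children:
        if not isinstance(c, str):
            p *= tree_prob(c, rules)
    return p

george_barks = ("S", (("NP", ("George",)), ("VP", (("V", ("barks",)),))))
al_snores    = ("S", (("NP", ("Al",)),     ("VP", (("V", ("snores",)),))))
print(tree_prob(george_barks, pcfg))   # 1.0 * 1.0 * 0.75 * 0.6 = 0.45
print(tree_prob(al_snores, pcfg))      # 1.0 * 1.0 * 0.25 * 0.4 = 0.1
```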

  11. Estimating PCFGs from visible data
  Training trees: “rice grows” (twice) and “corn grows”, each parsed S → NP VP with NP → rice/corn and VP → grows
  Rule counts and relative frequencies: S → NP VP: 3, 1; NP → rice: 2, 2/3; NP → corn: 1, 1/3; VP → grows: 3, 1
  Under the estimated grammar, P(tree for “rice grows”) = 2/3 and P(tree for “corn grows”) = 1/3
  Relative frequency is the maximum likelihood estimator (it selects the rule probabilities that maximize the probability of the trees)
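As a sketch of the relative-frequency estimator (using the same hypothetical tree encoding as the block above), the counts and probabilities in the table can be reproduced like this:

```python
from collections import defaultdict
from fractions import Fraction

def count_rules(tree, counts):
    """Count every rule occurrence in a tree of the form (label, children)."""
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    counts[(label, rhs)] += 1
    for c in children:
        if not isinstance(c, str):
            count_rules(c, counts)

def relative_frequency_estimate(trees):
    """Maximum likelihood estimate for a PCFG from visible trees:
    P(A -> beta) = count(A -> beta) / count(A)."""
    counts = defaultdict(int)
    for t in trees:
        count_rules(t, counts)
    lhs_totals = defaultdict(int)
    for (lhs, _), c in counts.items():
        lhs_totals[lhs] += c
    return {rule: Fraction(c, lhs_totals[rule[0]]) for rule, c in counts.items()}

rice = ("S", (("NP", ("rice",)), ("VP", ("grows",))))
corn = ("S", (("NP", ("corn",)), ("VP", ("grows",))))
print(relative_frequency_estimate([rice, rice, corn]))
# S -> NP VP: 1, NP -> rice: 2/3, NP -> corn: 1/3, VP -> grows: 1
```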

  12. Estimating PCFGs from hidden data ◮ Training data consists of strings w alone ◮ Maximum likelihood selects rule probabilities that maximize the marginal probability of the strings w ◮ Expectation maximization is a way of building hidden-data estimators out of visible-data estimators ◮ parse trees of iteration i are training data for rule probabilities at iteration i + 1 ◮ Each iteration is guaranteed not to decrease P(w) (but EM can get trapped in local optima) ◮ This can be done without enumerating the parses
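As an illustration (not the talk's implementation), here is a sketch of one EM iteration that brute-forces the E-step by enumerating every parse of each string and weighting its rule counts by the parse's posterior probability; real implementations use the inside-outside algorithm and, as the slide says, never enumerate parses. It is restricted to binary and lexical rules, and all names are my own:

```python
from collections import defaultdict

def enumerate_parses(sym, words, rules):
    """Brute-force enumeration of all parses of `words` rooted in `sym`.
    Yields (probability, tree) pairs, where a tree is (lhs, rhs, *subtrees).
    Handles only binary rules A -> B C and lexical rules A -> w."""
    for (lhs, rhs), p in rules.items():
        if lhs != sym:
            continue
        if len(rhs) == 1 and len(words) == 1 and rhs[0] == words[0]:
            yield p, (lhs, rhs)                                # lexical rule
        elif len(rhs) == 2 and len(words) >= 2:
            for k in range(1, len(words)):                     # split point
                for pl, tl in enumerate_parses(rhs[0], words[:k], rules):
                    for pr, tr in enumerate_parses(rhs[1], words[k:], rules):
                        yield p * pl * pr, (lhs, rhs, tl, tr)

def add_rule_counts(tree, counts, weight):
    """Add `weight` to the count of every rule occurrence in `tree`."""
    counts[(tree[0], tree[1])] += weight
    for sub in tree[2:]:
        add_rule_counts(sub, counts, weight)

def em_step(corpus, rules, start="S"):
    """One EM iteration: expected rule counts under the current probabilities
    (E-step), then relative-frequency re-estimation from those counts (M-step)."""
    counts = defaultdict(float)
    for words in corpus:
        parses = list(enumerate_parses(start, words, rules))
        total = sum(p for p, _ in parses)
        if total == 0:
            continue                                           # unparsable string
        for p, tree in parses:
            add_rule_counts(tree, counts, p / total)           # posterior weight of this parse
    lhs_totals = defaultdict(float)
    for (lhs, _), c in counts.items():
        lhs_totals[lhs] += c
    return {rule: c / lhs_totals[rule[0]] for rule, c in counts.items()}

rules = {("S", ("NP", "VP")): 1.0,
         ("NP", ("rice",)): 0.5, ("NP", ("corn",)): 0.5,
         ("VP", ("grows",)): 1.0}
corpus = [["rice", "grows"], ["rice", "grows"], ["corn", "grows"]]
print(em_step(corpus, rules))   # NP -> rice: 2/3, NP -> corn: 1/3 after one step
```

Iterating em_step until P(w) stops improving gives a (locally) maximum likelihood grammar; with these particular strings every sentence has a single parse, so one step already reproduces the visible-data estimates from the previous slide.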

  13. Example: The EM algorithm with a toy PCFG
  Initial rule probabilities: VP → V 0.2, VP → V NP 0.2, VP → NP V 0.2, VP → V NP NP 0.2, VP → NP NP V 0.2, Det → the 0.1, N → the 0.1, V → the 0.1, · · ·
  “English” input: “the dog bites”, “the dog bites a man”, “a man gives the dog a bone”, · · ·
  “pseudo-Japanese” input: “the dog bites”, “the dog a man bites”, “a man the dog a bone gives”, · · ·

  14. Probability of “English” [Plot: geometric average sentence probability (log scale, 1e-06 to 1) by EM iteration (0–5)]

  15. Rule probabilities from “English” [Plot: rule probability (0–1) by EM iteration (0–5) for the rules VP → V NP, VP → NP V, VP → V NP NP, VP → NP NP V, Det → the, N → the, V → the]

  16. Probability of “Japanese” [Plot: geometric average sentence probability (log scale, 1e-06 to 1) by EM iteration (0–5)]

  17. Rule probabilities from “Japanese” [Plot: rule probability (0–1) by EM iteration (0–5) for the rules VP → V NP, VP → NP V, VP → V NP NP, VP → NP NP V, Det → the, N → the, V → the]

  18. Learning in the statistical paradigm ◮ The likelihood is a differentiable function of the rule probabilities ⇒ learning can involve small, incremental updates ◮ Learning structure (rules) is hard, but . . . ◮ Parameter estimation can approximate rule learning: start with a “superset” grammar, estimate rule probabilities, and discard the low-probability rules (sketched below) ◮ Parameters can be associated with other things besides rules (e.g., HeadInitial, HeadFinal)
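One simple way to realize the "superset grammar, estimate, discard" idea, sketched here with an arbitrary illustrative threshold (not a value from the talk):

```python
def prune_low_probability_rules(rules, threshold=1e-3):
    """Discard rules whose estimated probability falls below `threshold`,
    then renormalize the surviving rules for each left-hand side."""
    kept = {rule: p for rule, p in rules.items() if p >= threshold}
    lhs_totals = {}
    for (lhs, _), p in kept.items():
        lhs_totals[lhs] = lhs_totals.get(lhs, 0.0) + p
    return {(lhs, rhs): p / lhs_totals[lhs] for (lhs, rhs), p in kept.items()}
```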

  19. Applying EM to real data ◮ ATIS treebank consists of 1,300 hand-constructed parse trees ◮ ignore the words (in this experiment) ◮ about 1,000 PCFG rules are needed to build these trees
  [Example ATIS parse tree for “Show me all the nonstop flights from Dallas to Denver early in the morning”]

  20. Experiments with EM 1. Extract productions from the trees and estimate their probabilities from the trees to produce a PCFG. 2. Initialize EM with this treebank grammar and its MLE probabilities. 3. Apply EM (to the strings alone) to re-estimate the production probabilities. 4. At each iteration: ◮ Measure the likelihood of the training data and the quality of the parses produced by each grammar. ◮ Test on the training data (so poor performance is not due to overlearning).

  21. Log likelihood of training strings [Plot: log P of the training strings (−16000 to −14000) by EM iteration (0–20)]

  22. Quality of ML parses [Plot: parse accuracy (precision and recall, 0.7–1.0) by EM iteration (0–20)]

  23. Why does it work so poorly? ◮ Wrong data: grammar is a transduction between form and meaning ⇒ learn from form/meaning pairs ◮ exactly what contextual information is available to a language learner? ◮ Wrong model: PCFGs are poor models of syntax ◮ Wrong objective function: Maximum likelihood makes the sentences as likely as possible, but syntax isn’t intended to predict sentences (Klein and Manning) ◮ How can information about the marginal distribution of strings P(w) provide information about the conditional distribution of parses t given strings, P(t | w)? ◮ need additional linking assumptions about the relationship between parses and strings ◮ . . . but no one really knows!

  24. Outline ◮ Why Statistics? ◮ Learning probabilistic context-free grammars ◮ Factoring learning into simpler components ◮ The Janus-faced nature of computational linguistics ◮ Conclusion

  25. Factoring the language learning problem ◮ Factor the language learning problem into linguistically simpler components ◮ Focus on components that might be less dependent on context and semantics (e.g., word segmentation, phonology) ◮ Identify relevant information sources (including prior knowledge, e.g., UG) by comparing models ◮ Combine components to produce more ambitious learners ◮ PCFG-like grammars are a natural way to formulate many of these components Joint work with Sharon Goldwater and Tom Griffiths

  26. Word Segmentation
  [Example tree: Utterance → Word Utterance, with the Words spelling out “t h e”, “d o g”, “b a r k s”]  Data = t h e d o g b a r k s
  Grammar: Utterance → Word Utterance; Utterance → Word; Word → w, w ∈ Σ⋆
  ◮ Algorithms for word segmentation from this information already exist (e.g., Elman, Brent) ◮ Children likely perform some word segmentation before they know the meanings of words
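The grammar above defines a unigram word model: the probability of a segmentation of an utterance is the product of the probabilities of its words. The following sketch (not the Elman or Brent algorithms cited on the slide; the word distribution is hand-specified purely for illustration) finds the most probable segmentation by dynamic programming:

```python
def best_segmentation(chars, word_prob):
    """Most probable segmentation of the unsegmented string `chars` under a
    unigram word model: P(segmentation) = product of word_prob[w] over its words."""
    best = [(1.0, [])]                       # best[i] = (prob, words) for chars[:i]
    for i in range(1, len(chars) + 1):
        candidates = []
        for j in range(i):
            w = chars[j:i]
            if w in word_prob and best[j][0] > 0:
                candidates.append((best[j][0] * word_prob[w], best[j][1] + [w]))
        best.append(max(candidates) if candidates else (0.0, []))
    return best[-1]

word_prob = {"the": 0.4, "dog": 0.3, "barks": 0.3}    # hand-specified, for illustration only
print(best_segmentation("thedogbarks", word_prob))     # 0.4 * 0.3 * 0.3 = 0.036, ['the', 'dog', 'barks']
```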

  27. Concatenative morphology
  [Example tree: Verb → Stem Suffix, with the Stem spelling “t a l k” and the Suffix spelling “i n g”]  Data = t a l k i n g
  Grammar: Verb → Stem Suffix; Stem → w, w ∈ Σ⋆; Suffix → w, w ∈ Σ⋆
  ◮ Morphological alternation provides primary evidence for phonological generalizations (“trucks” /s/ vs. “cars” /z/) ◮ Morphemes may also provide clues for word segmentation ◮ Algorithms for doing this already exist (e.g., Goldsmith)
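Under the Verb → Stem Suffix grammar, every cut of a word into stem + suffix receives a probability. A tiny illustrative sketch, with made-up stem and suffix distributions (a learner would of course estimate these):

```python
def stem_suffix_splits(word, stem_prob, suffix_prob):
    """Score every split of `word` into stem + suffix as
    P(Stem -> stem) * P(Suffix -> suffix), highest-scoring first."""
    splits = []
    for k in range(1, len(word)):
        stem, suffix = word[:k], word[k:]
        p = stem_prob.get(stem, 0.0) * suffix_prob.get(suffix, 0.0)
        if p > 0:
            splits.append((p, stem, suffix))
    return sorted(splits, reverse=True)

stem_prob   = {"talk": 0.5, "talki": 0.1, "walk": 0.4}   # made up for illustration
suffix_prob = {"ing": 0.6, "ng": 0.1, "s": 0.3}
print(stem_suffix_splits("talking", stem_prob, suffix_prob))
# best split: ('talk', 'ing') with probability 0.5 * 0.6 = 0.3
```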

  28. PCFG components can be integrated
  [Example tree: Utterance → Words_N; Words_N → N Words_V; N → Stem_N Suffix_N (“d o g” + “s”); Words_V → V; V → Stem_V Suffix_V (“b a r k” + …)]
  Grammar schema: Utterance → Words_S, S ∈ 𝒮; Words_S → S Words_T, T ∈ 𝒮; S → Stem_S Suffix_S; Stem_S → t, t ∈ Σ⋆; Suffix_S → f, f ∈ Σ⋆
