Probabilistic approaches to language and language learning



  1. Probabilistic approaches to language and language learning. John Goldsmith, The University of Chicago.

  2. This work is based on the work of too many people to name them all directly. Nonetheless, I must specifically acknowledge Jorma Rissanen (MDL), Michael Brent and Carl de Marcken (applying MDL to word discovery), and Yu Hu, Colin Sprague, Jason Riggle, and Aris Xanthos, at the University of Chicago.

  3. How can it be innovative — much less subversive — to propose to use statistical and probabilistic methods in a scientific analysis in the year 2006 Anno Domini?

  4. 1. Rationalism and empiricism — and modern science. 2. The mystery of the synthetic apriori is still lurking. 3. Universal grammar is a fine scientific hypothesis, but not a good synthetic a priori. 4. Grammar construction as maximum a posteriori probability.

  5. 1. The development of modern science. The reasonable effectiveness of understanding the universe by observing it carefully. The surprising effectiveness of mathematics in understanding the universe.

  6. Rationalism: the effectiveness of mathematical models of the universe, and the mind’s ability to develop abstract models and make predictions from them. Trust the mind. Empiricism: the effectiveness of observing the universe even when what we see is not what we expected. Especially then. Trust the senses.

  7. Francis Bacon: Those who have handled sciences have been either men of experiment or men of dogmas. The men of experiment are like the ant, they only collect and use; the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes a middle course: it gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own. Not unlike this is the true business of philosophy; for it neither relies solely or chiefly on the powers of the mind, nor does it take the matter which it gathers from natural history and mechanical experiments and lay it up in the memory whole, as it finds it, but lays it up in the understanding altered and digested.

  8. The collision of rationalism and empiricism. Kant’s synthetic apriori: the proposal that there exist contentful truths knowable independent of experience. They are accessible because the very possibility of mind presupposes them. Space, time, causality, induction.

  9. 2. Synthetic apriori: the problem is still lurking. Efforts to dissolve it have been many. One method, in both linguistics and psychology, is to naturalize it: to view it as a scientific problem. “The problem lies in the object of study: the human brain.”

  10. Synthetic apriori. The mind’s construction of the world is its best understanding of what the senses provide it with. World = argmax_{world_i ∈ possible worlds} pr(world_i | observations). The real world is the one which is most probable, given our observations. Bayesian, maximum a posteriori reasoning.
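
A minimal sketch of this maximum a posteriori selection (not from the deck; the hypothesis names and probability values below are invented placeholders):

```python
# Minimal sketch of maximum a posteriori (MAP) selection: pick the
# hypothesis ("world") that maximizes pr(world | observations), which by
# Bayes' rule is proportional to pr(observations | world) * pr(world).
# The hypotheses and probabilities are illustrative placeholders.

hypotheses = {
    # name: (prior pr(world), likelihood pr(observations | world))
    "world_A": (0.7, 0.01),
    "world_B": (0.3, 0.05),
}

def map_hypothesis(hyps):
    # argmax over unnormalized posteriors; pr(observations) is a constant
    # factor across hypotheses and can be ignored for the argmax.
    return max(hyps, key=lambda h: hyps[h][0] * hyps[h][1])

print(map_hypothesis(hypotheses))  # -> "world_B" (0.3 * 0.05 > 0.7 * 0.01)
```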

  11. D = Data, H = Hypothesis. Bayes’ Rule: pr(D|H) pr(H) = pr(H|D) pr(D).

  12. D = Data, H = Hypothesis. Bayes’ Rule: pr(D|H) pr(H) = pr(H|D) pr(D). Definition: pr(A|B) = pr(A&B)/pr(B). By this definition, pr(H|D) pr(D) = pr(D and H).

  13. D = Data, H = Hypothesis. Bayes’ Rule: pr(D|H) pr(H) = pr(H|D) pr(D). By the definition, pr(H|D) pr(D) = pr(D and H), and likewise pr(D and H) = pr(D|H) pr(H).

  14. D = Data, H = Hypothesis. Bayes’ Rule: pr(D|H) pr(H) = pr(H|D) pr(D). By the definition, pr(H|D) pr(D) = pr(D and H) = pr(D|H) pr(H); hence pr(H|D) pr(D) = pr(D|H) pr(H).

  15. D = Data, H = Hypothesis. Bayes’ Rule: pr(D|H) pr(H) = pr(H|D) pr(D), and therefore pr(H|D) = pr(D|H) pr(H) / pr(D).
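
For reference, the derivation built up on slides 12 through 15, restated in one display (standard notation only, no new content):

```latex
% Derivation recapped from slides 12-15.
\begin{align*}
\text{Definition:}\quad & \Pr(A \mid B) = \frac{\Pr(A \,\&\, B)}{\Pr(B)} \\
\text{Hence:}\quad & \Pr(H \mid D)\,\Pr(D) = \Pr(D \,\&\, H) = \Pr(D \mid H)\,\Pr(H) \\
\text{Bayes' Rule:}\quad & \Pr(H \mid D) = \frac{\Pr(D \mid H)\,\Pr(H)}{\Pr(D)}
\end{align*}
```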

  16. If reality is the most probable hypothesis, given the evidence... we must find the hypothesis for which the following is a maximum: pr(D|H) pr(H), where D = Data and H = Hypothesis. How do we calculate the probability of our observations, given our understanding of reality (empiricism)? How do we calculate the probability of our hypothesis about what reality is (rationalism)?

  17. How do we calculate the probability of our hypothesis about what reality is? Assign a (“prior”) probability to all hypotheses, based on their coherence. Measure the coherence. Call it an evaluation metric. How do we calculate the probability of our observations, given our understanding of reality? Insist that your grammars be probabilistic: they assign a probability to their generated output.

  18. Generative grammar. Construct an evaluation metric: choose the grammar which best satisfies the evaluation metric, as long as it somehow matches up with the data. Generative grammar satisfies the rationalist need.

  19. Generative grammar. Construct an evaluation metric: choose the grammar which best satisfies the evaluation metric, as long as it somehow matches up with the data. Generative grammar satisfies the rationalist need. It fails to say anything at all about the empiricist need.

  20. Assigning probability to algorithms (after Solomonoff, Chaitin, Kolmogorov). The probability of an algorithm is related to the length of its most compact expression: log pr(A) = -length(A), i.e. pr(A) = 2^-length(A).

  23. Assigning probability to algorithms (after Solomonoff, Chaitin, Kolmogorov). The probability of an algorithm is related to the length of its most compact expression: log pr(A) = -length(A), i.e. pr(A) = 2^-length(A). The promise of this approach is that it offers an a priori measure of complexity expressed in the language of probability.
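
A small sketch of this length-based prior (an illustration, not part of the deck; it assumes each algorithm is already given as a binary encoding, and the example strings are placeholders):

```python
# Sketch: a description-length prior in the Solomonoff/Chaitin/Kolmogorov
# spirit. Each "algorithm" is represented by a stand-in binary string;
# its prior probability is 2 to the minus its length in bits.

def length_prior(encoding: str) -> float:
    """pr(A) = 2^(-length(A)), with length measured in bits."""
    return 2.0 ** (-len(encoding))

# Shorter (more compressible) descriptions get higher prior probability.
print(length_prior("0110"))        # 4 bits  -> 0.0625
print(length_prior("0110101001"))  # 10 bits -> ~0.00098
```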

  24. Let’s get to work and write some grammars. We will make sure they all assign probabilities to our observations. We will make sure we can calculate their length. Then we know how to rationally pick the best one...

  25. The real challenge for the linguist is to see if this methodology will lead to the automatic discovery of structure that we already know is there.

  26. To maximize pr(Grammar) * pr(Data|Grammar), we maximize log pr(Grammar) + log pr(Data|Grammar), or minimize -log pr(Grammar) - log pr(Data|Grammar), or minimize Length(Grammar) - log pr(Data|Grammar).
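
A sketch of this minimization as a scoring function (not from the deck; the two grammar lengths and the word-model data probability below are illustrative placeholders, while the phoneme-model data probability 2.04 * 10^-33 is the figure computed on slide 33 below):

```python
# Sketch of the MDL objective from slide 26: choose the grammar that
# minimizes Length(Grammar) - log2 pr(Data | Grammar), i.e. the bits for
# the grammar plus the bits needed to encode the data under the grammar.
import math

def mdl_score(grammar_length_bits: float, prob_data_given_grammar: float) -> float:
    """Description length of grammar plus description length of data."""
    return grammar_length_bits - math.log2(prob_data_given_grammar)

# A longer grammar can still win if it makes the data much more probable.
candidates = {
    "phoneme_grammar": (50.0, 2.04e-33),  # placeholder length; data prob from slide 33
    "word_grammar":    (120.0, 5.7e-8),   # placeholder length and data prob
}
best = min(candidates, key=lambda g: mdl_score(*candidates[g]))
print({g: round(mdl_score(*v), 1) for g, v in candidates.items()}, "->", best)
```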

  27. An observation: thedogsawthecatandthecatsawthedog

  28. An observation: thedogsawthecatandthecatsawthedog. What is its probability?

  29. An observation: thedogsawthecatandthecatsawthedog. What is its probability? Its probability depends on the model we propose. The mind is active. The mind chooses.

  30. An observation: thedogsawthecatandthecatsawthedog. What is its probability? If we only know that the language has phonemes, we can calculate the probability based on phonemes.

  31. Phonological structure. (1) The probability of a phoneme can be calculated independent of context; or (2) we can calculate a phoneme’s probability conditioned by the phoneme that precedes it.

  32. Phonological structure. (1) The probability of a phoneme can be calculated independent of context; or (2) we can calculate a phoneme’s probability conditioned by the phoneme that precedes it. To make life simple for now, we choose (1).

  33. Probability of our observation: thedogsawthecatandthecatsawthedog. pr(t) * pr(h) * pr(e) * ... * pr(g): multiply the probabilities of all 33 letters. The result is 2.04 * 10^-33.
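
A sketch that reproduces this number, under the assumption (not stated in the deck) that each phoneme's probability is simply its relative frequency in the observation itself:

```python
# Sketch: the unigram probability of the observation, assuming each
# phoneme's probability is its relative frequency in the string. This
# assumption reproduces the 2.04 * 10^-33 figure on slide 33.
from collections import Counter

obs = "thedogsawthecatandthecatsawthedog"
counts = Counter(obs)                    # 11 distinct phonemes, 33 tokens
probs = {ph: n / len(obs) for ph, n in counts.items()}

p = 1.0
for ph in obs:                           # multiply the 33 unigram probabilities
    p *= probs[ph]

print(len(counts), p)                    # 11  ~2.04e-33
```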

  34. D = Data, H = Hypothesis; pr(D|H) pr(H). We have pr(D|H): the probability of the data given the phoneme hypothesis. What is the probability of the phoneme hypothesis, pr(H)?

  35. D = Data, H = Hypothesis; pr(D|H) pr(H). We have pr(D|H): the probability of the data given the phoneme hypothesis. What is the probability of the phoneme hypothesis, pr(H)? We interpret that as the question: what is the probability of a system with 11 distinct phonemes?

  36. D = Data, H = Hypothesis; pr(D|H) pr(H). We have pr(D|H): the probability of the data given the phoneme hypothesis. What is the probability of the phoneme hypothesis, pr(H)? We interpret that as the question: what is the probability of a system with 11 distinct phonemes? Π(11) = Prob[Phoneme Inventory(Lg) = 11].

  37. D = Data, H = Hypothesis; pr(D|H) pr(H). We have pr(D|H): the probability of the data given the phoneme hypothesis. What is the probability of the phoneme hypothesis, pr(H)? And is there a better hypothesis available, anyway? Yes, there is.

  38. The word hypothesis: there is a vocabulary in this language: the, dog, saw, cat, and.
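
A companion sketch of why the word hypothesis can be better on the pr(Data|Hypothesis) side: treating the observation as 11 word tokens over this 5-word vocabulary, and again using relative frequencies as probabilities, gives a data probability many orders of magnitude higher than the phoneme-based figure (the specific value is an illustration, not a number given in the deck):

```python
# Sketch: under the word hypothesis, the same observation is 11 word tokens
# drawn from a 5-word vocabulary. As in the phoneme sketch, each word's
# probability is assumed to be its relative frequency in the data.
from collections import Counter

tokens = "the dog saw the cat and the cat saw the dog".split()
counts = Counter(tokens)                      # the:4, dog/saw/cat:2 each, and:1
probs = {w: n / len(tokens) for w, n in counts.items()}

p = 1.0
for w in tokens:                              # multiply the 11 word probabilities
    p *= probs[w]

print(p)                                      # ~5.7e-8, versus ~2.04e-33 for phonemes
```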
