
Alignment in Machine Translation (CMSC 723 / LING 723 / INST 725)



  1. Alignment in Machine Translation CMSC 723 / LING 723 / INST 725 Marine Carpuat marine@cs.umd.edu

  2. Centauri/Arcturan [Knight, 1997] Your assignment, translate this to Arcturan: farok crrrok hihok yorok clok kantok ok-yurp

  1a. ok-voon ororok sprok .
  1b. at-voon bichat dat .
  2a. ok-drubel ok-voon anok plok sprok .
  2b. at-drubel at-voon pippat rrat dat .
  3a. erok sprok izok hihok ghirok .
  3b. totat dat arrat vat hilat .
  4a. ok-voon anok drok brok jok .
  4b. at-voon krat pippat sat lat .
  5a. wiwok farok izok stok .
  5b. totat jjat quat cat .
  6a. lalok sprok izok jok stok .
  6b. wat dat krat quat cat .
  7a. lalok farok ororok lalok sprok izok enemok .
  7b. wat jjat bichat wat dat vat eneat .
  8a. lalok brok anok plok nok .
  8b. iat lat pippat rrat nnat .
  9a. wiwok nok izok kantok ok-yurp .
  9b. totat nnat quat oloat at-yurp .
  10a. lalok mok nok yorok ghirok clok .
  10b. wat nnat gat mat bat hilat .
  11a. lalok nok crrrok hihok yorok zanzanok .
  11b. wat nnat arrat mat zanzanat .
  12a. lalok rarok nok izok hihok mok .
  12b. wat nnat forat arrat vat gat .

  3. Centauri/Arcturan was actually Spanish/English… Translate: Clients do not sell pharmaceuticals in Europe.

  1a. Garcia and associates .
  1b. Garcia y asociados .
  2a. Carlos Garcia has three associates .
  2b. Carlos Garcia tiene tres asociados .
  3a. his associates are not strong .
  3b. sus asociados no son fuertes .
  4a. Garcia has a company also .
  4b. Garcia tambien tiene una empresa .
  5a. its clients are angry .
  5b. sus clientes estan enfadados .
  6a. the associates are also angry .
  6b. los asociados tambien estan enfadados .
  7a. the clients and the associates are enemies .
  7b. los clientes y los asociados son enemigos .
  8a. the company has three groups .
  8b. la empresa tiene tres grupos .
  9a. its groups are in Europe .
  9b. sus grupos estan en Europa .
  10a. the modern groups sell strong pharmaceuticals
  10b. los grupos modernos venden medicinas fuertes
  11a. the groups do not sell zenzanine .
  11b. los grupos no venden zanzanina .
  12a. the small groups are not modern .
  12b. los grupos pequenos no son modernos .

  4. 1947: "When I look at an article in Russian, I say to myself: This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode." (Warren Weaver)

  5. 1988 More about the IBM story: 20 years of bitext workshop

  6. Noisy Channel Model for Machine Translation • The noisy channel model decomposes machine translation into two independent subproblems – Language modeling – Translation modeling / Alignment
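
A compact statement of that decomposition (standard noisy-channel algebra via Bayes' rule; e is the English sentence, f the French sentence, and p(f) is constant for a fixed input, so it drops out of the argmax):

    \hat{e} = \arg\max_e \; p(e \mid f)
            = \arg\max_e \; \frac{p(f \mid e)\, p(e)}{p(f)}
            = \arg\max_e \; \underbrace{p(e)}_{\text{language model}} \; \underbrace{p(f \mid e)}_{\text{translation model}}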

  7. WORD ALIGNMENT

  8. How can we model p(f|e)? • We'll describe the word alignment models introduced at IBM in the early 90s • Assumption: each French word f is aligned to exactly one English word e – including NULL

  9. Word Alignment Vector Representation • Alignment vector a = [2,3,4,5,6,6,6] – length of a = length of sentence f – a_i = j if French position i is aligned to English position j
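
A minimal sketch of this representation in Python (the sentence pair is my own made-up example, not from the slides; position 0 is reserved for the NULL word):

    # Alignment vector: one entry per French word, holding the position of
    # the English word it is aligned to (position 0 is the NULL word).
    english = ["NULL", "the", "house", "is", "small"]
    french = ["la", "maison", "est", "petite"]

    a = [1, 2, 3, 4]  # len(a) == len(french); a[i] = j means french[i] ~ english[j]

    for i, j in enumerate(a):
        print(f"{french[i]} -> {english[j]}")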

  10. Formalizing the connection between word alignments & the translation model • We define a conditional model – Projecting word translations – Through alignment links
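
Concretely, the conditional model sums word-level translation probabilities over the hidden alignments (this is the standard formulation behind the IBM models):

    p(f \mid e) = \sum_{a \in A} p(f, a \mid e)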

  11. How many possible alignments in A? • How many possible alignments for (f,e) where – f is a French sentence with m words – e is an English sentence with l words • For each of the m French words, we choose an alignment link among (l+1) English words • Answer: (l+1)^m
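
A quick sanity check with small numbers (my own example): for l = 4 and m = 3,

    |A| = (l+1)^m = 5^3 = 125 \text{ possible alignment vectors}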

  12. IBM Model 1: generative story • Input – an English sentence of length l – a length m • For each French position i in 1..m – Pick an English source index a_i – Choose a translation f_i
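
Written out as a formula, the generative story gives the standard IBM Model 1 joint probability, where the 1/(l+1) factor is the uniform choice of each a_i (including NULL):

    p(f, a \mid e, m) = \prod_{i=1}^{m} \frac{1}{l+1}\, t(f_i \mid e_{a_i})
                      = \frac{1}{(l+1)^m} \prod_{i=1}^{m} t(f_i \mid e_{a_i})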

  13. IBM Model 1: generative story • Input – an English sentence of length l – a length m • For each French position i in 1..m – Pick an English source index a_i (alignment is based on word positions, not word identities; alignment probabilities are UNIFORM) – Choose a translation (words are translated independently)

  14. IBM Model 1: Parameters • t(f|e) – Word translation probability table – for all words in French & English vocab

  15. IBM Model 1: generative story • Input – an English sentence of length l – a length m • For each French position i in 1..m – Pick an English source index a_i – Choose a translation f_i with probability t(f_i|e_{a_i})

  16. Improving on IBM Model 1: IBM Model 2 • Input – an English sentence of length l – a length m • For each French position i in 1..m – Pick an English source index j, removing Model 1's assumption that q is uniform – Choose a translation

  17. IBM Model 2: Parameters • q(j|i,l,m) – now a table – not uniform as in IBM1 • How many parameters are there?
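
A rough count, assuming sentence lengths are bounded by L (English) and M (French): for each triple (i, l, m) the q table stores a distribution over the l+1 possible source indices, so

    \#q = \sum_{l=1}^{L} \sum_{m=1}^{M} m\,(l+1) = O(L^2 M^2)

entries, versus a single implicit uniform value in Model 1; the t table additionally has one entry per French/English word pair.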

  18. 2 Remaining Tasks • Inference – Given a sentence pair (e,f) and an alignment model with parameters t(f|e) and q(j|i,l,m) – What is the most probable alignment a? • Parameter Estimation – Given training data (lots of sentence pairs) and a model definition – How do we learn the parameters t(f|e) and q(j|i,l,m)?

  19. Inference • Inputs – Model parameter tables for t and q – A sentence pair • How do we find the alignment a that maximizes P(e,a|f)? – Hint: recall independence assumptions!
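
The slides leave this as an exercise; a sketch of the answer that follows from the independence assumptions: because the model factors over French positions, each alignment link can be chosen separately, so for Model 2

    \hat{a}_i = \arg\max_{j \in \{0, \dots, l\}} \; q(j \mid i, l, m)\; t(f_i \mid e_j) \quad \text{independently for each } i = 1..m

(for Model 1 the q term is uniform and drops out).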

  25. 1 Remaining Task • Inference (done) – Given a sentence pair (e,f), what is the most probable alignment a? • Parameter Estimation – How do we learn the parameters t(f|e) and q(j|i,l,m) from data?

  26. Parameter Estimation • Problem – Parallel corpus gives us (e,f) pairs only; a is hidden • We know how to – estimate t and q, given (e,a,f) – compute p(a|e,f), given t and q • Solution: Expectation-Maximization algorithm (EM) – E-step: given the current parameters, compute expected values of the hidden variable – M-step: given the expected counts of the hidden variable, re-estimate the parameters

  27. Parameter Estimation: EM • Use "soft" values instead of binary counts

  28. Parameter Estimation: soft EM • Soft EM considers all possible alignment links • Each alignment link now has a weight

  29. EM for IBM Model 1 • Expectation (E)-step: – Compute expected counts for parameters (t) based on summing over hidden variable • Maximization (M)-step: – Compute the maximum likelihood estimate of t from the expected counts
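
For Model 1 the E-step has a closed form: the posterior probability that French word f_i is aligned to English word e_j is (the uniform alignment factor cancels in the normalization)

    p(a_i = j \mid f, e) = \frac{t(f_i \mid e_j)}{\sum_{j'=0}^{l} t(f_i \mid e_{j'})}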

  30. EM example: initialization • Toy corpus: (green house, casa verde) and (the house, la casa) • For the rest of this talk, French = Spanish

  31. EM example: E-step (a) compute the probability of each alignment, p(a,f|e) • Note: we make simplifying assumptions in this example – No NULL word – We only consider alignments where each French and English word is aligned to something – We ignore q!

  32. EM example: E-step (b) normalize to get p(a|f,e)

  33. EM example: E-step (c) compute expected counts (weighting each count by p(a|e,f))

  34. EM example: M-step Compute probability estimate by normalizing expected counts

  35. EM example: next iteration
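
A runnable sketch of the full loop on the toy corpus above (my own minimal IBM Model 1 EM in Python, under the slides' simplifications: no NULL word, q ignored; it uses the per-link form of the E-step, which for Model 1 is equivalent to enumerating whole alignments):

    from collections import defaultdict

    # Toy parallel corpus from the slides (French = Spanish here).
    corpus = [("green house".split(), "casa verde".split()),
              ("the house".split(), "la casa".split())]

    # Initialize t(f|e) uniformly over the French vocabulary.
    f_vocab = {f for _, fs in corpus for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # t[(f, e)] = t(f|e)

    for iteration in range(10):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # expected counts c(e)
        for es, fs in corpus:
            for f in fs:
                # E-step: posterior weight of each alignment link for f
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    w = t[(f, e)] / norm
                    count[(f, e)] += w
                    total[e] += w
        # M-step: re-estimate t(f|e) from the expected counts
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]

    for f, e in sorted(count):
        print(f"t({f}|{e}) = {t[(f, e)]:.3f}")

After a few iterations t(casa|house) climbs toward 1, matching the hand-worked example on the slides.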

  36. Parameter Estimation with EM • EM guarantees that data likelihood does not decrease across iterations • EM can get stuck in a local optimum – Initialization matters

  37. Word Alignment with IBM Models 1, 2 • Probabilistic models with strong independence assumptions – Results in linguistically naïve models (asymmetric, 1-to-many alignments) – But allows efficient parameter estimation and inference • Alignments are hidden variables – unlike words, which are observed – so they require unsupervised learning (EM algorithm)

  38. PHRASE-BASED MODELS

  39. Phrase-based models • Most common way to model P(F|E) nowadays (instead of IBM models) • Reordering is handled by a distortion term: the probability of two consecutive English phrases being separated by a particular span in French, a function of the start position of f_i and the end position of f_(i-1)
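
In its standard form (Koehn-style phrase-based translation; \bar{f}_i and \bar{e}_i are the i-th phrases, \phi the phrase translation table, d the distortion distribution):

    p(\bar{f}_1^I \mid \bar{e}_1^I) = \prod_{i=1}^{I} \phi(\bar{f}_i \mid \bar{e}_i)\; d(\mathrm{start}_i - \mathrm{end}_{i-1} - 1)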

  40. Phrase alignments are derived from word alignments • The IBM model is asymmetric: trained in one direction, it represents P(Spanish|English) • Get high-confidence alignment links by intersecting IBM word alignments obtained from both directions

  41. Phrase alignments are derived from word alignments Improve recall by adding some links from the union of alignments

  42. Phrase alignments are derived from word alignments Extract phrases that are consistent with word alignment
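
Stated precisely (the standard consistency condition used for phrase extraction; the slide does not spell it out): a phrase pair (\bar{e}, \bar{f}) is consistent with word alignment A iff no word inside either phrase is aligned to a word outside the other phrase, and the pair contains at least one alignment link:

    \forall (i, j) \in A: \quad e_i \in \bar{e} \iff f_j \in \bar{f}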

  43. Phrase Translation Probabilities • Given such phrases, we can get the required statistics for the model from relative frequencies of the extracted phrase pairs
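
The usual maximum-likelihood estimate is a relative frequency over the extracted phrase pairs (I am assuming this is what the slide's formula showed, since it is the standard choice):

    \phi(\bar{f} \mid \bar{e}) = \frac{\mathrm{count}(\bar{e}, \bar{f})}{\sum_{\bar{f}'} \mathrm{count}(\bar{e}, \bar{f}')}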

  44. Phrase-based Machine Translation

  45. RECAP

  46. Noisy Channel Model for Machine Translation • The noisy channel model decomposes machine translation into two independent subproblems – Language modeling – Translation modeling / Alignment
