1. Selective Attention for Context-aware Neural Machine Translation
Sameen Maruf†, André F. T. Martins‡, Gholamreza Haffari†
† Faculty of Information Technology, Monash University, Australia
‡ Unbabel & Instituto de Telecomunicações, Lisbon, Portugal
NAACL-HLT, Minneapolis, June 2019

2-3. Overview
1. The Whys?
2. Proposed Approach
3. Experiments and Analyses
4. Summary

4-8. The Whys? Why document-level machine translation?
- Most state-of-the-art NMT models translate sentences independently
- Discourse phenomena such as pronominal anaphora and coherence are ignored, even though they may involve long-range dependencies
- Most work on document NMT uses only a few previous sentences as context, ignoring the rest of the document [Jean et al., 2017, Wang et al., 2017, Bawden et al., 2018, Voita et al., 2018, Tu et al., 2018, Zhang et al., 2018, Miculicich et al., 2018]
- Global document context for MT [Maruf and Haffari, 2018]

9-14. The Whys? Why selective attention for document MT?
Soft attention over the words in the document context:
- forms a long tail that absorbs significant probability mass
- is incapable of ignoring irrelevant words
- is not scalable to long documents

15. The Whys? This Work
We propose a sparse and hierarchical attention approach for document NMT which:
- identifies the key sentences in the global document context, and
- attends to the key words within those sentences

16. Proposed Approach

17-21. Proposed Approach: Hierarchical Selective Context Attention
For each query word:
- α_s: attention weights given to the sentences in the context
- α_w: attention weights given to the words in the context
- α_hier: re-scaled attention weights of the words in the context
- V_w: values read from the words in the context
(The relations between these quantities are sketched below.)
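Read together, these quantities combine in the standard hierarchical-attention fashion. The equations below are a reconstruction from the slide's notation, not a verbatim copy of the paper's formulas; the sentence-level normalizer is sparse, while the word-level one may be soft or sparse (the two hierarchical variants listed later):

\[
\alpha^{s}_{t} \;=\; \mathrm{sparsemax}\!\left(\frac{\mathbf{q}^{s}_{t}\,(K^{s})^{\top}}{\sqrt{d}}\right),
\qquad
\alpha^{w}_{t,\cdot \mid k} \;=\; \rho\!\left(\frac{\mathbf{q}^{w}_{t}\,(K^{w}_{k})^{\top}}{\sqrt{d}}\right),
\quad \rho \in \{\mathrm{softmax},\,\mathrm{sparsemax}\}
\]
\[
\alpha^{\mathrm{hier}}_{t,j} \;=\; \alpha^{s}_{t,\,s(j)}\,\alpha^{w}_{t,j},
\qquad
\mathbf{c}_{t} \;=\; \sum_{j} \alpha^{\mathrm{hier}}_{t,j}\,\mathbf{v}^{w}_{j}
\]

Here t indexes a query word of the current sentence, k a context sentence, j a context word, s(j) is the sentence containing word j, K^w_k holds the keys of the words in sentence k, and c_t is the document-context vector read from the values V_w.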

22-25. Proposed Approach: Hierarchical Selective Attention over Source Document
Step 1. Sparse sentence-level key matching: identify the relevant sentences (a sparsemax sketch follows below)
- Q_s: representation of the words in the current sentence
- K_s: representation of the sentences in the context
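The sparse matching here is sparsemax [Martins and Astudillo, 2016], which, unlike softmax, can assign exactly zero weight to irrelevant sentences. Below is a minimal PyTorch sketch of sparsemax applied to sentence-level key matching; the tensor names follow the slide, but the shapes and random inputs are illustrative assumptions, not the paper's code.

import torch

def sparsemax(scores: torch.Tensor) -> torch.Tensor:
    # Sparsemax over the last dimension (Martins & Astudillo, 2016):
    # Euclidean projection of the scores onto the probability simplex.
    z, _ = torch.sort(scores, dim=-1, descending=True)
    k = torch.arange(1, scores.size(-1) + 1, device=scores.device, dtype=scores.dtype)
    z_cumsum = z.cumsum(dim=-1)
    support = (1.0 + k * z) > z_cumsum                         # which sorted entries stay non-zero
    k_z = support.to(scores.dtype).sum(dim=-1, keepdim=True)   # support size per row
    tau = (z_cumsum.gather(-1, k_z.long() - 1) - 1.0) / k_z    # threshold subtracted from the scores
    return torch.clamp(scores - tau, min=0.0)

# Step 1: sparse sentence-level key matching (shapes are assumptions).
T, S, d = 10, 25, 64             # words in the current sentence, sentences in the context, model dim
Q_s = torch.randn(T, d)          # one query per word of the current sentence
K_s = torch.randn(S, d)          # one key per sentence of the document context
alpha_s = sparsemax(Q_s @ K_s.t() / d ** 0.5)   # (T, S): rows sum to 1, many entries are exactly 0
print((alpha_s > 0).sum(dim=-1))                # number of context sentences kept per query word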

26-28. Proposed Approach: Hierarchical Selective Attention over Source Document
Step 2. Sparse word-level key matching: identify the relevant words within the relevant sentences (sketch below)
- Q_w: representation of the words in the current sentence
- K_w: representation of the words in the context
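Step 2 scores the same queries against the words of the context. The snippet below continues the sketch above (it reuses the sparsemax helper and the T, S, d sizes); the padded sentences-by-words layout and the within-sentence normalization are assumptions made for clarity, and the soft-at-word-level variant simply swaps sparsemax for softmax.

# Step 2: sparse word-level key matching (continuation of the sketch above).
L_w = 30                                      # max words per context sentence (padded layout is an assumption)
Q_w = torch.randn(T, d)                       # queries: words of the current sentence
K_w = torch.randn(S, L_w, d)                  # keys: words of each context sentence
pad = torch.zeros(S, L_w, dtype=torch.bool)   # True at padded positions (none in this toy example)

scores_w = torch.einsum('td,sld->tsl', Q_w, K_w) / d ** 0.5   # (T, S, L_w)
scores_w = scores_w.masked_fill(pad, float('-inf'))           # padding never receives weight
alpha_w = sparsemax(scores_w)                 # normalized over the words of each sentence (last dim)
# soft word-level variant: alpha_w = torch.softmax(scores_w, dim=-1)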

29-30. Proposed Approach: Hierarchical Selective Attention over Source Document
Step 3. Re-scale the attention weights (sketch below)
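The re-scaling is an element-wise product: each context word's within-sentence weight is scaled by its sentence's weight, so the result behaves as a single distribution over all context words and stays exactly zero for words of unselected sentences. Continuing the same sketch:

# Step 3: re-scale the word-level weights by the sentence-level weights.
alpha_hier = alpha_s.unsqueeze(-1) * alpha_w      # (T, S, 1) * (T, S, L_w) -> (T, S, L_w)
print(alpha_hier.sum(dim=(-2, -1)))               # ~1 per query word: a proper distribution over context words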

31-33. Proposed Approach: Hierarchical Selective Attention over Source Document
Step 4. Read the word-level values with the attention weights (sketch below)
Our sparse hierarchical attention module selectively focuses on the relevant sentences in the document context and then attends to the key words within those sentences.
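Step 4 reads the word-level values with these weights, giving one document-context vector per query word. Continuing the sketch (V_w follows the slide's naming; its contents are random stand-ins):

# Step 4: read the word-level values with the hierarchical attention weights.
V_w = torch.randn(S, L_w, d)                              # values: words of each context sentence
context = torch.einsum('tsl,sld->td', alpha_hier, V_w)    # (T, d) document-context vector per query word

Because alpha_s is sparse, only the words of the selected sentences actually need word-level keys and values, which is what makes the module scale to long documents.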

34-40. Proposed Approach: Flat Attention over Source Document
Soft sentence-level attention over all sentences in the document context
- K, V: representations of the sentences in the context
Comparison to [Maruf and Haffari, 2018]:
• multi-head attention
• dynamic context

41. Proposed Approach: Flat Attention over Source Document
Soft word-level attention over all words in the document context
- K, V: representations of the words in the context
(a single-head sketch of both flat variants follows below)
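For contrast, the flat variants use a single soft attention over the whole context, either at the sentence level (keys and values are sentence representations) or at the word level (keys and values are all context words). A minimal single-head sketch; the slide notes the actual model uses multi-head attention, and all shapes here are assumptions:

import torch

T, S, W, d = 10, 25, 300, 64
Q = torch.randn(T, d)                          # queries: words of the current sentence

# Flat sentence-level attention: K, V are sentence representations of the context.
K_sent = torch.randn(S, d)
V_sent = torch.randn(S, d)
ctx_sent = torch.softmax(Q @ K_sent.t() / d ** 0.5, dim=-1) @ V_sent   # (T, d)

# Flat word-level attention: K, V are all words of the document context.
K_word = torch.randn(W, d)
V_word = torch.randn(W, d)
ctx_word = torch.softmax(Q @ K_word.t() / d ** 0.5, dim=-1) @ V_word   # (T, d)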

42-45. Proposed Approach: Document-level Context Layer
- Hierarchical selective or flat attention over the document context
- Monolingual context (source) integrated in the encoder
- Bilingual context (source and target) integrated in the decoder
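The slide does not spell out how the context layer feeds back into the encoder and decoder states. A common choice in context-aware Transformers is a gated residual combination, sketched below as an illustrative assumption (class name, gate form, and shapes are ours, not the paper's):

import torch
import torch.nn as nn

class ContextGate(nn.Module):
    # Gated combination of a sentence-level state h with a document-context vector c.
    # Illustrative only; the paper's exact integration may differ.
    def __init__(self, d: int):
        super().__init__()
        self.proj = nn.Linear(2 * d, d)

    def forward(self, h: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.proj(torch.cat([h, c], dim=-1)))  # per-dimension gate in (0, 1)
        return g * h + (1.0 - g) * c

# Encoder: c comes from monolingual (source-side) context attention.
# Decoder: c comes from bilingual (source- and target-side) context attention.
gate = ContextGate(64)
h = torch.randn(10, 64)   # states of the words in the current sentence
c = torch.randn(10, 64)   # document-context vectors from the attention module
h_new = gate(h, c)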

46-49. Proposed Approach: Our Models and Settings
Our models:
- Hierarchical Attention over context
  • sparse at sentence-level, soft at word-level
  • sparse at both sentence- and word-level
- Flat Attention over context
  • soft at sentence-level
  • soft at word-level
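Written out as a plain configuration, the four context-attention variants listed on this slide are (labels are descriptive, not identifiers from the paper's code):

VARIANTS = {
    "hierarchical, sparse sentences + soft words":   {"context_attention": "hierarchical", "sentence_level": "sparse", "word_level": "soft"},
    "hierarchical, sparse sentences + sparse words": {"context_attention": "hierarchical", "sentence_level": "sparse", "word_level": "sparse"},
    "flat, sentence-level":                          {"context_attention": "flat", "granularity": "sentence", "normalization": "soft"},
    "flat, word-level":                              {"context_attention": "flat", "granularity": "word", "normalization": "soft"},
}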
