Natural Language Processing with Deep Learning, CS224N/Ling284
Christopher Manning
Lecture 10: (Textual) Question Answering – Architectures, Attention and Transformers


  1. Natural Language Processing with Deep Learning CS224N/Ling284 Christopher Manning Lecture 10: (Textual) Question Answering Architectures, Attention and Transformers

  2. Mid-quarter feedback survey: Thanks to the many of you (!) who have filled it in! If you haven't yet, today is a good time to do it 😊

  3. Lecture Plan – Lecture 10: (Textual) Question Answering
     1. History / The SQuAD dataset (review)
     2. The Stanford Attentive Reader model
     3. BiDAF
     4. Recent, more advanced architectures
     5. Open-domain Question Answering: DrQA
     6. Attention revisited; motivating transformers; ELMo and BERT preview
     7. Training/dev/test data
     8. Getting your neural network to train

  4. 1. Turn-of-the-Millennium full NLP QA [architecture of the LCC (Harabagiu/Moldovan) QA system, circa 2003]: Complex systems, but they did work fairly well on "factoid" questions.
     [Figure: LCC system block diagram. Question Processing (question parse, keyword extraction, recognition of the expected answer type against an answer type hierarchy / WordNet, for NER) works with Document Processing (document index, passage retrieval, passage post-processing, named entity recognition with CICERO LITE) and feeds three answer pipelines: Factoid Answer Processing (answer extraction via NER, answer justification through alignment and relations, answer reranking, with an axiomatic knowledge base acting as a ~theorem prover), List Answer Processing (answer extraction with a threshold cutoff), and Definition Answer Processing (question pattern matching and answer pattern matching against a pattern repository).]

  5. Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016)
     Question: Which team won Super Bowl 50?
     Passage: Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.
     100k examples; the answer must be a span in the passage: extractive question answering / reading comprehension.
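To make the extractive setting concrete, here is a minimal sketch of what a SQuAD-style example looks like, using simplified, illustrative field names rather than the exact SQuAD JSON schema: the gold answer is stored as text plus a character offset into the passage, and a system must predict a span.

```python
# Minimal, illustrative SQuAD-style example (simplified field names, not the exact schema).
passage = ("Super Bowl 50 was an American football game to determine the champion "
           "of the National Football League (NFL) for the 2015 season. The American "
           "Football Conference (AFC) champion Denver Broncos defeated the National "
           "Football Conference (NFC) champion Carolina Panthers 24-10 to earn their "
           "third Super Bowl title.")

example = {
    "question": "Which team won Super Bowl 50?",
    "passage": passage,
    "answer_text": "Denver Broncos",
    # answers are given as character offsets into the passage
    "answer_start": passage.find("Denver Broncos"),
}

# Extractive QA: the gold answer must be a contiguous span of the passage.
start = example["answer_start"]
assert passage[start:start + len(example["answer_text"])] == example["answer_text"]
```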

  6. SQuAD 2.0 No-Answer example. Question: When did Genghis Khan kill Great Khan? Gold answer: <No Answer>. Prediction: 1234 [from Microsoft nlnet]

  7. 2. Stanford Attentive Reader [Chen, Bolton, & Manning 2016] [Chen, Fisch, Weston & Bordes 2017] DrQA [Chen 2018]
     • Demonstrated a minimal, highly successful architecture for reading comprehension and question answering
     • Became known as the Stanford Attentive Reader

  8. The Stanford Attentive Reader
     [Figure: input/output view of the model. Input: a question Q (e.g. "Which team won Super Bowl 50?") and a passage P; output: an answer A extracted from the passage.]

  9. Stanford Attentive Reader
     [Figure: the question Q ("Who did Genghis Khan unite before he began conquering the rest of Eurasia?") and the passage P are each encoded with bidirectional LSTMs, producing a question vector and contextual vectors p_i for the passage tokens.]

  10. Stanford Attentive Reader
      [Figure: the question vector attends over the passage token vectors with two separate attention distributions, one used to predict the start token of the answer span and one to predict the end token.]
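As a rough sketch of that span prediction step: in the Stanford Attentive Reader the start and end distributions come from bilinear attention between the question vector and each passage token vector. The PyTorch code below is an illustrative implementation under that assumption; the class and variable names are invented for the example, not taken from the released DrQA code.

```python
import torch
import torch.nn as nn

class BilinearSpanPredictor(nn.Module):
    """Predict start/end distributions over passage tokens via bilinear attention."""
    def __init__(self, p_dim, q_dim):
        super().__init__()
        self.W_start = nn.Linear(q_dim, p_dim, bias=False)  # bilinear weight for the start token
        self.W_end = nn.Linear(q_dim, p_dim, bias=False)    # bilinear weight for the end token

    def forward(self, p, q):
        # p: (batch, passage_len, p_dim) BiLSTM outputs for the passage
        # q: (batch, q_dim) single vector summarizing the question
        start_scores = torch.bmm(p, self.W_start(q).unsqueeze(2)).squeeze(2)  # p_i^T W_s q
        end_scores = torch.bmm(p, self.W_end(q).unsqueeze(2)).squeeze(2)      # p_i^T W_e q
        return start_scores.softmax(dim=1), end_scores.softmax(dim=1)

# Toy usage with random tensors
predictor = BilinearSpanPredictor(p_dim=256, q_dim=256)
p = torch.randn(2, 40, 256)        # 2 passages of 40 tokens
q = torch.randn(2, 256)            # 2 question vectors
p_start, p_end = predictor(p, q)   # each (2, 40); rows sum to 1
```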

  11. SQuAD 1.1 Results (single model, c. Feb 2017)
      System                                      F1
      Logistic regression                         51.0
      Fine-Grained Gating (Carnegie Mellon U)     73.3
      Match-LSTM (Singapore Management U)         73.7
      DCN (Salesforce)                            75.9
      BiDAF (UW & Allen Institute)                77.3
      Multi-Perspective Matching (IBM)            78.7
      ReasoNet (MSR Redmond)                      79.4
      DrQA (Chen et al. 2017)                     79.4
      r-net (MSR Asia) [Wang et al., ACL 2017]    79.7
      Google Brain / CMU (Feb 2018)               88.0
      Human performance                           91.2

  12. Stanford Attentive Reader++
      [Figure from SLP3, Chapter 23: question tokens q_1 ... ("When did ...") and passage tokens p_1, p_2, p_3 ... ("Beyonce's debut album ...") get GloVe embeddings; passage tokens are augmented with POS/NER tags, exact-match features, and attention-based aligned question embeddings (q-align_i); both sides pass through stacked BiLSTMs (LSTM1, LSTM2); the question states are pooled into a single weighted-sum vector q; and a similarity (attention) computation between q and each passage state produces the start and end distributions p_start(i) and p_end(i).]
      Training objective: maximize the log-likelihood of the gold start and end positions.
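Written out for completeness (this is the standard span-extraction loss for this family of models, stated from the surrounding description rather than copied from the slide image), the training objective is the summed negative log-likelihood of the true start and end tokens:

```latex
\mathcal{L}(\theta) = -\sum_{\text{examples}} \Big( \log P_{\text{start}}(a_{\text{start}}) + \log P_{\text{end}}(a_{\text{end}}) \Big)
```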

  13. Stanford Attentive Reader++ (Chen et al., 2018)
      The question ("Which team won Super Bowl 50?") is encoded with a deep 3-layer BiLSTM, which is better than a shallow one. Rather than just taking the final hidden state, the question vector is a learned weighted sum of the token states:
      q = Σ_j c_j q_j,  where  c_j = exp(w^T q_j) / Σ_{j'} exp(w^T q_{j'})
      For a learned weight vector w, this attention-weighted sum is better!
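A minimal sketch of that learned weighted sum (self-attentive pooling over the question BiLSTM states), assuming PyTorch; the dimensions and names are illustrative:

```python
import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    """Pool BiLSTM question states q_j into one vector using learned weights c_j."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.w = nn.Linear(hidden_dim, 1, bias=False)  # the learned vector w

    def forward(self, q_states):
        # q_states: (batch, question_len, hidden_dim)
        scores = self.w(q_states).squeeze(-1)                    # w^T q_j, shape (batch, question_len)
        c = scores.softmax(dim=1)                                # c_j: softmax over question tokens
        return torch.bmm(c.unsqueeze(1), q_states).squeeze(1)    # q = sum_j c_j q_j

pool = SelfAttentivePooling(hidden_dim=256)
q_vec = pool(torch.randn(2, 12, 256))   # -> (2, 256)
```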

  14. Stanford Attentive Reader++
      • p_i: vector representation of each token in the passage, made from a concatenation of:
        • Word embedding (GloVe 300d)
        • Linguistic features: POS & NER tags, one-hot encoded
        • Term frequency (unigram probability)
        • Exact match: whether the word appears in the question – 3 binary features: exact, uncased, lemma
        • Aligned question embedding ("car" vs. "vehicle"): f_align(p_i) = Σ_j a_{i,j} E(q_j), where a_{i,j} is a softmax over the similarity scores β(E(p_i))^T β(E(q_j)) and β is a simple one-layer FFNN
      (A feature-construction sketch follows below.)
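A minimal sketch of how such a per-token feature vector could be assembled, assuming the GloVe vectors, POS/NER one-hots, and aligned question embedding have already been computed; all names here (build_token_features, glove, pos_onehot, ...) are illustrative, not from the DrQA codebase:

```python
import numpy as np

def build_token_features(token, lemma, pos_onehot, ner_onehot, unigram_prob,
                         question_tokens, question_lemmas, glove, aligned_q_emb):
    """Concatenate Stanford Attentive Reader++ style features for one passage token."""
    word_vec = glove.get(token.lower(), np.zeros(300))    # GloVe 300d embedding (zeros if OOV)
    exact = float(token in question_tokens)               # exact (cased) match
    uncased = float(token.lower() in {t.lower() for t in question_tokens})
    lemma_match = float(lemma in question_lemmas)          # lemma match
    return np.concatenate([
        word_vec,                          # word embedding
        pos_onehot, ner_onehot,            # one-hot linguistic features
        [unigram_prob],                    # term frequency feature
        [exact, uncased, lemma_match],     # 3 binary exact-match features
        aligned_q_emb,                     # attention-weighted (aligned) question embedding
    ])
```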

  15. What do these neural models do? (Chen, Bolton, Manning, 2016)
      [Bar chart: correctness (%) of the neural network vs. the categorical feature classifier, broken down by question category (from easy categories through partial-clue to hard/error cases), with the share of examples falling in each category; both systems are near-perfect on the easy categories and both drop sharply on the hard/error ones.]

  16. 3. BiDAF: Bi-Directional Attention Flow for Machine Comprehension (Seo, Kembhavi, Farhadi, Hajishirzi, ICLR 2017)

  17. BiDAF – Roughly the CS224N DFP baseline
      • There are variants of and improvements to the BiDAF architecture over the years, but the central idea is the Attention Flow layer
      • Idea: attention should flow both ways – from the context to the question and from the question to the context
      • Make a similarity matrix (with w of dimension 6d):
        S_ij = w^T [c_i; q_j; c_i ∘ q_j]
      • Context-to-Question (C2Q) attention (which query words are most relevant to each context word):
        α^i = softmax(S_{i,:}) ∈ R^M,   a_i = Σ_j α^i_j q_j ∈ R^{2d}

  18. BiDAF
      • Attention Flow idea: attention should flow both ways – from the context to the question and from the question to the context
      • Question-to-Context (Q2C) attention (the weighted sum of the most important words in the context with respect to the query – slight asymmetry through the max):
        m_i = max_j S_ij,   β = softmax(m) ∈ R^N,   c′ = Σ_i β_i c_i ∈ R^{2d}
      • For each passage position i, the output of the BiDAF layer is:
        b_i = [c_i; a_i; c_i ∘ a_i; c_i ∘ c′] ∈ R^{8d}
      (A sketch of the full attention-flow computation follows below.)
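Putting the two directions together, here is a rough, illustrative PyTorch sketch of the attention-flow computation described above, for a single unbatched example with no masking; the variable names are mine, not from the BiDAF release.

```python
import torch

def bidaf_attention_flow(c, q, w_sim):
    # c: (N, 2d) context/passage vectors, q: (M, 2d) question vectors
    # w_sim: (6d,) learned similarity weight vector
    N, M, d2 = c.size(0), q.size(0), c.size(1)

    # Similarity matrix S_ij = w_sim^T [c_i; q_j; c_i * q_j]
    c_exp = c.unsqueeze(1).expand(N, M, d2)
    q_exp = q.unsqueeze(0).expand(N, M, d2)
    S = torch.cat([c_exp, q_exp, c_exp * q_exp], dim=2) @ w_sim   # (N, M)

    # C2Q: for each context word, attend over the question words
    alpha = S.softmax(dim=1)                    # (N, M)
    a = alpha @ q                               # (N, 2d): a_i = sum_j alpha_ij q_j

    # Q2C: one distribution over context words via a max over question words
    beta = S.max(dim=1).values.softmax(dim=0)   # (N,)
    c_prime = beta @ c                          # (2d,)

    # Output for each position: [c_i; a_i; c_i * a_i; c_i * c']
    return torch.cat([c, a, c * a, c * c_prime.expand(N, d2)], dim=1)  # (N, 8d)

# Toy usage: 40 context tokens, 12 question tokens, 2d = 200
out = bidaf_attention_flow(torch.randn(40, 200), torch.randn(12, 200), torch.randn(600))
```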

  19. BiDAF
      • There is then a "modelling" layer:
        • Another deep (2-layer) BiLSTM over the passage
      • And answer span selection is more complex:
        • Start: pass the concatenated output of the BiDAF and modelling layers to a dense FF layer and then a softmax
        • End: put the output of the modelling layer M through another BiLSTM to give M2, then concatenate with the BiDAF layer output and again pass through a dense FF layer and a softmax
      • Editorial: seems very complex, but it does seem like you should do a bit more than the Stanford Attentive Reader, e.g., also conditioning the end on the start

  20. 4. Recent, more advanced architectures Most of the question answering work in 2016–2018 employed progressively more complex architectures with a multitude of variants of attention – often yielding good task gains

  21. Dynamic Coattention Networks for Question Answering (Caiming Xiong, Victor Zhong, Richard Socher, ICLR 2017)
      • Flaw: questions have input-independent representations
      • Interdependence needed for a comprehensive QA model
      [Figure: a document encoder and a question encoder feed a coattention encoder, whose output a dynamic pointer decoder turns into a start index (49) and an end index (51); the example question "What plants create most electric power?" is answered with the span "steam turbine plants" from a passage about the weight of boilers and condensers and power-to-weight ratios.]

  22. Coattention Encoder
      [Figure: the document D and the question Q are each encoded with a bi-LSTM; products between the two encodings give attention weights A^Q and A^D, which produce attention contexts C^Q and (after concatenation and a further product) C^D; the document states and C^D are concatenated and passed through another bi-LSTM to give the coattention encoding U (vectors u_t).]

  23. Coattention layer
      • The coattention layer again provides two-way attention between the context and the question
      • However, coattention involves a second-level attention computation:
        • attending over representations that are themselves attention outputs
      • We use the C2Q attention distributions α_i to take weighted sums of the Q2C attention outputs b_j. This gives us second-level attention outputs s_i:
        s_i = Σ_j α^i_j b_j   (see the sketch below)
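A minimal sketch of that second-level step, assuming the C2Q distributions and the Q2C outputs have already been computed (shapes and names are illustrative):

```python
import torch

def second_level_attention(alpha, b):
    # alpha: (N, M) C2Q attention distributions, one row per context position i
    # b:     (M, h) Q2C attention outputs b_j, one row per question position j
    # s_i = sum_j alpha_ij * b_j  ->  (N, h) second-level attention outputs
    return alpha @ b

s = second_level_attention(torch.randn(40, 12).softmax(dim=1), torch.randn(12, 200))
```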

  24. Coattention: Results on the SQuAD competition
      Model                                       Dev EM   Dev F1   Test EM   Test F1
      Ensemble
        DCN (Ours)                                70.3     79.4     71.2      80.4
        Microsoft Research Asia*                  69.4     78.3     -         -
        Allen Institute*                          69.2     77.8     69.9      78.1
        Singapore Management University*          67.6     76.8     67.9      77.0
        Google NYC*                               68.2     76.7     -         -
      Single model
        DCN (Ours)                                65.4     75.6     66.2      75.9
        Microsoft Research Asia*                  65.9     75.2     65.5      75.0
        Google NYC*                               66.4     74.9     -         -
        Singapore Management University*          64.7     73.7     -         -
        Carnegie Mellon University*               62.5     73.3     -         -
        Dynamic Chunk Reader (Yu et al., 2016)    62.5     71.2     62.5      71.0
        Match-LSTM (Wang & Jiang, 2016)           59.1     70.0     59.5      70.3
        Baseline (Rajpurkar et al., 2016)         40.0     51.0     40.4      51.0
        Human (Rajpurkar et al., 2016)            81.4     91.0     82.3      91.2
      Results are as of the time of the ICLR submission. See https://rajpurkar.github.io/SQuAD-explorer/ for the latest results.
