Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots
Nick Walker, Yu-Tang Peng, Maya Cakmak


  1. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots
 Nick Walker, Yu-Tang Peng, Maya Cakmak

  2. Bring me an apple

  3. Traditional semantic parsing
 “bring an apple”
 [CCG-style derivation: “an apple” (DT NN → NP) yields λ$1.is_a($1, “apple”); “bring” (VB) yields bring(X); combining (VB NP → S) gives the full logical form bring(λ$1.is_a($1, “apple”)).]
 Zettlemoyer and Collins 2007

  4. Neural machine translation
 “bring an apple” → f → bring(λ$1.(is_a($1, “apple”)))
 Dong and Lapata 2016

  5. Neural machine translation
 [Diagram: an LSTM encoder embeds the input command “fetch an apple” into hidden states h0..h2; an LSTM decoder with attention over those states emits the logical-form tokens bring ( λ$1 ( is_a ( $1 “apple” ) ) ).]
 Dong and Lapata 2016
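A minimal sketch of the encoder-decoder framing shown on this slide, not the authors' implementation: an LSTM encoder over command tokens and a teacher-forced LSTM decoder with dot-product attention that predicts logical-form tokens. Vocabulary sizes, dimensions, and the class name are illustrative.

    import torch
    import torch.nn as nn

    class Seq2SeqParser(nn.Module):
        """Sketch of an attention-based seq2seq semantic parser (Dong & Lapata 2016 style)."""

        def __init__(self, src_vocab: int, tgt_vocab: int, dim: int = 128):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, dim)
            self.tgt_embed = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.LSTM(dim, dim, batch_first=True)
            self.decoder = nn.LSTM(dim, dim, batch_first=True)
            # [decoder state; attention context] -> logits over logical-form tokens
            self.out = nn.Linear(2 * dim, tgt_vocab)

        def forward(self, src_ids, tgt_ids):
            # Encode the command, e.g. the tokens of "fetch an apple".
            enc_states, enc_last = self.encoder(self.src_embed(src_ids))
            # Decode the logical form, e.g. "bring ( λ$1 ( is_a ( $1 <object> ) ) )", teacher-forced.
            dec_states, _ = self.decoder(self.tgt_embed(tgt_ids), enc_last)
            # Dot-product attention over encoder states at every decoder step.
            scores = torch.bmm(dec_states, enc_states.transpose(1, 2))      # (B, T_tgt, T_src)
            context = torch.bmm(torch.softmax(scores, dim=-1), enc_states)  # (B, T_tgt, dim)
            return self.out(torch.cat([dec_states, context], dim=-1))       # (B, T_tgt, tgt_vocab)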

  6. How do we frame command understanding so that neural semantic parsing methods can work for robotics domains?

  7. [Pipeline diagram: Command → Command Anonymizer → Anonymized Command → Semantic Parser → Anonymized Logical Form → Logical Form]
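A hedged sketch of how the stages in this pipeline could compose. The stage implementations are passed in as callables; anonymize, parse, and deanonymize are hypothetical names standing in for the components on the slide, not functions from the authors' code.

    from typing import Callable, Dict, Tuple

    def understand(
        command: str,
        anonymize: Callable[[str], Tuple[str, Dict[str, str]]],
        parse: Callable[[str], str],
        deanonymize: Callable[[str, Dict[str, str]], str],
    ) -> str:
        """Compose the three stages shown on the slide."""
        anon_command, mapping = anonymize(command)   # "bring an <object>", {"apple": "<object>"}
        anon_lf = parse(anon_command)                # "bring(λ$1.(is_a($1, <object>)))"
        return deanonymize(anon_lf, mapping)         # "bring(λ$1.(is_a($1, \"apple\")))"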

  8. Simplification: Anonymize Commands
 “bring an apple” → Command Anonymizer → “bring an <object>”
 • Use the robot’s ontology to simplify commands
 • Improve predictability
 • Hit or miss
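A minimal sketch of command anonymization, under the assumption that the ontology is a flat mapping from known entity names to type placeholders (the paper's actual anonymizer may differ). It replaces each known mention with its placeholder and remembers the mapping, in the "apple: <object>" style shown on later slides, for deanonymization.

    import re
    from typing import Dict, Tuple

    def anonymize(command: str, ontology: Dict[str, str]) -> Tuple[str, Dict[str, str]]:
        """Replace known entity mentions with type placeholders, e.g. "apple" -> "<object>"."""
        mapping: Dict[str, str] = {}
        anonymized = command
        # Try longer names first so "dining table" wins over "table".
        for name in sorted(ontology, key=len, reverse=True):
            pattern = r"\b" + re.escape(name) + r"\b"
            if re.search(pattern, anonymized):
                anonymized = re.sub(pattern, ontology[name], anonymized)
                mapping[name] = ontology[name]
        return anonymized, mapping

    # anonymize("bring an apple", {"apple": "<object>"})
    # -> ("bring an <object>", {"apple": "<object>"})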

  9. Simplification: Anonymized Logical Representation
 “bring an <object>” → Semantic Parser → bring(λ$1.(is_a($1, <object>)))
 • Skip argument assignment
 • No longer executable

  10. Getting executability back
 [Diagram: the Command Anonymizer’s mapping (apple: <object>) is passed alongside the Semantic Parser’s output bring(λ$1.(is_a($1, <object>))) to the Logical Form Deanonymizer, which produces bring(λ$1.(is_a($1, “apple”))).]
 • If command anonymization worked, deanonymization can be straightforward
 • But in other cases…
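A minimal sketch of the straightforward deanonymization case, assuming the mapping format returned by the hypothetical anonymize() above and one placeholder per grounded entity.

    from typing import Dict

    def deanonymize(anon_logical_form: str, mapping: Dict[str, str]) -> str:
        """Substitute each placeholder with the quoted entity it replaced (unambiguous case)."""
        logical_form = anon_logical_form
        for name, placeholder in mapping.items():
            # Replace only the first occurrence; repeated placeholders of the same type
            # are left for the dialogue-based recovery described on the next slides.
            logical_form = logical_form.replace(placeholder, f'"{name}"', 1)
        return logical_form

    # deanonymize("bring(λ$1.(is_a($1, <object>)))", {"apple": "<object>"})
    # -> 'bring(λ$1.(is_a($1, "apple")))'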

  11. Anonymization Failure
 [Diagram: “bring an apple” passes through the Command Anonymizer unchanged because the ontology misses “apple”; the Semantic Parser still predicts bring(λ$1.(is_a($1, <object>))), leaving an unfilled slot (?: <object>); the Logical Form Deanonymizer falls back to a confirmation dialogue, “What would you like me to bring?”, to recover bring(λ$1.(is_a($1, “apple”))).]
 • Good word embeddings make the parser robust to unseen entities
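A hedged sketch of this recovery step: if the predicted logical form contains a placeholder the anonymizer never grounded, ask a clarification question. The question templates are illustrative, not the paper's dialogue policy.

    import re
    from typing import Dict, Optional

    QUESTIONS = {  # illustrative templates, one per placeholder type
        "<object>": "What would you like me to bring?",
        "<location>": "Where should I go?",
    }

    def clarification_needed(anon_lf: str, mapping: Dict[str, str]) -> Optional[str]:
        """Return a question for the first placeholder the anonymizer could not ground."""
        grounded = set(mapping.values())  # placeholders we know how to deanonymize
        for placeholder in re.findall(r"<\w+>", anon_lf):
            if placeholder not in grounded:
                return QUESTIONS.get(placeholder, f"Which {placeholder} do you mean?")
        return None

    # clarification_needed("bring(λ$1.(is_a($1, <object>)))", {})
    # -> "What would you like me to bring?"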

  12. Deanonymization Recovery
 [Diagram: “bring an apple from the kitchen to the table” is anonymized to “bring an <object> from the <location> to the <location>” with the mapping apple: <object>, kitchen: <location>, table: <location>; the Semantic Parser predicts bring(λ$1.(is_a($1, <object>) ^ at($1, <location>)), <location>); because two entities share the <location> type, the Logical Form Deanonymizer asks “You’d like me to bring a what from where to where?” before committing to bring(λ$1.(is_a($1, “apple”) ^ at($1, “kitchen”)), “table”).]
 • Ambiguous cases
 • Handle in dialogue
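A sketch of how the ambiguous case could be detected, assuming the same mapping format as above: when several grounded entities share a placeholder type that also occurs more than once in the logical form, plain substitution is unsafe and the assignment should be confirmed in dialogue. The detection heuristic is illustrative.

    from typing import Dict, List

    def ambiguous_types(anon_lf: str, mapping: Dict[str, str]) -> List[str]:
        """Placeholder types with multiple candidate entities, e.g. two <location>s."""
        counts: Dict[str, int] = {}
        for placeholder in mapping.values():
            counts[placeholder] = counts.get(placeholder, 0) + 1
        return [p for p, n in counts.items() if n > 1 and anon_lf.count(p) > 1]

    # ambiguous_types(
    #     "bring(λ$1.(is_a($1, <object>) ^ at($1, <location>)), <location>)",
    #     {"apple": "<object>", "kitchen": "<location>", "table": "<location>"},
    # )
    # -> ["<location>"]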

  13. How well does a neural semantic parser work under this regime in a robotics domain?

  14. Getting Annotated Data
 $vbbring me the $object = bring(λ$1.(is_a($1, $object)))

  15. Getting Annotated Data
 • Modified RoboCup@Home 2018 GPSR command generator
 • Generates command + logical form pairs
 • 21 predicates
 • 125 annotations → 1211 anonymized commands
 • 101 anonymized logical forms
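A minimal sketch of the generation idea, assuming rules of the form shown on slide 14 ($-prefixed nonterminals expanded from a small grammar into paired commands and logical forms). The grammar and nonterminal expansions here are illustrative, not the actual GPSR grammar.

    import itertools
    import re
    from typing import Dict, Iterator, List, Tuple

    # Illustrative grammar: each nonterminal maps to its possible expansions.
    GRAMMAR: Dict[str, List[str]] = {
        "$vbbring": ["bring", "fetch", "get"],
        "$object": ["<object>"],  # objects stay anonymized as placeholders
    }

    RULE = ("$vbbring me the $object", "bring(λ$1.(is_a($1, $object)))")

    def expand(rule: Tuple[str, str], grammar: Dict[str, List[str]]) -> Iterator[Tuple[str, str]]:
        """Yield every (anonymized command, anonymized logical form) pair a rule generates."""
        command, logical_form = rule
        nonterminals = sorted(set(re.findall(r"\$[a-z]\w*", command)))
        for choice in itertools.product(*(grammar[nt] for nt in nonterminals)):
            cmd, lf = command, logical_form
            for nt, value in zip(nonterminals, choice):
                cmd, lf = cmd.replace(nt, value), lf.replace(nt, value)
            yield cmd, lf

    # list(expand(RULE, GRAMMAR))[:2]
    # -> [("bring me the <object>", "bring(λ$1.(is_a($1, <object>)))"),
    #     ("fetch me the <object>", "bring(λ$1.(is_a($1, <object>)))")]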

  16. Paraphrased Data
 • Different words, same meaning
 • 1836 paraphrases from 95 crowd workers
 • Reasonable validation checks required
 Generated: “tell me how many coke there are on the freezer”
 Paraphrased: “how many cokes are left in the freezer”

  17. Experiment
 • Split paraphrased data 70/10/20%
 • Train and tune, then test on the held-out 20%
 • Measure accuracy: percentage of exact-match predictions
 • Assume the ontology is empty! Command anonymization always fails
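A short sketch of the evaluation described on this slide, assuming (command, logical form) pairs and token-string predictions. The split proportions and exact-match metric follow the slide; the seeding and whitespace normalization are illustrative choices.

    import random
    from typing import List, Tuple

    def split(pairs: List[Tuple[str, str]], seed: int = 0):
        """70/10/20 train/validation/test split of (command, logical form) pairs."""
        shuffled = pairs[:]
        random.Random(seed).shuffle(shuffled)
        n = len(shuffled)
        train_end, val_end = int(0.7 * n), int(0.8 * n)
        return shuffled[:train_end], shuffled[train_end:val_end], shuffled[val_end:]

    def exact_match_accuracy(predictions: List[str], gold: List[str]) -> float:
        """Percentage of predicted logical forms that match the gold form token for token."""
        correct = sum(p.split() == g.split() for p, g in zip(predictions, gold))
        return 100.0 * correct / len(gold)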

  18. [Results table introduced: percentage accuracy (↑), trained and tested on paraphrased data, with grammar-based oracle and k-nearest neighbors baselines; values appear on slide 20.]

  19. [Results table, continued: seq2seq rows added with GloVe, ELMo, OpenAI, BERT base, and BERT large embedding variants; values appear on the next slides.]

  20. Percentage accuracy (↑), trained and tested on paraphrased data:
 Grammar-based Oracle: 1.1
 k-Nearest Neighbors: 42.8
 seq2seq: 64.4
 + GloVe: 70.2
 + GloVe;ELMo: 77.3
 + GloVe;OpenAI: 78.2
 + GloVe;BERT base: 75.4
 + GloVe;BERT large: 78.5

  21. Percentage accuracy (↑), tested on paraphrased data. Columns: trained on Paraphrased / trained on Generated + Paraphrased:
 Grammar-based Oracle: 1.1 / 1.1
 k-Nearest Neighbors: 42.8 / 49.8
 seq2seq: 64.4 / 79.6
 + GloVe: 70.2 / 85.3
 + GloVe;ELMo: 77.3 / 85.4
 + GloVe;OpenAI: 78.2 / 89.0
 + GloVe;BERT base: 75.4 / 87.6
 + GloVe;BERT large: 78.5 / 89.4

  22. You can train neural semantic parsers that work well for command-taking dialogues in robots.

  23. Use our code and data! github.com/nickswalker/gpsr-command-understanding
 • Train your own models
 • Use our baselines
 • Beat our performance (data, splits available)
 • Hack on a Python version of the command generator

  24. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots
 Nick Walker, Yu-Tang Peng, Maya Cakmak
 Paper: arxiv.org/abs/1907.01115
 Slides: DOI 10.5281/zenodo.3253252
