  1. Driving Semantic Parsing from the World's Response. James Clarke, Dan Goldwasser, Ming-Wei Chang, Dan Roth. Cognitive Computation Group, University of Illinois at Urbana-Champaign. CoNLL 2010. Clarke, Goldwasser, Chang, Roth 1

  2. What is Semantic Parsing? Mapping natural language to a meaning representation, e.g. "I'd like a coffee with no sugar and just a little milk" → make(coffee, sugar=0, milk=0.3).

  4. The Supervised Learning Problem. A training algorithm learns a model from paired text and meaning representations. Challenges: this is a structured prediction problem; should part of the structure be modeled as hidden?

  5. Lots of previous work. Multiple approaches to the problem: KRISP (Kate & Mooney 2006), an SVM-based parser using string kernels; Zettlemoyer & Collins (2005, 2007), a probabilistic parser based on relaxed CCG grammars; WASP (Wong & Mooney 2006, 2007), based on synchronous CFGs; Ge & Mooney (2009), an integrated syntactic and semantic parser.

  6. Assumption behind all of these: a training set consisting of natural language and meaning representation pairs.

  7. Using the World's Response. Meaning representation make(coffee, sugar=0, milk=0.3) for "I'd like a coffee with no sugar and just a little milk".

  8. Executing the meaning representation produces a response in the world, which can be judged good or bad.

  9. Question: can we use feedback based on the response to provide supervision?

  10. This work. We aim to reduce the burden of annotation for semantic parsing. We focus on: using the world's response to learn a semantic parser; developing new training algorithms to support this learning paradigm; and a lightweight semantic parsing model that doesn't require annotated data. This results in learning a semantic parser using zero annotated meaning representations.

  11. Outline: 1. Semantic Parsing; 2. Learning (the DIRECT approach and the AGGRESSIVE approach); 3. Semantic Parsing Model; 4. Experiments.

  13. Semantic Parsing. Input x: "What is the largest state that borders Texas?" Hidden y. Output z: largest(state(next_to(texas))).

  14. F : X → Z, where ẑ = F_w(x) = argmax_{y∈Y, z∈Z} wᵀΦ(x, y, z).

  15. Two ingredients: the Model (the nature of inference and feature functions) and the Learning Strategy (how we obtain the weights w).

  16. Response r: New Mexico, the world's response to the query.
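The inference step ẑ = argmax_{y,z} wᵀΦ(x, y, z) can be sketched as follows. The feature function, the candidate set, and the weights below are all toy stand-ins for illustration, not the paper's actual model.

```python
# Sketch of the argmax inference over (hidden y, meaning representation z).

def phi(x, y, z):
    """Hypothetical joint features over (sentence, hidden alignment, meaning)."""
    return [
        1.0 if ("largest" in x and z.startswith("largest")) else 0.0,
        1.0 if ("borders" in x and "next_to" in z) else 0.0,
        float(len(y)),  # crude proxy for alignment coverage
    ]

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def infer(w, x, candidates):
    """z-hat = F_w(x): return the highest-scoring (y, z) pair."""
    return max(candidates, key=lambda yz: dot(w, phi(x, yz[0], yz[1])))

x = "What is the largest state that borders Texas?"
candidates = [
    ({"largest": "largest"}, "largest(state(next_to(texas)))"),
    ({}, "state(next_to(texas))"),
]
w = [1.0, 1.0, 1.0]
y_hat, z_hat = infer(w, x, candidates)
print(z_hat)  # the fully matching candidate wins under these toy weights
```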

  18. Learning. Inputs: natural language sentences. Feedback: X × Z → {+1, −1}. Zero meaning representations.

  19. Feedback(x, z) = +1 if execute(z) = r, and −1 otherwise.
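The binary feedback signal above can be sketched directly: execute the predicted meaning representation against the world and compare with the observed response r. The executor and its toy world database are illustrative assumptions.

```python
# Sketch of the feedback function Feedback(x, z) in {+1, -1}.

def execute(z, world):
    """Hypothetical executor: evaluate the query z against a toy world."""
    return world.get(z)

def feedback(z, r, world):
    """+1 if execute(z) = r, -1 otherwise."""
    return 1 if execute(z, world) == r else -1

world = {"largest(state(next_to(texas)))": "New Mexico"}
print(feedback("largest(state(next_to(texas)))", "New Mexico", world))  # 1
print(feedback("state(next_to(texas))", "New Mexico", world))           # -1
```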

  20. Goal: a weight vector that scores the correct meaning representation higher than all other meaning representations. Response-driven learning: predict a meaning representation from the input text, apply it to the world, and learn from the resulting feedback.

  21. Learning Strategies. Given input sentences x_1, …, x_n:
  repeat
    for all input sentences do
      Solve the inference problem
      Query the Feedback function
    end for
    Learn a new w using feedback
  until convergence

  22. Solving the inference problem gives ŷ, ẑ = argmax wᵀΦ(x, y, z), yielding predictions (y_1, z_1), …, (y_n, z_n).

  23. Querying the Feedback function labels each prediction, e.g. (x_1, y_1, z_1) → +1, (x_2, y_2, z_2) → −1, …, (x_n, y_n, z_n) → −1.

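The predict/feedback/retrain loop above can be sketched end to end on a toy problem. The features, the executor, and the perceptron-style update below are all illustrative stand-ins, not the paper's machinery.

```python
# Toy end-to-end sketch of response-driven learning.

WORLD = {"largest(state(next_to(texas)))": "New Mexico"}
CANDIDATES = ["largest(state(next_to(texas)))", "state(next_to(texas))"]

def phi(x, z):  # hidden alignment y folded away for brevity
    return [
        1.0 if ("largest" in x and z.startswith("largest")) else 0.0,
        1.0 if ("borders" in x and "next_to" in z) else 0.0,
    ]

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def infer(w, x):
    return max(CANDIDATES, key=lambda z: dot(w, phi(x, z)))

def feedback(z, r):
    return 1 if WORLD.get(z) == r else -1

def learn(data, w, epochs=5):
    for _ in range(epochs):
        for x, r in data:
            z = infer(w, x)              # solve the inference problem
            f = feedback(z, r)           # query the Feedback function
            if f * dot(w, phi(x, z)) <= 0:
                # perceptron-style stand-in for "learn a new w using feedback"
                w = [wi + f * vi for wi, vi in zip(w, phi(x, z))]
    return w

data = [("What is the largest state that borders Texas?", "New Mexico")]
w = learn(data, [0.0, 0.0])
print(infer(w, data[0][0]))  # the learned w now prefers the correct query
```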
  26. DIRECT Approach: binary learning over the response-driven loop. Learn a binary classifier to discriminate between good and bad meaning representations.

  27. Use each (x, y, z) triple as a training example with its label from feedback: (x_1, y_1, z_1, +1), (x_2, y_2, z_2, −1), …, (x_n, y_n, z_n, −1).

  28. Find w such that f · wᵀΦ(x, y, z) > 0 for every feedback-labeled example.
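The constraint f · wᵀΦ(x, y, z) > 0 can be sketched with a simple perceptron over feedback-labeled feature vectors. The perceptron is just an illustrative stand-in for the binary learner, and the feature values are toy assumptions.

```python
# Sketch of the DIRECT training step: each feedback-labeled triple becomes
# a binary example (phi, f), and we seek w with f * dot(w, phi) > 0 for all.

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def train_direct(examples, dim, epochs=50):
    """examples: list of (phi_vector, f) pairs with f in {+1, -1}."""
    w = [0.0] * dim
    for _ in range(epochs):
        for v, f in examples:
            if f * dot(w, v) <= 0:                         # violated constraint
                w = [wi + f * vi for wi, vi in zip(w, v)]  # push score to correct side
    return w

examples = [
    ([1.0, 1.0], +1),  # features of a "good" meaning representation
    ([0.0, 1.0], -1),  # features of a "bad" one
]
w = train_direct(examples, dim=2)
print(all(f * dot(w, v) > 0 for v, f in examples))  # True: all constraints hold
```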

  29. Each point is represented by Φ(x, y, z), normalized by |x|.

  30. Learn a binary classifier w to discriminate between good and bad meaning representations.

  31. The DIRECT approach uses the same outer loop:
  repeat
    for all input sentences do
      Solve the inference problem
      Query the Feedback function
    end for
    Learn a new w using feedback
  until convergence

  32. The loop runs over inputs x_1, …, x_n.

  33. Inference yields predictions (y′_1, z′_1), …, (y′_n, z′_n) via ŷ, ẑ = argmax wᵀΦ(x, y, z).

  34. Feedback labels each prediction, e.g. (x_1, y′_1, z′_1) → +1, …, (x_n, y′_n, z′_n) → −1.

  35. Each labeled triple (x_i, y′_i, z′_i) with its feedback label becomes a binary training example for learning a new w.

  36-38. (Figures: the classifier w and its decision boundary, refined over successive rounds.)

  39. Repeat until convergence!
