
Affordance Extraction and Inference based on Semantic Role Labeling



  1. Affordance Extraction and Inference based on Semantic Role Labeling
     Daniel Loureiro, Alípio Jorge (University of Porto)
     Fact Extraction and Verification (FEVER) Workshop, EMNLP 2018

  2. Overview
     1. Affordances: What are they and why are they relevant?
     2. FEVER: How may this relate to FEVER? (Suggestions)
     3. Affordance Extraction Method
     4. Affordance Inference
     5. Evaluation
     6. Conclusions
     Demo, data and code available at a2avecs.github.io

  3. What is affordance? Depends on who you ask: Gibson 1979, Norman 1988, Glenberg 2000.

  4. What is affordance? Psychology (Gibson 1979). Affordance: what the environment provides the animal.

  5. What is affordance? Design (Norman 1988). Affordance: perceived action possibilities (suggestive). (Figure: examples rated from suggestive to less likely to not likely.)

  6. What is affordance? Language (Glenberg 2000). Affordance: basis for grounding meaning under the Indexical Hypothesis.

  7. Why affordance?
     • Commonsense acquisition and representation in Distributional Semantic Models is still an open question [Camacho-Collados, Pilehvar 2018].
     • Affordances are a relational component of Commonsense Knowledge.
     (Diagram: Commonsense Knowledge comprises Motivations, Objects, Living Things, Substances, Affordances, …)

  8. Why affordance? (Same bullets as above.)
     (Diagram extended: Language Models cover Patterns, Syntax, Associations, Vocabulary; Commonsense Knowledge covers Motivations, Objects, Living Things, Substances, Affordances; World Knowledge covers Medicine, Chemistry, Geography, Events, Culture, Names.)

  9. Why affordance? (Same bullets as above.)
     (Diagram further extended: Coreference Resolution and Fact Verification draw on Language Models, Commonsense Knowledge, and World Knowledge.)

  10. Fact Extraction
      https://en.wikipedia.org/wiki/Alan_Turing

  11. Fact Extraction
      Benedict Cumberbatch portrayed Turing in The Imitation Game.
      With good statistics on affordances, you can infer additional extractions:
      • Those who portray usually personify. - Benedict Cumberbatch personified Turing.
      • Things portrayed are usually film characters. - Turing is a film character. (not exclusive)
      • Places where portrayal occurs are usually films. - The Imitation Game is a film.

  12. Fact Extraction (same example and inferences as above)
      cf. Selectional Preferences, Argument Typicality, Frame Semantics.
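The deck does not show how these inferences are computed. Below is a minimal sketch of one possible reading, assuming the token x predicate|role matrix introduced later in the deck: two affordances (e.g. portray|ARG0 and personify|ARG0) are linked when largely the same tokens fill both roles, which can be measured as cosine similarity between matrix columns. All names and values are illustrative, not the paper's data.

    import numpy as np

    # Illustrative affordance matrix: rows are tokens, columns are predicate|role
    # dimensions. Values are invented for this sketch.
    tokens = ["actor", "director", "puppet", "novel"]
    dims = ["portray|ARG0", "personify|ARG0", "write|ARG0", "read|ARG1"]
    M = np.array([
        [0.9, 0.8, 0.1, 0.0],   # actor
        [0.6, 0.5, 0.4, 0.0],   # director
        [0.2, 0.7, 0.0, 0.0],   # puppet
        [0.0, 0.0, 0.0, 0.9],   # novel
    ])

    def related_affordances(dim, top_k=2):
        """Rank other predicate|role dimensions by cosine similarity of their
        columns, i.e. by how much the same tokens fill both roles."""
        j = dims.index(dim)
        col = M[:, j]
        sims = []
        for k, other in enumerate(dims):
            if k == j:
                continue
            denom = np.linalg.norm(col) * np.linalg.norm(M[:, k])
            sims.append((other, float(col @ M[:, k] / denom) if denom else 0.0))
        return sorted(sims, key=lambda s: -s[1])[:top_k]

    # "Those who portray usually personify": personify|ARG0 ranks highest here.
    print(related_affordances("portray|ARG0"))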

  13. Fact Verification
      Example claims from the FEVER dataset:
      • A Floppy disk is lined with turnips.
      • A Floppy disk is lined with paper.
      • A Floppy disk is a type of fish.
      • A Floppy disk is sealed in plastic.
      • A Floppy disk is sealed in a cave.

  14. Fact Verification (same claims, annotated for plausibility)
      • lined with turnips: Nonsense
      • lined with paper: Plausible* (*though atypical)
      • a type of fish: Nonsense
      • sealed in plastic: Plausible
      • sealed in a cave: Plausible* (*though atypical)

  15. Fact Verification
      Semantic Plausibility as a prior bias for Fact Verification:
      • If implausible (i.e. nonsense): probably refutable and no explicit evidence. E.g. “A Floppy disk is a type of fish.”
      • If plausible and typical (i.e. obvious): probably supported with implicit evidence. E.g. “Dan Trachtenberg is a person.”
      • If plausible and atypical (i.e. others): unknown refutability, explicit evidence should exist. E.g. “Sarah Hyland is a New Yorker.”
      Intuition: Plausibility should be easier to assess than Truth.

  16. Fact Verification (same as above, with the three cases placed on a scale from Obvious to Requires Evidence to Nonsense)
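As a reading aid, the slide's three-way prior can be written as a small decision rule. This is only a sketch of the mapping stated on the slide; how the plausible/typical judgements themselves are obtained is left open (some plausibility scorer would have to supply them).

    from dataclasses import dataclass

    @dataclass
    class Prior:
        label_hint: str      # expected FEVER-style outcome
        evidence_hint: str   # what kind of evidence to expect

    def plausibility_prior(plausible: bool, typical: bool) -> Prior:
        """Map a (plausible, typical) judgement to the prior bias described on the slide."""
        if not plausible:
            # Nonsense: probably refutable, with no explicit evidence.
            return Prior("likely REFUTED", "no explicit evidence expected")
        if typical:
            # Obvious: probably supported, but only with implicit evidence.
            return Prior("likely SUPPORTED", "implicit evidence")
        # Plausible but atypical: label unknown; explicit evidence should exist.
        return Prior("unknown", "explicit evidence should exist")

    # E.g. “A Floppy disk is a type of fish.” is implausible:
    print(plausibility_prior(plausible=False, typical=False))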

  17. Affordance Extraction
      Affordance Representation: every symbol (i.e. token) is represented by a vector whose dimensions signal affordances.

                   Can eat?   Can jump?   Used for riding?   Place for getting lost?
      dog          Yes        Yes         No                 No
      cat          Yes        Yes         No                 No
      horse        Yes        Yes         Yes                No
      brussels     No         No          No                 Yes
      thought      No         No          No                 No

      Assignment

  18. Affordance Extraction
      Affordance Representation (continued): Assignment > Grading

                   Can eat?   Can jump?   Used for riding?   Place for getting lost?
      dog          1.0        1.0         0.2                0.0
      cat          1.0        1.0         0.0                0.0
      horse        1.0        0.8         1.0                0.0
      brussels     0.2        0.0         0.0                1.0
      thought      0.0        0.2         0.0                0.2

  19. Affordance Extraction
      Affordance Representation (continued): Assignment > Grading > Formalizing

                   eat | AGENT   jump | AGENT   ride | PATIENT   lose | LOCATION
      dog          1.0           1.0            0.2              0.0
      cat          1.0           1.0            0.0              0.0
      horse        1.0           0.8            1.0              0.0
      brussels     0.2           0.0            0.0              1.0
      thought      0.0           0.2            0.0              0.2
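A minimal sketch of this representation in code: each token maps to a vector indexed by predicate|role dimensions, and tokens with similar affordances (dog, cat) come out close under cosine similarity. The numbers simply mirror the illustrative table above; they are not statistics from the paper.

    import numpy as np

    dims = ["eat|AGENT", "jump|AGENT", "ride|PATIENT", "lose|LOCATION"]
    affordances = {
        "dog":      np.array([1.0, 1.0, 0.2, 0.0]),
        "cat":      np.array([1.0, 1.0, 0.0, 0.0]),
        "horse":    np.array([1.0, 0.8, 1.0, 0.0]),
        "brussels": np.array([0.2, 0.0, 0.0, 1.0]),
        "thought":  np.array([0.0, 0.2, 0.0, 0.2]),
    }

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    # Dimensions are interpretable: does a horse afford riding?
    print(affordances["horse"][dims.index("ride|PATIENT")])     # 1.0

    # Tokens with similar affordances are close in this space.
    print(cosine(affordances["dog"], affordances["cat"]))       # high
    print(cosine(affordances["dog"], affordances["brussels"]))  # low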

  20. Affordance Extraction
      • Affordances are based on Predicate-Argument Structures (PASs) extracted from Natural Language using Semantic Role Labeling (SRL). We use [He et al. 2017]'s end-to-end neural SRL to process Wikipedia.
      • After extraction, PASs are organised into a co-occurrence matrix and weighted using PPMI, similarly to [Levy and Goldberg 2014].

  21. Affordance Extraction (same bullets as above)
      Example: “John drinks red wine slowly.” with PropBank annotations [Palmer 2012]: John = agent (ARG0), red wine = patient (ARG1), slowly = manner (ARGM-MNR).
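To make this step concrete, here is a sketch of going from an SRL parse to (token, predicate|role) observations. The parse of the example sentence is hard-coded in the PropBank format shown on the slide; in the actual pipeline it would come from the [He et al. 2017] SRL model, which is not invoked here, and the paper may count argument heads or lemmas rather than every word of a span.

    from collections import Counter

    # Hard-coded SRL output for the slide's example sentence.
    srl_parse = {
        "predicate": "drink",
        "arguments": [
            ("ARG0", "John"),        # agent
            ("ARG1", "red wine"),    # patient
            ("ARGM-MNR", "slowly"),  # manner
        ],
    }

    def pas_observations(parse):
        """Turn one predicate-argument structure into (token, predicate|role) pairs."""
        pred = parse["predicate"]
        for role, span in parse["arguments"]:
            for token in span.lower().split():
                yield token, f"{pred}|{role}"

    print(Counter(pas_observations(srl_parse)))
    # ('john', 'drink|ARG0'), ('red', 'drink|ARG1'), ('wine', 'drink|ARG1'),
    # ('slowly', 'drink|ARGM-MNR'), each observed once.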

  22. Affordance Extraction (same bullets as above)
      Resulting token × (predicate | role) matrix:

                  drink | ARG0   drink | ARG1   drink | ARGM-MNR   …
      John        0.8            0.0            0.0                0.0
      red         0.0            0.6            0.0                0.0
      wine        0.0            0.9            0.0                0.0
      slowly      0.0            0.0            0.7                0.0
      …           0.0            0.0            0.0                0.0
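The PPMI weighting mentioned in the bullets (following Levy and Goldberg 2014) can be sketched as below: raw (token, predicate|role) co-occurrence counts are converted to positive pointwise mutual information. This is the standard PPMI formulation applied to toy counts; any smoothing or thresholding the paper applies is not reproduced.

    import numpy as np

    def ppmi(counts):
        """Positive PMI over a token x (predicate|role) co-occurrence count matrix."""
        total = counts.sum()
        p_xy = counts / total                             # joint probabilities
        p_x = counts.sum(axis=1, keepdims=True) / total   # token marginals
        p_y = counts.sum(axis=0, keepdims=True) / total   # affordance marginals
        with np.errstate(divide="ignore"):
            pmi = np.log(p_xy / (p_x * p_y))
        pmi[~np.isfinite(pmi)] = 0.0                      # zero counts -> 0
        return np.maximum(pmi, 0.0)

    # Toy counts: rows John, red, wine, slowly; columns drink|ARG0, drink|ARG1, drink|ARGM-MNR.
    counts = np.array([
        [3, 0, 0],
        [0, 2, 0],
        [0, 5, 0],
        [0, 0, 1],
    ], dtype=float)

    print(np.round(ppmi(counts), 2))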
