Overview Background: Who did what to whom is a major focus in - - PowerPoint PPT Presentation

Overview

Background: Who did what to whom is a major focus in natural language understanding, which is exactly the aim of the semantic role labeling (SRL) task.
Contribution: The first attempt to let SRL enhance text comprehension and inference.

Task

This paper focuses on two core text comprehension (TC) tasks: machine reading comprehension (MRC) and textual entailment (TE).

Framework

  • Our semantics-augmented model is an integration of two end-to-end models through simple embedding concatenation. For each word x, a joint embedding e_j(x) is obtained by concatenating the word embedding e_w(x) and the SRL embedding e_s(x): e_j(x) = e_w(x) ⊕ e_s(x), where ⊕ is the concatenation operator.
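The joint embedding described above is just a per-word vector concatenation; a minimal numpy sketch (the dimension sizes are illustrative assumptions, not values from the paper):

```python
import numpy as np

def joint_embedding(e_w: np.ndarray, e_s: np.ndarray) -> np.ndarray:
    """Joint embedding e_j(x) = e_w(x) ⊕ e_s(x): concatenate the word
    embedding with the SRL embedding for a single word x."""
    return np.concatenate([e_w, e_s])

# Illustrative sizes: a 300-d word vector and a 5-d SRL vector.
e_w = np.zeros(300)
e_s = np.zeros(5)
e_j = joint_embedding(e_w, e_s)  # 305-dimensional joint embedding
```

In a full model the same concatenation is applied at every position of the sequence before the encoder, so the downstream architecture is unchanged apart from a slightly wider input dimension.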

Semantic Role Labeling

  • Given a sentence, the task of semantic role labeling is to recognize the semantic relations between the predicates and the arguments.

  • Example:

Charlie sold a book to Sherry last week. [predicate: sold]
The SRL system yields the following output: [ARG0 Charlie] [V sold] [ARG1 a book] [ARG2 to Sherry] [AM-TMP last week]
ARG0: the seller (agent); ARG1: the thing sold (theme); ARG2: the buyer (recipient); AM-TMP: adjunct indicating the timing of the action; V: the predicate.
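The labeled output above can be held as role/span pairs; a small sketch of that representation (the data structure and helper are illustrative, not from any particular SRL toolkit):

```python
# SRL output for "Charlie sold a book to Sherry last week" (predicate: sold),
# stored as (role, span) pairs matching the example above.
srl_spans = [
    ("ARG0", "Charlie"),      # the seller (agent)
    ("V", "sold"),            # the predicate
    ("ARG1", "a book"),       # the thing sold (theme)
    ("ARG2", "to Sherry"),    # the buyer (recipient)
    ("AM-TMP", "last week"),  # temporal adjunct
]

def role_of(token: str, spans) -> str:
    """Return the semantic role of the span containing the given token."""
    for role, span in spans:
        if token in span.split():
            return role
    return "O"  # outside any labeled span
```

For instance, `role_of("Sherry", srl_spans)` resolves to `ARG2`, the buyer (recipient) of the selling event.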

Semantic Role Labeler

Word representation: ELMo embedding and predicate indicator embedding (PIE)
Encoder: BiLSTM
Corpus: English OntoNotes v5.0 dataset from the CoNLL-2012 shared task
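A predicate indicator embedding can be sketched as a learned vector selected by a binary is-predicate flag and concatenated to each word's ELMo vector; a minimal numpy sketch, assuming illustrative dimensions and a two-row lookup table (the actual sizes and lookup scheme in the labeler may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
elmo_dim, pie_dim = 1024, 16  # illustrative dimensions (assumptions)
# Row 0: non-predicate embedding, row 1: predicate embedding.
pie_table = rng.standard_normal((2, pie_dim))

def represent(elmo_vecs: np.ndarray, predicate_mask: np.ndarray) -> np.ndarray:
    """Concatenate each word's ELMo vector with its predicate indicator
    embedding, selected by the 0/1 predicate mask."""
    pie = pie_table[predicate_mask]            # (seq_len, pie_dim)
    return np.concatenate([elmo_vecs, pie], axis=-1)

# 7-token sentence; token 1 ("sold") is the predicate.
elmo = rng.standard_normal((7, elmo_dim))
mask = np.array([0, 1, 0, 0, 0, 0, 0])
reps = represent(elmo, mask)  # shape (7, 1024 + 16)
```

The indicator lets a single BiLSTM labeler produce a different argument structure for each predicate of the same sentence, since the input representation changes with the marked predicate.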

Baseline Models

Textual entailment: Enhanced Sequential Inference Model (ESIM) (Chen et al., 2017)
Machine reading comprehension: Document-QA (Clark et al., 2017)

Textual Entailment

The SRL embedding boosts the ESIM+ELMo model by +0.7%. Our model achieves a new state of the art and even outperforms all the ensemble models on the leaderboard.

SNLI: 570k hypothesis/premise pairs.

Machine Reading Comprehension

SQuAD: 100k+ crowd-sourced question-answer pairs where the answer is a span in a given Wikipedia paragraph.

Dimension of SRL Embedding

A 5-dimensional SRL embedding gives the best performance on both the SNLI and SQuAD datasets.

Comparison with Different NLP Tags

SRL gives the best result, showing that semantic roles contribute to performance and indicating that semantic information best matches the purpose of the NLI task.


Thanks! Q&A