Text Generation from Knowledge Graphs with Graph Transformers - PowerPoint PPT Presentation



SLIDE 1

Text Generation from Knowledge Graphs with Graph Transformers

NAACL 2019. Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. University of Washington; University of Edinburgh; Allen Institute for Artificial Intelligence

Reporter: Xiachong Feng. Video: https://www.youtube.com/watch?v=BiRyvB2NmCM

SLIDE 2

Outline

  • Author
  • Motivation
  • Task
  • Dataset
  • Model
  • Experiments
  • Conclusion
SLIDE 3

Author

  • Rik Koncel-Kedziorski
  • Lives on a sailboat
  • University of Washington Ph.D. Winter 2019
SLIDE 4

Knowledge

SLIDE 5

Knowledge

SLIDE 6

Task

  • Input
  • Title of a scientific article;
  • Knowledge graph constructed by an automatic information extraction system;
  • Output
  • Abstract (text);
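The input/output contract can be sketched as follows. This is a minimal illustration only: the triples, entity names, and abstract string are invented, and real AGENDA records are richer.

```python
# Hypothetical example of the task's input/output structure.
# Entity and relation names below are invented for illustration.
example_input = {
    "title": "Text Generation from Knowledge Graphs with Graph Transformers",
    "graph": [
        # (head entity, relation, tail entity) triples from an IE system
        ("graph transformer", "USED-FOR", "text generation"),
        ("knowledge graph", "USED-FOR", "text generation"),
    ],
}
# The model's target output is the abstract, a plain string:
example_output = "We study the problem of generating abstracts ..."

def task_signature(inp):
    """The model maps a title plus a list of triples to text."""
    title, graph = inp["title"], inp["graph"]
    return f"abstract conditioned on {len(graph)} triples and the title"

print(task_signature(example_input))
```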

SLIDE 7

Dataset

  • Abstract GENeration DAtaset (AGENDA)
  • Papers from 12 top AI conferences
  • SciIE system: a state-of-the-art science-domain information extraction system
  • NER, co-reference, and relations
SLIDE 8

Dataset

SLIDE 9

Model: GraphWriter

Encoder-decoder architecture

SLIDE 10

Graph Preparation

The disconnected, labeled input graph is converted into a connected, unlabeled graph.
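One plausible reading of this step can be sketched in plain Python: each labeled edge becomes forward- and backward-looking relation vertices, and a global vertex connects all entities so the graph is connected. The helper name `prepare_graph` and the vertex naming scheme are invented for illustration.

```python
# Sketch of graph preparation: a labeled, possibly disconnected graph of
# (head, relation, tail) triples becomes a connected, unlabeled graph.
def prepare_graph(triples, entities):
    edges = set()
    vertices = set(entities) | {"<GLOBAL>"}
    for head, rel, tail in triples:
        fwd = f"{rel}-fwd"   # forward-looking relation vertex
        bwd = f"{rel}-bwd"   # backward-looking relation vertex
        vertices |= {fwd, bwd}
        edges |= {(head, fwd), (fwd, tail),   # head -> rel -> tail
                  (tail, bwd), (bwd, head)}   # tail -> rel^-1 -> head
    # the global vertex links to every entity, making the graph connected
    edges |= {("<GLOBAL>", e) for e in entities}
    return vertices, edges

verts, edges = prepare_graph(
    [("A", "USED-FOR", "B")], entities=["A", "B", "C"])
```

Note that even entity C, which appears in no triple, is reachable through the global vertex.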

SLIDE 11

Embedding Vertices, Encoding Title

  • Relations: forward- and backward-looking, two embeddings per relation
  • Entities correspond to scientific terms, which are often multi-word expressions
  • A bidirectional RNN is run over the embeddings of each word
  • The title input is also a short string, so it is encoded with another BiRNN
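The entity-encoding idea above can be sketched in plain Python: run a tiny RNN over the word vectors of a multi-word entity in both directions and concatenate the final states. The toy recurrence, fixed weights, and two-dimensional word vectors are invented for illustration; the actual model uses learned parameters.

```python
import math

# Toy bidirectional RNN over an entity's word vectors.
def rnn(vectors, reverse=False):
    seq = list(reversed(vectors)) if reverse else vectors
    state = [0.0] * len(seq[0])
    for v in seq:
        # fixed toy weights (0.5) in place of learned matrices
        state = [math.tanh(0.5 * s + 0.5 * x) for s, x in zip(state, v)]
    return state

def entity_embedding(word_vecs):
    # concatenate forward and backward final states
    return rnn(word_vecs) + rnn(word_vecs, reverse=True)

emb = entity_embedding([[1.0, 0.0], [0.0, 1.0]])
```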

SLIDE 12

Graph Transformer

SLIDE 13

GAT

SLIDE 14

Graph Attention

Multi-head attention outputs are concatenated.
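A minimal single-head graph-attention step can be sketched in pure Python: each vertex's new representation is an attention-weighted average of its neighbors' vectors. This is a simplification; the real model uses learned projections and multiple heads whose outputs are concatenated.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def graph_attention(h, neighbors):
    """h: {vertex: vector}; neighbors: {vertex: [adjacent vertices]}."""
    out = {}
    for v, nbrs in neighbors.items():
        # scaled dot-product scores against each neighbor
        scores = [dot(h[v], h[n]) / math.sqrt(len(h[v])) for n in nbrs]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]  # softmax
        z = sum(weights)
        weights = [w / z for w in weights]
        # weighted average of neighbor vectors
        out[v] = [sum(w * h[n][i] for w, n in zip(weights, nbrs))
                  for i in range(len(h[v]))]
    return out

h = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
nbrs = {"a": ["b", "c"], "b": ["a"], "c": ["a", "b"]}
new_h = graph_attention(h, nbrs)
```

Because vertex b has a single neighbor, its new representation is exactly a's vector; vertices with several neighbors get a softmax-weighted mixture.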

SLIDE 15

Block networks

Stacked blocks provide global contextualization.

SLIDE 16

Decoder

  • At each decoding timestep t, the decoder hidden state h_t is used to compute context vectors c_g and c_s for the graph and the title sequence
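The context computation above can be sketched as two separate attention passes, one over the encoded graph vertices and one over the encoded title tokens; the toy vectors and the final concatenation layout are assumptions for illustration.

```python
import math

def attend(query, memory):
    """Dot-product attention: weighted average of memory vectors."""
    scores = [sum(q * m for q, m in zip(query, vec)) for vec in memory]
    mx = max(scores)
    w = [math.exp(s - mx) for s in scores]  # softmax
    z = sum(w)
    w = [x / z for x in w]
    return [sum(wi * vec[i] for wi, vec in zip(w, memory))
            for i in range(len(memory[0]))]

h_t = [1.0, 0.0]                       # decoder hidden state at step t
graph_enc = [[2.0, 0.0], [0.0, 2.0]]   # encoded graph vertices
title_enc = [[1.0, 1.0]]               # encoded title tokens
c_g = attend(h_t, graph_enc)           # graph context vector
c_s = attend(h_t, title_enc)           # title context vector
context = h_t + c_g + c_s              # concatenation, as in attention decoders
```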

SLIDE 17

Copy

Entity names can be copied from the input graph.
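A generic copy mechanism of this kind can be sketched as mixing a vocabulary distribution with a distribution over entity names from the input, weighted by a copy gate p_copy. The exact gating used in GraphWriter may differ; the distributions below are invented toy values.

```python
# Mix a generation distribution with a copy distribution over entities.
def mix_distributions(p_copy, vocab_dist, entity_dist):
    words = set(vocab_dist) | set(entity_dist)
    return {w: (1 - p_copy) * vocab_dist.get(w, 0.0)
               + p_copy * entity_dist.get(w, 0.0)
            for w in words}

vocab = {"the": 0.6, "model": 0.4}          # generated from the vocabulary
entities = {"graph transformer": 1.0}       # copied from the input graph
final = mix_distributions(0.5, vocab, entities)
```

Because both inputs are proper distributions, the mixture still sums to one, and out-of-vocabulary entity names receive probability mass they could never get from generation alone.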

SLIDE 18

Experiments

  • Evaluation Metrics
  • Human evaluation
  • Grammar
  • Fluency
  • Coherence
  • Informativeness
  • Automatic metrics
  • BLEU
  • METEOR
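As a reminder of what the automatic metrics measure, BLEU's core ingredient is modified n-gram precision; a minimal unigram-only sketch (full BLEU also uses higher-order n-grams, clipping across multiple references, and a brevity penalty):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision of candidate against one reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    clipped = sum(min(c, ref[w]) for w, c in cand.items())
    return clipped / sum(cand.values())

p = unigram_precision("the model generates text",
                      "the model generates abstracts")
```

Here three of the four candidate words appear in the reference, giving a precision of 0.75.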
SLIDE 19

Baselines

  • GAT: PReLU activations stacked between 6 self-attention layers
  • EntityWriter: uses only entities and the title (no graph)
  • Rewriter: uses only the document title
SLIDE 20

Does Knowledge Help?

SLIDE 21

Conclusion

  • Propose a new graph transformer encoder that applies the successful sequence transformer to graph-structured inputs
  • Provide a large dataset of knowledge graphs paired with scientific texts for further study

SLIDE 22

Thanks!