Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning



  1. Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning. Pengda Qin, Weiran Xu and William Wang (BUPT)

  2. Outline • Motivation • Algorithm • Experiments • Conclusion

  3. Outline • Motivation • Algorithm • Experiments • Conclusion

  4. Relation Extraction (diagram): a classifier turns a plain text corpus (unstructured info) into entity-relation triples (structured info). Some relation types come with a labeled dataset; others do not.

  5. Distant Supervision: "If two entities participate in a relation, any sentence that contains those two entities might express that relation." (Mintz et al., 2009)

  6. Distant Supervision example. Data (x): <Belgium, Nijlen>; Label (y): /location/contains. From the unlabeled target corpus, every sentence containing both entities is collected into a sentence bag under the relation label /location/contains: 1. "Nijlen is a municipality located in the Belgian province of Antwerp." 2. …… 3. ……
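Under the Mintz assumption above, distant supervision reduces to aligning knowledge-base triples with raw text. A minimal sketch of that alignment, following the slide's <Belgium, Nijlen> example; the toy `kb`, `corpus`, and `build_sentence_bags` helper are hypothetical illustrations, not the authors' code:

```python
from collections import defaultdict

# Toy knowledge base: (head entity, tail entity) -> relation (assumed data).
kb = {("Belgium", "Nijlen"): "/location/contains"}

# Unlabeled target corpus; the second sentence mentions both entities but
# does not actually express /location/contains (a wrong label).
corpus = [
    "Nijlen is a municipality in the province of Antwerp, Belgium.",
    "He drove from Nijlen across Belgium in a single afternoon.",
]

def build_sentence_bags(kb, corpus):
    """Collect every sentence mentioning both entities of a KB triple.
    Distant supervision labels the whole bag with the KB relation,
    even though some sentences in it may not express that relation."""
    bags = defaultdict(list)
    for (head, tail), relation in kb.items():
        for sentence in corpus:
            if head in sentence and tail in sentence:
                bags[(head, tail, relation)].append(sentence)
    return dict(bags)
```

Running `build_sentence_bags(kb, corpus)` yields one bag of two sentences, only the first of which truly expresses the relation; this is exactly the noise the rest of the talk attacks.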

  7. Wrong Labeling
  • Within-sentence-bag level: Hoffmann et al., ACL 2011; Surdeanu et al., EMNLP 2012; Zeng et al., EMNLP 2015; Lin et al., ACL 2016.
  • Entity-pair level: none.

  8. Wrong Labeling at the entity-pair level: Place_of_Death (William O'Dwyer, New York City). i. "Some New York City mayors – William O'Dwyer, Vincent R. Impellitteri and Abraham Beame – were born abroad." ii. "Plenty of local officials have, too, including two New York City mayors, James J. Walker, in 1932, and William O'Dwyer, in 1950." Neither sentence expresses the place of death, so the distant label is wrong.

  9. Wrong Labeling. Most entity pairs have only a few sentences, and many entity pairs have repetitive sentences. (Pie chart: 1 sentence 55%, 2 sentences 32%, other 4%.)

  10. Outline • Motivation • Algorithm • Experiments • Conclusion

  11. Overview (diagram): a false-positive indicator converts the DS dataset into a cleaned dataset by moving false positives out of the positive set and into the negative set.

  12. Requirements for the false-positive indicator: it must work at the sentence level, without supervised information, as a general-purpose offline process; it learns a policy to denoise the training data.

  13. Overview (diagram): a policy-based agent plays the role of the false-positive indicator. The agent takes actions (remove or retain) on the positive set; a classifier is trained on the resulting cleaned dataset, and its performance supplies the reward used to train the agent.
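The agent/classifier interaction on this slide can be illustrated with a toy loop. Here `ToyAgent`, the score-valued "sentences", and `toy_f1` are stand-ins for the real policy network and relation classifier, so this is a sketch of the control flow only, not the paper's implementation:

```python
class ToyAgent:
    """Stand-in for the policy-based agent: removes sentences whose
    (hypothetical) confidence score falls below a threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def should_remove(self, score):
        return score < self.threshold

    def update(self, reward):
        pass  # a real agent would take a policy-gradient step here


def toy_f1(cleaned_positives):
    """Stand-in for training a classifier and measuring validation F1."""
    if not cleaned_positives:
        return 0.0
    return sum(cleaned_positives) / len(cleaned_positives)


def denoise_epochs(agent, positives, n_epochs=2, alpha=100.0):
    """Each epoch: the agent filters the positive set, the classifier is
    'retrained' on the cleaned data, and the reward is the scaled F1
    change between adjacent epochs. (In the paper, removed sentences are
    additionally moved into the negative set.)"""
    prev_f1, rewards = 0.0, []
    for _ in range(n_epochs):
        cleaned = [s for s in positives if not agent.should_remove(s)]
        f1 = toy_f1(cleaned)
        rewards.append(alpha * (f1 - prev_f1))
        agent.update(rewards[-1])
        prev_f1 = f1
    return rewards
```

With `denoise_epochs(ToyAgent(), [0.9, 0.8, 0.2])`, the first epoch earns a large positive reward for dropping the noisy sentence, and later epochs earn roughly zero once the filtering is stable.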

  14. Deep Reinforcement Learning
  • State: the current sentence vector, plus the average vector of previously removed sentences.
  • Action: remove or retain.
  • Reward: ???
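The state and action just described can be sketched numerically. The vector dimensions and the single logistic policy layer (`W`, `b`) below are illustrative assumptions, not the architecture used in the paper:

```python
import numpy as np

def make_state(sent_vec, removed_vecs):
    """State = current sentence vector concatenated with the average
    vector of previously removed sentences (zeros if none removed yet)."""
    if removed_vecs:
        avg = np.mean(removed_vecs, axis=0)
    else:
        avg = np.zeros_like(sent_vec)
    return np.concatenate([sent_vec, avg])

def sample_action(state, W, b, rng):
    """Stochastic binary action, remove (True) or retain (False),
    drawn from a logistic policy over the state."""
    p_remove = 1.0 / (1.0 + np.exp(-(W @ state + b)))
    return rng.random() < p_remove, p_remove
```

With zero weights the policy is maximally uncertain (`p_remove = 0.5`); training shifts `W` and `b` so that likely false positives get high removal probability.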

  15. Deep Reinforcement Learning
  • Each relation type has its own agent.
  • Sentence-level data: positives are the distantly-supervised positive sentences; negatives are sampled from other relations.
  • The data is split into a training set and a validation set.

  16. Deep Reinforcement Learning (diagram): in each epoch i, the RL agent splits the relation's noisy dataset into a removed part and a cleaned part; a relation classifier trained on the cleaned dataset yields a validation F1 for that epoch. The reward compares adjacent epochs, R_i = α(F1_i − F1_{i−1}), and removed sentences are moved into the noisy/negative part.

  17. Reward. Desired properties: accurate, steady, fast, obvious. (Diagram: false positives move from the positive set to the negative set.)

  18. Reward (diagram): in epoch i, a relation classifier is trained on the current positive and negative sets (each still containing false positives); its F1 on the validation set is then calculated to produce the reward.

  19. Outline • Motivation • Algorithm • Experiments • Conclusion

  20. Evaluation on a Synthetic Noise Dataset
  • Dataset: SemEval-2010 Task 8
  • True positives: Cause-Effect
  • False positives: other relation types
  • True positives + false positives: 1,331 samples

  21. Evaluation on a Synthetic Noise Dataset: 200 false positives among 1,331 samples. (Chart: F1 score vs. epoch; annotations give false positives removed / total sentences removed: 179/260, 180/279, 195/308, 197/339, 198/388; F1 ranges from roughly 0.645 to 0.685.)

  22. Evaluation on a Synthetic Noise Dataset: 0 false positives among 1,331 samples. (Chart: F1 score vs. epoch; annotations give false positives removed / total sentences removed: 0/32, 0/59, 0/121, 0/150, 0/258; F1 ranges from roughly 0.68 to 0.75.)

  23. Distant Supervision
  • Dataset: Riedel et al., 2010 (http://iesl.cs.umass.edu/riedel/ecml/)
  • CNN+ONE, PCNN+ONE: "Distant supervision for relation extraction via piecewise convolutional neural networks" (Zeng et al., 2015)
  • CNN+ATT, PCNN+ATT: "Neural relation extraction with selective attention over instances" (Lin et al., 2016)

  24. Distant Supervision, CNN-based results. (Precision-recall curves for CNN+ONE, CNN+ONE_RL, CNN+ATT, CNN+ATT_RL; recall from 0 to 0.4, precision from 0.4 to 1.0.)

  25. Distant Supervision, PCNN-based results. (Precision-recall curves for PCNN+ONE, PCNN+ONE_RL, PCNN+ATT, PCNN+ATT_RL; recall from 0 to 0.4, precision from 0.4 to 1.0.)

  26. Outline • Motivation • Algorithm • Experiments • Conclusion

  27. Conclusion
  • We propose a deep reinforcement learning method for robust distant supervision relation extraction.
  • Our method is model-agnostic.
  • Our method boosts the performance of recently proposed neural relation extractors.

  28. Thank you! Q&A
