On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference


  1. On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference Yonatan Belinkov*, Adam Poliak*, Benjamin Van Durme, Stuart Shieber, Alexander Rush *SEM, Minneapolis, MN, June 7, 2019

  2. Co-Authors Yonatan Belinkov Adam Poliak Benjamin Van Durme Alexander Rush Stuart Shieber

  3. Background

  4. Natural Language Inference Premise: The brown cat ran Hypothesis: The animal moved

  5. Natural Language Inference Premise: The brown cat ran Hypothesis: The animal moved entailment / neutral / contradiction

  9. *SEM 2018

  10. Hypothesis Only NLI

  11. Hypothesis Only NLI Hypothesis: A woman is sleeping

  12. Hypothesis Only NLI Premise: Hypothesis: A woman is sleeping

  13. Hypothesis Only NLI Premise: Hypothesis: A woman is sleeping entailment / neutral / contradiction
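
A hypothesis-only model must pick a label without ever reading the premise. As a toy illustration (not the paper's actual model, which is a trained sentence encoder), even a crude keyword rule over the hypothesis can exploit annotation artifacts; Gururangan et al. (2018) and Poliak et al. (2018) report that words like "sleeping" correlate strongly with contradiction in SNLI. The cue list below is illustrative, not taken from either paper:

```python
# Toy hypothesis-only "model": it never sees the premise.
# The cue set is a hand-picked illustration of SNLI indicator words.
CONTRADICTION_CUES = {"sleeping", "nobody", "no", "empty", "alone"}

def hypothesis_only_predict(hypothesis: str) -> str:
    tokens = set(hypothesis.lower().split())
    if tokens & CONTRADICTION_CUES:
        return "contradiction"
    return "entailment"  # fall back to a common label

print(hypothesis_only_predict("A woman is sleeping"))  # contradiction
print(hypothesis_only_predict("The animal moved"))     # entailment
```

That such a premise-blind rule gets any traction at all is exactly the bias the talk is about.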

  15. SNLI Results

  16. A woman is sleeping

  17. Premises: Hypothesis: A woman is sleeping

  18. Premises: A woman sings a song while playing piano Hypothesis: A woman is sleeping

  19. Premises: This woman is laughing at her baby shower Hypothesis: A woman is sleeping

  20. Premises: A woman with glasses is playing jenga Hypothesis: A woman is sleeping

  21. Why is she sleeping?

  22. Studies eliciting norming data are prone to repeated responses across subjects (see McRae et al. (2005) and the discussion in §2 of Zhang et al. (2017), Ordinal Common-sense Inference)

  23. Problem: Hypothesis-only biases mean that models may not learn the true relationship between premise and hypothesis

  24. How to handle such biases?

  27. Strategies for dealing with dataset biases
  ● Construct new datasets (Sharma et al. 2018)
    ○ $$$
    ○ More bias
  ● Filter “easy” examples (Gururangan et al. 2018)
    ○ Hard to scale
    ○ May still have biases (see SWAG → BERT → HellaSWAG)
  ● Forgo datasets with known biases
    ○ Not all bias is bad
    ○ Biased datasets may have other useful information

  28. Our solution: Design architectures that facilitate learning less biased representations

  29. Adversarial Learning to the Rescue

  30. NLI Model Components f – encoder, g – classifier; inputs: premise p, hypothesis h

  31. Baseline NLI Model
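
The f/g decomposition on the slide can be sketched numerically. This is a minimal stand-in, assuming mean-pooled word vectors and a single linear layer per component; the paper's actual encoders are learned neural networks, and every name and dimension below is a toy choice:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID, N_CLASSES = 8, 4, 3   # toy sizes; 3 labels: ent / neutral / contra

def f(sentence, W):
    """Encoder f: mean-pool word vectors, then a tanh projection."""
    return np.tanh(sentence.mean(axis=0) @ W)

def g(p_vec, h_vec, V):
    """Classifier g: concatenate both sentence vectors, map to 3 logits."""
    return np.concatenate([p_vec, h_vec]) @ V

W = rng.normal(size=(EMB, HID))            # shared encoder weights
V = rng.normal(size=(2 * HID, N_CLASSES))

premise = rng.normal(size=(5, EMB))        # 5 "word" embeddings
hypothesis = rng.normal(size=(3, EMB))

logits = g(f(premise, W), f(hypothesis, W), V)
pred = int(np.argmax(logits))              # index into the 3 NLI labels
```

Both adversarial methods below intervene on the hypothesis side of exactly this pipeline.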

  32. Method 1 – Adv. Hypothesis-Only Classifier

  34. Method 1 – Adv. Hypothesis-Only Classifier Reverse gradients: penalize the hypothesis encoder if the hypothesis-only classifier does well
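
Gradient reversal (in the style of Ganin & Lempitsky's gradient reversal layer) can be sketched without a full autograd framework: the forward pass is the identity, and the backward pass flips and scales the adversary's gradient before it reaches the hypothesis encoder. The function names and the `lam` hyperparameter name here are illustrative:

```python
import numpy as np

def grl_forward(x):
    # Identity in the forward pass: the hypothesis-only classifier sees
    # the encoder's representation unchanged.
    return x

def grl_backward(grad, lam=1.0):
    # Backward pass: flip the sign (and scale by lam), so the encoder's
    # update *hurts* the adversary. The encoder is thereby pushed toward
    # representations from which the label cannot be read off h alone.
    return -lam * grad

h_repr = np.array([0.5, -1.2, 0.3])
adv_grad = np.array([0.1, 0.4, -0.2])      # dL_adv / dh from the classifier

assert np.allclose(grl_forward(h_repr), h_repr)
enc_grad = grl_backward(adv_grad, lam=2.0)  # [-0.2, -0.8, 0.4]
```

Larger `lam` means a stronger penalty on hypothesis-only predictability.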

  35. Method 2 – Adv. Training Examples

  36. Method 2 – Adv. Training Examples Perturb training examples ● Randomly swap premises (p → p’) ● Reverse gradients into hypothesis encoder
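
The premise-swapping perturbation can be sketched with plain Python. Function and field names are illustrative, not from the paper's code: with some probability an example's premise is replaced by one sampled from the rest of the data, while its hypothesis and label stay fixed (the gradient-reversal part is as in Method 1):

```python
import random

def swap_premises(examples, p=0.5, seed=0):
    """Randomly replace premises with premises drawn from other examples."""
    rng = random.Random(seed)
    pool = [ex["premise"] for ex in examples]
    perturbed = []
    for ex in examples:
        ex = dict(ex)                      # keep the original example intact
        if rng.random() < p:
            ex["premise"] = rng.choice(pool)
        perturbed.append(ex)
    return perturbed

data = [
    {"premise": "The brown cat ran",
     "hypothesis": "The animal moved", "label": "entailment"},
    {"premise": "A woman sings a song while playing piano",
     "hypothesis": "A woman is sleeping", "label": "contradiction"},
]
out = swap_premises(data, p=1.0)   # p=1.0: every premise is resampled
```

Since the swapped premise p’ usually no longer supports the label, any signal the model still extracts must be coming from the hypothesis alone, which is what the reversed gradients penalize.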

  37. Results & Analysis

  38. What happens to model performance?

  39. Degradation in domain

  41. Are biases removed?

  42. Hidden biases - Adversarial Classifier

  45. Hidden biases - Adversarial Data

  47. What happens to specific biases?

  48. Indicator Words Gururangan et al. (NAACL 2018), Poliak et al. (*SEM 2018)

  49. Decrease in correlation with contradiction Relative improvement when predicting contradiction

  50. What is this good for?

  51. Are less biased models more transferable?

  52. ACL 2019

  53. Method 1 – Adv. Hypothesis-Only Classifier

  54. Method 2 – Adv. Training Examples

  55. Conclusions
  ● Adversarial learning may help combat hypothesis-side biases in NLI
  ● Applicable to other tasks with one-sided biases: reading comprehension, visual question answering, etc.

  56. SiVL 2019

  57. Conclusions
  ● Adversarial learning may help combat hypothesis-side biases in NLI
  ● Applicable to other tasks with one-sided biases
  ● May reduce the amount of bias and improve transferability
  ● But, the methods should be handled with care
    ○ Not all bias may be removed
    ○ The goal matters: some bias may be helpful in certain scenarios
  ● Acknowledgements
