
Attention and its (mis)interpretation (Danish Pruthi) - PowerPoint PPT Presentation

Attention and its (mis)interpretation
Danish Pruthi

Acknowledgements: Mansi Gupta, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton

Outline: 1. What is attention mechanism? 2. Attention-as-explanations 3. Manipulating attention weights 4. Results and discussion 5. Conclusion


  1. Sequence-to-sequence Tasks

     Task              | Example
     Bigram Flipping   | {w1, w2, …, w2n-1, w2n} → {w2, w1, …, w2n, w2n-1}
     Sequence Copying  | {w1, w2, …, wn-1, wn} → {w1, w2, …, wn-1, wn}
     Sequence Reversal | {w1, w2, …, wn-1, wn} → {wn, wn-1, …, w2, w1}
     English-German MT | This is an example. → Dies ist ein Beispiel.
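For concreteness, here is a minimal Python sketch of how such synthetic input/output pairs could be generated (the function names and toy vocabulary are illustrative, not from the talk):

    import random

    def bigram_flip(seq):
        # (w1, w2, ..., w_{2n-1}, w_{2n}) -> (w2, w1, ..., w_{2n}, w_{2n-1})
        out = []
        for i in range(0, len(seq) - 1, 2):
            out += [seq[i + 1], seq[i]]
        return out

    def sequence_copy(seq):
        # (w1, ..., wn) -> (w1, ..., wn)
        return list(seq)

    def sequence_reverse(seq):
        # (w1, ..., wn) -> (wn, ..., w1)
        return seq[::-1]

    vocab = list(range(4, 30))           # toy token ids (0-3 left for specials)
    src = random.choices(vocab, k=10)    # random even-length source sequence
    print(src, "->", bigram_flip(src))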

  2. Manipulating Attention
 • Let I be the set of impermissible tokens, and m the corresponding mask (m_i = 1 if token i ∈ I, else 0).
 • For any task-specific loss function, a penalty term is added.
 • The penalty term penalizes the model for allocating attention to impermissible tokens.

  3. Manipulating Attention
 • For attention weights α and mask m, the penalized objective takes the form L' = L − λ log(1 − mᵀα).
 • Here 1 − mᵀα is the total attention mass on all the "allowed" tokens.
 • λ is the penalty coefficient that modulates attention on impermissible tokens.
 • Side note: in parallel work, Wiegreffe and Pinter (2019) propose a different penalty term.
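A minimal PyTorch sketch of this objective, assuming the penalty is −λ log of the allowed attention mass as the slide annotations describe (all names here are mine):

    import torch

    def penalized_loss(task_loss, attn, impermissible_mask, lam=1.0, eps=1e-12):
        # attn:               (batch, seq_len) attention weights; rows sum to 1
        # impermissible_mask: (batch, seq_len); 1.0 on impermissible tokens, else 0.0
        # total attention mass left on "allowed" tokens: 1 - m^T alpha
        allowed_mass = 1.0 - (attn * impermissible_mask).sum(dim=-1)
        # the penalty grows as the allowed mass shrinks; lam modulates its strength
        penalty = -lam * torch.log(allowed_mass.clamp_min(eps)).mean()
        return task_loss + penalty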

  4. Manipulating Attention
 • Multiple attention heads: the penalty is applied to the mean attention over a set of heads.
 • Caveat: since only the mean is constrained, one of the attention heads can still assign a large amount of attention to impermissible tokens, as the sketch below illustrates.
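A small numerical illustration of that caveat (hypothetical numbers, not from the talk): with 12 heads, one head can place all of its attention on an impermissible token while the head-averaged mass, which is what the penalty sees, stays small.

    import torch

    heads, seq_len = 12, 8
    attn = torch.full((heads, seq_len), 1.0 / seq_len)       # 11 uniform heads
    attn[0] = torch.tensor([1.0] + [0.0] * (seq_len - 1))    # head 0: all mass on token 0

    impermissible = torch.zeros(seq_len)
    impermissible[0] = 1.0                                   # token 0 is impermissible

    per_head_mass = (attn * impermissible).sum(dim=-1)       # head 0: 1.0, others: 0.125
    print(per_head_mass.mean())                              # ~0.20, so the penalty stays mild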

  5. Outline 1. What is attention mechanism? 2. Attention-as-explanations 3. Manipulating attention weights 4. Results and discussion 5. Conclusion

  6. BiLSTM + Attention [Diagram: inputs x1 … xn feed a biLSTM; attention weights α1 … αn combine the hidden states into a context vector that yields the prediction y]
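A runnable sketch of such a model, assuming a simple learned scorer over the biLSTM states (hyperparameters and names are illustrative, not the talk's exact model):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BiLSTMAttention(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hidden=64, n_classes=2):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.scorer = nn.Linear(2 * hidden, 1)           # one score per position
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                                # x: (batch, seq_len)
            h, _ = self.lstm(self.emb(x))                    # (batch, seq_len, 2*hidden)
            alpha = F.softmax(self.scorer(h).squeeze(-1), dim=-1)   # weights α1 … αn
            context = (alpha.unsqueeze(-1) * h).sum(dim=1)          # attention-weighted sum
            return self.out(context), alpha                  # prediction y and weights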

  7. Embedding + Attention (no recurrent connections) [Diagram: attention weights α1 … αn are computed directly over the embeddings of x1 … xn to yield the prediction y]

  8. Transformer-based Model (Devlin et al.)

  9. Restricted BERT [Diagram: predictions from the original BERT, where [CLS] attends to all tokens ("Movie", "Good", [SEP]) across layers L0 … L12, vs. a restricted BERT in which attention involving impermissible tokens is blocked, leaving [CLS] to rely on permissible ones (e.g., "Delhi", "Capital")]
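One common way to implement such a restriction is an additive attention mask; the sketch below shows the general mechanics and is an assumption on my part, not necessarily the authors' exact implementation:

    import torch

    def restrict_attention_scores(scores, impermissible):
        # scores:        (batch, heads, q_len, k_len) raw attention logits
        # impermissible: (batch, k_len) bool; True where attention must be blocked
        # -inf logits become exactly zero weight after the softmax
        mask = impermissible[:, None, None, :]               # broadcast over heads, queries
        return scores.masked_fill(mask, float("-inf"))

Applied at every layer L0 … L12, this keeps any query, including [CLS], from reading the impermissible tokens, so their content cannot reach the final [CLS] representation.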

  10. Occupation Prediction

      Attention type        | Accuracy (%) | Attention mass (%)
      Original              | 99.7         | 97.2
      Manipulated (λ = 0.1) | 97.1         | 0
      Manipulated (λ = 1.0) | 97.4         | 0
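Taking "attention mass" to mean the share of attention falling on impermissible tokens, the metric in that column could be computed as follows (naming is mine; tensors as in the earlier sketches):

    def impermissible_attention_mass(alpha, impermissible_mask):
        # alpha: (batch, seq_len) attention weights; mask: 1.0 on impermissible tokens
        # returns the average fraction of attention spent on impermissible tokens
        return (alpha * impermissible_mask).sum(dim=-1).mean()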

  11. Classification Tasks

  12. Alternate mechanisms: Gender Identification
 • At inference time, what if we hard-set the corresponding attention mass to ZERO?
 [Bar chart with two accuracy bars: 50% and 100%]
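A sketch of that inference-time intervention, assuming the surviving weights are renormalized to sum to 1 (names are mine):

    import torch

    def zero_out_attention(alpha, impermissible_mask, eps=1e-12):
        # alpha: (batch, seq_len) attention weights
        # hard-set the weights on impermissible tokens to zero, then renormalize
        alpha = alpha * (1.0 - impermissible_mask)
        return alpha / alpha.sum(dim=-1, keepdim=True).clamp_min(eps)

Even with these weights zeroed, a recurrent encoder can have leaked the gender information into neighboring hidden states, so accuracy need not fall to chance.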

  13. Bigram Flip

      Attention type | Accuracy (%) | Attention mass (%)
      Original       | 100          | 94.5
      None           | 96.5         | 0
      Uniform        | 97.9         | 5.2
      Manipulated    | 99.9         | 0.4

  14. Bigram Flip [Attention heatmaps for the Original and Manipulated models, plus a manipulated model trained with a different seed]

  15. Sequence Copy

      Attention type | Accuracy (%) | Attention mass (%)
      Original       | 100          | 98.8
      None           | 84.1         | 0
      Uniform        | 93.8         | 5.2
      Manipulated    | 99.9         | 0.01
