Algorithms for NLP (11-711, Fall 2019), Lecture 26: Computational Ethics. Yulia Tsvetkov

  1. Algorithms for NLP 11-711, Fall 2019, Lecture 26: Computational Ethics. Yulia Tsvetkov

  2. What Does NLP Have To Do With Ethics? ● Applications ○ Machine Translation ○ Information Retrieval ○ Question Answering ○ Dialogue Systems ○ Information Extraction ○ Summarization ○ Sentiment Analysis ○ ...

  3. Language, People, and the Web “The common misconception is that language has to do with words and what they mean. It doesn’t. It has to do with people and what they mean.” -- Herbert H. Clark & Michael F. Schober, 1992

  4. Both Ethics and NLP are Interdisciplinary Fields ● Philosophy ● Sociology ● Psychology ● Linguistics ● Sociolinguistics ● Social psychology ● Computational Social Science ● Machine Learning

  5. What is Ethics? “Ethics is a study of what are good and bad ends to pursue in life and what it is right and wrong to do in the conduct of life. It is therefore, above all, a practical discipline. Its primary aim is to determine how one ought to live and what actions one ought to do in the conduct of one’s life.” -- Introduction to Ethics, John Deigh

  6. What is Ethics? It’s the good things. It’s the right things.

  7. What is Ethics? It’s the good things. It’s the right things. How simple is it to define what’s good and what’s right?

  8. The Trolley Dilemma Should you pull the lever to divert the trolley? [Image from Wikipedia]

  9. The Chicken Dilemma [images: a rooster and a hen] Ethical?

  10. The Chicken Dilemma ➔ Ethics is an inner guide: the moral principles and values of people and society ➔ There are grey areas; there are often no binary answers ➔ Ethics changes over time with the values and beliefs of people ➔ Legal ≠ ethical

  11. Ethics ≠ Law ● Illegal+immoral: ● Legal+immoral: ● Illegal+moral: ● Legal+moral:

  12. Ethics ≠ Law ● Illegal+immoral: murder ● Legal+immoral: ● Illegal+moral: ● Legal+moral:

  13. Ethics ≠ Law ● Illegal+immoral: murder ● Legal+immoral: cheating on a spouse ● Illegal+moral: ● Legal+moral:

  14. Ethics ≠ Law ● Illegal+immoral: murder ● Legal+immoral: cheating on a spouse ● Illegal+moral: civil disobedience ● Legal+moral: eating ice cream

  15. Ethics ≠ Law ● Illegal+immoral: murder ○ capital punishment ● Legal+immoral: cheating on a spouse ○ cancelling Game of Thrones ● Illegal+moral: civil disobedience ○ assassination of a dictator ● Legal+moral: eating an ice cream ○ eating the last ice cream in the freezer

  16. Ethical Considerations are Time-Dependent

  17. We Cannot Foresee All Possible Uses of Technology

  18. Working on Ethical Issues in AI ● Ethics is hard even to define; it is subjective and it changes over time. Should we then be trying to quantify and evaluate ethics in AI? ○ It is another problem with an ill-defined answer ■ There is still some shared notion of good and bad ■ Not everyone agrees on all examples ■ But people do agree on some examples ■ And judgments correlate across people ○ Complex NLP problems are also hard to quantify and evaluate ■ Summarization, QA, dialogue, speech synthesis

  19. Let’s Train an IQ Classifier ● Intelligence Quotient (IQ): a number used to express the apparent relative intelligence of a person

  20. An IQ Classifier Let’s train a classifier to predict people’s IQ from their photos. ● Who could benefit from such a classifier?

  21. An IQ Classifier Let’s train a classifier to predict people’s IQ from their photos. ● Who could benefit from such a classifier? ● Assume the classifier is 100% accurate. Who could be harmed by such a classifier?

  22. An IQ Classifier Let’s train a classifier to predict people’s IQ from their photos. ● Who could benefit from such a classifier? ● Who can be harmed by such a classifier? ● Suppose our test results show 90% accuracy

  23. An IQ Classifier Let’s train a classifier to predict people’s IQ from their photos. ● Who could benefit from such a classifier? ● Who can be harmed by such a classifier? ● Suppose our test results show 90% accuracy ○ Evaluation reveals that white females have 95% accuracy ○ People with blond hair under the age of 25 have only 60% accuracy
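The gap between the aggregate 90% figure and the subgroup numbers above is exactly what a disaggregated evaluation is meant to expose. Below is a minimal sketch of reporting accuracy per demographic subgroup alongside the overall score; the labels, group assignments, and the accuracy_by_group helper are hypothetical illustrations, not part of the lecture.

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Report overall accuracy plus accuracy within each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# Hypothetical gold labels and predictions: a decent-looking aggregate score
# can hide a large accuracy gap between subgroups A and B.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(accuracy_by_group(y_true, y_pred, groups))
# {'overall': 0.7, 'A': 1.0, 'B': 0.4}
```

Reporting only the aggregate would make this look like a reasonably good classifier; the per-group breakdown is what reveals who actually bears the errors.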

  24. An IQ Classifier Let’s train a classifier to predict people’s IQ from their photos. ● Who could benefit from such a classifier? ● Who can be harmed by such a classifier? ● Are there biases in the data?

  25. An IQ Classifier Let’s train a classifier to predict people’s IQ from their photos. ● Who could benefit from such a classifier? ● Who can be harmed by such a classifier? ● Are there biases in the data? ● What personal data was used as training data? Privacy concerns? ● Who is responsible? ○ Researcher/developer? Reviewer? University? Society?

  26. What’s the Difference?

  27. AI and People: applications are pervasive in our daily life!

  28. Learn to Assess Computational Systems Adversarially ● Who could benefit from such a technology? ● Who can be harmed by such a technology? ● Is the training data representative? ● Could sharing this data have a major effect on people’s lives? ● What are the confounding variables and corner cases to control for? ● Does the system optimize for the “right” objective? ● Could prediction errors have a major effect on people’s lives?

  29. Learn to Assess Computational Systems Adversarially ● Who could benefit from your technology? ● Who can be harmed by your technology? ● Is your training data representative? ● Could sharing your data have a negative effect on people’s lives? ● What confounding variables and corner cases do you need to control for? ● Does your system optimize for the “right” objective? ● Could prediction errors of your technology have a major effect on people’s lives?

  30. Topics in the Intersection of Ethics and NLP ● Misrepresentation and human biases in NLP data and models ● Hate speech and civility in online communication ● Privacy and security ● Democracy and the language of manipulation: bias in narratives, censorship, fake news, targeted content ● NLP for social good: low-resource NLP, NLP for disaster response

  31. Topics in the Intersection of Ethics and NLP ● Bias and fairness concerns ○ Is my NLP model capturing social stereotypes? ○ Are my classifier’s predictions fair? ● Dual-use NLP applications ○ E.g., persuasive language generation ■ in targeted advertising, say, in payday loan ads? ● Privacy concerns ○ Prediction of demographic factors (gender, age, etc.) ○ Sexual orientation prediction ● Socially beneficial applications ○ Hate speech detection ○ Monitoring disease outbreaks, etc. ○ Psychological monitoring/counseling ○ Low-resource NLP ○ + many more
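One concrete way to probe the first sub-question ("Is my NLP model capturing social stereotypes?") is to compare how close target words sit to gendered anchor words in an embedding space, in the spirit of association tests over word embeddings. The sketch below uses made-up 3-dimensional vectors and hypothetical anchor word lists purely for illustration; a real audit would load pretrained embeddings (e.g., GloVe or word2vec) instead.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(emb, word, male=("he", "man"), female=("she", "woman")):
    """Positive score: word sits closer to the male anchors; negative: closer to the female anchors.

    emb: dict mapping word -> vector (in practice, loaded from pretrained embeddings).
    """
    m = np.mean([emb[w] for w in male], axis=0)
    f = np.mean([emb[w] for w in female], axis=0)
    return cosine(emb[word], m) - cosine(emb[word], f)

# Toy 3-d vectors purely for illustration; not real embedding values.
emb = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "man":      np.array([0.9, 0.2, 0.1]),
    "she":      np.array([0.1, 1.0, 0.0]),
    "woman":    np.array([0.2, 0.9, 0.1]),
    "engineer": np.array([0.8, 0.3, 0.5]),
    "nurse":    np.array([0.3, 0.8, 0.5]),
}
for w in ("engineer", "nurse"):
    print(w, round(gender_association(emb, w), 3))
```

A consistently positive or negative score for an occupation word suggests the embedding has absorbed a gendered association from its training text.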

  32. Misrepresentation and Bias

  33. Stereotypes Which word is more likely to be used by a female? Giggle – Laugh (Preotiuc-Pietro et al. ’16)

  34. Stereotypes Which word is more likely to be used by a female? Giggle – Laugh (Preotiuc-Pietro et al. ’16)

  35. Stereotypes Which word is more likely to be used by a female? Brutal – Fierce (Preotiuc-Pietro et al. ’16)

  36. Stereotypes Which word is more likely to be used by a female? Brutal – Fierce (Preotiuc-Pietro et al. ’16)

  37. Stereotypes Which word is more likely to be used by an older person? Impressive – Amazing (Preotiuc-Pietro et al. ’16)

  38. Stereotypes Which word is more likely to be used by an older person? Impressive – Amazing (Preotiuc-Pietro et al. ’16)

  39. Stereotypes Which word is more likely to be used by a person of higher occupational class? Suggestions – Proposals (Preotiuc-Pietro et al. ’16)
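The giggle/laugh and suggestions/proposals contrasts come from associating word frequencies with author demographics. The sketch below scores such associations with smoothed log-odds between two groups of authors; the toy corpora are invented, and this simplified measure is only an approximation of the kind of analysis in Preotiuc-Pietro et al. (2016), not their exact method.

```python
from collections import Counter
import math

def log_odds_by_group(texts_a, texts_b, min_count=1):
    """Smoothed log-odds of each word appearing in group A vs. group B.

    Positive score: the word is relatively more frequent among group A authors.
    """
    counts_a = Counter(w for t in texts_a for w in t.lower().split())
    counts_b = Counter(w for t in texts_b for w in t.lower().split())
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    vocab = [w for w in set(counts_a) | set(counts_b)
             if counts_a[w] + counts_b[w] >= min_count]
    scores = {}
    for w in vocab:
        # add-one smoothing so words unseen in one group don't produce log(0)
        p_a = (counts_a[w] + 1) / (total_a + len(vocab))
        p_b = (counts_b[w] + 1) / (total_b + len(vocab))
        scores[w] = math.log(p_a / (1 - p_a)) - math.log(p_b / (1 - p_b))
    return scores

# Hypothetical mini-corpora standing in for posts by two author groups
group_a = ["that made me giggle", "giggle all day"]
group_b = ["that made me laugh", "laugh out loud"]
scores = log_odds_by_group(group_a, group_b)
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3])
```

Ranking words by this score surfaces the same kind of contrast shown in the slides: words used disproportionately by one author group float to the top, which is also exactly where social stereotypes can leak into a model trained on that text.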
