Ethical Machine Learning: Taking "Don't be Evil" Literally (Katharine Jarmul) - PowerPoint PPT Presentation



SLIDE 1

Ethical Machine Learning

Taking “Don’t be Evil” Literally
Katharine Jarmul | #QCONSP | Kjamistan.com

SLIDE 2

I Can’t Breathe: The Killing of Eric Garner

Joe Raedle/Getty Images

SLIDE 3

“Broken Windows” Policing

SLIDE 4

Disparate Impact

Pr(C = YES | X = 0) / Pr(C = YES | X = 1) ≤ τ = 0.8
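The ratio on this slide is the "80% rule": if the positive-outcome rate for the protected group (X = 0) is less than τ = 0.8 of the rate for the other group, disparate impact is indicated. A minimal sketch of that check (the function name and toy data are illustrative, not from the talk):

```python
import numpy as np

def disparate_impact(y_pred, sensitive, tau=0.8):
    """Ratio of positive-outcome rates between groups (the 80% rule).

    y_pred: binary predictions (1 = YES); sensitive: group labels (0/1).
    A ratio at or below tau signals potential disparate impact.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # Pr(C = YES | X = 0)
    rate_1 = y_pred[sensitive == 1].mean()  # Pr(C = YES | X = 1)
    ratio = rate_0 / rate_1
    return ratio, ratio <= tau

# Toy example: group 0 gets YES 40% of the time, group 1 gets YES 80%.
y_pred    = [1, 0, 0, 0, 1, 1, 1, 1, 1, 0]
sensitive = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
ratio, flagged = disparate_impact(y_pred, sensitive)
# ratio = 0.4 / 0.8 = 0.5, well below τ = 0.8, so this would be flagged.
```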

SLIDE 5

Predictive Policing: Runaway Feedback Loops

Ensign et al., 2017

SLIDE 6

If our models mimic current police behavior, are we creating a valid model?

SLIDE 7

If our models mimic social inequalities and prejudice, are we creating a valid model?

SLIDE 8

Are social inequalities and prejudice valid?

SLIDE 9

Breaking the Cycle: Determining if Your Data has Prejudice

SLIDE 10

FairTest: Evaluating Correlations to Sensitive Attributes

SLIDE 11

GenderShades: Creating Better Datasets

GenderShades.org

SLIDE 12

NLP: Looking at Word Vector Correlations

SLIDE 13

SLIDE 14

NLP: Google News Vectors

https://blog.kjamistan.com/embedded-isms-in-vector-based-natural-language-processing/
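The blog post linked above examines how gendered associations show up in the Google News vectors. The core measurement is a cosine-similarity projection onto a gender direction; a minimal sketch with tiny made-up vectors standing in for the real embeddings (which would be loaded with e.g. gensim's `KeyedVectors.load_word2vec_format`):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d embeddings; values are illustrative, not from Google News.
vecs = {
    "he":       np.array([ 1.0, 0.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.0, 0.2, 0.1]),
    "engineer": np.array([ 0.6, 0.5, 0.3, 0.0]),
    "nurse":    np.array([-0.6, 0.5, 0.3, 0.0]),
}

# Gender direction: he - she. Projecting a word onto it reveals
# which pole the word leans toward (positive = "he"-leaning).
gender = vecs["he"] - vecs["she"]
scores = {w: cosine(vecs[w], gender) for w in ("engineer", "nurse")}
# In these toy vectors, "engineer" scores positive and "nurse" negative,
# mirroring the kind of skew found in real news-trained embeddings.
```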

SLIDE 15

Debiasing Word Vectors

https://github.com/tolga-b/debiaswe (Bolukbasi, Chang, Zou, Saligrama and Kalai, 2016)
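The "neutralize" step of Bolukbasi et al.'s hard-debiasing method removes a word vector's component along the identified bias direction. A minimal sketch (simplified to a one-dimensional bias subspace; the repo linked above handles the full method):

```python
import numpy as np

def neutralize(v, direction):
    """Remove the component of v along a bias direction, so the
    debiased vector is orthogonal to that direction."""
    d = direction / np.linalg.norm(direction)  # unit-normalize
    return v - np.dot(v, d) * d

# Toy 3-d vectors; values are illustrative.
he       = np.array([ 1.0, 0.0, 0.2])
she      = np.array([-1.0, 0.0, 0.2])
engineer = np.array([ 0.6, 0.5, 0.3])

gender = he - she                     # 1-D bias subspace
debiased = neutralize(engineer, gender)
# After neutralizing, "engineer" has no projection onto the
# gender direction, while its other components are untouched.
```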

SLIDE 16

Modeling Fairness: Evaluating Models for Prejudice

SLIDE 17

Defining Fair

https://algorithmicfairness.wordpress.com/

SLIDE 18

Evaluating Fair

https://blog.godatadriven.com/fairness-in-ml/
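One evaluation criterion discussed at the link above is equal opportunity: the true positive rate should be (roughly) equal across sensitive groups. A minimal sketch of that per-group comparison (function name and toy data are illustrative):

```python
import numpy as np

def true_positive_rates(y_true, y_pred, sensitive):
    """Per-group TPR; equal opportunity asks these to be (roughly) equal."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    rates = {}
    for g in np.unique(sensitive):
        # Among actual positives in group g, what fraction were predicted 1?
        mask = (sensitive == g) & (y_true == 1)
        rates[int(g)] = float(y_pred[mask].mean())
    return rates

y_true    = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]
y_pred    = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
sensitive = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
rates = true_positive_rates(y_true, y_pred, sensitive)
# Group 0 TPR = 0.5 vs. group 1 TPR = 0.75: a gap worth investigating.
```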

SLIDE 19

NLP: Testing Bias

https://developers.googleblog.com/2018/04/text-embedding-models-contain-bias.html

SLIDE 20

Interpreting Our Models

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (Xu et al., 2016)

SLIDE 21

Radical Transparency: Promoting Conversation & Accountability

SLIDE 22

Talking Fair

https://www.fatml.org/

SLIDE 23

Acting Fair: Building Accountable Applications

https://2017.ind.ie/ethical-design/

SLIDE 24

Ethical Machine Learning: Taking a Logical Stance against Oppression

SLIDE 25

Ethical ML Takeaways

  • Doing “nothing” assumes prejudice and unfair treatment are valid
  • We need better data
    • Diverse data which better reflects the real world
    • Stop using datasets which are non-representative
  • We need built-in ethics-driven evaluation criteria
    • Scikit-learn disparate impact?
    • Scikit-learn equal odds / opportunity?
  • You can contribute
    • Open-source your work and datasets
    • Volunteer with the Algorithmic Justice League or a local organization
SLIDE 26

Thanks!

Questions?

  • Now?
  • Later?
  • @kjam
  • katharine@kiprotect.com