Bias and Fairness in AI/ML models
Swati Gupta, Assistant Professor
School of Industrial and Systems Engineering, Georgia Institute of Technology
October 25, 2018
Digital Data Flows Master Class: Emerging Technologies (Machine Learning)


SLIDE 1

Bias and Fairness in AI/ML models

Swati Gupta

Assistant Professor, School of Industrial and Systems Engineering, Georgia Institute of Technology. October 25, 2018. Digital Data Flows Master Class: Emerging Technologies.

SLIDE 2

Bias and Fairness in AI/ML models | Swati Gupta | Georgia Institute of Technology | Source for images: Google images, iStock.

Machine Learning Pipeline

Data → Machine Learning/AI → Data-driven decisions. What is the effect of these decisions on human well-being?

SLIDE 3

“Bias. When scientific or technological decisions are based on a narrow set of systemic, structural or social concepts and norms, the resulting technology can privilege certain groups and harm others.” – Nature comment

What is Bias/Fairness?




SLIDE 9

Outline of the talk

• Bias in the data, models and variables
• Fairness Metrics
  • Statistical measures
  • Equity measures
• Trolley Problem of Choice

SLIDE 10

Predictive Policing

“The application of analytical techniques to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions.” Heat map of drug arrests made [Lum, Isaac, 2016].

SLIDE 11

ML finds patterns in data



SLIDE 13

PredPol: crime type, time, location

Heat map of drug arrests made [Kristian Lum, William Isaac, 2016].

SLIDE 14

Not just about collection

We live in a biased society, so it’s inevitable that data collected about that society will be biased: inherent bias, test data, feedback, proxies…


SLIDE 16

Not just about collection

“We also found that setting the gender to female resulted in getting fewer instances of an ad related to high paying jobs than setting it to male.”

SLIDE 17

Not just about collection

We do not want such biases to propagate into systems that make life-changing decisions. Proxies: predicting crime using data on arrests, not on incidence of crime.
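The arrest-proxy feedback loop can be sketched in a few lines. The simulation below is purely illustrative (all rates, counts, and neighborhood names are invented): both neighborhoods have the same true crime rate, but patrols are allocated in proportion to past recorded arrests, so a biased historical record keeps reproducing itself.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true crime rate, but neighborhood A
# starts with more recorded arrests (a biased historical record).
true_crime_rate = {"A": 0.1, "B": 0.1}
arrests = {"A": 20, "B": 10}
patrols_per_day = 10

for day in range(100):
    snapshot = dict(arrests)  # allocate based on yesterday's records
    total = snapshot["A"] + snapshot["B"]
    for hood in ("A", "B"):
        # Feedback loop: patrols go where past arrests were recorded.
        patrols = round(patrols_per_day * snapshot[hood] / total)
        # Arrests can only be recorded where patrols are sent.
        for _ in range(patrols):
            if random.random() < true_crime_rate[hood]:
                arrests[hood] += 1

share_A = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Initial share of arrests in A: {20 / 30:.2f}")
print(f"Share after 100 days:          {share_A:.2f}")
```

Despite identical true crime rates, neighborhood A keeps receiving the majority of patrols and therefore of recorded arrests; the model "confirms" its own training data, which is the proxy problem Lum and Isaac describe.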

SLIDE 18

Outline of the talk

• Bias in the data, models and variables
• Fairness Metrics
  • Statistical measures
  • Equity measures
• Trolley Problem of Choice

SLIDE 19

Machine Learning Pipeline

Data → Machine Learning/AI → Data-driven decisions. What is the effect of these decisions on human well-being?

SLIDE 20

Classification


Binary decisions (YES/NO): hired for a job or not, will re-offend or not (prison), given a loan or not.

SLIDE 21

Source: ProPublica, May 2016.

Is it fair to achieve highest accuracy in classification?

Statistical Definitions of Fairness

SLIDE 22

COMPAS Risk Score: ProPublica


SLIDE 26

Source: Wikipedia.


Statistical Definitions of Fairness

Or is it fair to balance false positives across the groups? False negatives?
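These competing criteria are straightforward to compute. A minimal sketch, with toy labels and predictions that are entirely made up, of checking whether a classifier balances false positive and false negative rates across two groups:

```python
# Toy labels and predictions (entirely made up): 1 = "will re-offend".
def rates(y, y_hat):
    """Return (false positive rate, false negative rate)."""
    fp = sum(1 for t, p in zip(y, y_hat) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y, y_hat) if t == 1 and p == 0)
    neg = sum(1 for t in y if t == 0)
    pos = sum(1 for t in y if t == 1)
    return fp / neg, fn / pos

#           true labels               predictions
group_a = ([1, 1, 0, 0, 0, 1, 0, 0], [1, 0, 1, 1, 0, 1, 0, 0])
group_b = ([1, 1, 0, 0, 0, 1, 0, 0], [1, 1, 0, 0, 0, 1, 1, 0])

for name, (y, y_hat) in [("A", group_a), ("B", group_b)]:
    fpr, fnr = rates(y, y_hat)
    print(f"Group {name}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

In this toy data group A faces a false positive rate of 0.40 versus 0.20 for group B, even though both groups have identical true labels; this is the pattern of disparity ProPublica reported for COMPAS.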

SLIDE 27

In fact, different stakeholders might have different points of view.

SLIDE 28

Equity Metrics of Fairness

What about general decisions: how much loan to give? Where to place an emergency room? Where to schedule deliveries?

Is it fair to minimize total distance travelled by any group? [Diagram: Group A, Group B, distances to facility]



SLIDE 32

Equity Metrics of Fairness

Is it fair to minimize average distance travelled by any group (per person)? Group by race? Income? Insurance? [Diagram: Group A, Group B]
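The total-versus-average distinction matters when group sizes differ. A toy example (hypothetical distances, in miles, to a proposed facility site):

```python
# Hypothetical distances (miles) from each resident to a proposed
# emergency-room site. Group B is smaller but lives farther away.
group_a = [1, 2, 2, 3, 2, 1, 2, 3, 2, 2]  # 10 people nearby
group_b = [8, 9, 10]                      # 3 people far away

def total(distances):
    return sum(distances)

def average(distances):
    return sum(distances) / len(distances)

print(total(group_a), total(group_b))      # 20 vs 27: totals look close
print(average(group_a), average(group_b))  # 2.0 vs 9.0: per-person burden is 4.5x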

SLIDE 33

Source: Marsh and Schilling, 1993.

Equity Metrics of Fairness


SLIDE 34

Outline of the talk

• Bias in the data, models and variables
• Fairness Metrics
  • Statistical measures
  • Equity measures
• Trolley Problem of Choice

SLIDE 35

Which fairness do we want?

This has to be a collective decision that we consciously reach after a deeper dive into the application.

At least 50 ways to be fair

SLIDE 36

Sources: shutterstock; Lum & Isaac, 2016.

Which fairness do we want?


Social Scientist: Arrest data is not a good proxy for crime data.

SLIDE 37

Source: Barocas and Hardt, 2017.

Which fairness do we want?


Lawyer/Policy maker: Cannot use protected classes for making decisions.

Race (Civil Rights Act of 1964), Color (Civil Rights Act of 1964), Religion (Civil Rights Act of 1964), National Origin (Civil Rights Act of 1964), Citizenship (Immigration Reform and Control Act), Age (Age Discrimination in Employment Act of 1967), Pregnancy (Pregnancy Discrimination Act), Familial status (Civil Rights Act of 1968), Disability status (Rehabilitation Act of 1973; Americans with Disabilities Act of 1990), Veteran Status (Vietnam Era Veterans’ Readjustment Assistance Act of 1974; Uniformed Services Employment and Reemployment Rights Act), Genetic Information (Genetic Information Nondiscrimination Act)

Disparate Treatment vs. Disparate Impact


SLIDE 39

Source: Corbett-Davies et al., 2017.

Which fairness do we want?

Algorithm designer: awareness of protected classes can fix bias.

SLIDE 40

Sources: Kleinberg et al., 2017; Chouldechova, 2017.

Which fairness do we want?

Statistician: cannot have equal false positive rates, equal false negative rates, and calibration simultaneously (when base rates differ). COMPAS debate: Northpointe vs. ProPublica.
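The impossibility is easy to verify numerically. Below is a toy example (all scores and counts are hypothetical) of a risk score that is perfectly calibrated in both groups, yet produces different false positive and false negative rates at the same threshold, simply because the groups' base rates differ:

```python
from fractions import Fraction as F

# Hypothetical, perfectly calibrated risk scores: among people with
# score s, exactly a fraction s re-offend -- in BOTH groups.
# counts[group][score] = number of people assigned that score.
counts = {
    "A": {F(6, 10): 100, F(2, 10): 100},
    "B": {F(6, 10): 100, F(2, 10): 300},
}
threshold = F(1, 2)  # predict "will re-offend" when score > 0.5

for g, dist in counts.items():
    n = sum(dist.values())
    base_rate = sum(s * k for s, k in dist.items()) / n
    # Error rates implied by calibration plus thresholding:
    fp = sum((1 - s) * k for s, k in dist.items() if s > threshold)
    neg = sum((1 - s) * k for s, k in dist.items())
    fn = sum(s * k for s, k in dist.items() if s <= threshold)
    pos = sum(s * k for s, k in dist.items())
    print(f"Group {g}: base rate {float(base_rate):.2f}, "
          f"FPR {float(fp / neg):.3f}, FNR {float(fn / pos):.3f}")
```

Both groups are calibrated at every score, yet the false positive rates (1/3 vs. 1/7) and false negative rates (1/4 vs. 1/2) diverge because the base rates differ (0.40 vs. 0.30). This is the tension at the heart of the Northpointe vs. ProPublica debate.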

SLIDE 41

Source: ongoing work with Jalan, Ranade, Yang, Zhuang, 2018.

Which fairness do we want?

Optimizer: can at times have approximately fair solutions for multiple metrics together.

SLIDE 42

Source: hscmd.org/momcare-statistics/

Which fairness do we want?

Economists, behavioral scientists, humans-in-the-loop, …


SLIDE 44

Summary

• Bias in the data, models and variables: collection, feedback, proxies, test data, representation, …
• Fairness Metrics
  • Statistical measures: accuracy, false positive rate, true positive rate, calibration, …
  • Equity measures: general decisions, average metric, total metric, group choice, …
• Trolley Problem of Choice: it’s an inclusive story

Questions? swatig@gatech.edu, www.swatigupta.tech

Transparency, Interpretability, Gameability?