

SLIDE 1

Vincent Lequertier · FSFE Volunteer · https://vl8r.eu · 2 February 2020 · FOSDEM, Brussels, Belgium

Putting Artificial Intelligence Back into People’s Hands

Toward an Accessible, Transparent and Fair AI

SLIDE 2

Agenda

  • How to create accessible Artificial Intelligence?
  • Can AI be transparent and accurate?
  • How to build fairness into AI?
SLIDE 3

Artificial Intelligence accessibility

SLIDE 4

What is a neural network?

[Figure: neural network diagram, layers of connected units mapping an input to an output]
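Only the diagram's labels survive in the transcript. As a concrete illustration (a minimal sketch, not from the slide; PyTorch, with arbitrary layer sizes):

```python
import torch
import torch.nn as nn

# A tiny feed-forward neural network: an input layer, one hidden layer
# with a non-linear activation, and an output layer. Sizes are
# illustrative only.
model = nn.Sequential(
    nn.Linear(4, 16),  # 4 input features -> 16 hidden units
    nn.ReLU(),         # non-linearity between layers
    nn.Linear(16, 2),  # 16 hidden units -> 2 output scores
)

x = torch.randn(1, 4)  # one sample with 4 features
print(model(x))        # raw output scores (logits)
```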
SLIDE 5

Leveraging other models: fine-tuning
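A minimal sketch of the idea, assuming a torchvision ResNet-18 as the pre-trained model (the slide does not name one): keep the pre-trained weights frozen and train only a new task-specific head.

```python
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on ImageNet (assumed example).
model = models.resnet18(pretrained=True)

# Freeze the pre-trained parameters: training will not update them.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with a fresh layer for the new task
# (here 10 classes, arbitrary). Only this layer gets trained.
model.fc = nn.Linear(model.fc.in_features, 10)
```

Because only the small head is trained, fine-tuning needs far less data and compute than training the whole network from scratch.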

SLIDE 6

Bigger models are not more accurate

Canziani, A., Paszke, A., & Culurciello, E. (2016). An analysis of deep neural network models for practical applications

SLIDE 7

How to make AI accessible?

  • Make it easy to reuse the model (ONNX format; see the export sketch below)
  • Release the training code and the dataset under a Free licence
  • Consider the number of FLOPs when designing the model
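A minimal sketch of the first bullet, assuming a PyTorch model: exporting to the ONNX interchange format lets others reuse the trained model from any ONNX-compatible runtime.

```python
import torch
import torch.nn as nn

# Any trained network; this toy one stands in for a real model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
dummy = torch.randn(1, 4)  # example input fixing the expected shape

# Write the model graph and weights to a portable .onnx file.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])
```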
SLIDE 8

Artificial Intelligence transparency

SLIDE 9

AI is used for critical matters

  • Loan approval
  • Justice
  • Healthcare
  • Self-driving cars
SLIDE 10

Why do we want transparency?

  • Makes the results interpretable
  • Builds trust in the model
  • Makes debugging easier
SLIDE 11

Parameters are not meant to be transparent

xkcd.com

SLIDE 12

LIME: Debugging and selecting models

Local Interpretable Model-agnostic Explanations

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier.

SLIDE 13

Making sense of image classification
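A hedged usage sketch with the lime package; `classify_fn` is a stand-in for whatever function maps a batch of images to class probabilities (not from the slide):

```python
import numpy as np
from lime import lime_image

def classify_fn(images):
    # Stand-in classifier: maps a batch of (H, W, 3) images to
    # probabilities over 2 classes. Replace with a real model.
    return np.random.rand(len(images), 2)

explainer = lime_image.LimeImageExplainer()
image = np.random.rand(64, 64, 3)  # stand-in for a real image

explanation = explainer.explain_instance(
    image, classify_fn, top_labels=1, num_samples=100)

# Keep only the image regions that most support the top class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
```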

SLIDE 14

How does it work?

LIME perturbs the instance to explain, queries the black-box model on the perturbed samples, and fits a simple interpretable model (such as a sparse linear one) to those predictions, weighting each sample by its proximity to the original instance; the simple model is then read as a local explanation (see the sketch below).

  • oreilly.com, Local Interpretable Model-Agnostic Explanations (LIME): An Introduction
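To make those steps concrete, a simplified sketch (real LIME also maps inputs to interpretable features and selects a sparse subset of them):

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(x, predict_fn, num_samples=500, width=1.0):
    """Explain predict_fn around x with a locally weighted linear model."""
    # 1. Perturb the input: sample points around x.
    samples = x + np.random.normal(size=(num_samples, x.size))
    # 2. Query the black-box model on the perturbed samples.
    preds = predict_fn(samples)
    # 3. Weight each sample by its proximity to x.
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / width ** 2)
    # 4. Fit a simple linear model locally; its coefficients say how
    #    much each feature pushed the prediction, near x.
    local = Ridge().fit(samples, preds, sample_weight=weights)
    return local.coef_
```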
SLIDE 15

Also for tabular data
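A hedged usage sketch with the lime package on tabular data; the iris dataset and random forest are stand-ins for whatever model is being inspected:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which feature values pushed it which way?
exp = explainer.explain_instance(data.data[0], model.predict_proba,
                                 num_features=4)
print(exp.as_list())
```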

SLIDE 16

Artificial Intelligence fairness

SLIDE 17

Protecting car colors is easy

brand  seats  year  color  speed (km/h)
A      5      2011  blue   150
B      2      2012  black  200
C      5      2010  red    250
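When an attribute is independent of the others, protecting it is just dropping the column; a sketch assuming the table above as a pandas DataFrame:

```python
import pandas as pd

cars = pd.DataFrame({
    "brand": ["A", "B", "C"],
    "seats": [5, 2, 5],
    "year": [2011, 2012, 2010],
    "color": ["blue", "black", "red"],
    "speed_kmh": [150, 200, 250],
})

# Nothing else in the table reveals the color, so removing the
# column is enough to protect it.
anonymized = cars.drop(columns=["color"])
```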

SLIDE 18

Protecting gender is not easy

gender  hobby                    education    salary
female  women’s volleyball team  CS degree    35k
male    football team captain    self-taught  37k
male    chess                    CS degree    37k

Think about correlations before removing an attribute: here 'hobby' still reveals the gender, so dropping the gender column alone does not protect it (see the sketch below).
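A sketch of that check, using the toy table above: test whether a remaining attribute is associated with the one you plan to remove (here, whether 'hobby' still encodes 'gender').

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "gender": ["female", "male", "male"],
    "hobby": ["women's volleyball team", "football team captain", "chess"],
    "education": ["CS degree", "self-taught", "CS degree"],
})

# Chi-squared test of association between the proxy candidate and the
# protected attribute. A low p-value means 'hobby' leaks 'gender'.
table = pd.crosstab(df["hobby"], df["gender"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```

(With three rows this is only illustrative; on a real dataset the test becomes meaningful.)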

SLIDE 19

Vocabulary

  • True Positive (TP): actual positive, predicted positive
  • True Negative (TN): actual negative, predicted negative
  • False Positive (FP): actual negative, predicted positive
  • False Negative (FN): actual positive, predicted negative
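The fairness numbers on the next slide are built from these counts; as a minimal sketch of the two rates used there:

```python
def fp_rate(fp, tn):
    # Share of actual negatives wrongly flagged positive: FP / (FP + TN)
    return fp / (fp + tn)

def fn_rate(fn, tp):
    # Share of actual positives wrongly cleared: FN / (FN + TP)
    return fn / (fn + tp)
```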
SLIDE 20

COMPAS recidivism scoring

All defendants    Low   High
Survived          2681  1282
Recidivated       1216  2035
FP rate: 32.35%   FN rate: 37.40%

Black defendants  Low   High
Survived          990   805
Recidivated       532   1369
FP rate: 44.85%   FN rate: 27.99%

White defendants  Low   High
Survived          1139  349
Recidivated       461   505
FP rate: 23.45%   FN rate: 47.72%

propublica.org (2016)
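The slide's rates can be reproduced from the counts; a "positive" here is a high risk score, so an FP is someone who survived (did not recidivate) yet scored high, and an FN is someone who recidivated yet scored low:

```python
# (survived_low, survived_high, recidivated_low, recidivated_high)
counts = {
    "All":   (2681, 1282, 1216, 2035),
    "Black": (990,   805,  532, 1369),
    "White": (1139,  349,  461,  505),
}

for group, (sl, sh, rl, rh) in counts.items():
    fp = sh / (sh + sl)  # survived, yet scored high risk
    fn = rl / (rl + rh)  # recidivated, yet scored low risk
    print(f"{group}: FP rate {fp:.2%}, FN rate {fn:.2%}")
```

Black defendants face roughly twice the false-positive rate of white defendants, who in turn face a much higher false-negative rate: the errors cut in opposite directions for the two groups.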

SLIDE 21

Racial bias in healthcare

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations.
SLIDE 22

Why can an algorithm be unfair?

  • Bias in the input data itself
  • Training with the wrong metric (bias by proxy)
  • Bad prediction model
  • Bias is hard to notice
  • "With great power comes great responsibility" (Peter Parker)
SLIDE 23

A fair loss function

Let n be the number of values of a protected attribute, and let f be a fairness function.
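The formula itself did not survive the transcript. As a hedged reconstruction of one common construction (the symbols n and f are the assumed names from the sentence above; the penalty shown is demographic parity, one possible choice of f):

```python
import torch

def fair_loss(y_pred, y_true, groups, base_loss, fairness_fn, lam=1.0):
    # Prediction loss plus a weighted fairness penalty. 'groups' maps
    # each sample to one of the n values of the protected attribute;
    # 'lam' trades accuracy against fairness.
    return base_loss(y_pred, y_true) + lam * fairness_fn(y_pred, groups)

def demographic_parity_gap(y_pred, groups):
    # One possible fairness function f: the spread of the mean
    # prediction across the n groups (0 when all groups are treated
    # alike).
    means = torch.stack([y_pred[groups == g].mean()
                         for g in torch.unique(groups)])
    return means.max() - means.min()
```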

SLIDE 24

Vincent Lequertier · FSFE Volunteer · https://vl8r.eu · 2 February 2020 · FOSDEM, Brussels, Belgium

Thank you! Questions?