
THE ISSUE OF BIAS: TRADEOFFS AND BALANCE IN ML - Prof. dr. Mireille Hildebrandt - PowerPoint PPT Presentation



  1. THE ISSUE OF BIAS: TRADEOFFS AND BALANCE IN ML. Prof. dr. Mireille Hildebrandt, Interfacing Law & Technology, Vrije Universiteit Brussel; Smart Environments, Data Protection & the Rule of Law, Radboud University

  2. WHAT’S NEXT? 1. Three Types of Bias: 1. inherent bias, 2. bias as unfairness, 3. bias on prohibited grounds; 2. Profile Transparency; 3. Automated Decisions; 4. Purpose; 5. GDPR. 17/11/16 Hildebrandt's KNUT MEMORIAL LECTURE 2016

  3. THREE TYPES OF BIAS 8/12/2016 Hildebrandt - NISP ML and the LAW

  4. Is ML neutral, objective, true? ■ three types of bias: 1. bias inherent in any action-perception-system (APS) 2. bias that some would qualify as unfair 3. bias that discriminates on the basis of prohibited legal grounds

  5. INHERENT BIAS

  6. the difference that makes a difference (Bateson) ■ bias inherent in any action-perception-system (APS) – Thomas Nagel’s ‘Seeing like a bat’ – the salience of the output of the APS depends on the agent & the environment – perception is a means to anticipate the consequences of action: ‘enaction’ – there is no such thing as objective neutrality, but – this does not imply that anything goes – on the contrary: life and death may depend on getting it ‘right’

  7. Machine Learning (ML) ■ ML is about – choosing and pruning relevant, correct and sufficiently complete training sets – developing and training the right algorithm to detect the right mathematical function – ML is based on a productive bias, cp. Hume as well as Gadamer – optimization always depends on context, purpose, availability of training and test data – there are always trade-offs! – reliability depends on the extent to which the future confirms the past – David Wolpert’s no free lunch theorem should inform our assessment 27 October '16 Robolegal: paralegal or toplawyer?

  8. Hume, Gadamer, Wolpert: no free lunch theorem. Where d = training set; f = ‘target’ input-output relationships; h = hypothesis (the algorithm's guess for f made in response to d); and C = off-training-set ‘loss’ associated with f and h (‘generalization error’). How well you do is determined by how ‘aligned’ your learning algorithm P(h|d) is with the actual posterior, P(f|d). Check http://www.no-free-lunch.org
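The theorem can be made concrete by enumeration. The following is a minimal sketch (the input space, training labels, and the majority-vote learner are illustrative assumptions, not Wolpert's construction beyond the d, f, h notation): for any fixed hypothesis h, the off-training-set accuracy averaged over all targets f consistent with the training set d is exactly chance.

```python
from itertools import product

X = list(product([0, 1], repeat=3))   # the full input space: 8 three-bit points
train_X, test_X = X[:5], X[5:]        # d = training set; the rest is off-training-set
train_y = [0, 1, 1, 0, 1]             # observed labels for d (arbitrary)

def h(x):                             # a fixed hypothesis: majority vote of the bits
    return int(sum(x) >= 2)

# average off-training-set accuracy over ALL targets f consistent with d
accs = []
for test_y in product([0, 1], repeat=len(test_X)):  # 2^3 labelings of the unseen points
    correct = sum(h(x) == y for x, y in zip(test_X, test_y))
    accs.append(correct / len(test_X))
print(sum(accs) / len(accs))          # exactly 0.5: averaged over all targets, no h beats chance
```

Swapping `h` for any other fixed rule leaves the average unchanged, which is the point: only alignment between the learner's bias and the actual target distribution produces generalization.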

  9. Hume, Gadamer, Wolpert: no free lunch theorem. Implications: – The bias that is necessary to mine the data will co-determine the results – This relates to the fact that the data used to train an algorithm is finite – ‘Reality’, whatever that is, escapes the inherent reduction – Data is not the same as what it refers to or what it is a trace of 8 July 2016 Privacy Hub Summerschool

  10. “We shall see that most current theory of machine learning rests on the crucial assumption that the distribution of training examples is identical to the distribution of test examples. Despite our need to make this assumption in order to obtain theoretical results, it is important to keep in mind that this assumption must often be violated in practice.” Tom Mitchell
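Mitchell's caveat can be illustrated with a toy one-dimensional "decision stump" (all data and the drifted labeling rule below are illustrative assumptions standing in for any train/test mismatch): a threshold learned under one rule degrades as soon as the test distribution no longer matches the training distribution.

```python
# training data: label = 1 iff x > 0.0 (the rule the learner gets to see)
train = [(x / 10, int(x / 10 > 0.0)) for x in range(-50, 51)]

# learn the threshold that minimizes training error (a 1-D decision stump)
candidates = sorted({x for x, _ in train})
best_t = min(candidates, key=lambda t: sum((x > t) != y for x, y in train))

# test data drawn from a DRIFTED concept: label = 1 iff x > 1.0
test = [(x / 10, int(x / 10 > 1.0)) for x in range(-50, 51)]

train_acc = sum((x > best_t) == y for x, y in train) / len(train)
test_acc = sum((x > best_t) == y for x, y in test) / len(test)
print(train_acc, test_acc)   # perfect on the training rule, degraded under drift
```

The stump is optimal for the world it was trained in; the gap between `train_acc` and `test_acc` is exactly the violated identical-distribution assumption.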

  11. Michael Veale: i. ‘the common assumption that future populations are not functions of past decisions is often violated in the public sector;’ ■ actually, present futures do co-determine the future present – predictions influence the move from training to test set – they change the probability and the hypothesis space – they enlarge both uncertainty and possibility ■ the point is about the distribution of both: who gets how much of what – this depends on who gets to act on the output – if machines define a situation as real it is real in its consequences
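The feedback loop Veale describes can be simulated in a few lines. This is a hypothetical two-district sketch in the spirit of runaway-feedback-loop analyses of predictive policing; the rates, the initial counts, and the greedy allocation rule are all assumptions made for illustration:

```python
import random
random.seed(1)

true_rate = {"A": 0.3, "B": 0.3}   # both districts have the SAME true incident rate
counts = {"A": 1, "B": 2}          # a small initial imbalance in recorded incidents

for day in range(1000):
    # patrol goes where the record predicts more incidents (greedy allocation)
    district = "A" if counts["A"] > counts["B"] else "B"
    # only the patrolled district can generate new records
    if random.random() < true_rate[district]:
        counts[district] += 1

print(counts)  # the initial imbalance snowballs: records diverge despite equal true rates
```

Because future populations (here, future records) are a function of past decisions, the system's own predictions manufacture the data that confirms them: who gets how much of what depends on who gets to act on the output.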

  12. US elections: data does NOT speak for itself

  13. US elections: data does NOT speak for itself

  14. Trustworthiness: Trade-offs ■ ML involves a training set, algorithms, a test set – whether supervised, reinforced or unsupervised ■ trade-offs are inevitable: – choice of training & test set: size, relevance, accuracy, completeness – choice of learning algorithms: clustering, decision tree, deep learning, random forests, back propagation, linear regression etc. – speed of output (e.g. real-time) – accuracy of predictions – outlier detection ■ N=All is humbug, though it may apply in a specific sense under certain conditions
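One of the listed trade-offs, training-set size versus accuracy, can be shown with a toy stump learner on noisy data (the data-generating rule, noise level, and sample sizes are illustrative assumptions):

```python
import random
random.seed(0)

def sample(n):
    """Noisy training data: label = 1 iff x > 0.5, with ~10% of labels flipped."""
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.1:
            y = 1 - y                      # label noise
        data.append((x, y))
    return data

def fit_stump(data):
    """Pick the threshold (among seen x values) with the lowest training error."""
    return min((x for x, _ in data),
               key=lambda t: sum((x > t) != y for x, y in data))

test = [(i / 1000, int(i / 1000 > 0.5)) for i in range(1000)]  # clean held-out set
results = {}
for n in (10, 100, 1000):                  # the size/accuracy trade-off
    t = fit_stump(sample(n))
    results[n] = sum((x > t) == y for x, y in test) / len(test)
print(results)
```

With few, noisy examples the learned threshold can land almost anywhere; with more data it converges near the true cut-off. More data costs collection effort and compute, which is precisely why such choices are trade-offs, not defaults.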

  15. the new catch 22 ■ suppose: – experts train algorithms on relevant data sets – and keep on testing the output (reinforcement learning) – until the system does very well (e.g. Zeb, student paper grading, legal intelligence) – and the experts get bored and do other things (semiotic desensitization)? – while the systems start feeding increasingly on each other’s output ■ who can test whether the system is still doing well 2 years later? ■ e.g. medical diagnosis, legal intelligence, critical infrastructure
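One defensive answer to "who can test whether the system is still doing well 2 years later" is continuous monitoring against fresh, human-labeled cases. A minimal sketch (the `DriftMonitor` class, window size, and accuracy floor are hypothetical, not an existing library):

```python
from collections import deque

class DriftMonitor:
    """Rolling accuracy on fresh labeled cases; flags degradation (hypothetical sketch)."""
    def __init__(self, window=10, floor=0.9):
        self.results = deque(maxlen=window)  # only the most recent cases count
        self.floor = floor
    def record(self, prediction, truth):
        self.results.append(prediction == truth)
    def healthy(self):
        if len(self.results) < self.results.maxlen:
            return True                      # not enough evidence to raise an alarm yet
        return sum(self.results) / len(self.results) >= self.floor

monitor = DriftMonitor(window=10, floor=0.9)
for _ in range(10):
    monitor.record(1, 1)        # the system initially performs well
ok_before = monitor.healthy()   # True: 10/10 correct in the window
monitor.record(1, 0)
monitor.record(1, 0)            # two recent misses drop the window to 8/10
ok_after = monitor.healthy()    # False: 0.8 < 0.9
print(ok_before, ok_after)
```

The hard part is not the code but the institutional commitment: someone must keep supplying fresh ground-truth labels after the experts have moved on, which is exactly the catch-22 the slide names.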

  16. the new catch 22: architecture is politics ■ who can test whether the system is still doing well 2 years later? ■ e.g. medical diagnosis, legal intelligence, critical infrastructure ■ what is ‘doing well’? ■ who gets to determine what it means to ‘do well’? ■ so, replacement is high risk high gain in terms of functionality, fairness and our ability to cognize our environment, as this cognition is mediated by ML systems

  17. e.g. automated prediction of judgment (APoJ) ■ APoJ used as a means to provide feedback to lawyers, clients, prosecutors, courts ■ APoJ could involve a sensitivity analysis, modulating facts, legal precepts, claims ■ APoJ as a domain for experimentation, developing new insights, argumentation patterns, testing alternative approaches ■ APoJ could detect missing information (facts, legal arguments), helping to improve (instead of merely predict) the outcome of cases ■ APoJ can be used to improve the acuity of human judgment, if not used to replace it ■ if APoJ is used to replace, it should not be confused with law; then it becomes administration – the difference is crucial, critical and pertinent ■ cp. http://www.vikparuchuri.com/blog/on-the-automated-scoring-of-essays/
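The sensitivity analysis mentioned above can be sketched as flipping one fact at a time and watching the prediction move. The `predict` function below is a toy stand-in, not a real legal-intelligence API; the feature names and weights are invented for illustration:

```python
def predict(case):
    """Toy stand-in for a case-outcome predictor (NOT a real legal model):
    a weighted sum of binary case features, capped at 1.0."""
    weights = {"written_contract": 0.4, "prior_breach": 0.3, "witness": 0.2}
    return min(0.1 + sum(w for f, w in weights.items() if case.get(f)), 1.0)

base = {"written_contract": True, "prior_breach": False, "witness": True}
baseline = predict(base)                     # 0.1 + 0.4 + 0.2 = 0.7
deltas = {}
for feature in base:                         # modulate one fact at a time
    flipped = dict(base, **{feature: not base[feature]})
    deltas[feature] = predict(flipped) - baseline
print(deltas)   # shows which facts the predicted outcome is most sensitive to
```

Used this way, the predictor becomes a feedback and experimentation instrument for lawyers rather than a replacement for judgment: it shows which facts or arguments would have mattered, instead of merely announcing an outcome.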

  18. BIAS AS UNFAIRNESS

  19. the difference that makes a difference ■ bias that some would qualify as unfair – this is a matter of ethics – we may not agree about goals (values), means (nudging, forcing, negotiating), evaluation: – deontological? utilitarian? virtue ethics? pragmatarian? – that is why we need law

  20. BIAS ON PROHIBITED GROUNDS

  21. the difference that makes a difference ■ bias that discriminates on the basis of prohibited legal grounds – this is unlawful and can result in legal redress: – fines, tort liability, compensation – invalidation of contracts or legislation

  22. PROFILE TRANSPARENCY

  23. detecting bias ■ explanation, interpretability: if you cannot test it you cannot contest it – flesh out the productive bias that ensures functionality: test & contest – figure out the unfairness in the training set & the algos: test & contest – infer discrimination on prohibited legal grounds: test & contest
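"Test & contest" can start with something as simple as comparing outcome rates across a protected attribute. A minimal sketch using the four-fifths screening heuristic (the decision data and the 0.8 threshold are illustrative; actual legal tests of discrimination differ by jurisdiction and are not reducible to one ratio):

```python
decisions = [                 # (protected_group, favourable_outcome) - illustrative data
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rates = {}
for group in sorted({g for g, _ in decisions}):
    outcomes = [y for g, y in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

impact_ratio = min(rates.values()) / max(rates.values())
flagged = impact_ratio < 0.8  # "four-fifths rule": screen, then investigate
print(rates, round(impact_ratio, 2), flagged)
```

A flagged ratio is not proof of unlawful discrimination, but it is exactly the kind of testable, contestable evidence that profile transparency is meant to make possible.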
