The Role of Normware in Trustworthy and Explainable AI

SLIDE 1

The Role of Normware in Trustworthy and Explainable AI

Giovanni Sileno (g.sileno@uva.nl), Alexander Boer, Tom van Engers

XAILA, eXplainable AI and Law workshop, JURIX 2018 @ Groningen 12 December 2018

SLIDE 2

with the (supposedly) near advent of autonomous artificial entities, or other forms of distributed automatic decision making,

– humans are less and less in the loop
– increasing concerns about unintended consequences

SLIDE 3

Unintended consequences: bad or limited design

SLIDE 4

Unintended consequences: bad or limited design

[Diagram: programmer + specifications/use cases → program, developed through incremental design and testing]

– implementation faults (bugs)
– design faults (relevant scenarios not considered)

SLIDE 5

Unintended consequences: bad or limited design

  • Wallet hacks, fraudulent actions and bugs in the blockchain sector during 2017:

– CoinDash ICO Hack ($10 million)
– Parity Wallet Breach ($105 million)
– Enigma Project Scam
– Parity Wallet Freeze ($275 million)
– Tether Token Hack ($30 million)
– Bitcoin Gold Scam ($3 million)
– NiceHash Market Breach ($80 million)

Source: CoinDesk (2017), Hacks, Scams and Attacks: Blockchain's 2017 Disasters

SLIDE 6

Unintended consequences: the “artificial prejudice”

SLIDE 7

Unintended consequences: the “artificial prejudice”

[Diagram: learning data + programmer specifications/use cases → black box, via an ML method with parameter adaptation and incremental design and testing]

– statistical bias
– incorrect judgment

SLIDE 8

Unintended consequences: the “artificial prejudice”

  • Software used across the US to predict future crimes and criminals found biased against African Americans (2016)

Angwin J. et al., ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing

SLIDE 9

Unintended consequences: the “artificial prejudice”

  • Software used across the US to predict future crimes and criminals found biased against African Americans (2016)

– Existing statistical bias (a correct description); when used for prediction on an individual, however, it is read as a behavioural predisposition, i.e. it is interpreted as a mechanism.
– A biased judgment here introduces negative consequences in society.

Angwin J. et al., ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing

SLIDE 10

Unintended consequences: the “artificial prejudice”

  • Software used across the US to predict future crimes and criminals found biased against African Americans (2016)

  • Problem: what is the role of circumstantial evidence? How to integrate statistical inference in judgment?

[Diagram: types of evidence (DNA, footwear, origin, gender, ethnicity, wealth, ...): improper profiling?]

Angwin J. et al., ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing

SLIDE 11

Unintended consequences: the “artificial prejudice”

  • Software used across the US to predict future crimes and criminals found biased against African Americans (2016)

  • Problem: what is the role of circumstantial evidence? How to integrate statistical inference in judgment?

[Diagram: types of evidence (DNA, footwear, origin, gender, ethnicity, wealth, ...): improper profiling?]

improper because it causes unfair judgment

Angwin J. et al., ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing

SLIDE 12

Unacceptable conclusions: improvident induction

  • The “improvident” qualification of an inductive inference might be given already before taking into account the practical consequences of its acceptance.

SLIDE 13

Unacceptable conclusions: improvident induction

  • The “improvident” qualification of an inductive inference might be given already before taking into account the practical consequences of its acceptance.

  • Consider a diagnostic application predicting whether the patient has appendicitis: we would accept a conclusion based on the presence of fever, abdominal pain, or an increased number of white blood cells, but not one based e.g. on the length of the little toe or on the fact that it is raining outside!

SLIDE 14

Unacceptable conclusions: improvident induction

  • The “improvident” qualification of an inductive inference might be given already before taking into account the practical consequences of its acceptance.

  • Consider a diagnostic application predicting whether the patient has appendicitis: we would accept a conclusion based on the presence of fever, abdominal pain, or an increased number of white blood cells, but not one based e.g. on the length of the little toe or on the fact that it is raining outside!

An expert would reject the conclusion when no relevant mechanism can be imagined linking the factor with the conclusion.

SLIDE 15

Unacceptable conclusions: improvident induction

  • The “improvident” qualification of an inductive inference might be given already before taking into account the practical consequences of its acceptance.

  • Consider a diagnostic application predicting whether the patient has appendicitis: we would accept a conclusion based on the presence of fever, abdominal pain, or an increased number of white blood cells, but not one based e.g. on the length of the little toe or on the fact that it is raining outside!

An expert would reject the conclusion when no relevant mechanism can be imagined linking the factor with the conclusion, for that decision-making context.
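The expert's rejection rule reads like a normware-style constraint on admissible evidence. A minimal Python sketch (the factor names and the MECHANISMS set are illustrative assumptions, not from the paper):

```python
# Hypothetical encoding of the expert's rejection rule: an inductive
# conclusion is "improvident" if some supporting factor has no plausible
# mechanism linking it to the conclusion in this decision-making context.

# Factors with a known mechanism linking them to appendicitis (assumed list)
MECHANISMS = {"fever", "abdominal pain", "white blood cell count"}

def improvident(supporting_factors):
    """True if at least one factor lacks an imaginable mechanism."""
    return any(f not in MECHANISMS for f in supporting_factors)

print(improvident({"fever", "abdominal pain"}))               # False: accepted
print(improvident({"little toe length", "raining outside"}))  # True: rejected
```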

SLIDE 16

Unacceptable conclusions: improvident induction

  • Problems may also arise for the statistical inference by itself, as shown e.g. by Simpson’s paradox.

SLIDE 17

Unacceptable conclusions: improvident induction

  • Problems may also arise for the statistical inference by itself, as shown e.g. by Simpson’s paradox.

Example: hired/applicants data (female vs male)

                  mathematics dept.   sociology dept.   university
hired/applicants  1/1 vs 1/10         1/100 vs 0/1      2/101 vs 1/11
                  favours females     favours females   favours males
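The reversal can be checked directly from the table's figures; a short Python sketch (the numbers are the slide's, the code is only illustrative):

```python
# Simpson's paradox on the slide's hired/applicants data.
# Each entry is (hired, applicants).
data = {
    "mathematics": {"female": (1, 1),   "male": (1, 10)},
    "sociology":   {"female": (1, 100), "male": (0, 1)},
}

def rate(hired, applicants):
    return hired / applicants

# Each department taken alone favours females...
for dept, g in data.items():
    assert rate(*g["female"]) > rate(*g["male"])

# ...but the aggregated university data favours males.
female = [sum(g["female"][i] for g in data.values()) for i in (0, 1)]  # [2, 101]
male = [sum(g["male"][i] for g in data.values()) for i in (0, 1)]      # [1, 11]
assert rate(*female) < rate(*male)  # 2/101 ≈ 0.020 < 1/11 ≈ 0.091
print("per-department and aggregate comparisons point in opposite directions")
```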

SLIDE 18

Explainable AI

  • Explainable AI has basically two drivers:

– reject unacceptable conclusions
– satisfy reasonable requirements of expertise

  • But what qualifies a conclusion as “unacceptable”? And what might be used to define an expertise as “reasonable”?

  • Claim: normware! i.e. computational artifacts specifying shared expectations (“norm” as in normality)

SLIDE 19

Trustworthy AI

  • Trustworthiness for artificial devices could be associated with the requirement of not falling into paperclip-maximizer scenarios:

– not taking “wrong” decisions or performing “wrong” actions, wrong because of their disastrous impact

  • How to (attempt to) satisfy this requirement?

  • Claim: normware! i.e. computational artifacts specifying shared drivers (“norm” as in normativity)

SLIDE 20

A tentative taxonomy

[Diagram:
– hardware: physical device; when running → physical mechanism; situated in a physical environment
– software: symbolic device; when running → symbolic mechanism; relies on physical mechanisms
– normware: control structure; relies on symbolic mechanisms; remaining characterization open (?)]

normative or epistemic pluralism?

SLIDE 21

A tentative taxonomy

[Diagram:
– hardware: physical device; when running → physical mechanism; situated in a physical environment
– software: symbolic device; when running → symbolic mechanism; relies on physical mechanisms
– normware: control structure; relies on symbolic mechanisms; remaining characterization open (?)]

normative or epistemic pluralism? Is normware just a type of software?

SLIDE 22

A tentative taxonomy

[Diagram:
– hardware: physical device; when running → physical mechanism; situated in a physical environment
– software: symbolic device; when running → symbolic mechanism; relies on physical mechanisms
– normware: control structure; relies on symbolic mechanisms; remaining characterization open (?)]

normative and epistemic pluralism? Is normware just a type of software? interaction with sub-symbolic modules?

SLIDE 23

Impact at large

[Diagram: device ↔ user interaction, within an environment]

  • Traditionally, engineering is about the conception of devices to implement certain functions. Functions are always defined within a certain operational context, to satisfy certain needs.

SLIDE 24

Impact at large

[Diagram: device ↔ user interaction, within an environment; increasing reward (the general approach used in problem-solving, machine learning, ...)]

  • Traditionally, engineering is about the conception of devices to implement certain functions. Functions are always defined within a certain operational context, to satisfy certain needs.

  • Optimization is made possible by specifying a reward function associated with certain goals.

SLIDE 25

Impact at large

goal: fishing; reward: proportional to the quantity of fish, inversely proportional to the effort.

  • individual solution to the optimization problem:
SLIDE 26

Impact at large

goal: fishing; reward: proportional to the quantity of fish, inversely proportional to the effort.

  • individual solution to the optimization problem: “fishing with bombs”

SLIDE 27

Impact at large

goal: fishing; reward: proportional to the quantity of fish, inversely proportional to the effort.

  • individual solution to the optimization problem: “fishing with bombs”

acknowledgement of undesirable second-order effects.

SLIDE 28

Impact at large

goal: fishing; reward: proportional to the quantity of fish, inversely proportional to the effort.

  • individual solution to the optimization problem: “fishing with bombs”

acknowledgement of undesirable second-order effects. by whom? for whom?

SLIDE 29

Planning with adaptations

[Diagram: planner → plan, across a tactical (planning) and a strategic (policy) layer; simulator; strategic monitoring; system drivers; environmental couplings; higher-level diagnostic feedback; intentional setup; boundary: situational/contextual]

  • The process illustrates a two-step decision-making process, enabling “tactical” optimization and “strategic” control.

SLIDE 30

Planning with adaptations

[Diagram: as before, plus an operational (acting) layer: executor, operational monitoring, perceptual setup, lower-level diagnostic feedback; boundary: reacting/acting]

  • We might also add the operational layer.

SLIDE 31

Supervised Machine Learning

[Diagram: input → (feedforward) adaptive black box → output; an oracle provides the desired output; the error between output and desired output drives retroactive feedback]

SLIDE 32

Supervised Machine Learning

[Diagram: input → (feedforward) adaptive black box → output; an oracle provides the desired output; the error between output and desired output drives retroactive feedback]

  • In general, supervised machine learning involves:

– a data-flow computational network
– parameters distributed along the network
– an ML method enabling adaptation of parameters against some feedback, e.g. output error in the training phase
– an oracle making targets explicit
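These four ingredients can be sketched in a few lines of Python: a toy one-parameter "network", an assumed linear oracle, and error-driven adaptation (an illustration, not a method from the paper):

```python
import random
random.seed(0)

def oracle(x):
    """The oracle makes the target explicit."""
    return 3.0 * x  # assumed relation to be learned

w = 0.0    # a parameter of the (one-node) data-flow network
lr = 0.05  # learning rate of the adaptation method

for _ in range(1000):
    x = random.uniform(-1.0, 1.0)
    output = w * x              # feedforward pass
    error = oracle(x) - output  # output error in the training phase
    w += lr * error * x         # retroactive feedback adapts the parameter

print(f"learned parameter: {w:.2f}")  # close to 3.0
```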

SLIDE 33

Supervised Machine Learning

[Diagram: input → (feedforward) adaptive black box → output; an oracle provides the desired output; the error between output and desired output drives retroactive feedback]

  • In general, supervised machine learning involves:

– a data-flow computational network
– parameters distributed along the network
– an ML method enabling adaptation of parameters against some feedback, e.g. output error in the training phase
– an oracle making targets explicit

[Analogy with the planning architecture: planner → plan → executor, with lower-level diagnostic feedback and intentional setup]

SLIDE 34

Supervised Machine Learning

[Diagram: input → (feedforward) adaptive black box → output; an oracle provides the desired output; the error between output and desired output drives retroactive feedback]

  • In general, supervised machine learning involves:

– a data-flow computational network
– parameters distributed along the network
– an ML method enabling adaptation of parameters against some feedback, e.g. output error in the training phase
– an oracle making targets explicit

[Analogy with the planning architecture: planner → plan → executor, with lower-level diagnostic feedback and intentional setup. Higher-level diagnostic feedback?]

SLIDE 35

Supervised Machine Learning

[Diagram: input → (feedforward) adaptive black box → output; an oracle provides the desired output; the error between output and desired output drives retroactive feedback]

  • In general, supervised machine learning involves:

– a data-flow computational network
– parameters distributed along the network
– an ML method enabling adaptation of parameters against some feedback, e.g. output error in the training phase
– an oracle making targets explicit

[Analogy with the planning architecture: planner → plan → executor, with lower-level diagnostic feedback and intentional setup. Higher-level diagnostic feedback?]

  • This seems the root of our problems with ML. Can we repair it?

SLIDE 36

Evolutionary view

[Diagram: oracle → reward → non-adaptive black box 1, black box 2, ...]

  • In evolutionary terms, we could consider a multitude of different non-adaptive black boxes, covering several configurations of parameters, competing for computational resources.

For each learning step, the oracle sets the means to select the best performing black box(es), which are granted access to computational resources for future predictions as a reward. [...]

  • But who “pays” the oracle?
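This evolutionary reading can be sketched concretely: fixed (non-adaptive) parameter settings compete, and the oracle selects which ones keep their computational resources. All names and figures below are illustrative assumptions:

```python
import random
random.seed(0)

def oracle_score(w):
    """The oracle's selection criterion: negative prediction error
    against an assumed target relation y = 3x."""
    xs = [i / 10 for i in range(-10, 11)]
    return -sum((3.0 * x - w * x) ** 2 for x in xs)

# A population of non-adaptive black boxes, i.e. fixed parameter values.
population = [random.uniform(-5.0, 5.0) for _ in range(20)]

for generation in range(30):
    # Reward: the better half keeps its resources and replicates,
    # with small variation; the rest is discarded.
    survivors = sorted(population, key=oracle_score, reverse=True)[:10]
    population = survivors + [w + random.gauss(0, 0.2) for w in survivors]

best = max(population, key=oracle_score)
print(f"best black box: w = {best:.2f}")  # drifts towards 3.0
```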

SLIDE 37

Evolutionary view

[Diagram: oracle → reward → non-adaptive black box 1, black box 2, ...]

  • In evolutionary terms, we could consider a multitude of different non-adaptive black boxes, covering several configurations of parameters, competing for computational resources.

For each learning step, the oracle sets the means to select the best performing black box(es), which are granted access to computational resources for future predictions as a reward. [...]

  • The higher-level diagnostic feedback implies that the system drivers, too, should pass through a selection mechanism.

SLIDE 38

Evolutionary view

[Diagram: black box 1, black box 2, ... rewarded by oracle 1, oracle 2, ...; a second-order oracle?]

SLIDE 39

Evolutionary view

[Diagram: black box 1, black box 2, ... rewarded by oracle 1, oracle 2, ...; a second-order oracle?]

  • Let’s use this architecture on a concrete example: IBM Watson (building upon a network of intelligent QA agents).

– a question is given
– the system has to guess:
  • what the question demands (~ oracles)
  • what the answer is (~ black box)
– the correct response is given by the jury (~ second-order oracle)

SLIDE 40

Evolutionary view

[Diagram: black box 1, black box 2, ... rewarded by oracle 1, oracle 2, ...; a second-order oracle?]

  • Let’s use this architecture on a concrete example: IBM Watson (building upon a network of intelligent QA agents).

– a question is given
– the system has to guess:
  • what the question demands (~ oracles)
  • what the answer is (~ black box)
– the correct response is given by the jury (~ second-order oracle)

  • Let’s apply it to our initial problems!
SLIDE 41

Example: neutrality constraint

[Diagram: three black boxes, each trained on a variant of the data]

training data:
a, b, c → class 1
a, b, d → class 2
a, c, e → class 1

pruned training data (neutrality w.r.t. d):
a, b, c → class 1
a, c, e → class 1

neutralized training data:
a, b, c → class 1
a, b, d → class 2
a, b, d → class 1
a, c, e → class 1
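The two neutrality strategies can be sketched on the slide's toy data: pruning removes examples mentioning the protected feature, while neutralizing balances them so the feature carries no class signal. The helper names are hypothetical; the paper does not fix an API:

```python
# The slide's toy training data: (feature set, class label).
training_data = [
    ({"a", "b", "c"}, 1),
    ({"a", "b", "d"}, 2),
    ({"a", "c", "e"}, 1),
]

def prune(data, protected):
    """Drop every example that mentions the protected feature."""
    return [(feats, cls) for feats, cls in data if protected not in feats]

def neutralize(data, protected, classes):
    """For each example mentioning the protected feature, add copies with
    every other class label, so the feature becomes uninformative."""
    out = list(data)
    for feats, cls in data:
        if protected in feats:
            out += [(feats, c) for c in classes if c != cls]
    return out

pruned = prune(training_data, "d")                       # the two class-1 rows
neutralized = neutralize(training_data, "d", classes={1, 2})
# neutralized now also contains ({"a","b","d"}, 1), matching the slide
```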

SLIDE 42

Example: strategic protection against unintended consequences

[Diagram: the planner receives a tactical driver (“fish”) and a strategic driver (“avoid ecological disruption”, i.e. “fish without disrupting”); candidate action plans (angling, netting, fishing with bombs) are checked against the strategic driver via the simulator before the executor acts on the world; intentional setup]
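The two-level control loop on this slide can be sketched as follows. The action names come from the slide; the numeric effects and function names are illustrative assumptions:

```python
# Simulated effects of each candidate action (assumed figures).
ACTIONS = {
    "angling":            {"fish": 2,  "ecological_damage": 0},
    "netting":            {"fish": 10, "ecological_damage": 1},
    "fishing with bombs": {"fish": 50, "ecological_damage": 9},
}

def tactical_reward(effects):
    """Tactical driver: maximize the quantity of fish."""
    return effects["fish"]

def strategic_check(effects):
    """Strategic driver: avoid ecological disruption."""
    return effects["ecological_damage"] < 5

def plan():
    # The planner proposes actions by decreasing tactical reward; the
    # simulator's predicted effects are checked against the strategic
    # driver before the executor may act.
    for action in sorted(ACTIONS, key=lambda a: tactical_reward(ACTIONS[a]),
                         reverse=True):
        if strategic_check(ACTIONS[action]):
            return action
    return None

print(plan())  # "netting": best reward among actions passing the check
```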

SLIDE 43

Example: alignment to expert knowledge for explanation

[Diagram: to “explain b”, explainers produce candidate justifications via a justification tracer (e.g. “a → b. a.”, “a → b. c.”, “c → b. c.”), which an alignment check filters against the expert knowledge “a → b. c → b.”; intentional setup, perceptual setup]
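The alignment check can be sketched with a simplified encoding of the slide's candidates (rules as premise/conclusion pairs; this encoding is an assumption, not the paper's formalism): an explanation for “b” is kept only if its rules actually fire on its facts and also appear in the expert's knowledge.

```python
# Expert knowledge: a → b.  c → b.
expert_rules = {("a", "b"), ("c", "b")}

# Candidate explanations produced by the justification tracer.
candidates = [
    {"rules": {("a", "b")}, "facts": {"a"}},  # "a → b. a."  valid, aligned
    {"rules": {("c", "b")}, "facts": {"c"}},  # "c → b. c."  valid, aligned
    {"rules": {("a", "b")}, "facts": {"c"}},  # "a → b. c."  rule never fires
]

def supports(explanation, goal):
    """Does some rule with the goal as conclusion fire on the facts?"""
    return any(premise in explanation["facts"]
               for premise, conclusion in explanation["rules"]
               if conclusion == goal)

def aligned(explanation):
    """Alignment check: every rule used must be expert knowledge."""
    return explanation["rules"] <= expert_rules

accepted = [e for e in candidates if supports(e, "b") and aligned(e)]
print(len(accepted))  # the two expert-aligned justifications survive
```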

SLIDE 44

Perspectives

  • This position paper aims to highlight the crucial role of normware with respect to trustworthy and explainable AI:

– ML approaches usually do not consider this level of abstraction
– ethical/responsible AI studies target higher-level constraints

SLIDE 45

Perspectives

  • This position paper aims to highlight the crucial role of normware with respect to trustworthy and explainable AI:

– ML approaches usually do not consider this level of abstraction
– ethical/responsible AI studies target higher-level constraints

  • It makes explicit two perspectives on normware:

– computational artifacts specifying norms
– an ecology of components guiding the system components (including sub-symbolic ones!)

SLIDE 46

Perspectives

  • This position paper aims to highlight the crucial role of normware with respect to trustworthy and explainable AI:

– ML approaches usually do not consider this level of abstraction
– ethical/responsible AI studies target higher-level constraints

  • It makes explicit two perspectives on normware:

– computational artifacts specifying norms
– an ecology of components guiding the system components (including sub-symbolic ones!)

  • The ecological perspective has been overlooked in our field, but is reminiscent of visionary ideas presented in the history of AI (Minsky’s society of mind, Brooks’ intelligent creatures).

SLIDE 47

A less tentative taxonomy

[Diagram:
– hardware: physical device; when running → physical mechanism; situated in a physical environment
– software: symbolic device; when running → symbolic mechanism; relies on physical mechanisms
– normware: coordination device (a guidance structure); when adopted → interactional mechanism; relies on symbolic mechanisms]