Artificial Intelligence: What Could Possibly Go Wrong? Jim Dempsey



SLIDE 1

Artificial Intelligence: What Could Possibly Go Wrong? Jim Dempsey Executive Director Berkeley Center for Law & Technology jdempsey@berkeley.edu

SLIDE 2

What Is Artificial Intelligence?

SLIDE 3

No single definition

“a set of technologies that enable computers to perceive, learn, reason and assist in decision-making to solve problems in ways that are similar to what people do” (Microsoft, The Future Computed, 2018)

Many different flavors:

  • Expert systems (rules-based)
  • Machine learning
  • Deep learning – neural networks

AI and the concerns it raises are related to:

  • Robotics
  • Big data
  • Algorithmic decision-making
SLIDE 4

Three key ingredients of much of what is currently referred to as AI:

  • Algorithms (recipes for processing data)
  • Processing power
  • Data
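The three ingredients above can be seen even in the smallest learning system. The sketch below (my illustration, not from the slides) fits a line to examples: the function is the algorithm, the example pairs are the training data, and the arithmetic the loop performs stands in for processing power.

```python
# Minimal sketch of algorithm + data + compute, using ordinary least squares.

def fit_line(data):
    """Algorithm: closed-form least squares for y = a*x + b."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data: the fitted model can only reflect whatever these examples contain.
training_data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # generated by y = 2x + 1

a, b = fit_line(training_data)
print(a, b)  # → 2.0 1.0
```

The point for the rest of this deck: swap in skewed or incomplete training data and the same algorithm faithfully learns the skew.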
SLIDE 5
SLIDE 6

General AI - A Long Way Off

SLIDE 7

Narrow AI - Already Here

SLIDE 8
SLIDE 9

Narrow AI Better Than Humans in Specific Tasks

SLIDE 10
SLIDE 11
SLIDE 12

AI/ML – The Role of Training Data

SLIDE 13
SLIDE 14
SLIDE 15
SLIDE 16

There Will Be Hype

SLIDE 17
SLIDE 18

The Hype Goes in Both Directions

  • “[I]ncreasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030.” (AI 100 Study)
  • “The development of full artificial intelligence could spell the end of the human race.” (Stephen Hawking)

SLIDE 19

It’s Not Magic

  • Algorithms are not value-free
  • AI involves human decisions and tradeoffs

“[I]t is a serious misunderstanding to view [AI] tools as objective or neutral simply because they are based on data.” Partnership on AI, Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System (April 26, 2019)

SLIDE 20

What Could Possibly Go Wrong?

SLIDE 21
SLIDE 22
SLIDE 23
SLIDE 24
SLIDE 25
SLIDE 26

Why Do Things Go Wrong?

SLIDE 27

Limitations in the Training Data

SLIDE 28
SLIDE 29
SLIDE 30
SLIDE 31

The view from nowhere

SLIDE 32

The view from nowhere

SLIDE 33

Deep Proxies

SLIDE 34

Deep Proxies

SLIDE 35

Failure to Understand the Error or Confidence Level

SLIDE 36

Flaws in framing the question

SLIDE 37

Virtualization (Hiding the Role of AI)

SLIDE 38

Human to Machine Handoffs

SLIDE 39

The Human to Machine Handoff

SLIDE 40

Adversary-Induced Failures


SLIDE 41
SLIDE 42
SLIDE 43

What Can Be Done to Minimize Risk?

SLIDE 44

Some easy fixes

SLIDE 45

Sometimes not so easy

SLIDE 46

The black box problem

“… even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs, …[t]he computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.”

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

SLIDE 47

Trade secrecy and the black box

SLIDE 48

“However, the effectiveness of these [AI] systems will be limited by the machine’s inability to explain its thoughts and actions to human users. Explainable AI will be essential, if users are to understand, trust, and effectively manage this emerging generation of artificially intelligent partners.” David Gunning, DARPA

https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf

In Key Contexts, Black Box AI Is Unacceptable

SLIDE 49

Transparent AI

IBM researchers have proposed a Supplier’s Declaration of Conformity (SDoC): basically, a factsheet answering questions such as:

  • What dataset was used to train the AI?
  • What underlying algorithms were used?
  • Was bias mitigation performed on the dataset?
  • Was the service tested on any additional datasets?

https://www.ibm.com/blogs/research/2018/08/factsheets-ai/
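One way to make the factsheet idea concrete is to treat it as structured metadata that ships with the model. The sketch below is a hypothetical illustration (not IBM's actual format); all field and dataset names are invented.

```python
# Hypothetical SDoC-style factsheet as structured data accompanying a model.

factsheet = {
    "training_dataset": "loan-apps-2018",          # invented dataset name
    "algorithms": ["gradient-boosted trees"],
    "bias_mitigation_performed": True,
    "additional_test_datasets": ["holdout-2019"],  # invented dataset name
}

REQUIRED_FIELDS = {
    "training_dataset",
    "algorithms",
    "bias_mitigation_performed",
    "additional_test_datasets",
}

def is_complete(sheet):
    """A procurement check: has every factsheet question been answered?"""
    return REQUIRED_FIELDS <= sheet.keys()

print(is_complete(factsheet))            # → True
print(is_complete({"algorithms": []}))   # → False
```

A buyer or regulator could refuse to deploy any model whose factsheet fails such a completeness check.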

SLIDE 50

Auditable AI

See also Sandvig et al., Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms, http://www-personal.umich.edu/~csandvig/research/Auditing%20Algorithms%20--%20Sandvig%20--%20ICA%202014%20Data%20and%20Discrimination%20Preconference.pdf
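One basic audit of the kind this literature describes is an outcome test: compare a system's positive-decision rates across demographic groups. The sketch below is my illustration (not from the Sandvig et al. paper); it applies the "four-fifths rule" used in US employment-discrimination analysis, under which a ratio below 0.8 flags potential disparate impact.

```python
# Sketch of a disparate-impact audit over a log of (group, decision) pairs.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Lowest group rate divided by highest; < 0.8 flags possible bias."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: group A approved 50%, group B approved 30%.
audit_log = [("A", True)] * 50 + [("A", False)] * 50 + \
            [("B", True)] * 30 + [("B", False)] * 70

print(disparate_impact(audit_log))  # → 0.6, below 0.8, so flagged
```

Note the audit needs only the system's inputs and outputs, not its internals, which is why auditing is often proposed as a remedy for the black-box problem discussed below.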

SLIDE 51

Explainable AI

SLIDE 52

Debiasing

SLIDE 53

Algorithmic Bug Bounty

  • Proposal: use market and reputational incentives to facilitate a scalable, crowd-based system of auditing to uncover bias and other flaws.
  • In the US, some of the testing techniques face legal barriers.

Amit Elazari Bar On, Christo Wilson and Motahhare Eslami, “Beyond Transparency – Fostering Algorithmic Auditing and Research” (2018); see also https://motherboard.vice.com/en_us/article/8xkyj3/we-need-bug-bounties-for-bad-algorithms

SLIDE 54

Policy Responses

SLIDE 55

Ongoing Efforts to Address the Societal and Ethical Concerns

  • Academic research: Fairness, Accountability and Transparency in ML, https://www.fatml.org/
  • Corporate: ethics teams at Facebook, Alphabet, Microsoft
  • NGOs: AI Now, https://ainowinstitute.org/aap-toolkit.pdf
  • Multi-stakeholder: Partnership on AI
  • Principles: Asilomar AI Principles, https://futureoflife.org/ai-principles/

SLIDE 56

Guiding Principles for Corporations and Governments

  • Transparency
  • Safety, security, reliability
  • Data protection / privacy
  • Fairness

Tools

  • Policies
  • Testing and audits
  • Redress
  • Vendor due diligence
  • Employee training
SLIDE 57

AI Inventory and Impact Assessment

Data

  • Types of data used
  • Sources of the data
  • Why it is “fit” for purpose / reliable / unbiased / timely

Algorithms and Models

  • General design, criteria, or how they learn
  • Purposes for which they operate
  • Any material limits on their capabilities
  • Steps taken to avoid bias

Output

  • Testing/auditing schedule, results, and remediation
  • Whether and when to use third parties
  • Whether and when to have humans in the loop
  • Consider protected or underrepresented populations

From Lindsey Tonsager, Covington

SLIDE 58

Regulatory Responses - Transparency

“Each [airline reservation] system shall provide to any person upon request the current criteria used in editing and ordering flights for the integrated displays and the weight given to each criterion and the specifications used by the system’s programmers in constructing the algorithm.” 14 CFR 255.4.

SLIDE 59

Transparent AI – The Role of Government as Customer

Transparency – demand insight into:

  • Purpose
  • Algorithm
  • Training data
  • Validation

Examples: ORAS (Ohio Risk Assessment System); OTS (Offender Screening Tool, Maricopa County)
SLIDE 60

Legislative Responses - Bans

  • California: AB 1215 (2019) - 3-year moratorium on law enforcement’s use of any biometric surveillance system in connection with an officer camera or data collected by an officer camera. Penal Code Section 832.19.
  • “Biometric surveillance system” means any computer software or application that performs facial recognition or other biometric surveillance (but not in-field fingerprint collection).
  • Berkeley, CA and Somerville, MA: ban on all government use of facial recognition technology.
  • San Francisco: ban plus an approval process for future adoption.

SLIDE 61

Opening the black box in the EU

EU General Data Protection Regulation (GDPR)

  • A data controller must provide the data subject “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” (Art. 13)
  • See also Arts. 14-15 and 22.
SLIDE 62

Trade secrecy and the black box

SLIDE 63

State v. Loomis (Wis. Sup. Ct. 2016)

Use of COMPAS risk assessment at sentencing:

  • Weighting of factors is proprietary: no due process violation if the PSI includes limitations and cautions regarding the COMPAS risk assessment’s accuracy.
  • COMPAS predicts group behavior: a circuit court is expected to consider this caution as it weighs all of the factors.
  • Risk scores cannot be used to determine whether to incarcerate or the severity of the sentence.
  • Risk scores may be used as one factor in probation and supervision decisions.
SLIDE 64

Litigation in the US

  • K.W. ex rel. D.W. v. Armstrong, 298 F.R.D. 479 (D. Idaho 2014): automated decision-making system for Medicaid payments. The court ordered the state to disclose the formula; as part of a settlement, the state agreed to develop a new formula with greater transparency.
  • Ark. Dep’t of Human Servs. v. Ledgerwood, 530 S.W.3d 336 (Ark. 2017): homecare for individuals with profound physical disabilities allocated by complex computer algorithms. Injunctive relief for plaintiffs based on the agency’s failure to adopt the rule according to notice-and-comment procedures.
  • Houston Federation of Teachers v. Houston Independent School District, 251 F. Supp. 3d 1168 (S.D. Tex. 2017): court ruled in favor of teachers; SAS’s secrecy about its algorithm prohibited teachers from accessing, understanding, or acting on their own evaluations.
  • Barry v. Lyon, 834 F.3d 706 (6th Cir. 2016): state adopted a matching algorithm for food-assistance eligibility. More than 19,000 people were improperly matched, automatically disqualified, and given only vague notice; the court ruled the notice denied due process.

SLIDE 65

Litigation in the EU

  • Netherlands Lawyers’ Committee for Human Rights (NJCM) v. The Netherlands (Department of Social Affairs and Employment): the government adopted a risk-profiling system (SyRI) aimed at preventing social security, employment, and tax fraud. In 2018, SyRI flagged over 1,000 individuals or households as a “fraud risk.” The lawsuit is pending, alleging due process violations and discrimination, among other claims.