NOT SO FAST! THE VERY HUMAN LIMITS TO THE DEVELOPMENT OF AI IN LAW, LAW PRACTICE, AND LEGAL EDUCATION.

SLIDE 1

NOT SO FAST! THE VERY HUMAN LIMITS TO THE DEVELOPMENT OF AI IN LAW, LAW PRACTICE, AND LEGAL EDUCATION.

ASHLEY LONDON, J.D. & JAMES B. SCHREIBER, PH.D. DUQUESNE UNIVERSITY

SLIDE 2

“IT’S TIME TO PLAY THE MUSIC, IT’S TIME TO LIGHT THE LIGHTS…”

SLIDE 3

ARTIFICIAL INTELLIGENCE???

Let us begin by exploring a little idea together…

  • Is infinity a number?
SLIDE 4

WHAT IS AI ANYWAY?

  • What is intelligence?
SLIDE 5

THE MERRIAM-WEBSTER DICTIONARY DEFINITION OF INTELLIGENCE

  • “: the ability to learn or understand or to deal with new or trying situations : . . . the skilled use of reason (2) : the ability to apply knowledge to manipulate one’s environment or to think abstractly…”

SLIDE 6

HOW THE ACADEMY DEFINES INTELLIGENCE

  • Psychologist David Wechsler: a global concept that involves an individual’s ability to act purposefully, think rationally, and deal effectively with the environment.
  • AI researchers Legg and Hutter (2006): intelligence measures an agent’s ability to achieve goals in a wide range of environments.

SLIDE 7

BLACK’S LAW DICTIONARY DEFINITION OF ARTIFICIAL INTELLIGENCE

  • Software used to make computers and robots work better than humans. The systems are rule-based or neural networks. It is used to help make new products, robotics, human language understanding, and computer vision.

SLIDE 8

AI IS NEITHER INTELLIGENCE NOR ARTIFICIAL INTELLIGENCE

  • It is more “Artificial-Artificial Intelligence.” – Dr. Cathy O’Neil
  • Humans are helping MACHINES help humans.
  • The greater the human reliance on computers, the higher the risk of potential ethical issues and conundrums.

SLIDE 9

ALL WE REALLY HAVE IS A SYSTEM OF MATHEMATICAL MODELS

  • Mostly you hear the phrase “algorithms” or “machine learning.”
  • The terms are also used interchangeably.
  • So what exactly is an algorithm?
  • Algorithms are sets of rules that a computer is able to follow. Rules like: you must subtract the same quantity from both sides of an equation.
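
To make the idea concrete, here is a minimal sketch (Python, invented for illustration and not from the presentation) of an algorithm as an explicit set of rules, solving a*x + b = c by “subtracting from both sides”:

```python
# A toy algorithm: a fixed set of rules the computer follows in order.
def solve_linear(a: float, b: float, c: float) -> float:
    """Solve a*x + b = c for x by applying two rules."""
    if a == 0:
        raise ValueError("no unique solution when a == 0")
    c = c - b      # Rule 1: subtract b from both sides.
    return c / a   # Rule 2: divide both sides by a.

print(solve_linear(2, 3, 11))  # 2x + 3 = 11  ->  4.0
```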

SLIDE 10

BASIC GOALS OF AN ALGORITHM

  • Predict and classify.
  • Prediction is where we want to predict a number, typically: the price of a car or house, or a salary request.
  • Classification is where we want to predict membership in some pre-defined category: yes or no, purchase or not, mechanical failure, and so on.
  • To get to the ultimate prediction, however, the computer program must be loaded with “decision points” that trigger whether one route is taken or another. THIS is where issues arise, as the sketch below shows.
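
A hedged sketch of the distinction (every function, number, and threshold below is invented for illustration): prediction outputs a number, classification outputs a pre-defined category, and hard-coded decision points determine which route the program takes:

```python
def predict_price(mileage: float, age_years: float) -> float:
    """Prediction: estimate a number (a toy linear pricing model)."""
    return 20_000 - 0.05 * mileage - 800 * age_years

def classify_reading(engine_temp: float, vibration: float) -> str:
    """Classification: sort a reading into a pre-defined category.
    The two thresholds are the programmer's decision points."""
    if engine_temp > 110.0:   # decision point 1: who chose 110?
        return "mechanical failure likely"
    if vibration > 7.5:       # decision point 2: who chose 7.5?
        return "inspect soon"
    return "no action"

print(predict_price(60_000, 4))      # 13800.0
print(classify_reading(112.0, 3.0))  # mechanical failure likely
```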

SLIDE 11

DECISION POINTS EXAMPLE

  • Decision points are based on the programmer’s values or on the values of the person or entity commissioning the creation of the particular algorithm. Does anyone see a problem with this?
  • That is a great deal of power—and there is always a power issue when decisions are being made.
  • Dr. Cathy O’Neil tells the story of making dinner for the family.
  • The data are the ingredients on hand over time, plus the amount of time, and the level of motivation to make the dinner.
  • But she also needs to define success. Here, she defines success as whether her kids eat vegetables.

SLIDE 12

DECISION POINTS EXAMPLE, CONTINUED

  • With that as a definition she can start examining all the meal ingredients and the overall meal and work out what is linked to success.
  • If her kids defined success, a much different model would result, right? Maybe they would choose fewer vegetables and more dessert; that would be a successful dinner.
  • Now, with all of this data and the success definition, she can start optimizing the meals based on the linkage between the ingredients and the results to see if every meal is a “success.”
  • What if the ingredients are sugar- and fat-based sauces on the vegetables, and increasing the sugar leads to more success? (See the sketch below.)
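
A minimal sketch of the point (the meals and scores are invented, not Dr. O’Neil’s actual data): swapping in a different definition of success changes which meal the optimization calls “best”:

```python
meals = [
    {"name": "stir fry",       "veggies_eaten": 3, "dessert": 0},
    {"name": "mac and cheese", "veggies_eaten": 1, "dessert": 1},
    {"name": "sundae night",   "veggies_eaten": 0, "dessert": 3},
]

def parent_success(meal):   # parent's definition: kids eat vegetables
    return meal["veggies_eaten"]

def kid_success(meal):      # kids' definition: more dessert
    return meal["dessert"]

print(max(meals, key=parent_success)["name"])  # stir fry
print(max(meals, key=kid_success)["name"])     # sundae night
```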

SLIDE 13

ALGORITHMS ARE VALUE LADEN

  • Algorithms have an inherent power differential embedded right in the decision-making apparatus.
  • By “value laden” we mean the person who developed the algorithm chose the variables included, the definition of success, and the optimization process for that success definition.
  • While the developer may have so-called “good intentions,” it is nearly impossible for one person to account for all potential sources of bias, including implicit, or unconscious, bias.

SLIDE 14

A STRONG BIAS ALREADY EXISTS IN THE LEGAL FIELD FOR THE AGGRESSIVE USE AND IMPLEMENTATION OF AI

  • Case in point: we are hosting an AI conference right now at Duquesne University School of Law! (Among many other law schools doing the same.)
  • Articles like “Law Firms Need Artificial Intelligence to Stay in the Game,” by ALM Intelligence.
  • A need to find better, faster, cheaper ways to address the growing social justice gap. ACCESS TO JUSTICE.
  • Cheaper, faster electronic discovery for litigation.
  • A need to eliminate human errors, reduce risk, and manage costs to clients.
SLIDE 15

BIG LAW FIRMS ARE FASTEST ADOPTERS OF AI RIGHT NOW

  • The ABA’s “2018 Legal Technology Survey Report” found that AI usage is greatest at law firms with over 100 attorneys, which were the most likely to use the technology.
  • For those that saw a benefit to adopting AI, saving time and increasing efficiency was the highest-rated advantage that AI-powered software could provide. Reducing costs and predicting outcomes/reducing risks were also cited as important benefits.
  • Accuracy remained the biggest concern about AI, the only response to receive a consensus of over 50% (61% of respondents at BigLaw firms of 500+ attorneys).

Prominent internationally known “Big Law” firm O’Melveny & Myers LLP, based in Los Angeles, CA, recently announced it would serve as a pioneer in the introduction of the use of Artificial Intelligence (AI) in recruiting and hiring associates (O’Melveny & Myers, 2018) in an attempt to improve diversity.

SLIDE 16

THE SEDUCTION OF PREDICTING THE “RIGHT” RESULT

  • Lawyers and law school administrators are salivating over the prospect of using big data analytics to “predict” a variety of unknowns.
  • Finding algorithms that can predict “success” on metrics such as first-time and “ultimate bar passage rates” in response to new ABA requirements (and, let’s be honest, to improve a law school’s ranking).
  • The global legal analytics market is expected to reach a value of $1.8 billion by 2022. (Hichman, 2018)
  • Law students need to understand the benefits and detriments of the use of AI, not only for their clients but for themselves. LexisNexis just announced it is releasing a new product called Context. This language analytics program supposedly will allow legal professionals to build arguments designed to sway judges in favor of their clients.

SLIDE 17

WE TEND TO USE THREE SISTERS OF ALGORITHMS

  • Linear Models
  • Tree-Based Models
  • Neural Networks
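
A minimal sketch of the three families on a toy task, assuming scikit-learn is available (the dataset and model settings are invented for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# A synthetic yes/no classification problem.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

models = {
    "linear model":   LogisticRegression(max_iter=1000),
    "tree-based":     DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X, y)
    print(f"{name}: training accuracy {model.score(X, y):.2f}")
```
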
SLIDE 18

AI IS NOT AUTOMATICALLY EVIL. BUT THE FACT THAT HUMANS CREATE AI SHOULD GIVE US PAUSE.

  • These are powerful technologies.
  • Amazing things can be done with them to advance human, business, and legal interests.
  • But in this point-and-click era, amazingly bad things can be done or perpetuated.
  • Let us look at a few…

As Fei-Fei Li, one of the major developers of these technologies, recently argued, “we will hit a moment when it will be impossible to course-correct.”

SLIDE 19

SLIDE 20

JOBS JOBS JOBS

  • The hiring process is never a single decision.
  • The process is a series of decisions over time.
  • The key with these algorithmic systems is the rejection aspect.
  • These rejections are typically automated and can easily reflect the bias of the programmer.
  • Use of algorithms on job-hunting sites also affects who learns about the job(s).
  • Hiring algorithms can (and do) rank job seekers, and in doing so highlight what would normally be a marginal or unimportant difference, as sketched below.
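
A hedged sketch of that last point (the scores and the cutoff are invented): ranking plus a fixed number of slots converts a 0.002 score difference into an automated rejection:

```python
candidates = {"A": 0.874, "B": 0.871, "C": 0.869, "D": 0.868}
INTERVIEW_SLOTS = 2  # hard cutoff chosen by the employer

# Rank candidates by score, then cut at the slot limit.
ranked = sorted(candidates, key=candidates.get, reverse=True)
advance, rejected = ranked[:INTERVIEW_SLOTS], ranked[INTERVIEW_SLOTS:]

print("advance:", advance)    # ['A', 'B']
print("rejected:", rejected)  # ['C', 'D'] -- cut on a 0.002 gap
```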

SLIDE 21

AMAZON

  • Amazon had to kill an algorithm-based hiring system. Why?
  • The tool disadvantaged FEMALE candidates.
  • It penalized those who went to certain women’s colleges, presumably not attended by many existing Amazon engineers.
  • It downgraded resumes that included the word “women’s” — as in “women’s rugby team.”
  • It privileged resumes with the kinds of verbs that men tend to use, like “executed” and “captured.” (A sketch of this mechanism follows.)
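
Amazon never published its model, so the following is only a hypothetical sketch of the reported mechanism (every weight is invented): token weights learned from a decade of mostly male engineers’ resumes can quietly penalize “women’s” and reward stereotypically male verbs:

```python
# Invented weights standing in for what a model might learn from
# a historically skewed pool of "successful" resumes.
LEARNED_WEIGHTS = {
    "executed": +0.8,   # verbs over-represented in past hires
    "captured": +0.7,
    "women's":  -0.9,   # token correlated with rejected resumes
}

def score_resume(text: str) -> float:
    """Sum the learned weight of every token in the resume text."""
    return sum(LEARNED_WEIGHTS.get(tok, 0.0) for tok in text.lower().split())

print(score_resume("captained the women's rugby team"))    # -0.9
print(score_resume("executed and captured key projects"))  # +1.5
```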

SLIDE 22

JOBS JOBS JOBS

  • In the case of systems meant to automate candidate search and hiring, we need to ask ourselves: What assumptions about worth, ability, and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded? – Meredith Whittaker, Executive Director, AI Now Institute.
  • So if we were to create an algorithm to hire faculty at Duquesne, with all the data on hiring, promotion and tenure, grants, etc., for thousands and thousands of current and former employees, what would it tell you?
  • Data quality?
  • Potential for bias?
  • Who is at the table during the creation of this algorithm, and how is it to be applied?
SLIDE 23

PREDICTING RE-OFFENDERS- AI’S ROLE IN RISK ASSESSMENT IN A CRIMINAL LAW CONTEXT

  • ProPublica obtained the risk scores assigned to more than 7,000 people arrested in Broward County (2013-2014).
  • It used the same criteria as the company.
  • The algorithm was only slightly better than a coin flip, at 61% accuracy… which is not the best way to think about this. Criminal law and criminal procedure are complex, and fundamental rights are at stake.
  • ProPublica observed the algorithm was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants (see the sketch below).
  • White defendants were mislabeled as low risk more often than black defendants.
  • Case 1: Priors: 2 armed robberies, 1 attempted armed robbery
  • Case 2: Priors: 4 juvenile misdemeanors
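
A minimal sketch of the kind of check ProPublica ran (the eight records below are invented, not the Broward County data): compare false positive rates, i.e. people flagged high risk who did not go on to reoffend, across groups:

```python
records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, False),
    ("white", True,  True),  ("white", False, False),
    ("white", False, False), ("white", True,  False),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in a group who were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, f"FPR = {false_positive_rate(group):.0%}")  # 67% vs 33%
```
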
SLIDE 24

ONE RECENT STUDY HAS ALREADY FOUND SELF-DRIVING CARS MAY BE MORE LIKELY TO HIT PEDESTRIANS OF COLOR

  • Autonomous vehicles are touted as a solution to reduce the transit costs of goods, as well as reducing an individual’s reliance on owning a personal vehicle.
  • BUT, so far, these systems have shown an inability to entirely mitigate risks of pedestrian fatalities.
  • Several of the most state-of-the-art detection models show uniformly poorer performance when detecting pedestrians with Fitzpatrick skin types between 4 and 6.

SLIDE 25

CHINA- A NEW UNIVERSAL CREDIT SCORE SYSTEM GOES BEYOND HARVESTING YOUR FINANCIAL DATA

  • The development and implementation of the social scoring system should scare everyone. Unlike traditional credit scoring systems that account for assets, income, and debt, this one counts things like:
  • social media activities (i.e., political commentary, comments by friends, mention of hobbies and activities);
  • health records;
  • online purchases;
  • tax payments;
  • legal matters, and the people you associate with;
  • AND images gathered from China’s 200 million surveillance cameras and facial recognition software.

  • Publicly available in 2020.
SLIDE 26

BABYSITTERS EVEN? THE PREDICTIM SYSTEM

  • Predictim’s scans analyze the entire history of a babysitter’s social media, which, for many of the youngest sitters, can cover most of their lives.
  • And the sitters are told they will be at a great disadvantage for the competitive jobs if they refuse.
  • Predictim’s executives say they use language-processing algorithms and image-recognition software known as “computer vision” to assess babysitters’ Facebook, Twitter, and Instagram posts for clues about their offline lives.
  • The parent is provided the report exclusively and does not have to tell the sitter the results.

SLIDE 27

THE PREDICTIM SYSTEM DASHBOARD- WHO IS CREATING THESE RISK ASSESSMENTS?

  • From the Washington Post:
  • The system offered an automated “risk rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful,” and having a “bad attitude.”
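
Predictim’s actual methods are proprietary; the sketch below (cue words and scale mapping are invented) only illustrates how easily a crude keyword scorer can emit a 1-to-5 “risk rating” from social media posts, and how little such a number may mean:

```python
BULLYING_CUES = {"loser", "hate", "stupid", "shut"}

def risk_rating(posts: list[str]) -> int:
    """Map a count of cue words to an invented 1 (low) to 5 (high) scale."""
    hits = sum(word in BULLYING_CUES
               for post in posts for word in post.lower().split())
    return min(5, 1 + hits)

print(risk_rating(["I hate Mondays", "great day at the park!"]))  # 2 out of 5
```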

SLIDE 28

LITTLE HOPE

  • The European Union’s expert group on AI recently released its “Ethics Guidelines for Trustworthy AI.”
  • There were only 4 ethicists on board at the time (of a group of 52).
  • Committee member Thomas Metzinger, Professor of Theoretical Philosophy at the University of Mainz, stated:
  • “The Trustworthy AI story is a marketing narrative invented by industry, a bedtime story for tomorrow’s customers. The underlying guiding idea of a ‘trustworthy AI’ is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy).”
  • Metzinger and Urs Bergmann drafted red lines marking uses AI should be barred from: “The use of lethal autonomous weapon systems was an obvious item on our list, as was the AI-supported assessment of citizens by the state (social scoring) and, in principle, the use of AIs that people can no longer understand and control.” – Metzinger
  • But all red-line and non-negotiable language was removed from the document so that it could present a positive vision.

SLIDE 29

AND EVEN LESS HOPE IN THE NEAR FUTURE…

  • Finally, people are just making these models, testing them, and selling them without an ethical framework, let alone a variety of people at the table.
  • The AI Now Institute at NYU found a shocking lack of diversity in the AI industry:
  • At Facebook, women comprise just 15% of AI researchers. At Google, that shrinks to 10%.
  • 80% of all AI professors are men.
  • At Google, Facebook, and Microsoft, less than 5% of the workforce is black, compared with 13% of the U.S. as a whole.
  • LACK OF DIVERSITY = BIASED ALGORITHMS
  • Creating problematic algorithms is one thing. Selling them quickly to reduce personal liability is the next awful step in side-stepping consequences from poorly designed and bias-driven models.

SLIDE 30

ETHICS RULES GOVERNING LAWYER BEHAVIOR ARE NOTORIOUSLY SLOW TO ADAPT AND CHANGE

  • Consider: attorneys were only permitted to ADVERTISE in 1977. (Bates v. State Bar of Arizona)
  • In 2018, the ABA Model Rules of Professional Conduct changed to ALLOW real-time electronic communication between attorneys and potential clients. A HUGE sea change!
  • Critics note that these rules change at a glacial pace, which is a problem when attorneys will be tasked to both prosecute and represent victims and perpetrators in this rush to adopt, adapt, and employ AI.
  • Lawyers’ lack of technical knowledge is so well known that some jurisdictions are now requiring tech CLEs every other year, just like substance abuse training! (NC)
  • It is not enough that attorneys KNOW about technology; we also need to understand the legal and ethical issues that are already arising.

SLIDE 31

QUESTIONS ATTORNEYS NEED TO ASK WHEN WORKING WITH ARTIFICIAL INTELLIGENCE

  • What is the goal of this algorithm?
  • What data is being input?
  • Where and how and from whom was that data obtained?
  • What are the algorithm’s decision points?
  • Who decided on those decision points?
  • Were potential issues of bias accounted for in constructing those decision points, and how?
  • Do you have an ethicist on the development team? Do you have a true critical outsider providing input?

**Read the European Union’s Ethics Guidelines for Trustworthy AI as a basic starting point in educating yourself on the issues.