NOT SO FAST! THE VERY HUMAN LIMITS TO THE DEVELOPMENT OF AI IN LAW, LAW PRACTICE, AND LEGAL EDUCATION.
ASHLEY LONDON, J.D. & JAMES B. SCHREIBER, PH.D. DUQUESNE UNIVERSITY
values of the person or entity commissioning the creation of the particular algorithm. Does anyone see a problem with this?
when decisions are being made.
whether her kids eat vegetables.
the overall meal and work on what is linked to success.
Maybe they would choose fewer vegetables and more dessert; that would be a successful dinner.
meals based on the linkage between the ingredients and the results to see if every meal is a “success.”
and increasing the sugar leads to more success?
embedded right in the decision-making apparatus.
developed the algorithm chose the variables included, the definition of success, and the
intentions,” it is nearly impossible for one person to account for all potential sources of bias, including implicit (unconscious) bias.
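The dinner analogy can be made concrete with a minimal sketch. Everything here is hypothetical (the data, the field names, and both scoring functions are invented for illustration); the point is only that whoever writes the "success" function decides which meals the algorithm reports as successful.

```python
# Hypothetical meal records: same data, judged two different ways below.
meals = [
    {"vegetables": 3, "dessert": 0, "kids_happy": False},
    {"vegetables": 1, "dessert": 1, "kids_happy": True},
    {"vegetables": 0, "dessert": 2, "kids_happy": True},
]

def parent_success(meal):
    # One designer's value judgment: a dinner "succeeds" if it has vegetables.
    return meal["vegetables"] >= 2

def kid_success(meal):
    # A different designer's value judgment: a dinner "succeeds" if the kids are happy.
    return meal["kids_happy"]

# Identical inputs, opposite verdicts -- the bias lives in the definition of success.
print([parent_success(m) for m in meals])  # [True, False, False]
print([kid_success(m) for m in meals])     # [False, True, True]
```

The same data yields opposite verdicts depending on whose values were encoded, which is the slide's point: the bias is embedded in the decision-making apparatus itself, not in the data alone.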
Law! (Among many other law schools doing the same.)
Intelligence.
ACCESS TO JUSTICE.
usage is greatest at law firms with over 100 attorneys, which were the most likely to use the technology.
increasing efficiency was the highest-rated advantage that AI-powered software could provide. Reducing costs and predicting
response to receive a consensus of over 50% (61% of respondents at BigLaw firms of 500+ attorneys).
The prominent, internationally known “Big Law” firm O’Melveny & Myers LLP, based in Los Angeles, CA, recently announced it would serve as a pioneer in the introduction of Artificial Intelligence (AI) in recruiting and hiring associates in an attempt to improve diversity (O’Melveny & Myers, 2018).
prospect of using big data analytics to “predict” a variety of unknowns.
first time and “ultimate bar passage rates” in response to new ABA requirements (and, let’s be honest, to improve a law school’s ranking).
$1.8 billion by 2022. (Hichman, 2018)
use of AI not only for their clients, but for themselves. LexisNexis just announced it is releasing a new product called Context. This language-analytics program will supposedly allow legal professionals to build arguments designed to sway judges in favor of their clients.
As Fei-Fei Li, one of the major developers of these technologies recently argued, “we will hit a moment when it will be impossible to course-correct.”
aspect.
bias of the programmer.
learns about the job(s).
doing so highlight what would normally be a marginal or unimportant difference.
need to ask ourselves: What assumptions about worth, ability, and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded? –Meredith Whittaker, Executive Director, AI Now Institute.
promotion and tenure, grants, etc., for thousands and thousands of current and former employees, what would it tell you?
County (2013-2014).
stake.
labeling them this way at almost twice the rate of white defendants.
reduce the transit costs of goods, as well as an individual’s reliance on owning a personal vehicle.
entirely mitigate risks of pedestrian fatalities.
systems show uniformly poorer performance when detecting pedestrians with Fitzpatrick skin types 4 through 6.
system should scare everyone. Unlike traditional credit scoring systems that account for assets, income, and debt, this one counts things like:
friends, mention of hobbies and activities);
cameras and facial recognition software.
which, for many of the youngest sitters, can cover most of their lives.
competitive jobs if they refuse.
image-recognition software known as “computer vision” to assess babysitters’ Facebook, Twitter and Instagram posts for clues about their
sitter the results.
rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2
being “disrespectful” and having a “bad attitude.”
stated,
Machines are not trustworthy; only humans can be trustworthy (or untrustworthy).”
weapon systems was an obvious item on our list, as was the AI-supported assessment of citizens by the state (social scoring) and, in principle, the use of AIs that people can no longer understand and control.” – Metzinger.
without an ethical framework, let alone a variety of people at the table.
AI industry:
10%.
compared with 13% of the U.S. as a whole.
personal liability is the next awful step in side-stepping the consequences of poorly designed, bias-driven models.
electronic communication between attorneys and potential clients. A HUGE sea change!
will be tasked to both prosecute and represent victims and perpetrators in this rush to adopt, adapt, and employ AI.
requiring Tech CLEs every other year, just like substance abuse! (NC)
and ethical issues that are already arising.
Read the European Union’s Ethics Guidelines for Trustworthy AI as a basic starting point in educating yourself on the issues.