History of AI, Current Trends, Prospective Trajectories


SLIDE 1

History of AI, Current Trends, Prospective Trajectories

Giovanni Sileno g.sileno@uva.nl

Winter Academy on Artificial Intelligence and International Law

Asser Institute – 11 February 2019

SLIDE 2

What is Artificial Intelligence?

SLIDE 3

What is Artificial Intelligence?

SLIDE 4

What is Artificial Intelligence?

  • What is made by humans?
SLIDE 5

What is Artificial Intelligence?

  • What is made by humans?
  • What is induced by humans?
SLIDE 6

What is Artificial Intelligence?

  • What is made by humans?
  • What is induced by humans?
  • What is simulated, not true?
SLIDE 7

What is Artificial Intelligence?

SLIDE 8

What is Artificial Intelligence?

  • Problem-solving ability?
SLIDE 9

What is Artificial Intelligence?

  • Problem-solving ability?
  • Capacity of abstraction?
SLIDE 10

What is Artificial Intelligence?

  • Problem-solving ability?
  • Capacity of abstraction?
  • Capacity of organization?
SLIDE 11

What is Artificial Intelligence?

  • Problem-solving ability?
  • Capacity of abstraction?
  • Capacity of organization?
  • Creativity?
SLIDE 12

What is Artificial Intelligence?

  • Problem-solving ability?
  • Capacity of abstraction?
  • Capacity of organization?
  • Creativity?
  • Self-awareness?
SLIDE 13

What is Artificial Intelligence?

  • Problem-solving ability?
  • Capacity of abstraction?
  • Capacity of organization?
  • Creativity?
  • Self-awareness?
  • Manipulation ability?
SLIDE 14

AI as a discipline

  • Most disciplines emerge around specific domains of knowledge, settling upon methods deemed adequate to that domain.

    Biology: life and living organisms
    Physics: laws of the universe
    Law: legal systems and justice
    Computer science: computational systems

but Artificial Intelligence?

SLIDE 15

AI as a discipline

  • As a discipline, AI is not primarily connected to a knowledge

domain, but to a purpose:

conceiving artificial systems that are intelligent

  • All other disciplines (and their methods, or refinements of their

methods) become for AI instrumental to that purpose (or to sub-goals derived from that purpose).

  • But what is meant by this purpose?
SLIDE 16

Categories of AIs

systems that
  think like humans | think rationally
  act like humans   | act rationally

Russell and Norvig, "Artificial Intelligence: a Modern Approach", chapter 1 available at https://people.eecs.berkeley.edu/~russell/aima1e/chapter01.pdf

SLIDE 17

Categories of AIs

systems that
  think like humans | think rationally      (MENTAL dimension)
  act like humans   | act rationally        (BEHAVIOURAL dimension)

SLIDE 18

Categories of AIs

systems that
  think like humans | think rationally      (MENTAL dimension)
  act like humans   | act rationally        (BEHAVIOURAL dimension)

  DESCRIPTIVE dimension (left column): standards set by actual human behaviour
  PRESCRIPTIVE dimension (right column): standards set by ideal (human) behaviour

SLIDE 19

systems that
  think like humans | think rationally
  act like humans   | act rationally

Turing test approach

artificial and natural not distinguishable behind a neutral interface

SLIDE 20

systems that
  think like humans | think rationally
  act like humans   | act rationally

Cognitive modeling approach

AI reproducing cognitive functions observed in humans.

NATURA ARTIS MAGISTRA argument: if these cognitive functions are required for our intelligence, they might be required to achieve artificial intelligence.

EXPLAINABILITY argument: if they explain our internal working, they can help to interpret AI functioning.

SLIDE 21

systems that
  think like humans | think rationally
  act like humans   | act rationally

The “Laws of Thought” approach

AI producing logically valid inferences

SLIDE 22

systems that
  think like humans | think rationally
  act like humans   | act rationally

The "Rational Agent" approach

AI decision-making following standards of rationality:

  – the agent (an autonomous entity) selects the best choice
  – to achieve its goals
  – given its beliefs
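To make this decision rule concrete, here is a minimal sketch of an agent selecting the best choice given its beliefs; the actions, probabilities and utilities are invented for illustration.

```python
# Beliefs: P(outcome | action); utilities: how the agent values each outcome.
# All numbers are invented for illustration.
beliefs = {
    "take_umbrella": {"dry": 1.0},
    "leave_umbrella": {"dry": 0.6, "wet": 0.4},
}
utility = {"dry": 10, "wet": -5}

def best_choice(beliefs, utility):
    expected = {action: sum(p * utility[o] for o, p in outcomes.items())
                for action, outcomes in beliefs.items()}
    return max(expected, key=expected.get)   # select the best choice

print(best_choice(beliefs, utility))  # -> take_umbrella (expected 10 vs 4)
```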

SLIDE 23

Recent advances

systems that
  think like humans                                  | think rationally
  perform like humans in narrow (specific) contexts  | act rationally in general contexts

  • In specific tasks, performance can be easily measured (quantified).

    → systems can adapt to perform better than humans: outperform humans.
SLIDE 24

AI waves

  • This variety of topics has been developed through cycles of springs (and winters) centered around different topics.

  • Some of the peaks:

ad-hoc systems with handcrafted knowledge (60s/70s)

expert systems/problem solving methods (80s)

robotics, computer vision, speech recognition (80s)

evolutionary computing (90s)

agent-based modeling and multi-agent systems (90s/00s)

semantic web (00s)

deep learning (10s)

SLIDE 25

AI waves

  • This variety of topics has been developed through cycles of springs (and winters) centered around different topics.

  • Some of the peaks:

ad-hoc systems with handcrafted knowledge (60s/70s)

expert systems/problem solving methods (80s)

robotics, computer vision, speech recognition (80s)

evolutionary computing (90s)

agent-based modeling and multi-agent systems (90s/00s)

semantic web (00s)

deep learning (10s)

  • Although each time the mainstream topic eclipsed all the others,

the following advances were enabled only because fundamental research in the others somehow continued.

SLIDE 26

AI waves

  • This variety of topics has been developed through cycles of springs (and winters) centered around different topics.

  • Some of the peaks:

ad-hoc systems with handcrafted knowledge (60s/70s)

expert systems/problem solving methods (80s)

robotics, computer vision, speech recognition (80s)

evolutionary computing (90s)

agent-based modeling and multi-agent systems (90s/00s)

semantic web (00s)

deep learning (10s)

  • Although each time the mainstream topic eclipsed all the others, the following advances were enabled only because fundamental research in the others somehow continued. To present this phenomenon in more detail, we will now look into the start of AI, or its first wave.

SLIDE 27

The start of AI

SLIDE 28

The start of Artificial Intelligence

  • Artificial Intelligence is a research field whose name was

decided in a workshop at Dartmouth College in 1956.

  • A group of scientists gathered at the Dartmouth campus for a 6-8 week brainstorming on the conception of "Machines that Think", and laid the foundations of at least three decades of research.

SLIDE 29

The start of Artificial Intelligence

  • Artificial Intelligence is a research field whose name was

decided in a workshop at Dartmouth College in 1956.

  • A group of scientists gathered at the Dartmouth campus for a 6-8 week brainstorming on the conception of "Machines that Think", and laid the foundations of at least three decades of research.

  • But such a feat rarely occurs by chance.
SLIDE 30

Contextualization

  • Operational Research (since ~1930s)

a sub-field of applied mathematics that emerged in the years prior to World War II, when the UK was preparing for war. It focuses on decision-making for operational settings:

manufacturing

transportation

supply chain

routing

scheduling

...

SLIDE 31

  • Cybernetics (~1940s)

    emerged as a transdisciplinary approach to investigate systems of regulation, in fields as diverse as electronics, mechanics, biology and neurosciences. It considers systems holistically and studies their internal control structures, constraints and possibilities.

    negative feedback: as a structure, as a process

Contextualization

SLIDE 32

  • Technological advances in Electronics (~1950)

    Invention of the bipolar transistor (1947)
    First generation computers (vacuum-tube-based); second generation computers (transistor-based)

ENIAC: 30 tons, area of about 1,800 square feet.

Contextualization

SLIDE 33

  • Theoretical results about Computation and Information

    Alan Turing:
    – formal model of computation (1937): enabling all computing processes to be written logically (Universal Turing Machine)
    – the "Imitation Game" (1950): defining an operational standard for intelligence (Turing Test)

    Claude E. Shannon:
    – Information theory (1948): enabling information to be quantified (for communication purposes), and so to perform data compression and to identify the limits of signal processing

Contextualization

SLIDE 34

  • Psychology

    behaviorism
    – B. F. Skinner, "The Behavior of Organisms: An Experimental Analysis" (1938)
    – removal of the mental element; focus on operant conditioning (rewards, punishments)

    cognitive psychology
    – K. Craik, "The Nature of Explanation" (1943)
    – recovery of the mental element, folk psychology, compatible with an information-processing view of cognition

Contextualization

SLIDE 35

Contextualization

  • Psychology (Neural Networks)

    Thoughts and body activity result from interactions among neurons within the brain: Alexander Bain (1873), William James (1890).

    Simultaneous activation of neurons leads to increases in synaptic strength between them: Donald Hebb (1949).

    Presentation of the first computational machines simulating neural networks: Farley and Clark (1954); Rochester, Holland, Habit, and Duda (1956).

SLIDE 36

Who was at the Dartmouth Workshop (1956)?

  • A remarkable group of ~20 scientists and engineers, including:

    – John McCarthy (LISP language, situation calculus, non-monotonic logics)
    – Marvin Minsky (frames, perceptron, society of minds)
    – Herbert Simon (logic theorist, general problem solver, bounded rationality)
    – Allen Newell (logic theorist, general problem solver, the knowledge level)
    – Ray Solomonoff (father of algorithmic probability, algorithmic information theory)
    – Arthur Lee Samuel (first machine learning algorithm for checkers)
    – W. Ross Ashby (pioneer in cybernetics, law of requisite variety)
    – Claude Shannon (father of information theory)
    – John Nash (father of game theory)

    future Nobel laureates

SLIDE 37

Who was at the Dartmouth Workshop (1956)?

  • A remarkable group of ~20 scientists and engineers, including:

    – John McCarthy (LISP language, situation calculus, non-monotonic logics)
    – Marvin Minsky (frames, perceptron, society of minds)
    – Herbert Simon (logic theorist, general problem solver, bounded rationality)
    – Allen Newell (logic theorist, general problem solver, the knowledge level)
    – Ray Solomonoff (father of algorithmic probability, algorithmic information theory)
    – Arthur Lee Samuel (first machine learning algorithm for checkers)
    – W. Ross Ashby (pioneer in cybernetics, law of requisite variety)
    – Claude Shannon (father of information theory)
    – John Nash (father of game theory)

    future Nobel laureates

The workshop brought no tangible result, but it marked a shift from semantic approaches to symbolic processing.

SLIDE 38

Who was at the Dartmouth Workshop (1956)?

  • A remarkable group of ~20 scientists and engineers, including:

    – John McCarthy (LISP language, situation calculus, non-monotonic logics)
    – Marvin Minsky (frames, perceptron, society of minds)
    – Herbert Simon (logic theorist, general problem solver, bounded rationality)
    – Allen Newell (logic theorist, general problem solver, the knowledge level)
    – Ray Solomonoff (father of algorithmic probability, algorithmic information theory)
    – Arthur Lee Samuel (first machine learning algorithm for checkers)
    – W. Ross Ashby (pioneer in cybernetics, law of requisite variety)
    – Claude Shannon (father of information theory)
    – John Nash (father of game theory)

    future Nobel laureates

The workshop brought no tangible result, but it marked a shift from semantic approaches to symbolic processing: a strong agenda.

SLIDE 39

[Diagram] AI AS ENGINEERING OF THE "MIND":
  logicist pole: reasoning and decision-making
  empiricist pole: induction of functions from data

SLIDE 40

[Diagram] AI AS ENGINEERING OF THE "MIND":
  logicist pole: reasoning and decision-making (logic)
  empiricist pole: induction of functions from data (probability, artificial neural networks (ANNs))
  monolithic / homogeneous / heterogeneous systems

SLIDE 41

[Diagram] AI AS ENGINEERING OF THE "MIND":
  "Neats": elegant solutions, provably correct
  "Scruffies": ad-hoc solutions, empirical evaluation
  logicist pole: reasoning and decision-making
  empiricist pole: induction of functions from data
  monolithic / homogeneous / heterogeneous systems

characteristics of most people at the Dartmouth workshop

SLIDE 42

[Diagram] AI AS ENGINEERING OF THE "MIND":
  "Neats": elegant solutions, provably correct
  "Scruffies": ad-hoc solutions, empirical evaluation
  logicist pole: reasoning and decision-making
  empiricist pole: induction of functions from data
  monolithic / homogeneous / heterogeneous systems

characteristics of most people at the Dartmouth workshop

There were few researchers working on neural networks and, more in general, learning was not brought to the foreground.

SLIDE 43

What/who stayed in the background

  • In the words of another remarkable researcher (who was

invited but could not go):

– John Holland (neural networks, pioneer of complex adaptive systems and genetic algorithms)

[It resulted that] “there was very little interest in learning. In my honest opinion, this held up AI in quite a few ways. It would have been much better if Rosenblatt’s Perceptron work, or in particular Samuels’ checkers playing system, or some of the other early machine learning work, had had more of an impact. In particular, I think there would have been less of this notion that you can just put it all in as expertise” [..] “it’s still not absolutely clear to me why the other approaches fell away. Perhaps there was no forceful advocate.”

P. Husbands (2008). An Interview With John Holland. In P. Husbands, O. Holland, & M. Wheeler (Eds.), The Mechanical Mind in History (pp. 383–396).

SLIDE 44

Ingredients for many stories of shining success and dramatic fall in AI

SLIDE 45
  • societal needs
  • strong advocates
  • initial unexpected successes
  • adequate computational technologies

Ingredients for many stories of shining success and dramatic fall in AI

SLIDE 46
  • societal needs
  • strong advocates
  • initial unexpected successes
  • adequate computational technologies
  • financial resources

Ingredients for many stories of shining success and dramatic fall in AI

SLIDE 47

Ingredients for many stories of shining success and dramatic fall in AI

  • societal needs
  • strong advocates
  • initial unexpected successes
  • adequate computational technologies
  • financial resources

raising expectations → illusions and then delusions (hype cycle)

SLIDE 48

Ingredients for many stories of shining success and dramatic fall in AI

  • societal needs
  • strong advocates
  • initial unexpected successes
  • adequate computational technologies
  • financial resources

raising expectations → illusions and then delusions

  • but still (most of the time) there are concrete achievements. They just become infrastructure: invisible, but necessary.

SLIDE 49

Computational intelligence

SLIDE 50

Algorithm = Logic + Control

“An algorithm can be regarded as consisting of

  – a logic component, which specifies the knowledge to be used in solving problems, and
  – a control component, which determines the problem-solving strategies by means of which that knowledge is used.

The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency.”

Kowalski, R. (1979). Algorithm = Logic + Control. Communications of the ACM, 22(7), 424–436.
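As an illustration of Kowalski's separation, the hypothetical Python sketch below keeps the logic component (an ancestor relation over parent facts) fixed while varying only the control component (breadth-first vs depth-first search); the relation computed is the same, only the order of work differs.

```python
# Logic component: the ancestor relation, defined over parent facts.
PARENT = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

def ancestors(person, strategy):
    frontier, found = [person], set()
    while frontier:
        # Control component: which candidate to expand next.
        x = frontier.pop(0) if strategy == "breadth-first" else frontier.pop()
        for parent, child in PARENT:
            if child == x and parent not in found:
                found.add(parent)
                frontier.append(parent)
    return found

# Same meaning under both controls; only the order of work (efficiency) differs.
print(ancestors("ann", "breadth-first") == ancestors("ann", "depth-first"))  # True
```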

SLIDE 51

Imperative style of programming: you command the directions

SLIDE 52

Imperative style of programming: you command the directions

SLIDE 53

Imperative style of programming: you command the directions

  • What if the

labyrinth changes?

SLIDE 54

Declarative style of programming: you give just the labyrinth; the computer finds the way.

SLIDE 55

Declarative style of programming: you give just the labyrinth; the computer finds the way.

  • For instance, via

trial, error and backtracking.

SLIDE 56

Declarative style of programming: you give just the labyrinth; the computer finds the way.

  • For instance, via

trial, error and backtracking.

SLIDE 57

Declarative style of programming: you give just the labyrinth; the computer finds the way.

  • For instance, via

trial, error and backtracking.

SLIDE 58

Declarative style of programming: you give just the labyrinth; the computer finds the way.

  • For instance, via

trial, error and backtracking.

SLIDE 59

[Diagram] WELL-DEFINED PROBLEM: initial state → goal state; KNOWLEDGE; PROBLEM-SOLVING METHOD

Declarative style of programming: you give just the labyrinth; the computer finds the way.

  • For instance, via

trial, error and backtracking.
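A minimal sketch of this declarative idea, under an assumed grid encoding ('#' wall, 'S' start, 'G' goal): the program states only the labyrinth and a generic search; the way out is found by trial, error and backtracking.

```python
# Grid encoding (assumed): '#' wall, ' ' free, 'S' start, 'G' goal.
MAZE = ["#######",
        "#S    #",
        "# ### #",
        "#   #G#",
        "#######"]

def solve(maze):
    grid = [list(row) for row in maze]
    start = next((r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == "S")

    def search(r, c, path):
        if not (0 <= r < len(grid) and 0 <= c < len(grid[0])):
            return None                          # outside the labyrinth
        if grid[r][c] == "G":
            return path                          # goal reached
        if grid[r][c] not in (" ", "S"):
            return None                          # wall or already visited
        grid[r][c] = "."                         # mark as visited
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # trial
            found = search(r + dr, c + dc, path + [(r + dr, c + dc)])
            if found:
                return found
        return None                              # error: backtrack

    return search(*start, [start])

print(solve(MAZE))   # the path from S to G, as a list of (row, col) cells
```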

SLIDE 60

Well-defined problems & problem spaces

Problems are well-defined when there is a simple test to conclude whether a solution is a solution.

J. McCarthy (1956). The inversion of functions defined by Turing machines. Automata Studies, Annals of Mathematical Studies, 34:177–181.

People solve problems by searching through a problem space, consisting of the initial state, the goal state, and all possible states in between.

Newell, A., & Simon, H. A. (1972). Human problem solving.

SLIDE 61

Problem and solution spaces

[Diagram] problem space (P) and solution space (S)

SLIDE 62

Problem and solution spaces

[Diagram] problem space (P) and solution space (S); a problem type maps to an abstract solution, via abstraction and refinement steps

SLIDE 63

Problem and solution spaces

[Diagram] problem space (P) and solution space (S); a problem type maps to an abstract solution and then to solutions, via abstraction and refinement steps

SLIDE 64

Defining the problem...

An old lady wants to visit her friend in a neighbouring village. She takes her car, but halfway the engine stops after some hesitations. On the side of the road she tries to restart the engine, but to no avail.

Which is the problem here?

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 65

from ill-defined to well-defined problems...

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 66

Suite of problem types

modelling

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 67

Suite of problem types

modelling, design

structural view: system

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 68

Suite of problem types

modelling, planning, design

structural view: system; behavioural view: system + environment

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 69

Suite of problem types

modelling, planning, design, assignment (scheduling, configuration)

structural view: system; behavioural view: system + environment

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 70

Suite of problem types

modelling, planning, design, assignment, assessment

structural view: system; behavioural view: system + environment

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 71

Suite of problem types

modelling, planning, design, assignment, assessment, monitoring

structural view: system; behavioural view: system + environment

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 72

Suite of problem types

modelling, planning, design, assignment, assessment, monitoring, diagnosis

structural view: system; behavioural view: system + environment

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 73

Suite of problem types

modelling, planning, design, assignment, assessment, monitoring, diagnosis

structural view: system; behavioural view: system + environment

  • AI researchers studied problem-solving methods and associated knowledge structures for each problem type.

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.

SLIDE 74

What is Knowledge in AI?

Knowledge is what we ascribe to an agent to predict its behaviour following principles of rationality.

Note: this knowledge representation is not intended to be an accurate, physical model.

Newell, A. (1982). The Knowledge Level. Artificial Intelligence, 18(1), 87–127.

SLIDE 75

Data, Information, Knowledge

  • Data: uninterpreted signals or symbols

SLIDE 76

Data, Information, Knowledge

  • Data: uninterpreted signals or symbols
  • Information: data with added meaning

SLIDE 77

Data, Information, Knowledge

  • Data: uninterpreted signals or symbols
  • Information: data with added meaning
  • Knowledge: all data and information that people use to act, accomplish tasks and create new information (e.g. know how, why, who, where and when)

SLIDE 78

Expert system (rule base)

if flower and seed then phanerogam
if phanerogam and bare-seed then fir
if phanerogam and 1-cotyledon then monocotyledonous
if phanerogam and 2-cotyledon then dicotyledonous
if monocotyledon and rhizome then thrush
if dicotyledon then anemone
if monocotyledon and ¬rhizome then lilac
if leaf and flower then cryptogamous
if cryptogamous and ¬root then foam
if cryptogamous and root then fern
if ¬leaf and plant then thallophyte
if thallophyte and chlorophyll then algae
if thallophyte and ¬chlorophyll then fungus
if ¬leaf and ¬flower and ¬plant then colibacille

rhizome + flower + seed + 1-cotyledon ?
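To show how such a rule base can be run, here is a minimal forward-chaining sketch in Python over a few of the slide's rules (the encoding is illustrative, not the original system; "monocotyledonous" is normalized to "monocotyledon", which the slide's own rules use interchangeably).

```python
# Rules as (conditions, conclusion) pairs; a subset of the slide's rule base.
RULES = [
    ({"flower", "seed"}, "phanerogam"),
    ({"phanerogam", "bare-seed"}, "fir"),
    ({"phanerogam", "1-cotyledon"}, "monocotyledon"),
    ({"phanerogam", "2-cotyledon"}, "dicotyledon"),
    ({"monocotyledon", "rhizome"}, "thrush"),
]

def forward_chain(facts):
    """Fire every applicable rule until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The slide's query: rhizome + flower + seed + 1-cotyledon ?
print(forward_chain({"rhizome", "flower", "seed", "1-cotyledon"}))
# adds 'phanerogam', then 'monocotyledon', then 'thrush'
```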

SLIDE 79

Frames

Frames are "stereotyped" knowledge units representing situations, objects or events, or (classes) sets of such entities.

(basis for the Object-Oriented Programming paradigm)
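A minimal sketch of the frame idea in modern terms: a Python class whose default slot values encode the stereotype, with instances filling or overriding slots (the RoomFrame example is hypothetical; historical frame systems also supported procedural attachments).

```python
from dataclasses import dataclass, field

@dataclass
class RoomFrame:            # stereotyped situation: "a room"
    walls: int = 4          # default slot values encode the stereotype
    has_door: bool = True
    contents: list = field(default_factory=list)

office = RoomFrame(contents=["desk", "chair"])  # instance fills/overrides slots
print(office.walls, office.has_door, office.contents)
```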

SLIDE 80

Semantic Networks

(used in contemporary Semantic Web technologies)
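A toy sketch of a semantic network as subject-relation-object triples, with property inheritance along is-a links (illustrative strings only; Semantic Web stacks such as RDF use URIs and richer semantics).

```python
# Subject-relation-object triples (illustrative toy network).
TRIPLES = [
    ("canary", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("bird", "can", "fly"),
]

def holds(entity, relation, value):
    """A property holds if stated directly or inherited via is-a links."""
    if (entity, relation, value) in TRIPLES:
        return True
    return any(holds(parent, relation, value)
               for s, r, parent in TRIPLES if s == entity and r == "is-a")

print(holds("canary", "can", "fly"))   # True, inherited from bird
```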

SLIDE 81

In sum

  • Symbolic AI presents transparent techniques to effectively model

and solve problems that can be described in symbolic terms (where expertise can be verbalized).

  • All IT systems of organizations today rely on some of the technologies that were introduced or emerged during the first AI wave.

  • But these results fell far short of what was promised (even more so in the 70s).

SLIDE 82

"A physical symbol system has the necessary and sufficient means for general intelligent action."

Allen Newell and Herbert A. Simon, Computer Science as Empirical Inquiry: Symbols and Search (1976)

SLIDE 83

Acknowledged limitations

  • knowledge acquisition bottleneck
  • scaling or modularity
  • tractability (e.g. ramification problem)
  • symbol grounding
SLIDE 84

Acknowledged limitations

  • knowledge acquisition bottleneck
  • scaling or modularity
  • tractability (e.g. ramification problem)
  • symbol grounding
  • natural language
  • sensory-motor tasks

– computer vision, – speech recognition, – actuator control

SLIDE 85

Acknowledged limitations

  • knowledge acquisition bottleneck
  • scaling or modularity
  • tractability (e.g. ramification problem)
  • symbol grounding
  • natural language
  • sensory-motor tasks

– computer vision, – speech recognition, – actuator control

  • Scruffies never believed the mind was a monolithic system, so they tinkered with heuristics, ad-hoc methods, and opportunistically with logic ("neat shells for scruffy approaches").

Hacking solutions

SLIDE 86

ELIZA (the first chatbot)

Weizenbaum ~1965

Still running e.g. on: https://www.masswerk.at/elizabot/eliza.html
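ELIZA's trick was pattern matching on the user's sentence plus canned response templates; a toy sketch with two invented rules (Weizenbaum's DOCTOR script had many more, plus pronoun reflection):

```python
import re

# A toy ELIZA-style responder (rules invented for illustration).
RULES = [
    (r"i am (.*)", "Why do you say you are {}?"),
    (r"i feel (.*)", "How long have you felt {}?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(match.group(1))
    return "Please, tell me more."    # default when no pattern matches

print(respond("I feel anxious about AI"))
# -> How long have you felt anxious about ai?
```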

SLIDE 87

SHRDLU

Winograd ~1969

Deeper linguistic understanding, but limited to simple blocks worlds

SLIDE 88

Acknowledged limitations

  • knowledge acquisition bottleneck
  • scaling or modularity
  • tractability (e.g. ramification problem)
  • symbol grounding
  • natural language
  • sensory-motor tasks

– computer vision, – speech recognition, – actuator control

Scruffies never believed the mind was a monolithic system, so they tinkered with heuristics, ad-hoc methods, and opportunistically with logic ("neat shells for scruffy approaches")

– but these successes were impossible to generalize.

Hacking solutions

SLIDE 89

AI Winter (early 70s/80s)

  • After a series of critical reports, funding for AI projects was massively reduced. Researchers started to seek other names for their own research fields.
SLIDE 90
  • Facing overwhelming difficulties in going beyond toy problems, radically different paradigms started to be (re)considered, renouncing symbolic representations.

  • As Rodney Brooks famously put it:

“Elephants don't play chess”

SLIDE 91

The revenge of machine learning

SLIDE 92

Machine learning

Machine learning is a process that enables artificial systems to improve from experience, according to well-defined criteria.

SLIDE 93

Machine learning

Machine learning is a process that enables artificial systems to improve from experience.

  • Rather than writing a program, here the developer has to collect adequate training data and decide on a ML method.

[Diagram] program vs ML black box: ML method + learning data → parameters adaptation; INPUT → OUTPUT

SLIDE 94

Machine learning

Machine learning is a process that enables artificial systems to improve from experience.

  • Rather than writing a program, here the developer has to collect adequate training data and decide on a ML method.

[Diagram] program vs ML black box: ML method + learning data → parameters adaptation; INPUT → OUTPUT

  • Unfortunately, an adequate parameter adaptation can be highly data-demanding, especially for rich inputs.
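A minimal sketch of what "parameters adaptation" means in practice: a perceptron adjusting its weights from (input, output) examples until it reproduces the training data (the OR task and learning rate are illustrative).

```python
# Training data: (input, output) examples for logical OR (illustrative task).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1          # the parameters to adapt

for _ in range(20):                      # a few passes over the learning data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # the mismatch drives the adaptation
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print(w, b)   # adapted parameters now reproduce OR on all four inputs
```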

SLIDE 95

Machine learning & co.

Machine learning is a process that enables artificial systems to improve from experience.

  • Many learning methods are available, but studied and used by different communities!

  • Neural networks are only one among many. (In some situations, evolutionary algorithms can also be of use for this task.)

Nice video applying evolutionary algorithms: https://www.youtube.com/watch?v=pgaEE27nsQw

SLIDE 96

Neural Networks timeline

SLIDE 97

Neural Networks timeline

One year after Dartmouth!

SLIDE 98

Neural Networks timeline

Minsky and Papert mathematically proved that the perceptron could not model "exclusive or"

SLIDE 99

Neural Networks timeline

Backpropagation and the addition of layers solved the problem
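The point can be seen with a hand-wired two-layer network: XOR is not linearly separable, so no single perceptron computes it, but one hidden layer suffices (the weights below are chosen by hand for illustration; backpropagation would learn equivalent ones from data).

```python
def step(x):
    return 1 if x > 0 else 0

def xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # hidden unit, behaves like OR
    h2 = step(x1 + x2 - 1.5)        # hidden unit, behaves like AND
    return step(h1 - h2 - 0.5)      # output: OR and not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor(a, b))    # 0, 1, 1, 0
```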

SLIDE 100

Neural Networks timeline

reuse of previous training possible → fine-tuning

SLIDE 101

Neural Networks timeline

pervasive introduction of internet, smart devices, global IT corporations → big data era

SLIDE 102

Neural Networks timeline

All ingredients to start another AI wave are there!!! pervasive introduction of internet, smart devices, global IT corporations → big data era

SLIDE 103

Biological neurons vs ANN nodes

SLIDE 104

weighted accumulation

Biological neurons vs ANN nodes

SLIDE 105

weighted accumulation non-linearization

Biological neurons vs ANN nodes

SLIDE 106

weighted accumulation non-linearization a sort of informational filter

Biological neurons vs ANN nodes

SLIDE 107

weighted accumulation non-linearization a sort of informational filter

  • A multi-layered artificial neural network

is similar to a cascade of filters, that can be used to extract what is relevant and transform it adequately.

Biological neurons vs ANN nodes
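A single ANN node as just described, in a minimal sketch: weighted accumulation of the inputs followed by a non-linearization (a sigmoid here; the input values and weights are illustrative).

```python
import math

def node(inputs, weights, bias):
    # weighted accumulation
    accumulation = sum(i * w for i, w in zip(inputs, weights)) + bias
    # non-linearization: squash to (0, 1), acting as an informational filter
    return 1 / (1 + math.exp(-accumulation))

print(node([0.5, 0.8], [1.2, -0.7], 0.1))     # illustrative values
```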

SLIDE 108
  • To reduce data requirements,

in classic ML features deemed to be relevant are manually selected by the developer from the available input.

Goodfellow, Bengio and Courville, "Deep Learning" (2016)

SLIDE 109
  • When this is not possible,

features have to be extracted as well, through some representation learning.

Goodfellow, Bengio and Courville, "Deep Learning" (2016)

SLIDE 110
  • Deep learning relies on a

hierarchy of representation learning, producing different level of abstractions

Goodfellow, Bengio and Courville, "Deep Learning" (2016)

SLIDE 111
  • Deep learning relies on a

hierarchy of representation learning, producing different level of abstractions

Goodfellow, Bengio and Courville, "Deep Learning" (2016)

SLIDE 112
  • Problem: the developer does not have direct control over which features are considered relevant to the task.

SLIDE 113

even qualitative introspection is subjective and arguable

  • Problem: the developer does not have direct control over which features are considered relevant to the task.

SLIDE 114

Adversarial attacks

https://blog.openai.com/adversarial-example-research/

  • Knowing what the machine deems worthy of attention can be exploited: an attacker can produce targeted "optical illusions" for the machine, but not for us.

SLIDE 115

Using encoding/decoding abilities of deep learning

  • On the other hand, knowing what is relevant to our vision,

someone can play dirty tricks.

Face to face: https://www.youtube.com/watch?v=ohmajJTcpNk Voice to lips: https://www.youtube.com/watch?v=9Yq67CjDqvw

SLIDE 116

From software/knowledge engineering to data engineering

[Diagram] program vs ML black box: ML method + learning data → parameters adaptation; INPUT → OUTPUT

  • Clearly, the outcome of applying a ML method critically

depends on the training data.

SLIDE 117

From software/knowledge engineering to data engineering

  • Country A’s army demands a classifier to recognize whether a tank is from country A or country B. It provides the developers with a series of photos of tanks from both countries.

SLIDE 118

From software/knowledge engineering to data engineering

  • Country A’s army demands a classifier to recognize whether a tank is from country A or country B. It provides the developers with a series of photos of tanks from both countries.

  • After the training, the developers investigate the activation patterns by introspection. They discover that “daylight” is a major factor supporting a B-tank classification. Returning to the source data, the developers discover that there was no photo of B-tanks at night.

SLIDE 119

From software/knowledge engineering to data engineering

  • Country A’s army demands a classifier to recognize whether a tank is from country A or country B. It provides the developers with a series of photos of tanks from both countries.

  • After the training, the developers investigate the activation patterns by introspection. They discover that “daylight” is a major factor supporting a B-tank classification. Returning to the source data, the developers discover that there was no photo of B-tanks at night.

statistical biases endanger ML predictive abilities

SLIDE 120

On the “artificial prejudice”

SLIDE 121
  • Software used across the US to predict future crimes and criminals, argued to be biased against African Americans (2016)

Angwin J. et al. ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing

On the “artificial prejudice”

SLIDE 122
  • Software used across the US to predict future crimes and criminals, argued to be biased against African Americans (2016)

  • Core problem: role of circumstantial evidence; how to integrate statistical inference in judgment?

    DNA, footwear, origin, gender, ethnicity, wealth, ...

On the “artificial prejudice”

SLIDE 123
  • Software used across the US to predict future crimes and criminals, argued to be biased against African Americans (2016)

  • Core problem: role of circumstantial evidence; how to integrate statistical inference in judgment?

    DNA, footwear, origin, gender, ethnicity, wealth, ...

improper profiling?

On the “artificial prejudice”

SLIDE 124
  • Software used across the US to predict future crimes and criminals, argued to be biased against African Americans (2016)

  • Core problem: role of circumstantial evidence; how to integrate statistical inference in judgment?

    DNA, footwear, origin, gender, ethnicity, wealth, ...

improper profiling?

On the “artificial prejudice”

improper because it causes unfair judgment

SLIDE 125
  • Software used across the US to predict future crimes and criminals, argued to be biased against African Americans (2016)

  • Core problem: role of circumstantial evidence; how to integrate statistical inference in judgment?

Angwin J. et al. ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing

    DNA, footwear, origin, gender, ethnicity, wealth, ...

improper profiling?

On the “artificial prejudice”

improper because it causes unfair judgment. Norms determine which factors are acceptable or not.

SLIDE 126

The present

  • By using a mixture of ML techniques, several human or super-human performances are achieved every year in specific tasks (mostly by corporation-driven research).

The AI index publishes reports on these records: https://aiindex.org

SLIDE 127

The present

  • By using a mixture of ML techniques, several human or super-human performances are achieved every year in specific tasks (mostly by corporation-driven research).

The AI Index publishes reports on these records: https://aiindex.org

Google DeepMind (2016), Microsoft (2018), Uber (Feb 2019)

SLIDE 128

The present

  • By using a mixture of ML techniques, several human or super-human performances are achieved every year in specific tasks (mostly by corporation-driven research).

  • But the problems of generalization, explainability,

transparency, responsibility, fairness, etc. are still there.

The AI Index publishes reports on these records: https://aiindex.org

Google DeepMind (2016), Microsoft (2018), Uber (Feb 2019)

SLIDE 129

The present

  • New research trends are emerging to face these issues, trying a

variety of different approaches.

SLIDE 130

The present

  • New research trends are emerging to face these issues, trying a

variety of different approaches.

  • It is still unclear which one will achieve the intent.
SLIDE 131

Prospective trajectories

SLIDE 132

Refocus on interaction

  • Intelligence can be rephrased in terms of adequate

performance within a certain interactional niche:

SLIDE 133

Refocus on interaction

  • Intelligence can be rephrased in terms of adequate

performance within a certain interactional niche: i.e. the ability of one agent:

  • to select or create a script that can be ascribed to the other agent (including the environment)

contextualization

SLIDE 134

Refocus on interaction

  • Intelligence can be rephrased in terms of adequate

performance within a certain interactional niche: i.e. the ability of one agent:

  • to select or create a script that can be ascribed to the other agent (including the environment)

  • to select or create a script that drives rewarding

interactions with the other agent

contextualization; fitting to given context

SLIDE 135

Challenges

  • Today, AI and decision-making capitalize too much on optimization, working on the “fitting to given context” phase.
SLIDE 136

Challenges

  • Today, AI and decision-making capitalize too much on optimization, working on the “fitting to given context” phase.
  • But the “contextualization” phase is particularly problematic

w.r.t. the social environment, for its high variability.

SLIDE 137

Challenges

  • Today, AI and decision-making capitalize too much on optimization, working on the “fitting to given context” phase.
  • But the “contextualization” phase is particularly problematic

w.r.t. the social environment, for its high variability.

  • e.g. the evolution of dress

codes along history

SLIDE 138

Challenges

  • Today, AI and decision-making capitalize too much on optimization, working on the “fitting to given context” phase.
  • But the “contextualization” phase is particularly problematic

w.r.t. the social environment, for its high variability.

  • The social structure adds upon the physical structure in

indicating and then establishing rewards to agents, via

  • explicit norms
  • informal and tacit norms: social practices
SLIDE 139

Challenges

  • Today, AI and decision-making capitalize too much on optimization, working on the “fitting to given context” phase.
  • But the “contextualization” phase is particularly problematic

w.r.t. the social environment, for its high variability.

  • The social structure adds upon the physical structure in

indicating and then establishing rewards to agents, via

  • explicit norms
  • informal and tacit norms: social practices

Norms are crucial for intelligent (social) behaviour

SLIDE 140

Challenges

  • Today, AI and decision-making capitalize too much on optimization, working on the “fitting to given context” phase.
  • But the “contextualization” phase is particularly problematic

w.r.t. the social environment, for its high variability.

  • The social structure adds upon the physical structure in

indicating and then establishing rewards to agents, via

  • explicit norms
  • informal and tacit norms: social practices

Norms are crucial for intelligent (social) behaviour

  • G. Sileno, A. Boer and T. van Engers, The role of normware in trustworthy and explainable AI (2018)
SLIDE 141

The call for Explainable AI (XAI)

Source: DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence

SLIDE 142

Source: DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence

The call for Explainable AI (XAI)

statistical alignment

SLIDE 143

Source: DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence

The call for Explainable AI (XAI)

statistical alignment

SLIDE 144

Source: DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence

The call for Explainable AI (XAI)

statistical alignment ? ? ?

SLIDE 145

The call for Explainable AI (XAI)

[Diagram] statistical alignment (~ dog conditioning: adapted to rewards) vs ? ? ? (~ child development: conscious of rewards)

SLIDE 146

The call for Explainable AI (XAI)

[Diagram] statistical alignment (experiential, indirect) vs grounding, communicating, conceptualizing (experiential/normative, direct); ~ dog conditioning: adapted to rewards; ~ child development: conscious of rewards

SLIDE 147

The call for Explainable AI (XAI)

the INTERFACE problem: computation ↔ human cognition

[Diagram] statistical alignment (experiential, indirect) vs grounding, communicating, conceptualizing (experiential/normative, direct)

SLIDE 148

the INTERFACE problem: computation ↔ human cognition

  • bottom-up: use statistical ML to recreate functions mimicking

to some extent human cognition

  • top-down: conceive algorithms reproducing by design

functions observable in human cognition

Possible approaches

SLIDE 149

the INTERFACE problem: computation ↔ human cognition

Possible approaches

  • bottom-up: use statistical ML to recreate functions mimicking

to some extent human cognition

  • top-down: conceive algorithms reproducing by design

functions observable in human cognition

  Only here do we have control over what we want to reproduce.
SLIDE 150

Will cognitive architectures be the third AI wave?

SLIDE 151

Conclusions

SLIDE 152

No AGI in view

  • I believe (with many others) that crucial pieces are still missing

to embed general intelligence into a single artificial device.

  • These pieces might be simple or not; it is the ML method that is not satisfactory for designing them.

SLIDE 153

Rise of artificially dumber systems

  • However, already today, the introduction of ubiquitous cyber-physical connections in all human activities raises serious concerns at the societal and cognitive levels.

– high risk of becoming entangled in artificially dumber systems.

SLIDE 154

Rise of artificially dumber systems

  • However, already today, the introduction of ubiquitous cyber-physical connections in all human activities raises serious concerns at the societal and cognitive levels.

– high risk of becoming entangled in artificially dumber systems.

  • The potential impact is too critical to be belittled out of belief in a technologically-driven 'magnificent and progressive fate'.

SLIDE 155

AI as an extension to humans

  • Humans, as a species, evolved shaped by their tools.
  • We should look at our tools not as means, but as forces that

determine not only our societies, but also our very existence.

SLIDE 156

AI as an extension to humans

  • Humans, as a species, evolved shaped by their tools.
  • We should look at our tools not as means, but as forces that determine not only our societies, but also our very existence. If we want to decide upon our existence, then we also have to decide upon our tools.