History of AI, Current Trends, Prospective Trajectories
Giovanni Sileno g.sileno@uva.nl
Winter Academy on Artificial Intelligence and International Law
Asser Institute – 11 February 2019
Disciplines are typically defined by a domain, settling upon methods deemed adequate to that domain:
– Biology: life and living
– Physics: laws of the universe
– Law: legal systems and justice
– Computer science: computational systems
But Artificial Intelligence? AI is not bound to a domain, but to a purpose: conceiving artificial systems that are intelligent. Domains (and their methods) become for AI instrumental to that purpose (or to sub-goals derived from that purpose).
Systems that:
– think like humans (MENTAL, DESCRIPTIVE)
– think rationally (MENTAL, PRESCRIPTIVE)
– act like humans (BEHAVIOURAL, DESCRIPTIVE)
– act rationally (BEHAVIOURAL, PRESCRIPTIVE)

Descriptive standards are set by actual human behaviour; prescriptive standards by ideal (human) behaviour.

Russell and Norvig, "Artificial Intelligence: A Modern Approach", chapter 1, available at https://people.eecs.berkeley.edu/~russell/aima1e/chapter01.pdf

Acting like humans: artificial and natural agents are not distinguishable behind a neutral interface.
Thinking like humans: AI reproducing cognitive functions observed in humans.
– NATURA ARTIS MAGISTRA argument: if these cognitive functions are required for our intelligence, they might be required to achieve artificial intelligence.
– EXPLAINABILITY argument: if they explain our internal workings, they can help to interpret AI functioning.
Thinking rationally: AI producing logically valid inferences.
Acting rationally: AI decision-making following standards of rationality:
– the autonomous agent selects the best choice
– to achieve its goals
– given its beliefs
These standards can be met in narrow (specific) or in general contexts → in narrow contexts, systems can adapt to perform better than humans.
AI history counts several springs (and winters), centered around different topics:
– ad-hoc systems with handcrafted knowledge (60s/70s)
– expert systems / problem-solving methods (80s)
– robotics, computer vision, speech recognition (80s)
– evolutionary computing (90s)
– agent-based modeling and multi-agent systems (90s/00s)
– semantic web (00s)
– deep learning (10s)
Each wave's advances were enabled only because fundamental research in the other topics somehow continued.
To present this phenomenon in more detail, we will now look into the start of AI: its first wave.
The name "Artificial Intelligence" was decided at a workshop at Dartmouth College in 1956: a 6–8 week brainstorming on the conception of "Machines that Think", which settled the foundations of at least three decades of research.
Operations research, a sub-field of applied mathematics, emerged in the years prior to World War II, when the UK prepared to anticipate war. It focuses on decision-making in operational settings:
– manufacturing
– transportation
– supply chain
– routing
– scheduling
– ...
Cybernetics emerged as a transdisciplinary approach to investigate systems, drawing among others on biology and the neurosciences. It considers systems holistically and studies their internal control structures, constraints and possibilities; its central concept is negative feedback, as a structure and as a process.
Invention of the bipolar transistor (1947). First-generation computers were vacuum-tube based; second-generation computers, transistor-based. ENIAC: 30 tons, an area of about 1,800 square feet.
Claude E. Shannon: information theory (1948), quantifying information (for communication purposes), and so enabling data compression and identifying the limits of signal processing.

Alan Turing:
– a formal model of computation (1937), capturing computational processes (Universal Turing Machine)
– the "Imitation Game" (1950), an operational test for intelligence (Turing Test)
From behaviorism ("…Analysis", 1938) to cognitive psychology ("…Explanation", 1943): folk-psychology, compatible with an information-processing view of cognition.
Thoughts and body activity result from interactions among neurons within the brain: Alexander Bain (1873), William James (1890).
Simultaneous activation of neurons leads to increases in synaptic strength between them: Donald Hebb (1949).
First computational machines simulating neural networks: Farley and Clark (1954); Rochester, Holland, Habit, and Duda (1956).
– John McCarthy (LISP language, situation calculus, non-monotonic logics)
– Marvin Minsky (frames, perceptron, society of mind)
– Herbert Simon (logic theorist, general problem solver, bounded rationality)
– Allen Newell (logic theorist, general problem solver, the knowledge level)
– Ray Solomonoff (father of algorithmic probability, algorithmic information theory)
– Arthur Lee Samuel (first machine learning algorithm, for checkers)
– W. Ross Ashby (pioneer in cybernetics, law of requisite variety)
– Claude Shannon (father of information theory)
– John Nash (father of game theory)
(including future Nobel prize winners)
The workshop brought no tangible result, but produced a shift from semantic approaches to symbolic processing, and a strong agenda.
AI AS ENGINEERING OF THE "MIND"
– empiricist: induction of functions from data (probability, artificial neural networks)
– logicist: reasoning and decision-making (logic)
– monolithic vs heterogeneous vs homogeneous systems
– "Neats": elegant solutions, provably correct
– "Scruffies": ad-hoc solutions, empirical evaluation
The logicist, monolithic stance characterized most people at the Dartmouth workshop.
Few researchers worked on neural networks and, more generally, learning was not brought to the foreground.
Among those invited who could not go:
– John Holland (neural networks, pioneer of complex adaptive systems and genetic algorithms)
[It resulted that] “there was very little interest in learning. In my honest opinion, this held up AI in quite a few ways. It would have been much better if Rosenblatt’s Perceptron work, or in particular Samuels’ checkers playing system, or some of the other early machine learning work, had had more of an impact. In particular, I think there would have been less of this notion that you can just put it all in as expertise” [..] “it’s still not absolutely clear to me why the other approaches fell away. Perhaps there was no forceful advocate.”
Wheeler (Eds.), The Mechanical Mind in History (pp. 383–396).
Raising expectations, illusions and then disillusions: the hype cycle. Technologies that survive it just become infrastructure: invisible, but necessary.
“An algorithm can be regarded as consisting of
– a logic component, which specifies the knowledge to be used in solving problems, and
– a control component, which determines the problem-solving strategies by means of which that knowledge is used.
The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency.”

Kowalski, R. (1979). Algorithm = Logic + Control. Communications of the ACM, 22(7), 424–436.
Imperative style of programming: you command the directions. But what if the labyrinth changes?

Declarative style of programming: you give just the labyrinth; the computer finds the way, by trial, error and backtracking.

A WELL-DEFINED PROBLEM: an initial state, a goal state, knowledge, and a problem-solving method.
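A minimal Python sketch of this declarative idea (the maze layout, start, and goal below are made-up): the program states only what counts as a legal step; depth-first search with backtracking finds the way.

```python
def solve(maze, pos, goal, path=None):
    """Depth-first search with backtracking: try a step, undo it on dead ends."""
    if path is None:
        path = [pos]
    if pos == goal:
        return path
    r, c = pos
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                and maze[nr][nc] == '.' and (nr, nc) not in path):
            result = solve(maze, (nr, nc), goal, path + [(nr, nc)])
            if result:          # a step that worked: keep it
                return result   # otherwise: backtrack and try the next direction
    return None

maze = ["..#",      # '.' = free cell, '#' = wall
        ".##",
        "..."]
path = solve(maze, (0, 0), (2, 2))
```

Changing the labyrinth means changing only the data, not the program: the search strategy (the "control") stays the same.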
“Problems are well defined when there is a simple test to conclude whether a solution is a solution.”

J. McCarthy (1956). The inversion of functions defined by Turing machines. Automata Studies, Annals of Mathematical Studies, 34:177–181.
“People solve problems by searching through a problem space, consisting of the initial state, the goal state, and all possible states in between.”

Newell, A., & Simon, H. A. (1972). Human problem solving.
Problem spaces and solution spaces: a problem type connects an abstract problem to an abstract solution, via abstraction and refinement.
An old lady wants to visit her friend in a neighbouring village. She takes her car, but halfway the engine stops. At the side of the road she tries to restart the engine, but to no avail.
Which is the problem here?

Breuker's suite of problem types: modelling, design, planning, scheduling, configuration, assignment, assessment, monitoring, diagnosis. They can be read under a structural view (the system) or a behavioural view (the system plus its environment), with associated knowledge structures for each problem type.

Breuker, J. (1994). Components of problem solving and types of problems. A Future for Knowledge Acquisition, 867, 118–136.
“Knowledge is what we ascribe to an agent to predict its behaviour following principles of rationality. Note: this knowledge representation is not intended to be an accurate, physical model.”

Newell, A. (1982). The Knowledge Level. Artificial Intelligence, 18(1), 87–127.
– Data: uninterpreted signals or symbols.
– Information: data with added meaning.
– Knowledge: all data and information that people use to act, to accomplish tasks and to create new information (e.g. know, why, how, where and when).
if flower and seed then phanerogam
if phanerogam and bare-seed then fir
if phanerogam and 1-cotyledon then monocotyledonous
if phanerogam and 2-cotyledon then dicotyledonous
if monocotyledon and rhizome then thrush
if dicotyledon then anemone
if monocotyledon and ¬rhizome then lilac
if leaf and flower then cryptogamous
if cryptogamous and ¬root then foam
if cryptogamous and root then fern
if ¬leaf and plant then thallophyte
if thallophyte and chlorophyll then algae
if thallophyte and ¬chlorophyll then fungus
if ¬leaf and ¬flower and ¬plant then colibacille
rhizome + flower + seed + 1-cotyledon ?
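A forward-chaining interpreter for such a rule base takes a few lines of Python (a sketch: only a fragment of the rules, names normalized so that "monocotyledonous" and "monocotyledon" coincide, and negation checked naively against the current fact set):

```python
# Rules as (antecedents, consequent); '~x' reads "x is not among the facts".
RULES = [
    ({"flower", "seed"}, "phanerogam"),
    ({"phanerogam", "bare-seed"}, "fir"),
    ({"phanerogam", "1-cotyledon"}, "monocotyledon"),
    ({"phanerogam", "2-cotyledon"}, "dicotyledon"),
    ({"monocotyledon", "rhizome"}, "thrush"),
    ({"monocotyledon", "~rhizome"}, "lilac"),
]

def forward_chain(facts, rules):
    """Fire rules until a fixed point: no rule adds a new fact anymore."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            satisfied = all(c[1:] not in facts if c.startswith("~")
                            else c in facts for c in body)
            if satisfied and head not in facts:
                facts.add(head)
                changed = True
    return facts

derived = forward_chain({"rhizome", "flower", "seed", "1-cotyledon"}, RULES)
# derived now also contains: phanerogam, monocotyledon, thrush
```

From rhizome + flower + seed + 1-cotyledon, the system derives phanerogam, then monocotyledon, then thrush.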
Frames are "stereotyped" knowledge units representing situations, objects or events, or (classes) sets of such entities.
(base for the Object-Oriented Programming paradigm)
(used in contemporary Semantic Web technologies)
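The correspondence with object orientation is direct: a frame becomes a class, slots become attributes with default fillers, and specialization becomes inheritance (the Bird/Penguin frames below are illustrative, not from the slides):

```python
class Bird:              # frame: stereotyped knowledge about birds
    legs = 2             # slot with a default filler
    can_fly = True       # default that more specific frames may override

class Penguin(Bird):     # a more specific frame, inheriting the stereotype
    can_fly = False      # exception: overrides the inherited default

tweety = Penguin()       # an instance filling the frame
# tweety.legs is 2 (inherited default); tweety.can_fly is False (override)
```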
Such systems represent and solve problems that can be described in symbolic terms (where expertise can be verbalized).
These technologies were introduced or emerged during the first AI wave (and even more in the 70s).
Allen Newell and Herbert A. Simon, "Computer Science as Empirical Inquiry: Symbols and Search" (1976)
– computer vision
– speech recognition
– actuator control

Scruffies never believed the mind was a monolithic system, so they tinkered with heuristics, ad-hoc methods, and:
Weizenbaum ~1965: ELIZA, the first chatbot. Still running e.g. on: https://www.masswerk.at/elizabot/eliza.html

Winograd ~1969: SHRDLU, deeper linguistic understanding, but limited to simple blocks worlds.
But these successes were impossible to generalize.
To move beyond toy problems, radically different paradigms started to be (re)considered, renouncing symbolic representations.
Machine learning is a process that enables artificial systems to improve from experience, according to well-defined criteria.
To apply it: collect adequate training data and decide on a ML method.

The ML black box: the ML method adapts parameters from the learning data; the result is a program mapping INPUT to OUTPUT.

It is highly data-demanding, especially for rich inputs.
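The "parameters adaptation" step can be sketched in a few lines: a one-parameter model whose parameter is adjusted from training data by gradient descent (the data, learning rate, and epoch count below are made-up assumptions):

```python
# Training data sampled from the hidden relation y = 2x: the "experience".
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                          # the adaptable parameter of the black box
for _ in range(200):             # learning = repeated parameters adaptation
    for x, y in data:
        error = w * x - y        # compare model output with training output
        w -= 0.05 * error * x    # nudge w to reduce the squared error

predict = lambda x: w * x        # the resulting "program": INPUT -> OUTPUT
```

After training, w has been adapted to roughly 2.0, and the learned program generalizes to inputs it never saw.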
Machine learning methods have been developed by many different communities!
(In some situations, evolutionary algorithms can also be of use for this task)
Nice video applying evolutionary algorithms: https://www.youtube.com/watch?v=pgaEE27nsQw
Dartmouth!
Minsky and Papert mathematically proved that the Perceptron could not model "exclusive or" (1969). Backpropagation and the addition of layers later solved the problem.
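The limitation is easy to reproduce: a single perceptron (a weighted accumulation plus a threshold, trained by Rosenblatt's error-correction rule) learns a linearly separable function like AND, but can never settle on XOR (a self-contained sketch):

```python
def train_perceptron(samples, epochs=25):
    """Rosenblatt's rule: nudge the weights whenever the prediction is wrong."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            w0 += (target - out) * x0      # error-correction updates
            w1 += (target - out) * x1
            b += (target - out)
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_fn = train_perceptron(AND)
xor_fn = train_perceptron(XOR)
and_correct = sum(and_fn(*x) == t for x, t in AND)   # 4: AND is learned
xor_correct = sum(xor_fn(*x) == t for x, t in XOR)   # < 4: XOR never is
```

No line in the plane separates XOR's positive from its negative cases, so no weight setting exists; a hidden layer trained by backpropagation removes the limitation.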
reuse of previous training possible → fine-tuning
Pervasive introduction of internet, smart devices, global IT corporations → the big data era. All the ingredients to start another AI wave are there!
Each artificial neuron performs a weighted accumulation followed by a non-linearization: a sort of informational filter. A neural network is then similar to a cascade of filters, which can be used to extract what is relevant and transform it adequately.
In classic ML, features deemed to be relevant are manually selected by the developer from the available input. In deep learning, features have to be extracted as well, through some representation learning; deep networks stack a hierarchy of representation learning, producing different levels of abstraction.

Goodfellow, Bengio and Courville, "Deep Learning" (2016)
Which features are considered to be relevant to the task? Even qualitative introspection is subjective and arguable.
https://blog.openai.com/adversarial-example-research/
This can be exploited: an attacker can produce targeted "optical illusions" for the machine, but not for us. Someone can play dirty tricks.
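On a linear model the trick is transparent: stepping each input coordinate slightly against the sign of its weight shifts the score by eps times the sum of the |w|, enough to cross the decision boundary while barely changing the input (toy weights and input, not from the cited article):

```python
# Toy linear classifier: positive score -> class A, negative score -> class B.
w = [0.5, -1.0, 0.25, 0.75]        # made-up trained weights
x = [0.2, 0.1, 0.4, 0.1]           # a legitimate input

score = sum(wi * xi for wi, xi in zip(w, x))          # 0.175 -> class A

eps = 0.1                          # small, "invisible" perturbation budget
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]
score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))  # 0.175 - 0.1*2.5 = -0.075
```

With thousands of coordinates (pixels), eps can be far smaller and still flip the score, which is why such perturbations stay invisible to us.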
Face to face: https://www.youtube.com/watch?v=ohmajJTcpNk Voice to lips: https://www.youtube.com/watch?v=9Yq67CjDqvw
Recall the ML black box: the program resulting from parameter adaptation depends on the training data.
A classifier should decide whether a tank is from country A or country B; the client provides the developers with a series of photos of tanks from both countries. Inspecting the activation patterns, the developers discover that "daylight" is a major factor supporting a B-tank classification. Returning to the source data, they find that there was no photo of B-tanks at night.

Statistical biases endanger ML predictive abilities.
A system predicting future crimes and criminals was argued to be biased against African Americans (2016).

Angwin J. et al., ProPublica, May 23 (2016). Machine Bias: risk assessments in criminal sentencing.

Used as evidence, how should statistical inference be integrated in judgment? Some factors (DNA, footwear) are accepted; others (ethnicity, wealth, ...) raise the question of improper profiling: improper because it causes unfair judgment. Norms determine which factors are acceptable or not.
Super-human performances are achieved every year on specific tasks, mostly by corporation-driven research: Google DeepMind (2016), Microsoft (2018), Uber (Feb 2019). The AI Index publishes reports on these records: https://aiindex.org

Yet issues of transparency, responsibility, fairness, etc. are still there.
There is a variety of different approaches. Intelligence can be seen as performance within a certain interactional niche, i.e. the ability of one agent:
– to fit the other agent (including the environment): contextualization
– through interactions with the other agent: fitting to the given context
Contextualization is especially difficult w.r.t. the social environment, for its high variability. Along history, norms have served as codes, indicating and then establishing rewards to agents. Norms are crucial for intelligent (social) behaviour.
Source: DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence

[Diagram: modes of alignment between machines and humans: statistical alignment, grounding, communicating, conceptualizing; experiential (indirect vs direct) vs normative; ~ dog conditioning vs ~ child development; adapted to rewards vs conscious of rewards.]
Computation reproduces, to some extent, functions observable in human cognition.
to embed general intelligence into a single artificial device.
not satisfactory to design them.
Embedding AI into physical connections in all human activities raises serious concerns at the societal and at the cognitive level:
– high risks of being entangled in artificially dumber systems.
We should not take for granted a technologically-driven 'magnificent and progressive fate'. Technologies determine not only our societies, but also our very existence. If we want to decide upon our existence, then we have also to decide upon our tools.