What AI is (A.Y. 2019/2020): A taste of AI



slide-1
SLIDE 1

What AI is

A.Y. 2019/2020

slide-2
SLIDE 2

A taste of AI

http://bit.ly/2RW7xlv

All the problems of AI present themselves in a few seconds: fears, NLU, speech, knowledge, problem solving, monetisation, over-fitting, …

slide-3
SLIDE 3

A taste of AI (contd.)

http://bit.ly/2Sl2XMT

slide-4
SLIDE 4

A taste of AI (contd.)

http://bit.ly/2GUDPaA

slide-5
SLIDE 5

When was AI born?

slide-6
SLIDE 6
slide-7
SLIDE 7

Strictly speaking, AI was born in the ’50s, but…

slide-8
SLIDE 8

The Lullian Circle - 1274

  • A paper machine operated by rotating concentrically arranged circles to combine a symbolic alphabet, which was repeated on each level. These combinations were said to show all possible truths about the subject of inquiry.

  • Llull based this notion on the idea that there were a limited number of basic, undeniable truths in all fields of knowledge, and that everything about these fields of knowledge could be understood by studying combinations of these elemental truths.

slide-9
SLIDE 9

Pascal's calculator - 1642

  • Pascal's calculator (aka the Pascaline) is a mechanical calculator invented by Blaise Pascal in the mid-17th century.

  • Pascal was led to develop a calculator by the laborious arithmetical calculations required by his father's work as the supervisor of taxes in Rouen.

  • He designed the machine to add and subtract two numbers directly and to perform multiplication and division through repeated addition or subtraction.

slide-10
SLIDE 10

Leibniz’ Characteristica Universalis and Diagrammatic Reasoning - 1666

  • The characteristica universalis is a formal and universal language able to express, through a series of symbols, mathematical, scientific and metaphysical concepts.

slide-11
SLIDE 11

The Mechanical Turk - 1770

  • The Mechanical Turk or Automaton Chess Player was a fake chess-playing machine constructed in the late 18th century.

  • The mechanism appeared to be able to play a strong game of chess against a human opponent, as well as perform the knight's tour, a puzzle that requires the player to move a knight to occupy every square of a chessboard exactly once.
slide-12
SLIDE 12

Babbage’s Analytical Engine - 1837

  • A mechanical general-purpose and Turing-complete computer designed by Charles Babbage in 1837.

slide-13
SLIDE 13

Boole’s Laws of Thought - 1854

slide-14
SLIDE 14

What is AI?

slide-15
SLIDE 15

What is artificial? intelligence? learning? rationality? knowledge? schema?

slide-16
SLIDE 16

A few definitions of AI

Thinking Humanly:
  • “The exciting new effort to make computers think . . . machines with minds, in the full and literal sense.” (Haugeland, 1985)
  • “[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978)

Thinking Rationally:
  • “The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
  • “The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)

Acting Humanly:
  • “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
  • “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)

Acting Rationally:
  • “Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
  • “AI . . . is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)

slide-17
SLIDE 17

A few definitions of AI


AI measured in terms of human performance

slide-18
SLIDE 18

A few definitions of AI


AI measured in terms of rationality

slide-19
SLIDE 19

A few definitions of AI


AI as thought processes and reasoning

slide-20
SLIDE 20

A few definitions of AI


AI addressing behaviour

slide-21
SLIDE 21

Acting humanly

slide-22
SLIDE 22

The Turing Test approach

  • The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.

  • A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.

slide-23
SLIDE 23

AI capabilities required by the Turing Test

  • Natural language processing: to enable it to communicate successfully in English;

  • Knowledge representation: to store what it knows or hears;

  • Automated reasoning: to use the stored information to answer questions and to draw new conclusions;

  • Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.

slide-24
SLIDE 24

Thinking humanly

slide-25
SLIDE 25

The cognitive modelling approach

  • Writing programs that think like humans requires that we understand how humans think.

  • Examples
  • The General Problem Solver (GPS) by Newell and Simon (1961)
  • Cognitive science
slide-26
SLIDE 26

General Problem Solver

  • The General Problem Solver (GPS) by Newell and Simon (1961) was designed in order to compare the trace of its reasoning steps to traces of human subjects solving the same problems.

  • Intuition: if the program’s input–output behaviour matches corresponding human behaviour, that is evidence that some of the program’s mechanisms could also be operating in humans.

slide-27
SLIDE 27

Cognitive science

  • It brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.

  • The interdisciplinary, scientific study of the mind and its processes.

  • It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behaviour, with a focus on how nervous systems represent, process, and transform information.

slide-28
SLIDE 28

Thinking rationally

slide-29
SLIDE 29

The “laws of thought” approach

  • The logicist tradition within artificial intelligence focuses on building programs upon knowledge modelled as logical axioms about all kinds of objects in the world (or a part of it) and the relations among them.

  • In the field of logic, the laws of thought are
    • fundamental axiomatic rules upon which rational discourse itself is often considered to be based;
    • supposed to govern the operation of the mind.

  • Aristotle was one of the first to attempt to codify “right thinking,” that is, irrefutable reasoning processes.

  • Syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises:

  • “Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
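
The syllogism above can be mechanised. Here is a minimal forward-chaining sketch in Python (the fact and rule encoding is my own illustration, not from the slides): facts are (predicate, subject) pairs, and a rule maps a premise predicate to a conclusion predicate, encoding "all men are mortal".

```python
# Forward chaining over (predicate, subject) facts.
facts = {("man", "Socrates")}          # "Socrates is a man"
rules = [("man", "mortal")]            # "all men are mortal": man(x) -> mortal(x)

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "Socrates") in forward_chain(facts, rules))  # True
```

Given correct premises, the derivation step always yields the correct conclusion, which is exactly the point of the syllogistic pattern.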
slide-30
SLIDE 30

Acting rationally

slide-31
SLIDE 31

The rational agent approach

  • An agent is just something that acts (agent comes from the Latin agere, to do).

  • Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.

  • A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.

slide-32
SLIDE 32

Towards a conceptual framework to understand AI

slide-33
SLIDE 33

What is an agent?

  • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

  • A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators.

  • A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.

  • A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
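
The sensor/actuator view can be written down as a minimal interface, sketched here in Python (the class and method names are my own, not from the slides):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Anything that perceives its environment through sensors
    and acts upon that environment through actuators."""

    @abstractmethod
    def perceive(self):
        """Read the sensors and return the current percept."""

    @abstractmethod
    def act(self, percept):
        """Choose and return an action for the given percept."""

class EchoAgent(Agent):
    """A trivial software agent: its 'sensor' is a stored input queue
    and its 'actuator' simply returns the chosen action as a string."""
    def __init__(self, inputs):
        self.inputs = list(inputs)
    def perceive(self):
        return self.inputs.pop(0) if self.inputs else None
    def act(self, percept):
        return f"handle({percept})"

agent = EchoAgent(["keystroke"])
print(agent.act(agent.perceive()))  # handle(keystroke)
```

Human, robotic, and software agents all fit this shape; only the concrete sensors and actuators behind `perceive` and `act` change.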

slide-34
SLIDE 34

How is an agent able to act?

slide-35
SLIDE 35

The main axes of the framework

  • Problem solving
  • Knowledge representation
slide-36
SLIDE 36

Problem solving

  • How to design an agent with “appropriate” behaviour based on given inputs? This is representable as a function from a perceptual history to an action [f: P* → A]

  • How to automatise “rational” decisions in a given situation?

  • Keywords: automatisation; control flow; relevance; appropriateness; rationality; pro-activity vs. re-activity; agents, robots, and services; (de-)centralisation

slide-37
SLIDE 37

What is rational at any given time depends on four things

  • The performance measure that defines the criterion of success.
  • The agent’s prior knowledge of the environment.
  • The actions that the agent can perform.
  • The agent’s percept sequence to date.
slide-38
SLIDE 38

Rational agent: Maximising the Expected Utility

  • For each possible percept sequence, a rational agent should select an action that is expected to maximise its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
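
Under uncertainty this amounts to an argmax over expected utilities. A minimal Python sketch (the actions and their outcome distributions are invented for illustration):

```python
# Each action maps to a list of (probability, utility) outcome pairs.
action_outcomes = {
    "go_left":  [(0.8, 10), (0.2, -5)],   # expected utility: 7
    "go_right": [(0.5, 20), (0.5, -20)],  # expected utility: 0
    "wait":     [(1.0, 1)],               # expected utility: 1
}

def expected_utility(outcomes):
    """Probability-weighted average of the outcome utilities."""
    return sum(p * u for p, u in outcomes)

def rational_choice(action_outcomes):
    """Select the action that maximises expected utility."""
    return max(action_outcomes,
               key=lambda a: expected_utility(action_outcomes[a]))

print(rational_choice(action_outcomes))  # go_left
```

Note that `go_right` has the single best possible outcome (20) yet is not the rational choice: rationality here is about the best *expected* outcome, not the best imaginable one.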

slide-39
SLIDE 39

Example: A vacuum-cleaner agent

Percept sequence → Action

  [A, Clean] → Right
  [A, Dirty] → Suck
  [B, Clean] → Left
  [B, Dirty] → Suck
  [A, Clean], [A, Clean] → Right
  [A, Clean], [A, Dirty] → Suck

Is this a rational agent?
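
The table above is literally the agent function f: P* → A for this world. A direct table-driven sketch in Python (the helper name is mine; the table entries follow the slide):

```python
# Table-driven vacuum agent: a literal lookup on the percept sequence so far.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def make_table_driven_agent(table):
    percepts = []                       # percept sequence observed so far
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))
    return agent

agent = make_table_driven_agent(table)
print(agent(("A", "Clean"))) # Right
print(agent(("A", "Dirty"))) # Suck
```

The sketch also shows why full lookup tables are impractical in general: the table must contain one entry per possible percept *sequence*, which grows exponentially with the agent's lifetime.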

slide-40
SLIDE 40

Example: A vacuum-cleaner agent (contd.)

  • First, we need to say what the performance measure is, what is known about the environment, and what sensors and actuators the agent has.

  • For example:

  • The performance measure awards one point for each clean square at each time step, over a “lifetime” of 1000 time steps.

  • The “geography” of the environment is known a priori but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Left and Right actions move the agent left and right except when this would take the agent outside the environment, in which case the agent remains where it is.

  • The only available actions are Left, Right, and Suck.

  • The agent correctly perceives its location and whether that location contains dirt.
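
The environment and performance measure above can be simulated directly. A minimal Python sketch (the simple reflex policy and the random initial state are my own choices, consistent with the slide's assumptions):

```python
import random

def simulate(steps=1000, seed=0):
    """Two-square vacuum world: one point per clean square per time step;
    Suck cleans the current square, Left/Right move (no-op at the walls).
    Dirt distribution and initial location are unknown, so randomised."""
    rng = random.Random(seed)
    status = {"A": rng.choice(["Clean", "Dirty"]),
              "B": rng.choice(["Clean", "Dirty"])}
    location = rng.choice(["A", "B"])
    score = 0
    for _ in range(steps):
        # Simple reflex agent: Suck if dirty, otherwise go to the other square.
        if status[location] == "Dirty":
            status[location] = "Clean"      # Suck
        elif location == "A":
            location = "B"                  # Right
        else:
            location = "A"                  # Left
        score += sum(1 for s in status.values() if s == "Clean")
    return score

print(simulate())
```

Whatever the initial dirt distribution, this agent scores close to the maximum of 2000 (at most a handful of points are lost while the initial dirt is being cleaned), which is the sense in which it is rational for this particular performance measure.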
slide-41
SLIDE 41

Knowledge representation #1

  • How to extract and represent what is known so that a machine can use it?

  • How to extract and represent all and only relevant knowledge?

  • Keywords: information; data; semantics; ontology; logic; inference; discovery; learning; interoperability; natural languages; epistemology

slide-42
SLIDE 42

Knowledge representation #2

  • Data vs. Knowledge
  • Data structures (e.g. arrays, lists, records, objects)
  • Levels: symbolic vs. interpreted
  • Ontology design
  • Reasoning
  • Deduction vs. Induction
  • Classical vs. Non-Classical
  • Modularisation and contextualisation
  • Automated (Machine) Learning and Data Mining (aka Knowledge Discovery)
  • The Web has changed everything
  • quantitatively (huge amount of knowledge)
  • qualitatively (openness, dereferencing, decentralisation)
slide-43
SLIDE 43

Paper we will discuss next time

Turing’s Computing Machinery and Intelligence, 1950 http://bit.ly/2Sh1lnk

slide-44
SLIDE 44

Questions