SLIDE 1

IT3706 Knowledge Representation

Rodney Brooks: Intelligence without Representation

Artificial Intelligence 47 (1991) 139–159 (received September 1987)

The Knowledge Representation Hypothesis (Brian Smith, 1982):

• An intelligent system’s competence is determined by its knowledge.
• The knowledge is contained within structural ingredients (the knowledge representation) that we as observers can identify.
• The system’s actions are taken after deliberation and reasoning over the system’s knowledge.

CORRECT?

Introduction

Observations:

• Human intelligence is too complex to decompose into “levels of abstraction” or to find “interfaces” between components.
• Researchers focus on sub-problems; no one implements complete intelligent systems.

Brooks’ approach:

• Incrementally build complete systems.
• Test each version extensively in the real world.

Working hypothesis

Hypothesis:

“Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems.”

Claim:

“When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.”

SLIDE 2

The evolution of intelligence

What can we learn from this?

• Sensory and motor skills in a dynamic world are the difficult parts.
• The rest is “pretty simple” (!)

Our evolution over time:

• Single-cell organisms: 3.5 billion years ago
• First fish and vertebrates: 550 million years ago
• Insects: 450 million years ago
• Reptiles: 370 million years ago
• Primates: 220 million years ago
• Man: 2.5 million years ago
• Agriculture: 19,000 years ago
• Writing: 5,000 years ago
• “Expert” knowledge: the last few hundred years

A Story about Artificial Flight

In Brooks’ parable, researchers in “artificial flight” are transported by time machine onto a modern airliner and copy what they can observe from a passenger seat:

• Looks fancy – must be important!
• No passenger understands it – waste of time!

Abstraction is a dangerous weapon

• We can’t use the simplified solution to solve the general problem.
• The process of abstraction is itself part of the intelligence.
• Block-worlds are dangerous, and at best useless!
• We don’t know how we represent things in our own minds.
• Even if we knew, it is not known that the same representation is right for AI.
• Brooks: don’t try to make the computer use the same abstractions as we (think we) do.

Intelligent Creatures

Brooks defines a “Creature”, and wants to make it intelligent. A Creature:

• Must cope with changes in its dynamic environment.
• Should be robust with respect to its environment.
• Maintains multiple goals, depending on the circumstances.
• Should have a purpose; do something in the world.

SLIDE 3

Decomposition of intelligence

Decomposition by activity:

• No distinction between peripheral and central systems.
• Divided into activity-producing subsystems (layers).
• Each subsystem connects sensing to action.

THE WAY TO PROCEED !!

Decomposition by function:

• A central system with perceptual modules as input and actions as output.
• The central system is decomposed into learning, reasoning, planning, etc.

DON’T DO IT !!
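To make the contrast concrete, here is a minimal sketch; all names and thresholds are hypothetical, not from the paper. The functional decomposition threads everything through one central chain, while the activity decomposition gives each layer a complete sensing-to-action loop.

# Decomposition by function: one central chain, module feeding module.
def functional_agent(distance: float) -> str:
    percept = {"obstacle": distance}                # perception module
    model = {"nearest": percept["obstacle"]}        # central world model
    plan = "turn" if model["nearest"] < 1.0 else "forward"  # planning module
    return plan                                     # execution module

# Decomposition by activity: each layer runs its own sensing-to-action loop.
def avoid(distance: float):
    return "turn" if distance < 1.0 else None       # acts straight from sensing

def wander(distance: float):
    return "forward"                                # its own complete loop

print(functional_agent(0.5))      # "turn", reached via the central model
print(avoid(0.5) or wander(0.5))  # "turn", with no model anywhere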

Who has the representation?

• No central representation – this helps the Creature respond quickly to dangers.
• No identifiable place where the output of the representation can be found – all layers contribute.
• Totally different processing: the layers handle their sensory data independently and in parallel.
• Competition between the layers replaces the need for centrally figuring out what to do.
• Intelligence emerges from the combination of the layers’ goals.

Hypothesis: “… much of human level activity is similarly a reflection of the world through very simple mechanisms without detailed representation.”

Creating Creatures

1. Expand the Creature with one layer.
2. Test and debug it fully in the real world.
3. Go to 1.

• Sources of error:
  • The current layer
  • The connection between the current and the former layers
  • Former layers (should not really happen ☺)
• Only one place to tweak when looking for errors: the current layer.

Creating Creatures (cont’d)

Each layer:

• Consists of multiple finite state machines.
• Communicates with the layers below by inhibiting and suppressing their signals.
• Has access to sensory input and can activate outputs.

Their simplest robot contains three layers (a sketch follows):

• Avoid: avoid both static and dynamic obstacles.
• Wander: walk in a given – random – direction.
• Explore: walk to interesting places the robot can see.
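The paper describes the robot only at this level; as a rough illustration, here is a minimal Python sketch of such a three-layer controller. All names (Layer, Avoid, sonar, the command strings) are hypothetical, and the fixed-priority loop is a deliberate simplification of Brooks’ actual inhibit/suppress wiring.

import random

class Layer:
    """One behavior layer: connects sensing directly to action."""
    def act(self, sonar):
        raise NotImplementedError

class Avoid(Layer):
    # Bottom layer: reflexively steers away from close obstacles.
    def act(self, sonar):
        return "turn-away" if min(sonar) < 0.5 else None

class Wander(Layer):
    # Middle layer: heads in a randomly chosen direction.
    def __init__(self):
        self.heading = random.uniform(0, 360)  # internal state, not a world model
    def act(self, sonar):
        return "head-%.0f" % self.heading

class Explore(Layer):
    # Top layer: heads for the most open direction the sonar reports.
    def act(self, sonar):
        return "goto-opening-%d" % sonar.index(max(sonar))

def step(layers, sonar):
    # Suppression, simplified to fixed priority: the first layer that
    # produces a command wins; no central arbiter, no shared representation.
    for layer in layers:
        command = layer.act(sonar)
        if command is not None:
            return command
    return "halt"

controller = [Avoid(), Explore(), Wander()]  # Avoid outranks Explore outranks Wander
print(step(controller, [0.3, 2.0, 4.5]))     # obstacle close -> "turn-away"
print(step(controller, [3.0, 2.0, 4.5]))     # clear -> Explore wins: "goto-opening-2"

Following the build–test loop on the previous slide, expanding this Creature means adding one more Layer subclass and re-testing; the existing layers stay untouched, which is why errors should localize to the current layer.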

SLIDE 4

How it works: Finite state machines…
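Brooks builds each layer from augmented finite state machines (AFSMs): ordinary finite state machines extended with timers and registers. A toy illustration, with hypothetical states and signal names:

# A toy "augmented" FSM: a plain state machine plus a timer register.
class BackUp:
    def __init__(self):
        self.state = "idle"
        self.timer = 0                                # the "augmentation"

    def step(self, bumped):
        if self.state == "idle" and bumped:
            self.state, self.timer = "reversing", 3   # load the timer
        elif self.state == "reversing":
            self.timer -= 1
            if self.timer == 0:
                self.state = "idle"                   # timer expiry -> transition
        return "reverse" if self.state == "reversing" else "no-op"

machine = BackUp()
print([machine.step(bumped=(t == 0)) for t in range(5)])
# ['reverse', 'reverse', 'reverse', 'no-op', 'no-op']

Each layer wires a handful of such machines together; inhibition and suppression act on the wires between them.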

What this is not

• Connectionism: subsumption uses fixed connections and non-uniform processing nodes.
• Neural networks: there is no biological plausibility in the finite state machines.
• Production rules: a subsumption system lacks, e.g., a working memory.
• Blackboard: in a subsumption system, signals travel to fixed, forced locations instead of being collected on a shared blackboard.

Limits to growth

Problems to solve before world domination is assured:

• Complexity increases with the number of layers – does this bound usefulness?
• Are there limits to how complex a behavior can emerge from such a system?
• Do we need a central reasoning layer on top after all?
• Can higher-level functions (e.g., learning) occur in a layered system?

The future

Cardea – the new robot

“Only experiments with real Creatures in real worlds can answer the natural doubts about our approach. Time will tell.”

SLIDE 5

Summary

• Representation is not required for intelligent behavior!
• Insofar as Brooks’ system is “intelligent”, it contradicts the “Knowledge Representation Hypothesis”.
• Start simple – make an insect-like intelligence first. Otherwise: the Artificial Flight story.
• Always make things that operate in the real world – no simulated environments!

The Subsumption Architecture

• Layers, where higher layers implement higher-level functions.
• All layers work in parallel, and all receive sensory input.
• No representation – only signals and state machines.