SLIDE 1

False-belief tasks and their formalisation

Thomas Bolander, DTU Compute, Technical University of Denmark. IRIT, Toulouse, 30 June 2014.

SLIDE 2

Social Intelligence and Theory of Mind

Theory of Mind (ToM): the ability to attribute mental states—beliefs, intentions, desires, etc.—to other agents. Having a ToM is essential for successful social interaction in humans [Baron-Cohen, 1997]. The presence of a ToM in children is often tested through false-belief tasks, e.g. the Sally-Anne test.

SLIDE 5

Goal of the present work

Goal: To formalise the Sally-Anne task in a suitable variant of dynamic epistemic logic. But why? Three uses of logical formalisations of dynamic epistemic reasoning:

  • 1. Specification, analysis and verification of agent systems (e.g. computer systems or security protocols).
  • 2. Basis for the reasoning engine of autonomous agents.
  • 3. Providing formal models of human reasoning.

My focus is on 2. My ultimate aim is to construct planning agents (e.g. robots) with ToM capabilities. Finding out what it takes for a computer or robot to pass the Sally-Anne test is a good test case for this research aim.

SLIDE 6

Comparison of false-belief task agents

The Sally-Anne task requires second-order reasoning (the agent believes that Sally believes the cube to be in the large container). Some false-belief tasks require n-th order reasoning for n > 2.

| system | platform | higher-order reasoning | other features |
|---|---|---|---|
| CRIBB [Wahl and Spada, 2000] | Prolog | ≤ 3 | goal recognition, plan recognition |
| Edd Hifeng [Arkoudas and Bringsjord, 2008] | event calc. | ≤ 2 | Second Life avatar |
| Leonardo [Breazeal et al., 2011] | C5 agent arch. | ≤ 2 | goal recognition, learning |
| Epistemic planning [Bolander and Andersen, 2011] | DEL | ∞ | planning |
| ACT-R agent [Arslan et al., 2013] | ACT-R cogn. arch. | ∞ | learning |

Only [Bolander and Andersen, 2011] supports planning (but several of the other formalisms could possibly be extended to support it).

SLIDE 10

Dynamic Epistemic Logic (DEL) by example

We use the event models of DEL [Baltag et al., 1998] with added postconditions (ontic actions) as in [van Ditmarsch et al., 2008].

  • Example. The secret turn of a coin:

[Diagram: the epistemic model is a single (actual) world satisfying black, with an i,u-loop. The event model consists of the actual event ⟨black, ¬black⟩ (precondition, postcondition) with an i-loop and a u-edge to the trivial event ⟨⊤, ⊤⟩, which has an i,u-loop. The product update ⊗ yields a model whose actual world satisfies ¬black (i-loop), with a u-edge to a world satisfying black (i,u-loop).]

  • Epistemic models: Multi-agent K models. Elements of the domain are called worlds. The actual world is coloured green in the slides.
  • Event model: Represents the action of secretly turning the coin.
  • Product update: The updated model represents the situation after the action has taken place.
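Since the talk itself contains no code, here is a minimal, hypothetical Python sketch of the three ingredients just listed (class and function names are mine, not from the talk); preconditions and postconditions are modelled as functions on a world's valuation, i.e. its set of true atoms.

```python
# Hypothetical sketch of epistemic models, event models and product update
# for multi-agent K with postconditions (ontic change). Not the talk's code.
from itertools import product as pairs

class EpistemicModel:
    def __init__(self, worlds, edges, actual):
        self.worlds = worlds    # world name -> set of true atoms (its valuation)
        self.edges = edges      # agent -> set of (world, world) accessibility pairs
        self.actual = actual    # name of the actual world (the "green" one)

class EventModel:
    def __init__(self, events, edges, actual):
        self.events = events    # event name -> (precondition, postcondition)
        self.edges = edges      # agent -> set of (event, event) pairs
        self.actual = actual    # name of the actual event

def product_update(s, a):
    """s ⊗ a: keep (world, event) pairs whose precondition holds, apply the
    postcondition to the valuation, and intersect the edge relations."""
    worlds = {(w, e): post(v)
              for (w, v), (e, (pre, post)) in pairs(s.worlds.items(), a.events.items())
              if pre(v)}
    edges = {i: {((w1, e1), (w2, e2))
                 for (w1, e1), (w2, e2) in pairs(worlds, worlds)
                 if (w1, w2) in s.edges[i] and (e1, e2) in a.edges[i]}
             for i in s.edges}
    return EpistemicModel(worlds, edges, (s.actual, a.actual))

# The secret turn of a coin, agents i and u:
coin = EpistemicModel({'w': {'black'}},
                      {'i': {('w', 'w')}, 'u': {('w', 'w')}}, 'w')
turn = EventModel({'e': (lambda v: 'black' in v, lambda v: v - {'black'}),  # <black, ¬black>
                   'f': (lambda v: True, lambda v: v)},                     # <⊤, ⊤>
                  {'i': {('e', 'e'), ('f', 'f')},          # i observes the turn
                   'u': {('e', 'f'), ('f', 'f')}}, 'e')    # u thinks nothing happened
updated = product_update(coin, turn)
# In `updated`, the actual world satisfies ¬black, but u's only edge from it
# leads to a black-world: u still believes the coin shows black.
```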

SLIDE 12

Our version of Sally-Anne

The Sally-Anne test exists in many variants [Wellman et al., 2001]. We use the version where the observer (child) is asked: “Where does Sally think the cube is?”. We will interpret this as meaning: “Where does Sally believe the cube to be?”

SLIDE 13

Constants of modelling language

We will use the following agent symbols:

  • O: The Observer (the child/agent taking the Sally-Anne test).
  • S: Sally.
  • A: Anne.

We will use the following propositional symbols:

  • large: The cube is in the large container.
  • small: The cube is in the small container.
  • sally: Sally is present in the room with Anne and the observer.

In epistemic models, we will use green nodes to denote the actual world.

SLIDE 22

Modelling Sally-Anne in DEL

  • 1. Sally has placed the cube in the large container:
    s1 = (large, sally) [O,S,A]
  • 2. Sally leaves the room:
    a2 = ⟨⊤, ¬sally⟩ [O,S,A]
    s2 = s1 ⊗ a2 = (large) [O,S,A]
  • 3. Anne transfers the cube to the small container:
    a3 = ⟨⊤, ¬large ∧ small⟩ [O,A] --S--> ⟨⊤, ⊤⟩ [O,S,A]
    s3 = s2 ⊗ a3 = (small) [O,A] --S--> (large) [O,S,A]
  • 4. Sally re-enters:
    a4 = ⟨⊤, sally⟩ [O,S,A]
    s4 = s3 ⊗ a4 = (small, sally) [O,A] --S--> (large, sally) [O,S,A]

(Notation: worlds are written as their valuations and events as ⟨precondition, postcondition⟩ pairs; [X] lists the agents whose edges loop at that node, --i--> is an i-edge, and the first node of each model is the actual one, drawn green in the slides.)

We have s4 ⊨ B_O B_S large. Thus the observer will answer the question “Where does Sally believe the cube is?” with “in the large container”, hence passing the Sally-Anne test!

Now note that s1 ⊗ a3 = s4. Thus Sally leaving and re-entering doesn’t have any effect on the model the agent ends up with! Something is not right!...
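As a sanity check, the four steps can be replayed with the toy machinery sketched on the DEL-by-example slide (hypothetical code mirroring the event models above):

```python
# Sally-Anne with the toy product update from before (hypothetical sketch).
AG = ('O', 'S', 'A')
loops = lambda ns: {i: {(n, n) for n in ns} for i in AG}   # everyone loops

s1 = EpistemicModel({'w': {'large', 'sally'}}, loops(['w']), 'w')
a2 = EventModel({'e': (lambda v: True, lambda v: v - {'sally'})},   # <⊤, ¬sally>
                loops(['e']), 'e')
a3 = EventModel({'e': (lambda v: True, lambda v: (v - {'large'}) | {'small'}),
                 'f': (lambda v: True, lambda v: v)},
                {'O': {('e', 'e'), ('f', 'f')},
                 'A': {('e', 'e'), ('f', 'f')},
                 'S': {('e', 'f'), ('f', 'f')}},   # Sally: "nothing happened"
                'e')
a4 = EventModel({'e': (lambda v: True, lambda v: v | {'sally'})},   # <⊤, sally>
                loops(['e']), 'e')

s4 = product_update(product_update(product_update(s1, a2), a3), a4)

def believes(m, w, i, phi):
    """B_i phi at world w: phi holds in all worlds i considers possible."""
    return all(phi(m, w2) for (w1, w2) in m.edges[i] if w1 == w)

print(believes(s4, s4.actual, 'O',
               lambda m, w: believes(m, w, 'S',
                                     lambda m2, w2: 'large' in m2.worlds[w2])))
# True: s4 |= B_O B_S large. And product_update(s1, a3) yields the same
# model (up to renaming of worlds) - the slide's observation that Sally's
# leaving and re-entering made no difference.
```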

SLIDE 24

Modelling attention and observability in DEL

Consider the action “Anne transfers cube”.

  • When all agents observe the action taking place:
    ⟨⊤, ¬large ∧ small⟩ [O,S,A]
  • When only a subset B of the set of all agents A observes the action taking place:
    ⟨⊤, ¬large ∧ small⟩ [B] --(A − B)--> ⟨⊤, ⊤⟩ [A]

The lower event model is recognised as having the same structure as a private announcement to the group of agents B. Which agents observe a given action taking place should be encoded in the state to which the action is applied. In the Sally-Anne test, the propositional symbol sally encodes whether Sally observes the action “Anne transfers cube”.
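In code, the two shapes differ only in the edge relations. A hypothetical helper in the style of the earlier sketch:

```python
# Hypothetical helper: an ontic change observed exactly by the agents in B
# (same shape as a private announcement to B).
def observed_by(B, post, agents=('O', 'S', 'A')):
    edges = {i: {('e', 'e'), ('f', 'f')} if i in B    # B sees the real event
             else {('e', 'f'), ('f', 'f')}            # A - B: "nothing happened"
             for i in agents}
    return EventModel({'e': (lambda v: True, post),          # <⊤, post>
                       'f': (lambda v: True, lambda v: v)},  # <⊤, ⊤>
                      edges, 'e')

transfer = lambda v: (v - {'large'}) | {'small'}
# a3 above is exactly observed_by({'O', 'A'}, transfer); with B = all agents,
# the trivial event becomes unreachable and the action is fully public.
```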

SLIDE 27

Modelling observability in DEL

In the Sally-Anne test, a correct event model for “Anne transfers the cube” that works independently of where Sally is, is the following multi-pointed event model:

⟨¬sally, ¬large ∧ small⟩ [O,A] --S--> ⟨¬sally, ⊤⟩ [O,S,A]
⟨sally, ¬large ∧ small⟩ [O,S,A]

(The two events with postcondition ¬large ∧ small are the designated points of the multi-pointed model; which of them is actual depends on whether sally holds.)

But this is a bit ad hoc and only gives a solution to this concrete problem. We need something more principled...
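For concreteness, the multi-pointed event model above can be written with the earlier toy classes roughly as follows (hypothetical code; `designated` holds the two possible actual events, resolved by their preconditions):

```python
# The multi-pointed event model for "Anne transfers the cube", sketched with
# the toy classes from before.
transfer = lambda v: (v - {'large'}) | {'small'}

events = {'e1': (lambda v: 'sally' not in v, transfer),       # <¬sally, ¬large ∧ small>
          'e2': (lambda v: 'sally' not in v, lambda v: v),    # <¬sally, ⊤>
          'e3': (lambda v: 'sally' in v, transfer)}           # <sally, ¬large ∧ small>
edges = {'O': {('e1', 'e1'), ('e2', 'e2'), ('e3', 'e3')},
         'A': {('e1', 'e1'), ('e2', 'e2'), ('e3', 'e3')},
         'S': {('e1', 'e2'), ('e2', 'e2'), ('e3', 'e3')}}     # absent Sally: "nothing happened"
designated = {'e1', 'e3'}   # multi-pointed: one of these is actual, per state
```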

Two possible solutions:

  • Encoding observational information into states. The idea is: who observes the action should not be an aspect of the description of the action itself, but an aspect of the state in which it is being applied.
  • Adding observational axioms. The idea is: who observes which actions under which circumstances is encoded in observational axioms as part of the domain description.

SLIDE 33

Encoding observations

What should observations be connected to? Several possibilities:

  • Propositions. Proposition p is observed by agent i if . . .
  • All actions. All actions taking place are observed by agent i if . . .
  • Particular actions. Action a is observed by agent i if . . .

Proposals in the existing literature:

| | axiom-encoded | state-encoded |
|---|---|---|
| propositions | [Brenner and Nebel, 2009]: sensor models; axioms sensor(i, p, cond) | [Van Der Hoek et al., 2011] (note: observable propositions are fixed) |
| all actions | | [van Ditmarsch et al., 2013]: new propositions h_i meaning “i is paying attention” |
| particular actions | [Baral et al., 2012]: action language mA+; axioms “i observes a if φ” | |

SLIDE 34

Sally-Anne with propositions for paying attention

We want to formalise the Sally-Anne test in an extension of the logic of [van Ditmarsch et al., 2013], extended by the addition of propositional assignments. We remove announcements, since we don’t need them here.

Syntax of the extended language:

φ ::= p | h_i | ¬φ | φ ∧ φ | B_i φ | [+A]φ | [−A]φ | [+p]φ | [−p]φ

where p is any proposition, i any agent and A any set of agents. Semantics (informally):

  • h_i: agent i pays attention (to all actions).
  • B_i φ: agent i believes φ.
  • [+A]φ: after the agents in A are made to pay attention, φ holds.
  • [−A]φ: after the agents in A stop paying attention, φ holds.
  • [+p]φ: after p is made true, φ holds.
  • [−p]φ: after p is made false, φ holds.

Defining the semantics for this extended language is straightforward using the constructions in [van Ditmarsch et al., 2013]: [+A] and [−A] are already propositional assignments (for the h_i propositions).
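Making this concrete requires choices that the slide leaves implicit. The following hypothetical Python sketch (attention changes [±A] applied as public updates, factual assignments [±p] observed only by the attentive agents, everyone else keeping their old beliefs) reproduces the models on the following Sally-Anne slide; the authoritative definitions are in [van Ditmarsch et al., 2013].

```python
# Hypothetical sketch of the extended semantics. Worlds carry ordinary atoms
# plus attention atoms 'hO', 'hS', 'hA'. Two update operations:
class State:
    def __init__(self, worlds, edges, actual):
        self.worlds, self.edges, self.actual = worlds, edges, actual

def set_attention(s, A, on):
    """[+A] / [-A]: flip the h_i atoms for i in A in every world. Treated
    here as a public update (an assumption consistent with the slides)."""
    hs = {'h' + i for i in A}
    worlds = {w: (v | hs if on else v - hs) for w, v in s.worlds.items()}
    return State(worlds, s.edges, s.actual)

def assign(s, plus=(), minus=()):
    """[+p] / [-p]: factual change. Agents attentive at a world (h_i true)
    follow the change; inattentive agents keep believing the old state."""
    worlds, edges = {}, {i: set() for i in s.edges}
    for w, v in s.worlds.items():
        worlds[('new', w)] = (v | set(plus)) - set(minus)
        worlds[('old', w)] = v                         # "nothing happened" copy
    for i, rel in s.edges.items():
        for (w1, w2) in rel:
            target = 'new' if 'h' + i in s.worlds[w1] else 'old'
            edges[i].add((('new', w1), (target, w2)))
            edges[i].add((('old', w1), ('old', w2)))   # old copies keep old beliefs
    return State(worlds, edges, ('new', s.actual))
```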

SLIDE 43

Modelling Sally-Anne in the new language

  • 1. Sally has placed the cube in the large container:
    s1 = (large, h_O, h_S, h_A) [O,S,A]
  • 2. Sally leaves the room: a2 = −S
    s2 = s1^(−S) = (large, h_O, h_A) [O,S,A]
  • 3. Anne transfers the cube: a3 = −large; +small
    s3 = s2^(−large;+small) ↔ (small, h_O, h_A) [O,A] --S--> (large, h_O, h_A) [O,S,A]
  • 4. Sally re-enters: a4 = +S
    s4 = s3^(+S) ↔ (small, h_O, h_S, h_A) [O,A] --S--> (large, h_O, h_S, h_A) [O,S,A]

(Here s^(π) denotes the result of applying the assignment π to s, and ↔ is equality up to bisimulation.)

So s4 = s1^(−S;−large;+small;+S). We have s4 ⊨ B_O B_S large, so again the observer will pass the Sally-Anne test. But now we also have s1^(−large;+small) = (small, h_O, h_S, h_A) [O,S,A] ≠ s4. Hence our previous problem has been solved!
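Replaying the four updates with the hypothetical sketch from the previous slide reproduces exactly this:

```python
# Sally-Anne with the sketched attention updates (hypothetical code).
s1 = State({'w': {'large', 'hO', 'hS', 'hA'}},
           {i: {('w', 'w')} for i in ('O', 'S', 'A')}, 'w')
s2 = set_attention(s1, ['S'], on=False)              # a2 = -S
s3 = assign(s2, plus=['small'], minus=['large'])     # a3 = -large; +small
s4 = set_attention(s3, ['S'], on=True)               # a4 = +S

def believes(m, w, i, phi):
    return all(phi(m, w2) for (w1, w2) in m.edges[i] if w1 == w)

print(believes(s4, s4.actual, 'O',
               lambda m, w: believes(m, w, 'S',
                                     lambda m2, w2: 'large' in m2.worlds[w2])))
# True: s4 |= B_O B_S large. By contrast, assign(s1, plus=['small'],
# minus=['large']) leaves only one reachable world, where everyone believes
# small - so it is not the same model as s4: the earlier problem is gone.
```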

SLIDE 44

Still not quite there yet...

  • 1. When Sally is outside the room, she doesn’t observe any actions (not even her own).
  • 2. And what if Sally peeked through the door while being outside, without Anne noticing?

The problems become apparent when going to higher-order false-belief tasks...

SLIDE 45

Higher-order false-belief tasks

The last action has depth 2 (all actions of Sally-Anne had depth ≤ 1):

⟨⊤, ¬drawer ∧ box⟩ [B] --G--> ⟨⊤, ¬drawer ∧ box⟩ [G] --B--> ⟨⊤, ⊤⟩ [B,G]

(B = the boy, G = the girl. Reading: the transfer actually takes place and the boy observes it; the girl also takes the transfer to happen, but believes the boy thinks nothing happened.)

SLIDE 46

Proposal: A richer language

Now for some hand-waving. What we need is not propositions h_i for “agent i observes all actions” but rather propositions i∢j for “i sees j”, meaning “agent i observes all actions of agent j”. Then we can have e.g. B∢G and ¬G∢B in the same world: the boy sees the actions of the girl, but the girl doesn’t see the actions of the boy. This leads to the following language:

φ ::= p | i∢j | ¬φ | φ ∧ φ | B_i φ | [+S]φ | [−S]φ | [i: +p]φ | [i: −p]φ

where S is a set of seeing propositions. Details to be filled out...
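Purely as an illustration of the intended reading (the slide itself says the details remain open), seeing atoms could be represented as ordered pairs, with the observers of an action derived per world:

```python
# Illustration only: i∢j represented as the ordered pair (i, j) in a world's
# valuation; all names here are hypothetical.
def observers(valuation, actor):
    """Agents i with i∢actor true at this world; a reflexive atom (i, i)
    encodes that i observes its own actions."""
    sees = {a for a in valuation if isinstance(a, tuple)}   # the i∢j atoms
    return {i for (i, j) in sees if j == actor}

world = {'box', ('B', 'G'), ('B', 'B'), ('G', 'G')}   # B∢G holds, G∢B doesn't
print(observers(world, 'G'))   # {'B', 'G'}: both see the girl's actions
print(observers(world, 'B'))   # {'B'}: only the boy sees his own actions
```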

SLIDE 47

Even more to look at

  • Intention recognition. What if Sally suspects that Anne is going to trick her?
  • Epistemic inertia. We have applied a principle of “epistemic inertia”: if you don’t observe an action, you think that nothing has happened. That’s not always the case, in particular in relation to intention recognition.

SLIDE 49

It’s not all about false-belief tasks

Plan verification, planning and abduction:

  • Solving a false-belief task is a plan verification problem: given an initial state and a sequence of actions, does a certain formula hold? (See the sketch after this list.)
  • But we can also use the formalisms for planning: given an initial state and a goal state, find a sequence of actions that leads from the initial state to the goal.
  • And for abduction: finding a most plausible sequence of actions leading from an initial state to a goal.
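In the toy setting of the earlier sketches, the first two problems can be phrased generically (hypothetical code; the search is depth-bounded, as unrestricted epistemic planning is undecidable [Aucher and Bolander, 2013]):

```python
# Hypothetical sketch: plan verification and depth-bounded planning over any
# state type equipped with an update function and a goal test.
from itertools import product as seqs

def verify(state, plan, goal, update):
    """Plan verification: does `goal` hold after executing `plan`?"""
    for action in plan:
        state = update(state, action)
    return goal(state)

def plan_search(state, actions, goal, update, max_len=4):
    """Toy planner: enumerate all action sequences up to max_len."""
    for n in range(max_len + 1):
        for plan in seqs(actions, repeat=n):
            if verify(state, plan, goal, update):
                return list(plan)
    return None

# E.g. the DEL Sally-Anne run above is verify(s1, [a2, a3, a4], goal,
# product_update) with goal checking B_O B_S large via believes().
```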

Languages for planning domain descriptions:

  • Engineering of event models is hard! We need simpler, possibly less expressive, languages that induce event models. The language of [van Ditmarsch et al., 2013] and the extensions considered above might be good candidates.

SLIDE 50

APPENDIX: about why it is essential to be able to construct AI agents (robots, personal digital assistants, NPCs in computer games, etc.) with Theory of Mind capabilities.

SLIDE 51

Human child, 18 months old

The child is not given any instructions beforehand.

[Warneken and Tomasello, 2006]

SLIDE 52

TUG Robot, Hospital of Southern Jutland (2013)

The Hospital of Southern Jutland in Denmark has been employing TUG hospital robots since mid-2012.

[ing.dk, 16 January 2013]

SLIDE 55

Anti-social TUG hospital robots (2009)

Frustrated users of hospital robots in the USA:

  • “TUG was a hospital worker, and its colleagues expected it to have some social smarts, the absence of which led to frustration—for example, when it always spoke in the same way in both quiet and busy situations.”
  • “I’m on the phone! If you say ’TUG has arrived’ one more time I’m going to kick you in your camera.”
  • “It doesn’t have the manners we teach our children. I find it insulting that I stand out of the way for patients... but it just barrels right on.”

[Photo: TUG hospital robot]

[Barras, 2009]

SLIDE 56

Socially intelligent robots

The TUG robots ought to be more like the 18-month-old child... Required for socially intelligent robots:

(a) Higher-order reasoning: reasoning about the beliefs of other agents.
(b) Goal recognition: inferring the goals of others based on their actions.

Higher-order reasoning capabilities in humans are often tested with false-belief tasks, so it seems reasonable to try to construct AI agents that can pass these tests.

SLIDE 57

Appendix: References I

Arkoudas, K. and Bringsjord, S. (2008). Toward formalizing common-sense psychology: An analysis of the false-belief task. In PRICAI (Ho, T. B. and Zhou, Z.-H., eds), vol. 5351 of Lecture Notes in Computer Science, pp. 17–29. Springer.

Arslan, B., Taatgen, N. and Verbrugge, R. (2013). Modeling developmental transitions in reasoning about false beliefs of others. In Proceedings of the 12th International Conference on Cognitive Modelling.

Aucher, G. and Bolander, T. (2013). Undecidability in epistemic planning. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), pp. 27–33.

Baltag, A., Moss, L. S. and Solecki, S. (1998). The logic of public announcements, common knowledge, and private suspicions. In Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge (TARK-98) (Gilboa, I., ed.), pp. 43–56. Morgan Kaufmann.

Baral, C., Gelfond, G., Son, T. C. and Pontelli, E. (2012). An action language for reasoning about beliefs in multi-agent domains. In Proceedings of the 14th International Workshop on Non-Monotonic Reasoning.

Baron-Cohen, S. (1997). Mindblindness: An Essay on Autism and Theory of Mind. MIT Press.

Barras, C. (2009). Useful, lovable and unbelievably annoying. New Scientist 204, 22–23.

SLIDE 58

Appendix: References II

Bolander, T. and Andersen, M. B. (2011). Epistemic planning for single- and multi-agent systems. Journal of Applied Non-Classical Logics 21, 9–34.

Breazeal, C., Gray, J. and Berin, M. (2011). Mindreading as a foundational skill for socially intelligent robots. In Robotics Research, pp. 383–394. Springer.

Brenner, M. and Nebel, B. (2009). Continual planning and acting in dynamic multiagent environments. Autonomous Agents and Multi-Agent Systems 19, 297–331.

Van Der Hoek, W., Troquard, N. and Wooldridge, M. (2011). Knowledge and control. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, vol. 2, pp. 719–726. International Foundation for Autonomous Agents and Multiagent Systems.

van Ditmarsch, H., Herzig, A., Lorini, E. and Schwarzentruber, F. (2013). Listen to me! Public announcements to agents that pay attention—or not. In Logic, Rationality, and Interaction, pp. 96–109. Springer.

Van Ditmarsch, H. and Labuschagne, W. (2007). My beliefs about your beliefs: a case study in theory of mind and epistemic logic. Synthese 155, 191–209.

Wahl, S. and Spada, H. (2000). Children’s reasoning about intentions, beliefs and behaviour. Cognitive Science Quarterly 1, 3–32.

SLIDE 59

Appendix: References III

Warneken, F. and Tomasello, M. (2006). Altruistic helping in human infants and young chimpanzees. Science 311, 1301–1303.

Wellman, H. M., Cross, D. and Watson, J. (2001). Meta-analysis of theory-of-mind development: the truth about false belief. Child Development 72, 655–684.