

SLIDE 1

Imagination-based Conditional Belief

Christopher Badura

Ruhr-University Bochum

Workshop Doxastic Agency & Epistemic Logic 16th Dec. 2017

SLIDE 3

Introduction

Suppose you’re packing for your vacation and you’re imagining yourself at the beach. Moreover, you imagine that the sun is shining, you’re wearing sunglasses, and you feel the sun burning on your skin. In the real world, you go to the wardrobe, pick out your sunglasses and sunscreen, and put them in your suitcase. Why did you act this way?

Natural (?) Explanation, Assumptions

◮ You had an intention to pack your sunglasses
◮ The intention led to the action of packing your sunglasses
◮ Some beliefs formed this intention

SLIDE 4

Questions and (claimed) answers

Questions

  • 1. Where did the beliefs come from, or how are they justified?
  • 2. What kind of beliefs?

My claims

  • 1. The beliefs came from/are justified by imagining
  • 2. The beliefs are conditional beliefs

Disclaimers

  • 1. Work in progress
  • 2. Scope: “reality-oriented” imagination [Williamson, 2016] for somewhat idealized agents.
  • 3. The philosophy part is short

SLIDE 5

Outline

◮ Imagination
◮ Imagination justifies conditional belief
◮ Towards a Formal Semantics
◮ Issues and Further Research

SLIDE 6

Imagination and Agency

Many authors [Dorsch, 2007], [Kind, 2013], [Langland-Hassan, 2016], [van Leeuwen, 2016], [Williamson, 2016] agree that imagination is an instance of agency because it has a purpose, it’s voluntary, it’s sometimes intentional action, etc.

SLIDE 10

Structure of imaginative project

◮ Motivational states generate a purpose to represent a certain content
◮ Initialize: Set out to represent some of the content. Active, voluntary/direct, purposeful; can be intentional
◮ Unfold: Use mechanisms to enrich the previous content (inference, association). Active (for it is still purpose-directed), mostly involuntary/indirect, purposeful; new initializations can happen in this step
◮ Possible Termination: the purpose is fulfilled + the desired content is represented successfully/richly enough

SLIDE 11

Imagination and Epistemic Projects

While unfolding, (maybe passively) use reliable mechanisms (deductive inference, or other inference) to generate new mental representations. This is still an imaginative project, for the overall purpose is to represent certain contents. The subproject can be non-imaginative and purely epistemic, for the representations are not directly determined.

SLIDE 19

Imagination justifies conditional belief

Inspired by an example and explanation in [Williamson, 2016]

  • 1. a entertains an imaginative project with the purpose to represent some content C. The initial episode is A; epistemic unfolding mechanisms yield a “consequence” B.
  • 2. Then a (implicitly) believes that a entertains that imaginative project
  • 3. a believes that a imagines A
  • 4. a believes that the unfolding mechanism is usually reliable
  • 5. a believes that a imagines B
  • 6. a believes that from A, through reliable unfolding, it came to imagine B
  • 7. Given the unfolding is a reliable mechanism, a has justification for B, given A.
  • 8. So, a is justified in believing B, or at least ⋄B, given A

SLIDE 24

Back to example

◮ Motivational states: you intend to go on a sunny beach vacation and pack your bag for that
◮ Purpose: Represent the vacation with all the stuff you should bring.
◮ Initialize: you imagine yourself on a sunny beach vacation
◮ Unfold: You imagine (propositionally) that usually, when it’s sunny on the beach, you wear sunglasses. So you imagine (objectually) yourself wearing sunglasses.
◮ Unfold: You imagine (propositionally) that usually, when it’s sunny on the beach, you wear sunscreen. So you imagine (objectually) yourself wearing sunscreen.
◮ Unfold: You imagine (propositionally) that given you’re on a sunny beach vacation, usually, you wear sunglasses and sunscreen

SLIDE 25

Example continued

◮ Since your unfolding was based on usually reliable mechanisms, you have prima facie justification for believing that you wear sunglasses and sunscreen, given you’re on a beach vacation.
◮ I am going to ignore the following interaction of conditional belief, knowledge and action: In the real world, you know you haven’t packed sunglasses and sunscreen. You know you are about to go on a sunny beach vacation. Together with your conditional belief, this should entail that you act and pack the sunglasses and the sunscreen.

SLIDE 26

Towards a Formal Semantics

Aims

◮ model the agentive aspect of initialization
◮ model indirect/involuntary unfolding
◮ model the relation to conditional belief

Candidate: stit-imagination logic by [Wansing, 2015], [Olkhovikov and Wansing, 2017]
Missing: conditional belief and indirect/involuntary unfolding
My contribution, hopefully: adding conditional belief, interpreting the semantics to explain unfolding

SLIDE 27

Language

Single agent case, agent a; subscripts dropped. The language is given by

p | ¬A | A ∧ A | SA | [c]A | □A | Bel(B|A)

Intuitive Interpretation

◮ SA: A is settled
◮ [c]A: the agent c-stit realizes that A
◮ □A: A is in the mental image of a
◮ Bel(B|A): a believes B, given A

We enrich the language by defining A ∨ B ≡ ¬(¬A ∧ ¬B), A ⊃ B ≡ ¬A ∨ B, and our imagination operator as IA ≡ [c]□A ∧ ¬S□A. So the agent actively imagines A if it chooses to make A its mental image and it is not settled that A is in its mental image. The imagination operator binds as strongly as ¬.
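For bookkeeping, the defined connectives can be sketched as constructors over a small formula AST. The tuple encoding and constructor names below are my own illustration, not part of the slides:

```python
# Hypothetical formula encoding: ('atom', 'p'), ('not', A), ('and', A, B),
# ('S', A), ('c', A) for [c]A, ('box', A) for the mental-image operator.
def Not(a): return ('not', a)
def And(a, b): return ('and', a, b)
def S(a): return ('S', a)
def C(a): return ('c', a)
def Box(a): return ('box', a)

def Or(a, b): return Not(And(Not(a), Not(b)))    # A v B := ¬(¬A ∧ ¬B)
def Implies(a, b): return Or(Not(a), b)          # A ⊃ B := ¬A v B
def I(a): return And(C(Box(a)), Not(S(Box(a))))  # IA := [c]□A ∧ ¬S□A
```

The derived operators literally expand into the primitive ones, matching the definitions above.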

SLIDE 30

Semantics - Models

A single agent doxastic stit-imagination model is a tuple M = ⟨Tree, ≤, Agent = {a}, Choice, N, V⟩ where:

◮ Tree ≠ ∅ is a set of moments in time.
◮ ≤ ⊆ Tree × Tree is a partial order such that there is no backwards branching and there is historical connectedness (each pair of moments has a common ancestor moment)
◮ The set Histories is the set of all maximal ≤-chains in Tree. A history h passes through a moment m iff m ∈ h. The set of all histories passing through m is Hm.
◮ Agent is a finite set of agents; since it’s the single agent case, Agent = {a}. I drop any agent subscripts in what follows.
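As a sanity check on these definitions, histories can be computed for a finite moment tree. The parent-map encoding is an assumption of this sketch, not anything from the slides:

```python
def histories(parent):
    # parent: child moment -> parent moment (root maps to None); histories
    # are the maximal <=-chains, i.e. the root-to-leaf branches of the tree
    children = {m: [] for m in parent}
    for m, p in parent.items():
        if p is not None:
            children[p].append(m)
    hs = []
    for leaf in (m for m in parent if not children[m]):
        h, m = [], leaf
        while m is not None:           # walk back up to the root
            h.append(m)
            m = parent[m]
        hs.append(tuple(reversed(h)))  # ordered root-first
    return hs

def H(m, hs):
    # H_m: the histories passing through moment m
    return [h for h in hs if m in h]
```

On a tree branching at the root, every history passes through the root, while a leaf lies on exactly one history.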

SLIDE 34

Semantics - Models II

◮ Choice is a function on Tree × Agent such that Choicem is a partition of Hm. If h ∈ Hm, then Choicem(h) is the partition cell that contains h. Choice satisfies no choice between undivided histories. So if two histories passing through m branch at a strictly later moment m′, they’re choice-equivalent for the agent at moment m. (Usually, there’s also independence of agents, but since we’re in the single agent case I omit this)
◮ N is a neighborhood function from moment-history pairs to a set of sets of moment-history pairs. This can be interpreted as the mental image the agent entertains at that moment.
◮ V is a valuation function assigning to each atomic formula a set of moment-history pairs.
SLIDE 41

Semantics - Truth

We define ||A||M = {(m, h) | M, (m, h) ⊨ A} (and omit the superscript if the model is clear)

◮ M, (m, h) ⊨ p ⇔ (m, h) ∈ V(p), for atomic p
◮ M, (m, h) ⊨ ¬A ⇔ M, (m, h) ⊭ A
◮ M, (m, h) ⊨ A ∧ B ⇔ M, (m, h) ⊨ A and M, (m, h) ⊨ B
◮ M, (m, h) ⊨ SA ⇔ ∀h′ ∈ Hm : M, (m, h′) ⊨ A
◮ M, (m, h) ⊨ [c]A ⇔ ∀h′ ∈ Choicem(h) : M, (m, h′) ⊨ A
◮ M, (m, h) ⊨ □A ⇔ ||A|| ∈ N((m, h))
◮ M, (m, h) ⊨ Bel(B|A) ⇔ ∀X ∈ N((m, h)) : X ∩ ||A|| = ∅, OR ∃Y ∈ N((m, h)) : Y ∩ ||A|| ≠ ∅ and Y ⊆ ||A ⊃ B|| (adapted from [Girlando et al., 2016]; moment-history pairs instead of worlds)

It follows that the truth-condition for IA (≡ [c]□A ∧ ¬S□A) is

◮ M, (m, h) ⊨ IA ⇔
  • 1. ∀h′ ∈ Choicem(h) : ||A||M ∈ N((m, h′)), and
  • 2. ∃h′ ∈ Hm : ||A||M ∉ N((m, h′))

Interpretation: the agent deliberately chooses to make A its mental image; it’s a genuine choice.
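The truth clauses can be prototyped as a small checker on a finite model. The dictionary encoding of Choice, N, V and the tuple formulas are my own illustration; the Bel clause follows the neighborhood condition stated on the slide:

```python
# Indices are (moment, history) pairs. Choice maps an index to its choice
# cell, N maps an index to a set of propositions (frozensets of indices),
# V maps atoms to the set of indices where they hold.
def ext(M, A):
    # ||A||: the set of indices at which A is true
    return frozenset(i for i in M['indices'] if sat(M, i, A))

def sat(M, i, A):
    op = A[0]
    if op == 'atom': return i in M['V'][A[1]]
    if op == 'not':  return not sat(M, i, A[1])
    if op == 'and':  return sat(M, i, A[1]) and sat(M, i, A[2])
    if op == 'S':    # settled: true on every history through this moment
        return all(sat(M, j, A[1]) for j in M['indices'] if j[0] == i[0])
    if op == 'c':    # [c]: true throughout the agent's choice cell
        return all(sat(M, j, A[1]) for j in M['Choice'][i])
    if op == 'box':  # mental image: ||A|| is a neighborhood
        return ext(M, A[1]) in M['N'][i]
    if op == 'bel':  # Bel(B|A), encoded as ('bel', B, A0)
        B, A0 = A[1], A[2]
        nA = ext(M, A0)
        if all(not (X & nA) for X in M['N'][i]):
            return True                                # vacuous case
        ab = ext(M, ('not', ('and', A0, ('not', B))))  # ||A0 ⊃ B||
        return any((Y & nA) and Y <= ab for Y in M['N'][i])
    raise ValueError(op)
```

On a one-moment, two-history model whose only neighborhood is the whole index set, Bel(A|A) comes out true, in line with the Success remark on a later slide.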

SLIDE 42

Validity

A formula A is valid in M iff A is true at every moment-history pair in M, and A is valid iff A is valid in every single-agent doxastic stit-imagination model.

SLIDE 43

Semantics - N

An advantage of the neighborhood function determining a’s imagination is avoiding logical omniscience for the imagination operator I. Restrict N with the following conditions, which can be seen as (minimal) rationality requirements:

Non-emptiness ∀X ∈ N : X ≠ ∅ ([Girlando et al., 2016])
Closure under finite intersection If X, Y ∈ N, then X ∩ Y ∈ N
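A minimal sketch of checking these two conditions on a finite neighborhood (illustrative encoding; pairwise closure suffices for closure under finite intersection by induction):

```python
from itertools import combinations

def non_empty(N):
    # Non-emptiness: every neighborhood X in N is a non-empty set
    return all(X != frozenset() for X in N)

def closed_under_intersection(N):
    # Binary closure: X, Y in N implies X ∩ Y in N; by induction this
    # yields closure under all finite intersections
    return all(X & Y in N for X, Y in combinations(N, 2))
```

For example, a nested-looking family like {2} ⊆ {1, 2} passes, while two disjoint non-empty cells fail closure because their intersection is the (absent) empty set.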

slide-49
SLIDE 49

Semantics - (In)validites

I − Bel (IA ∧ (IA ⊃ B)) ⊃ Bel(B|A) Interpretation: if setting A as initial premise implies B is in agent’s mental image, then a is justified in Bel(B|A) And B ⊃ Bel(B|A), given A ⊤, ⊥. So the imagination does a job in acquiring conditional beliefs. K-Axiom fails for I and . Interpretation: logical omniscience for mental imagery (plausible?) Success Bel(A|A) holds if N contains the unit [Pacuit, 2017]: {(m′, h′)|m′ ∈ Tree, h′ ∈ Hm} ∈ N((m, h)) for every m, h. Remarks: Then ⊤, where ⊤ is any tautology. Yet I⊤ b/c condition ii) fails (∃h ∈ Hm : ||⊤|| N((m, h))). So the additional condition seems prima facie ok. Aggregation Bel(B|A), Bel(C|A) Bel(B ∧ C|A) holds if we add Nesting: ∀X, Y ∈ N((m, h)) : X ⊆ Y or Y ⊆ X Remark: Then whenever IA, then all that is in a’s imagery is either implied by A or implies it. (plausible?)
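The Aggregation claim can be sanity-checked by brute force on a small nested neighborhood. The set encoding below is my own, with ||A ⊃ B|| computed classically over a toy universe of indices:

```python
W = frozenset(range(4))          # a toy universe of indices

def impl(a, b):
    # ||A ⊃ B|| = (W \ ||A||) ∪ ||B||, classically
    return (W - a) | b

def nested(N):
    # Nesting: any two neighborhoods are comparable under inclusion
    return all(X <= Y or Y <= X for X in N for Y in N)

def bel(N, a, b):
    # the Bel(B|A) clause at a fixed index, with ||A|| = a, ||B|| = b
    if all(not (X & a) for X in N):
        return True              # every neighborhood misses ||A||
    return any((Y & a) and Y <= impl(a, b) for Y in N)
```

The brute-force check mirrors the proof idea: with nested neighborhoods, the smaller of two witnessing Y’s also witnesses the conjunction.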

SLIDE 50

Semantics - (In)validities

Simplification ⊨ Bel(B ∧ C|A) ⊃ Bel(B|A), ⊨ Bel(B ∧ C|A) ⊃ Bel(C|A)
Non-Monotonicity ⊭ Bel(B|A) ⊃ Bel(B|A ∧ C)

SLIDE 52

Need to check

Cautious Monotonicity (Bel(B|A) ∧ Bel(C|A)) ⊃ Bel(C|A ∧ B)
Cautious Transitivity (Bel(B|A) ∧ Bel(C|A ∧ B)) ⊃ Bel(C|A)

The conditions on N from [Girlando et al., 2016] (adjusted notation) are sufficient (which ones aren’t necessary? - not checked yet):

Non-emptiness as before
Nesting as before
Strong closure under intersection If S ⊆ N((m, h)) and S ≠ ∅, then ⋂S ∈ N((m, h))
Total reflexivity ∃X ∈ N((m, h)) s.t. (m, h) ∈ X. Interpretation: some proposition in the mental image at a time contains that time (reality-preservation in the mental image?) Interpretation/Consequences for I: ?
Local absoluteness If X ∈ N((m, h)) and (m′, h′) ∈ X, then N((m, h)) = N((m′, h′)). Interpretation: If a proposition in the mental image at a time contains another time, then the mental images of both times are equal. (implausible?)

SLIDE 54

Issues

◮ Imagination and belief use the same neighborhood function
◮ Is conditional belief belief in (B, given A)? [Leitgeb, 2007]
◮ Relation of belief in subjunctive conditionals vs. conditional belief?
◮ Does the formal semantics for conditional belief capture what we mean philosophically by “conditional belief”?
◮ Other (in)validities we might (not) want?
◮ More/other conditions on N?
◮ Interpretation of □?
◮ If Bel(B|⊤) := BelB, then □B implies BelB, which seems weird if □ is interpreted as mental imagery; so instead it’s implicit belief?
◮ Interpretation of IA ⊃ □B as a reliable unfolding mechanism is very bold (dynamics?)

SLIDE 55

Further research

◮ address issues
◮ imagining contradictions or impossibilities?
◮ separate neighborhood function for the conditional belief operator?
◮ hyperintensionality
◮ jstit, epistemic stit, instead of cstit (or dstit), etc.?
◮ bridge principles not based on subset relations?
◮ arguments for doxastic responsibility, given imagination is an action and can justify beliefs?

SLIDE 56

Thank You!

SLIDE 57

References I

Dorsch, F. (2007). Imagination and the Will. PhD thesis, University College London. http://discovery.ucl.ac.uk/1300296/.

Girlando, M., Negri, S., Olivetti, N., and Risch, V. (2016). The Logic of Conditional Beliefs: Neighbourhood Semantics and Sequent Calculus. In Beklemishev, L., Demri, S., and Máté, A., editors, Advances in Modal Logic, pages 322–341. College Publications.

Kind, A. (2013). The heterogeneity of the imagination. Erkenntnis, 78:141–159.

Langland-Hassan, P. (2016). On choosing what to imagine. In Kind, A. and Kung, P., editors, Knowledge through Imagination, pages x–y. Oxford University Press.

SLIDE 58

References II

Leitgeb, H. (2007). Beliefs in Conditionals vs. Conditional Beliefs. Topoi, 26:115–132.

Olkhovikov, G. K. and Wansing, H. (2017). An axiomatic system and a tableau calculus for STIT imagination logic. Journal of Philosophical Logic.

Pacuit, E. (2017). Neighborhood Semantics for Modal Logic. Springer.

van Leeuwen, N. (2016). The imaginative agent. In The Routledge Handbook of Philosophy of Imagination, pages 85–109. Routledge.

SLIDE 59

References III

Wansing, H. (2015). Remarks on the logic of imagination. A step towards understanding doxastic control through agency. Synthese, pages 1–19.

Williamson, T. (2016). Knowing by imagining. In Kind, A. and Kung, P., editors, Knowledge through Imagination, pages 113–124. Oxford University Press.