SLIDE 1 Imagination-based Conditional Belief
Christopher Badura
Ruhr-University Bochum
Workshop Doxastic Agency & Epistemic Logic 16th Dec. 2017
SLIDE 2
Introduction
Suppose you’re packing for your vacation and you’re imagining yourself at the beach. Moreover, you imagine that the sun is shining, that you’re wearing sunglasses, and that you feel the sun burning on your skin. In the real world, you go to the wardrobe, pick up your sunglasses and sunscreen, and put them in your suitcase. Why did you act this way?
SLIDE 3 Introduction
Why did you act this way? A natural (?) explanation makes the following assumptions:
◮ You had an intention to pack your sunglasses
◮ The intention led to the action of packing your sunglasses
◮ Some beliefs formed this intention
SLIDE 4 Questions and (claimed) answers
Questions
- 1. Where did the beliefs come from, or how are they justified?
- 2. What kind of beliefs are they?
My claims
- 1. The beliefs came from, or are justified by, imagining
- 2. The beliefs are conditional beliefs
Disclaimers
- 1. Work in progress
- 2. Scope: “reality-oriented” imagination [Williamson, 2016] for somewhat idealized agents
- 3. The philosophy part is short
SLIDE 5 Outline
Imagination
Imagination justifies conditional belief
Towards a Formal Semantics
Issues and Further Research
SLIDE 6
Imagination and Agency
Many authors ([Dorsch, 2007], [Kind, 2013], [Langland-Hassan, 2016], [van Leeuwen, 2016], [Williamson, 2016]) agree that imagination is an instance of agency because it has a purpose, it is voluntary, it is sometimes intentional action, etc.
SLIDE 10 Structure of imaginative project
◮ Motivational states generate the purpose to represent a certain content
◮ Initialize: set out to represent some of the content. Active, voluntary/direct, purposeful; can be intentional action
◮ Unfold: use mechanisms (inference, association) to enrich the previous content. Active (for it is still purpose-directed), mostly involuntary/indirect, purposeful; new initializations can happen in this step
◮ Possible termination: the purpose is fulfilled and the desired content is represented successfully/richly enough
SLIDE 11 Imagination and Epistemic Projects
While unfolding, the agent (maybe passively) uses reliable mechanisms (deductive or other inference) to generate new mental representations. This is still an imaginative project, for the overall purpose is to represent certain contents. The subproject can be non-imaginative and purely epistemic, for the representations are not directly determined by the agent.
SLIDE 19 Imagination justifies conditional belief
Inspired by an example and explanation in [Williamson, 2016]
- 1. a entertains an imaginative project with the purpose of representing some content C. The initial episode is A; epistemic unfolding mechanisms yield the “consequence” B.
- 2. Then a (implicitly) believes that a entertains that imaginative project
- 3. a believes that a imagines A
- 4. a believes that the unfolding mechanism is usually reliable
- 5. a believes that a imagines B
- 6. a believes that, from A, through reliable unfolding, it comes to imagine B
- 7. Given the unfolding is a reliable mechanism, a has justification for B, given A
- 8. So a is justified in believing B, or at least ⋄B, given A
SLIDE 24 Back to example
◮ Motivational states: you intend to go on a sunny beach vacation and to pack your bag for it
◮ Purpose: represent the vacation with all the stuff you should bring
◮ Initialize: you imagine yourself on a sunny beach vacation
◮ Unfold: you imagine (propositionally) that usually, when it’s sunny on the beach, you wear sunglasses. So you imagine (objectually) yourself wearing sunglasses.
◮ Unfold: you imagine (propositionally) that usually, when it’s sunny on the beach, you wear sunscreen. So you imagine (objectually) yourself wearing sunscreen.
◮ Unfold: you imagine (propositionally) that, given you’re on a sunny beach vacation, usually you wear sunglasses and sunscreen
SLIDE 25 Example continued
◮ Since your unfolding was based on usually reliable mechanisms, you have prima facie justification for believing that you wear sunglasses and sunscreen, given you’re on a beach vacation.
◮ I am going to ignore the following interaction of conditional belief, knowledge, and action: in the real world, you know you haven’t packed sunglasses and sunscreen, and you know you are about to go on a sunny beach vacation. Together with your conditional belief, this should entail that you act and pack the sunglasses and the sunscreen.
SLIDE 26 Towards a Formal Semantics
Aims
◮ model the agentive aspect of initialization
◮ model indirect/involuntary unfolding
◮ model the relation to conditional belief
Candidate: the stit-imagination logic of [Wansing, 2015] and [Olkhovikov and Wansing, 2017]. Missing: conditional belief and indirect/involuntary unfolding. My contribution, hopefully: adding conditional belief and interpreting the semantics so as to explain unfolding.
SLIDE 27 Language
Single-agent case, agent a; we drop the subscript.
A ::= p | ¬A | A ∧ A | SA | [c]A | □A | Bel(B|A)
Intuitive interpretation
◮ SA: A is settled
◮ [c]A: the agent c-stit realizes that A
◮ □A: A is in the mental image of a
◮ Bel(B|A): a believes B, given A
We enrich the language by defining A ∨ B ≡ ¬(¬A ∧ ¬B), A ⊃ B ≡ ¬A ∨ B, and our imagination operator as IA ≡ [c]□A ∧ ¬S□A. So the agent actively imagines A if it chooses to make A its mental image and it is not settled that A. The imagination operator binds as strongly as ¬.
SLIDE 31 Semantics - Models
A single-agent doxastic stit-imagination model is a tuple M = ⟨Tree, ≤, Agent = {a}, Choice, N, V⟩ where:
◮ Tree ≠ ∅ is a set of moments in time.
◮ ≤ ⊆ Tree × Tree is a partial order satisfying no backwards branching and historical connectedness (each pair of moments has a common ancestor moment).
◮ The set Histories is the set of all maximal ≤-chains in Tree. A history h passes through a moment m iff m ∈ h. The set of all histories passing through m is Hm.
◮ Agent is a finite set of agents; since it’s the single-agent case, Agent = {a}. I drop agent subscripts in what follows.
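The frame components above can be sketched directly. A toy encoding (moments as strings, histories as tuples; everything beyond the slides’ Tree, Histories, Hm is my own assumption):

```python
# A toy single-agent branching-time frame, following the slide's definitions.
# Moments and the causal order: m0 < m1, m0 < m2 (branching forwards,
# no backwards branching, historically connected).
Tree = {"m0", "m1", "m2"}
leq = {("m0", "m0"), ("m1", "m1"), ("m2", "m2"), ("m0", "m1"), ("m0", "m2")}

# Histories = maximal <=-chains; here the two branches through m0.
Histories = {("m0", "m1"), ("m0", "m2")}

def H(m):
    """Hm: all histories passing through moment m."""
    return {h for h in Histories if m in h}

def connected(m1, m2):
    """Historical connectedness: m1 and m2 share a common ancestor."""
    return any((m, m1) in leq and (m, m2) in leq for m in Tree)
```

On this frame, both histories pass through m0 while only one passes through each later moment, which is exactly the branching structure the stit clauses below quantify over.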
SLIDE 35 Semantics - Models II
◮ Choice is a function on Tree × Agent such that Choicem is a partition of Hm. If h ∈ Hm, then Choicem(h) is the partition cell that contains h. Choice satisfies no choice between undivided histories: if two histories passing through m branch only at a strictly later moment m′, they are choice-equivalent for the agent at m. (Usually there is also independence of agents, but since we are in the single-agent case I omit this.)
◮ N is a neighborhood function from moment-history pairs to sets of sets of moment-history pairs. It can be interpreted as the mental image the agent entertains at that moment.
◮ V is a valuation function assigning to each atomic formula a set of moment-history pairs.
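Choice and N can be added to the same toy frame. Again a sketch under my own encoding assumptions: Choicem partitions Hm, and N maps each moment-history pair to a set of propositions (the agent’s mental image), where propositions are frozensets of moment-history pairs.

```python
# Extending the toy frame with Choice and the neighborhood function N.
h1, h2 = ("m0", "m1"), ("m0", "m2")   # the two histories branching at m0

# At m0 the agent can choose between the branches; at later moments the
# choice is vacuous (a single cell), respecting "no choice between
# undivided histories".
Choice = {
    "m0": [{h1}, {h2}],
    "m1": [{h1}],
    "m2": [{h2}],
}

def choice_cell(m, h):
    """Choicem(h): the partition cell of Hm containing h."""
    return next(cell for cell in Choice[m] if h in cell)

# N: the mental image at each index. Here the agent's image at (m0, h1)
# contains one proposition (an invented example).
prop = frozenset({("m1", h1)})
N = {("m0", h1): {prop}, ("m0", h2): set(), ("m1", h1): set(), ("m2", h2): set()}
```

Note that N is deliberately history-dependent: the same moment m0 carries different mental images on the two histories, which is what makes the genuine-choice condition in the I operator non-trivial.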
SLIDE 41 Semantics - Truth
We define ||A||^M = {(m, h) | M, (m, h) ⊨ A} (and omit the superscript if the model is clear).
◮ M, (m, h) ⊨ p ⇔ (m, h) ∈ V(p), for atomic p
◮ M, (m, h) ⊨ ¬A ⇔ M, (m, h) ⊭ A
◮ M, (m, h) ⊨ A ∧ B ⇔ M, (m, h) ⊨ A and M, (m, h) ⊨ B
◮ M, (m, h) ⊨ SA ⇔ ∀h′ ∈ Hm : M, (m, h′) ⊨ A
◮ M, (m, h) ⊨ [c]A ⇔ ∀h′ ∈ Choicem(h) : M, (m, h′) ⊨ A
◮ M, (m, h) ⊨ □A ⇔ ||A|| ∈ N((m, h))
◮ M, (m, h) ⊨ Bel(B|A) ⇔ ∀X ∈ N((m, h)) : X ∩ ||A|| = ∅, OR ∃Y ∈ N((m, h)) : Y ∩ ||A|| ≠ ∅ and Y ⊆ ||A ⊃ B|| (adapted from [Girlando et al., 2016], with moment-history pairs instead of worlds)
It follows that the truth condition for IA (≡ [c]□A ∧ ¬S□A) is:
◮ M, (m, h) ⊨ IA ⇔
- 1. ∀h′ ∈ Choicem(h) : ||A||^M ∈ N((m, h′)), and
- 2. ∃h′ ∈ Hm : ||A||^M ∉ N((m, h′))
Interpretation: the agent deliberately chooses to make A its mental image; it is a genuine choice.
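These clauses are simple enough to run. Below is a minimal model checker on a two-history toy model; the tuple-based formula encoding and all concrete data are my own assumptions, only the truth clauses themselves follow the slides.

```python
# Minimal model checker for the truth clauses, on a toy two-history model.
h1, h2 = ("m0", "m1"), ("m0", "m2")          # two histories branching at m0
Histories = {h1, h2}
indices = [(m, h) for h in (h1, h2) for m in h]   # moment-history pairs

def H(m):
    return {h for h in Histories if m in h}

Choice = {"m0": [{h1}, {h2}], "m1": [{h1}], "m2": [{h2}]}

def cell(m, h):
    return next(c for c in Choice[m] if h in c)

V = {"p": {("m1", h1)}}                       # p holds only at (m1, h1)
N = {i: set() for i in indices}
N[("m0", h1)] = {frozenset(V["p"])}           # at (m0,h1) the image contains ||p||

def ext(f):
    """||f||: the set of indices where f is true."""
    return frozenset(i for i in indices if truth(i, f))

def truth(i, f):
    m, h = i
    op = f[0]
    if op == "atom": return i in V[f[1]]
    if op == "not":  return not truth(i, f[1])
    if op == "and":  return truth(i, f[1]) and truth(i, f[2])
    if op == "S":    return all(truth((m, g), f[1]) for g in H(m))
    if op == "stit": return all(truth((m, g), f[1]) for g in cell(m, h))
    if op == "box":  return ext(f[1]) in N[i]
    if op == "Bel":  # f = ("Bel", B, A), read as Bel(B|A)
        B, A = ext(f[1]), ext(f[2])
        if all(not (X & A) for X in N[i]):
            return True
        impl = frozenset(j for j in indices if j not in A or j in B)  # ||A ⊃ B||
        return any((Y & A) and Y <= impl for Y in N[i])
    raise ValueError(op)

p = ("atom", "p")

def I(A):
    """IA := [c]□A ∧ ¬S□A, as defined on the language slide."""
    return ("and", ("stit", ("box", A)), ("not", ("S", ("box", A))))
```

At (m0, h1) the agent genuinely imagines p: the chosen cell puts ||p|| in the mental image, while on the other history it does not, so both conditions of IA are met.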
SLIDE 42 Validity
A formula A is valid in M iff A is true at every moment-history pair in M, and A is valid iff A is valid in every single-agent doxastic stit-imagination model.
SLIDE 43 Semantics - N
An advantage of letting a neighborhood function determine a’s imagination is that it avoids logical omniscience for the imagination operator I. We restrict N with the following conditions, which can be seen as (minimal) rationality requirements:
non-emptiness ∀X ∈ N : X ≠ ∅ ([Girlando et al., 2016])
closure under finite intersection if X, Y ∈ N, then X ∩ Y ∈ N
SLIDE 49 Semantics - (In)validities
I−Bel ⊨ (IA ∧ (IA ⊃ □B)) ⊃ Bel(B|A). Interpretation: if setting A as the initial premise implies that B is in the agent’s mental image, then a is justified in Bel(B|A). And ⊭ □B ⊃ Bel(B|A) for A other than ⊤, ⊥. So the imagination does a job in acquiring conditional beliefs.
K-Axiom fails for I and □. Interpretation: no logical omniscience for mental imagery (plausible?)
Success Bel(A|A) holds if N contains the unit [Pacuit, 2017]: {(m′, h′) | m′ ∈ Tree, h′ ∈ Hm′} ∈ N((m, h)) for every (m, h). Remarks: then ⊨ □⊤, where ⊤ is any tautology. Yet ⊭ I⊤, because condition ii) (∃h′ ∈ Hm : ||⊤|| ∉ N((m, h′))) fails. So the additional condition seems prima facie OK.
Aggregation Bel(B|A), Bel(C|A) ⊨ Bel(B ∧ C|A) holds if we add Nesting: ∀X, Y ∈ N((m, h)) : X ⊆ Y or Y ⊆ X. Remark: then, whenever IA, everything in a’s imagery is either implied by A or implies it (plausible?).
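Success and Aggregation can be spot-checked at the level of raw propositions. A sketch in Python, where `bel` implements the Bel clause from the truth-condition slide on abstract index sets; all concrete data (the four-element universe, the particular neighborhoods) are my own invented examples, not claims about specific models.

```python
def bel(N_i, A, B, universe):
    """The slides' clause for Bel(B|A) at a fixed index, on raw propositions:
    every X in N_i misses ||A||, or some Y meets ||A|| and Y <= ||A -> B||."""
    implies = (universe - A) | B              # ||A ⊃ B||
    if all(not (X & A) for X in N_i):
        return True
    return any((Y & A) and Y <= implies for Y in N_i)

universe = frozenset(range(4))                # four abstract moment-history pairs

# Success: if N contains the unit, Bel(A|A) comes out true for nonempty ||A||.
N_unit = {universe}
for k in range(1, 5):
    A = frozenset(range(k))
    assert bel(N_unit, A, A, universe)

# Aggregation under Nesting: with a nested (chain-like) N_i,
# Bel(B|A) and Bel(C|A) yield Bel(B ∧ C|A) in this instance.
N_nested = {frozenset({0}), frozenset({0, 1}), universe}
A, B, C = frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})
if bel(N_nested, A, B, universe) and bel(N_nested, A, C, universe):
    assert bel(N_nested, A, B & C, universe)
```

These are instance checks, not validity proofs: they only confirm that the claimed principles hold on sample neighborhoods satisfying the stated conditions.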
SLIDE 50 Semantics - (In)validities
Simplification ⊨ Bel(B ∧ C|A) ⊃ Bel(B|A) and ⊨ Bel(B ∧ C|A) ⊃ Bel(C|A)
Non-Monotonicity ⊭ Bel(B|A) ⊃ Bel(B|A ∧ C)
SLIDE 53 Need to check
Cautious Monotonicity (Bel(B|A) ∧ Bel(C|A)) ⊃ Bel(C|A ∧ B)
Cautious Transitivity (Bel(B|A) ∧ Bel(C|A ∧ B)) ⊃ Bel(C|A)
The conditions on N from [Girlando et al., 2016] (in adjusted notation) are sufficient (which ones aren’t necessary? - not checked yet):
non-emptiness as before
nesting as before
strong closure under intersection if S ⊆ N((m, h)) and S ≠ ∅, then ⋂S ∈ N((m, h))
total reflexivity ∃X ∈ N((m, h)) such that (m, h) ∈ X. Interpretation: some proposition in the mental image at a time contains that time (reality-preservation in the mental image?). Interpretation/consequences for I: ?
local absoluteness if X ∈ N((m, h)) and (m′, h′) ∈ X, then N((m, h)) = N((m′, h′)). Interpretation: if a proposition in the mental image at a time contains another time, then the mental images of both times are equal (implausible?).
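Cautious Monotonicity and Cautious Transitivity can likewise be spot-checked on a sample neighborhood satisfying the listed conditions (nonempty, nested, closed under intersections of nonempty subfamilies). A sanity sketch, not a proof; the set-level `bel` clause and all data are my own invented examples.

```python
def bel(N_i, A, B, universe):
    """The slides' clause for Bel(B|A) at a fixed index, on raw propositions."""
    implies = (universe - A) | B              # ||A ⊃ B||
    if all(not (X & A) for X in N_i):
        return True
    return any((Y & A) and Y <= implies for Y in N_i)

universe = frozenset(range(4))
# A nested chain: closed under nonempty intersections, all members nonempty.
N_i = {frozenset({0}), frozenset({0, 1}), universe}

A, B, C = frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})

# Cautious Monotonicity instance: Bel(B|A), Bel(C|A) => Bel(C|A ∧ B)
if bel(N_i, A, B, universe) and bel(N_i, A, C, universe):
    assert bel(N_i, A & B, C, universe)

# Cautious Transitivity instance: Bel(B|A), Bel(C|A ∧ B) => Bel(C|A)
if bel(N_i, A, B, universe) and bel(N_i, A & B, C, universe):
    assert bel(N_i, A, C, universe)
```

Again, passing instances do not establish validity; a countermodel search or the sequent calculus of [Girlando et al., 2016] would be needed for that.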
SLIDE 54 Issues
◮ Imagination and belief use the same neighborhood function
◮ Is conditional belief belief in (B, given A)? [Leitgeb, 2007]
◮ What is the relation between belief in subjunctive conditionals and conditional belief?
◮ Does the formal semantics for conditional belief capture what we mean philosophically by “conditional belief”?
◮ Other (in)validities we might (not) want?
◮ More/other conditions on N?
◮ Interpretation of □?
◮ If Bel(B|⊤) := Bel B, then □B implies Bel B, which seems weird if □ is interpreted as mental imagery; so is it implicit belief instead?
◮ The interpretation of IA ⊃ S□B as a reliable unfolding mechanism is very bold (dynamics?)
SLIDE 55 Further research
◮ address issues ◮ imagining contradictions or impossibilities? ◮ separate neighborhood function for the conditional belief
◮ hyperintensionality ◮ jstit, epistemic stit, instead of cstit (or dstit), etc.? ◮ bridge principles not based on subset relations? ◮ arguments for doxastic responsibility, given imagination is an
action and can justify beliefs?
SLIDE 56
Thank You!
SLIDE 57
References I
Dorsch, F. (2007). Imagination and the Will. PhD thesis, University College London. http://discovery.ucl.ac.uk/1300296/.
Girlando, M., Negri, S., Olivetti, N., and Risch, V. (2016). The Logic of Conditional Beliefs: Neighbourhood Semantics and Sequent Calculus. In Beklemishev, L., Demri, S., and Máté, A., editors, Advances in Modal Logic, pages 322–341. College Publications.
Kind, A. (2013). The heterogeneity of the imagination. Erkenntnis, 78:141–159.
Langland-Hassan, P. (2016). On choosing what to imagine. In Kind, A. and Kung, P., editors, Knowledge through Imagination, pages x–y. Oxford University Press.
SLIDE 58
References II
Leitgeb, H. (2007). Beliefs in Conditionals vs. Conditional Beliefs. Topoi, 26:115–132.
Olkhovikov, G. K. and Wansing, H. (2017). An axiomatic system and a tableau calculus for STIT imagination logic. Journal of Philosophical Logic.
Pacuit, E. (2017). Neighborhood Semantics for Modal Logic. Springer.
van Leeuwen, N. (2016). The imaginative agent. In The Routledge Handbook of Philosophy of Imagination, pages 85–109. Routledge.
SLIDE 59
References III
Wansing, H. (2015). Remarks on the logic of imagination. a step towards understanding doxastic control through agency. Synthese, pages 1–19. Williamson, T. (2016). Knowing by imagining. In Kind, A. and Kung, P ., editors, Knowledge through Imagination, pages 113–124. Oxford University Press.