SLIDE 1

Paul D. Thorn HHU Düsseldorf, DCLPS, DFG SPP 1516

SLIDE 2

• High rational personal probability (0.5 < r < 1) is a necessary condition for rational belief.

• Degree of probability is not generally preserved when one aggregates propositions.

SLIDE 3

• Leitgeb (2013, 2014) demonstrated the ‘formal possibility’ of relating rational personal probability to rational belief, in such a way that: (LT) having a rational personal probability of at least r (0.5 < r < 1) is a necessary condition for rational belief, and (DC) rational belief sets are closed under deductive consequences.

• In light of Leitgeb’s result, I here endeavor to illustrate another problem with deductive closure.

SLIDE 4

• Discounting inappropriate applications of (LT) and some other extreme views, the combination of (LT) and (DC) leads to violations of a highly plausible principle concerning rational belief, which I’ll call the “relevant factors principle”.

• Since (LT) is obviously correct, we have good reason to think that rational belief sets are not closed under deductive consequences.

• Note that Leitgeb’s theory is not the primary or exclusive target of the argument.

SLIDE 5

• The following factors are sufficient to determine whether a respective agent’s belief in a given proposition, φ, is rational:

• (I) the agent’s relevant evidence bearing on φ,

• (II) the process that generated the agent’s belief that φ,

• (III) the agent’s degree of doxastic cautiousness, as represented by a probability threshold, s,

• (IV) the features of the agent’s practical situation to which belief in φ is relevant, and

• (V) the evidential standards applicable to the agent’s belief that φ, deriving from the social context in which the agent entertains φ.

SLIDE 6

• It is intended that (I) through (V) outline a range of factors upon which facts about rational belief supervene.

• To be slightly more precise, I propose (for each proposition φ) that:

• For any two possible agents who both believe φ, no difference in factors (I) through (V) implies no difference in the status of the respective beliefs, as rational or not.

SLIDE 7

• I regard (DC) as expressing the following claim: For all possible agents, A, with unlimited deductive abilities, the set of propositions that it is rational for A to believe is closed under deductive consequences.

• I adopt the convention of saying that it is rational for an agent, A, to believe a proposition, φ, just in case there are grounds immediately available to A such that if A were to believe that φ and base her belief on those grounds, then A’s belief that φ would be rational.

SLIDE 8

• One may consistently hold that having a rational personal probability of at least r (r < 1) is a necessary condition for rational belief, while also holding that a rational personal probability of one is a necessary condition for rational belief.

• In order to exclude the preceding possibility, I propose to treat the application of (LT) in characterizing a theory of rational belief as appropriate just in case the theory admits cases where the rational personal probability for some proposition is r, and it is rational to believe that proposition.

SLIDE 9

• For reductio, assume (LT) and (DC).

• Consider two agents A1 and A2, whose total evidence, and rational personal probability functions, PROB1 and PROB2, exclusively concern two disjoint domains D1 and D2, describable by the following propositional atoms: p1, …, pn for D1, and q1, …, qm for D2.

• Suppose that all of A1’s and A2’s beliefs are rational.

• Let φ1 be the strongest proposition believed by A1, and φ2 be the strongest proposition believed by A2.

• Assume that PROB1(φ1) = PROB2(φ2) = r, and r is the minimum ‘cautiousness threshold’ for A1 and A2.

SLIDE 10

• Suppose that A1 and A2 will proceed within the respective domains D1 and D2 via actions that are specific to each domain, and that the results of performing various actions are immediate payoffs in units of utility.

• Suppose that both A1 and A2 engage in appropriate practical deliberations.

• Finally, suppose that the contexts in which A1 and A2 entertain φ1 and φ2 are thoroughly asocial.

SLIDE 11

• Now observe that it is possible to form a probability function, PROB12, which is defined over truth-functional combinations of p1, …, pn, q1, …, qm, and which: (i) agrees with PROB1 regarding propositions that exclusively concern D1, (ii) agrees with PROB2 regarding propositions that exclusively concern D2, and (iii) assigns probabilities to other propositions by treating propositions that exclusively concern D1 as probabilistically independent of propositions that exclusively concern D2.

SLIDE 12

• Consider the sets of possible worlds with respect to D1 and D2, respectively, which may be identified with propositions of the form (¬)p1 ∧ … ∧ (¬)pn for D1, and (¬)q1 ∧ … ∧ (¬)qm for D2.

• These possible worlds may be listed as: wP1, …, wP2^n, and wQ1, …, wQ2^m.

• The set of possible worlds with respect to the joint domain of D1 and D2 may be identified with the set of propositions of the form: wPi ∧ wQk.

• Let PROB12(wPi ∧ wQk) = PROB1(wPi)·PROB2(wQk), for all such combinations.
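[The product construction on this slide can be sketched in a few lines of Python. This is a toy illustration with invented two-world distributions, not anything from the talk: joint worlds are pairs wPi ∧ wQk, and PROB12 multiplies the marginal probabilities.]

```python
from itertools import product

# Hypothetical toy distributions over the D1 and D2 worlds
# (the probability values are invented for illustration).
PROB1 = {"wP1": 0.9, "wP2": 0.1}   # possible worlds for domain D1
PROB2 = {"wQ1": 0.9, "wQ2": 0.1}   # possible worlds for domain D2

# PROB12(wPi & wQk) = PROB1(wPi) * PROB2(wQk): propositions about D1
# are treated as probabilistically independent of propositions about D2.
PROB12 = {(wp, wq): PROB1[wp] * PROB2[wq]
          for wp, wq in product(PROB1, PROB2)}

# Conditions (i) and (ii): marginalizing PROB12 recovers PROB1 (and PROB2).
marginal1 = {wp: sum(pr for (w, _), pr in PROB12.items() if w == wp)
             for wp in PROB1}
print(marginal1)  # matches PROB1, up to floating-point rounding
```

[Because PROB12 is built as a product measure, it automatically sums to one and agrees with each original function on its own domain.]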

SLIDE 13

• It is apparent that adopting the probability function PROB12 would be a rational response (though perhaps not uniquely so) for an agent whose total evidence is the aggregate of A1’s and A2’s evidence (but see below).

• Consider an agent, A12, whose total evidence is the aggregate of A1’s and A2’s, and who rationally adopts PROB12.

SLIDE 14

• Suppose that A12 believes exactly the same propositions concerning domain D1 as A1, forming them by processes type-identical to the ones that produced A1’s beliefs (and similarly for D2 and A2).

• Suppose that A12’s degree of doxastic cautiousness is identical to that of A1 and A2.

SLIDE 15

• Suppose that the languages of D1 and D2 concern mutually remote parts of A12’s environment, and that this fact is transparent to A12.

• Suppose that the actions available to A12 with respect to D1 are identical to the ones available to A1, where A12’s payoffs for performing respective actions under various D1 conditions are identical to the payoffs for A1 (and similarly for D2 and A2).

SLIDE 16

• Given the preceding, it would be appropriate (though perhaps not uniquely so) for A12 to negotiate D1 by engaging in deliberations that are identical to the ones employed by A1 (and similarly for D2 and A2).

• Assume that A12 proceeds in this manner.

• In that case, it’s clear that the features of A12’s practical situation to which her belief that φ1 is relevant are identical to the features of A1’s practical situation to which her belief that φ1 is relevant (and similarly for A2 and φ2).

• As with A1 and A2, assume that the context in which A12 entertains φ1 and φ2 is thoroughly asocial.

SLIDE 17

• There is no difference in factors (I) through (V) regarding A12’s and A1’s belief that φ1.

• So A12’s belief that φ1 is rational [by the relevant factors principle].

• Similarly, A12’s belief that φ2 is rational.

• So it is rational for A12 to believe φ1 ∧ φ2 (by (DC)).

• But it is not rational for A12 to believe φ1 ∧ φ2 (by (LT), since PROB12(φ1 ∧ φ2) = r² < r).

• Thus, by reductio, not (LT) or not (DC).

• So not (DC), since (LT).
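[The arithmetic driving the reductio is simply that multiplying two threshold-level probabilities drops the product below the threshold. A minimal check, using an assumed value r = 0.9 (any r with 0.5 < r < 1 behaves the same way):]

```python
# Assumed cautiousness threshold for illustration; the slides leave r abstract.
r = 0.9

prob_phi1 = r                        # PROB12(phi1), exactly at the threshold
prob_phi2 = r                        # PROB12(phi2)
prob_conj = prob_phi1 * prob_phi2    # independence under PROB12 gives r**2

# (DC) mandates belief in phi1 & phi2, but (LT) forbids it, since r**2 < r.
print(prob_conj < r)  # True
```

[The inequality r² < r holds for every r strictly between 0 and 1, so nothing in the argument depends on the particular value chosen.]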

SLIDE 18

(a) One can modify the above example so that PROB1 and PROB2 are both defined over the joint domain of D1 and D2, assuming the rational permissibility of suspending belief regarding propositions about which one has no evidence, or the rational permissibility of imprecise personal probabilities.

(b) One need not accept that PROB12 is rational given the aggregate of A1’s and A2’s evidence. …

SLIDE 19

The End. Thanks for your attention.