Learning & Value Change, J. Dmitri Gallow, Modality & Method Workshop (PowerPoint presentation)

SLIDE 1

Learning & Value Change

  • J. Dmitri Gallow

Modality & Method Workshop, Center for Formal Epistemology, Carnegie Mellon University, June 9–10, 2017

SLIDE 2

Please interrupt when I stop making sense.

SLIDE 3

Daniel & Melissa

If there’s a Democratic scandal, Daniel is disposed to believe that the Democrat’s actions were permissible. Melissa is disposed to have the same reaction to political scandals, whether the politicians are Democrats or Republicans.

SLIDE 5

Daniel & Melissa

If there’s a Republican scandal, Daniel is disposed to believe that the Republican’s actions were impermissible. Melissa is disposed to have the same reaction to political scandals, whether the politicians are Democrats or Republicans.

SLIDE 6

Daniel & Melissa

He doesn’t think that the actions of Democrats and Republicans give evidence of moral permissibility. Melissa is disposed to have the same reaction to political scandals, whether the politicians are Democrats or Republicans.

SLIDE 9

Daniel & Melissa

He doesn’t think that the actions of Democrats and Republicans give evidence of moral permissibility. She sometimes thinks that the actions of Democrats are impermissible, and sometimes thinks that the actions of Republicans are impermissible.

SLIDE 10

Daniel & Melissa

  • Daniel is irrational.
  • Melissa is more rational than Daniel.
  • But suppose that Daniel’s beliefs are all true, whereas many of Melissa’s are false.
  • Even so, Melissa is more rational than Daniel; rational belief need not be true, nor need true belief be rational.
  • But there still should be some connection between rationality and truth.
  • It is tempting to say: Melissa’s beliefs are more likely to be true than Daniel’s.

SLIDE 16

Accuracy-first epistemology

  • According to Accuracy-first epistemology, to be rational is to rationally pursue truth.
  • Accuracy firsters:
  • Melissa adopts those beliefs which she expects to be most accurate.
  • Daniel adopts beliefs which he expects to be less accurate than other beliefs he could have adopted instead.

SLIDE 20

Accuracy-first epistemology

  • Accuracy-first epistemology seeks to derive all evidential norms from norms of pragmatic rationality, together with the sole axiological claim that beliefs are better the more accurate they are.
  • It is therefore a form of epistemic consequentialism, with accuracy as the sole epistemic good.

SLIDE 22

Looking forward

  • Existing Accuracy-first approaches to rational learning presuppose substantive evidential norms, and so fail to elucidate the connection between rationality and truth.
  • Alternative approaches are needed.
  • I have one to offer.

SLIDE 25

Table of contents

  • 1. Bayesianism
  • 2. Epistemic Value
  • 3. Conditionalization & Accuracy
  • 4. Epistemic Value Change
  • 5. In Summation

SLIDE 26

Bayesianism

SLIDE 27

Bayesian Rationality

  • The Bayesian has a diagnosis of what’s gone wrong with Daniel.
  • Either Daniel’s opinions are not probabilistically coherent, or Daniel is not a conditionalizer.

SLIDE 29

Credal States

  • At any time t, your opinions are representable with a credal state ⟨W, 𝒜, ct⟩:
  • W = {w1, w2, . . . , wN} is a finite set of doxastically possible worlds;
  • A ⊆ W is a proposition;
  • 𝒜 ⊆ ℘(W) is the set of propositions about which you are opinionated;
  • ct : 𝒜 → [0, 1] is your time-t credence function.

SLIDE 35

Bayesianism

Probabilism

SLIDE 36

Probabilism

Probabilism: at all times t, ct should be a probability function.
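As a sketch of what the norm demands, a credal state over a finite W can be represented directly and the probability axioms checked on the full algebra ℘(W). The worlds, masses, and helper names below are illustrative assumptions, not from the talk:

```python
from itertools import chain, combinations

# Illustrative credal state: three worlds and a mass function on them.
worlds = ["w1", "w2", "w3"]
mass = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

def credence(A):
    """Credence in a proposition A, modeled as a set of worlds."""
    return sum(mass[w] for w in A)

def powerset(ws):
    """All propositions over ws (the full algebra)."""
    return [set(s) for s in
            chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))]

def is_probability(cred, ws, tol=1e-9):
    """Check normalization, non-negativity, and finite additivity."""
    algebra = powerset(ws)
    if abs(cred(set(ws)) - 1.0) > tol:          # normalization
        return False
    for A in algebra:
        if cred(A) < -tol:                      # non-negativity
            return False
        for B in algebra:
            if not (A & B):                     # additivity over disjoint A, B
                if abs(cred(A | B) - (cred(A) + cred(B))) > tol:
                    return False
    return True
```

Because the credences here are induced from a mass function on worlds, `is_probability(credence, worlds)` holds; a credence function assigned freely to propositions need not pass the check.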

SLIDE 37

Bayesianism

Conditionalization

SLIDE 38

Conditionalization

  • Use ‘ct,E’ for the credence function you are disposed to adopt, at t, upon receiving the total evidence E.

Conditionalization: there should be some credence function c such that, for all times t and all A, E ∈ 𝒜 such that E could be your total time-t evidence, ct,E(A) = c(A | E).
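At the level of worlds, the update the norm demands can be sketched as follows (the worlds and numbers are hypothetical):

```python
def conditionalize(prior, E):
    """Return the posterior p(. | E) from a world-level prior.

    prior: dict mapping worlds to probabilities summing to 1.
    E: set of worlds compatible with the total evidence.
    """
    p_E = sum(pr for w, pr in prior.items() if w in E)
    if p_E == 0:
        raise ValueError("cannot conditionalize on zero-probability evidence")
    return {w: (pr / p_E if w in E else 0.0) for w, pr in prior.items()}

# Hypothetical numbers for illustration.
prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
posterior = conditionalize(prior, {"w1", "w2"})
```

Worlds outside E get credence 0, and the credences inside E are rescaled by p(E), so here the posterior is {'w1': 0.4, 'w2': 0.6, 'w3': 0.0}.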

SLIDE 41

Bayesianism

  • The Bayesian account of rational learning: you should be a probabilistic conditionalizer.

SLIDE 42

Daniel is not a probabilistic conditionalizer

  • 1. c_{a Dem. φ-ed}(φ-ing is wrong) is low.
  • 2. c_{a Rep. φ-ed}(φ-ing is wrong) is high.

If Daniel were a conditionalizer, then c(φ-ing is wrong | a Dem. φ-ed) would be low and c(φ-ing is wrong | a Rep. φ-ed) would be high. But Daniel thinks that whether φ-ing is wrong is independent of whether a Dem. or a Rep. φ-ed. So c(φ-ing is wrong) would be both low and high. So Daniel isn’t probabilistic.
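The clash can be checked numerically: under any prior on which wrongness is independent of party, as Daniel professes, conditionalizing yields the same verdict for Democratic and Republican scandals, so his asymmetric dispositions cannot come from conditionalizing such a prior. A toy sketch with made-up numbers:

```python
# Wrongness independent of party, as Daniel professes (numbers made up).
p_dem = 0.5      # prior that a Democrat phi-ed
p_wrong = 0.3    # prior that phi-ing is wrong, whatever the party

joint = {
    ("dem", "wrong"):       p_dem * p_wrong,
    ("dem", "permissible"): p_dem * (1 - p_wrong),
    ("rep", "wrong"):       (1 - p_dem) * p_wrong,
    ("rep", "permissible"): (1 - p_dem) * (1 - p_wrong),
}

def cond_wrong(party):
    """c(phi-ing is wrong | party phi-ed) under the independent joint prior."""
    p_party = joint[(party, "wrong")] + joint[(party, "permissible")]
    return joint[(party, "wrong")] / p_party

# Conditionalizing such a prior treats both scandals alike:
# cond_wrong("dem") == cond_wrong("rep") == p_wrong.
```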

SLIDE 47

Daniel is not a probabilistic conditionalizer

  • The accuracy-firster likes this diagnosis of Daniel’s irrationality.
  • They wish to show that Probabilism and Conditionalization follow from:
  • the axiological claim that accuracy is the sole epistemic good;
  • a claim about how to properly value accuracy; and
  • the consequentialist deontic norm that it is rational to maximize expected epistemic value.

SLIDE 52

Epistemic Value

SLIDE 53

Epistemic Value

  • Write the epistemic value of a credence function c, under the supposition that w is actual, as: V(c, w).
  • For the accuracy-firster, V(c, w) is entirely a function of the accuracy of c in w.
  • E.g., one accuracy measure is the quadratic or ‘Brier’ measure, Q:

Q(c, w) =def −∑_{A∈𝒜} (νA(w) − c(A))²

where νA(w) is 1 if A is true at w and 0 otherwise.
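The Brier measure translates directly into code, with propositions modeled as frozensets of worlds and νA(w) as the indicator of w ∈ A (the example numbers are illustrative):

```python
def brier(credences, world):
    """Q(c, w): negative sum of squared distances between each credence
    and the truth value nu_A(w) of its proposition at the world."""
    return -sum((float(world in A) - c) ** 2 for A, c in credences.items())

# Illustrative example over two singleton propositions.
c = {frozenset({"w1"}): 0.8, frozenset({"w2"}): 0.2}
# At w1 the truth values are (1, 0): Q = -((1-0.8)**2 + (0-0.2)**2) = -0.08.
```

Higher values (closer to 0) mean greater accuracy: c does much better at w1, where its high credence is true, than at w2.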

SLIDE 60

Other Accuracy Measures

  • The Euclidean distance measure:

E(c, w) =def −√( ∑_{A∈𝒜} (νA(w) − c(A))² )

  • The Absolute Value measure:

A(c, w) =def −∑_{A∈𝒜} | νA(w) − c(A) |

  • The Logarithmic measure:

L(c, w) =def ∑_{A∈𝒜} ln [ | (1 − νA(w)) − c(A) | ]
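Sketches of the three measures. I take the distance-based measures to be negated distances, matching the Brier measure's orientation (higher is better); that sign convention is my reconstruction. The logarithmic measure reduces to ln c(A) when A is true at w and ln(1 − c(A)) when false:

```python
import math

def truth(A, w):
    """nu_A(w): 1.0 if A is true at world w, else 0.0."""
    return float(w in A)

def euclidean(cred, w):
    # Negative Euclidean distance from the truth values
    # (sign reconstructed to match the Brier measure).
    return -math.sqrt(sum((truth(A, w) - c) ** 2 for A, c in cred.items()))

def absolute(cred, w):
    # Negative city-block distance (sign again assumed).
    return -sum(abs(truth(A, w) - c) for A, c in cred.items())

def logarithmic(cred, w):
    # ln|(1 - nu_A(w)) - c(A)|: equals ln c(A) when A is true at w,
    # ln(1 - c(A)) when A is false at w.
    return sum(math.log(abs(1 - truth(A, w) - c)) for A, c in cred.items())
```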

SLIDE 63

Evaluating credence functions

  • Use ‘Vc(c∗)’ to represent how valuable the credence function c∗ is, according to the credence function c.
  • Leitgeb & Pettigrew: if your credence function is a probability, p, then, for all c, Vp(c) should be p’s expectation of c’s epistemic value:

Vp(c) =def ∑_{w∈W} V(c, w) · p(w)

  • This is a general decision-theoretic norm: epistemic acts are choiceworthy to the degree that they maximize expected value.
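The expectation Vp(c) = ∑_w V(c, w) · p(w) in code, with the Brier measure standing in for V (the worlds and numbers are illustrative):

```python
def brier(cred, w):
    """Negative Brier score; propositions are frozensets of worlds."""
    return -sum((float(w in A) - c) ** 2 for A, c in cred.items())

def expected_value(p, cred, V):
    """V_p(cred): p's expectation of cred's epistemic value."""
    return sum(V(cred, w) * pw for w, pw in p.items())

# Illustrative numbers: p is a world-level probability; cred matches it.
p = {"w1": 0.7, "w2": 0.3}
cred = {frozenset({"w1"}): 0.7, frozenset({"w2"}): 0.3}
```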

SLIDE 69

Valuing Accuracy Properly

Propriety: the epistemic value function V is proper iff, for every probability p and every credence function c ≠ p, Vp(c) < Vp(p).

  • Q is proper.
  • L is proper.
  • A is not proper.
  • E is not proper.
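Propriety can be probed numerically: over a grid of credence functions for two exclusive, exhaustive propositions, p's expected Brier value peaks at p itself, while the absolute-value measure pushes the maximizer to an extreme. A sketch (p and the grid resolution are arbitrary choices; signs on the measures match the Brier orientation):

```python
def truth(A, w):
    return float(w in A)

def vp(p, cred, V):
    """p's expectation of cred's epistemic value."""
    return sum(V(cred, w) * pw for w, pw in p.items())

def brier(cred, w):
    return -sum((truth(A, w) - c) ** 2 for A, c in cred.items())

def absolute(cred, w):
    return -sum(abs(truth(A, w) - c) for A, c in cred.items())

p = {"w1": 0.7, "w2": 0.3}
A1, A2 = frozenset({"w1"}), frozenset({"w2"})

# Candidate credence functions c(A1) = x, c(A2) = 1 - x on a grid.
grid = [i / 100 for i in range(101)]

def best(V):
    """The grid credence x maximizing p's expectation of V."""
    return max(grid, key=lambda x: vp(p, {A1: x, A2: 1 - x}, V))

# Brier is proper: p expects itself to do best (x = 0.7).
# The absolute-value measure is improper: it rewards going extreme (x = 1.0).
```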

SLIDE 73

Valuing Accuracy Properly

Propriety Tie epistemic value function V is proper ifg, for every probability p and every credence function c p, Vp(c) < Vp(p)

  • Q is proper ,
  • L is proper ,
  • A is not proper /
  • E is not proper /

17

slide-74
SLIDE 74

Why Propriety? Epistemic Conservativism

  • P1. For any probability p, there is some evidence you could have that would make it permissible to have p as your credence function.
  • P2. If another credence function c is at least as valuable as your own, then it is permissible to adopt c as your credence function, even without receiving any evidence.
  • P3. It is impermissible to change your credences without receiving evidence.
  • C1. So, epistemic value must be proper.

SLIDE 79

Why Propriety? Immodesty

  • P1. For any probability p, there is some evidence you could have that would make it permissible to have p as your credence function.
  • P4. Rationality requires you to think that your own credences are epistemically better than any other credences you could have held instead.
  • C1. So, epistemic value must be proper.

SLIDE 83

Propriety & Probabilism

  • If V is a proper measure of accuracy, then every non-probabilistic credence function is accuracy dominated by a probabilistic credence function, and no probabilistic credence function is so dominated (Predd et al., 2009).
  • Assuming that being accuracy dominated is irrational and that V is a proper measure of accuracy, Probabilism follows.
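A toy instance of the dominance phenomenon: an incoherent credence function over {w1} and {w2} (credences summing to 1.2) is strictly Brier-dominated at every world by a coherent one. The particular numbers are illustrative:

```python
def brier(cred, w):
    """Negative Brier score; propositions are frozensets of worlds."""
    return -sum((float(w in A) - c) ** 2 for A, c in cred.items())

A1, A2 = frozenset({"w1"}), frozenset({"w2"})

# Non-probabilistic: credences in two exclusive, exhaustive
# propositions sum to 1.2.
incoherent = {A1: 0.6, A2: 0.6}
# A probabilistic alternative.
coherent = {A1: 0.5, A2: 0.5}

# The coherent function is strictly more accurate at EVERY world.
dominates = all(brier(coherent, w) > brier(incoherent, w)
                for w in ["w1", "w2"])
```

Whichever world is actual, the incoherent function scores −0.52 and the coherent one −0.5, so moving to coherence is an accuracy free lunch.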

SLIDE 85

Conditionalization & Accuracy

SLIDE 86

Conditionalization & Accuracy

Take 1

SLIDE 87

Propriety & Conditionalization

  • Leitgeb & Pettigrew (2010): Upon learning that E, you should be disposed to adopt a new credence function which maximizes your expected epistemic value in all possibilities consistent with E:

pE =def arg max_c { ∑_{w∈E} p(w) · V(c, w) }

SLIDE 89

Propriety & Conditionalization

pE =def arg max_c { ∑_{w∈E} p(w) · V(c, w) }

Theorem 1 (generalized from Leitgeb & Pettigrew, 2010): If V is a proper accuracy measure, then, for any probability p and any proposition E,

arg max_c { ∑_{w∈E} p(w) · V(c, w) } = p(− | E)

So if V is a proper accuracy measure, then pE = p(− | E).
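Theorem 1 can be spot-checked by brute force: maximizing ∑_{w∈E} p(w) · V(c, w) over a grid of world-level credence functions, with the proper Brier measure as V, recovers p(− | E). The prior and evidence below are illustrative, chosen so that p(w1 | E) = 0.4 and p(w2 | E) = 0.6:

```python
def brier(cred, w):
    """Negative Brier score; propositions are frozensets of worlds."""
    return -sum((float(w in A) - c) ** 2 for A, c in cred.items())

p = {"w1": 0.2, "w2": 0.3, "w3": 0.5}   # prior (illustrative numbers)
E = {"w1", "w2"}                         # the learned evidence

def objective(cred):
    """Sum over E-worlds of p(w) * V(cred, w), with V = Brier."""
    return sum(p[w] * brier(cred, w) for w in E)

# Grid search over probabilistic credences (x1, x2, 1 - x1 - x2).
# (For a proper V the optimum is probabilistic anyway, per Theorem 1.)
n = 100
best, best_val = None, float("-inf")
for i in range(n + 1):
    for j in range(n + 1 - i):
        x1, x2 = i / n, j / n
        cred = {frozenset({"w1"}): x1, frozenset({"w2"}): x2,
                frozenset({"w3"}): 1 - x1 - x2}
        val = objective(cred)
        if val > best_val:
            best, best_val = (x1, x2), val
# best recovers (p(w1 | E), p(w2 | E)) = (0.4, 0.6).
```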

SLIDE 92

Why only E-possibilities?

pE =def arg max_c { ∑_{w∈E} p(w) · V(c, w) }

  • We should attempt to maximize expected epistemic value, but this quantity sums only over the worlds in E, so it is not an expectation; why should it be maximized?

SLIDE 94

Why only E-possibilities?

pE =def arg max_c { ∑_{w∈E} p(w) · V(c, w) }

  • A 2-stage theory of rational learning:
  • Stage 1: upon learning E, you eliminate worlds incompatible with E from W;
  • Stage 2: use your prior (no longer probabilistic) credences to pick a posterior which maximizes expected epistemic value in the remaining worlds.

24


SLIDE 99

Accuracy-first?

  • Why eliminate worlds at stage 1?
      • Because they are incompatible with your evidence.
  • This answer relies upon a norm like “do not treat a world as epistemically possible if it is incompatible with your evidence.”
      • This is a distinctively evidential norm.
      • It has not been justified in terms of the rational pursuit of accuracy alone.
  • Moreover, no such justification is possible, if we assume that accuracy is properly measured.


SLIDE 105

Accuracy-first?

  • Suppose V is proper, your prior is p, and c_E is any credence function which assigns credence zero to worlds incompatible with E.
  • Then,

      V_p(c_E) < V_p(p)

  • If you care about accuracy and accuracy alone, evaluate credal states by their expected accuracy, and measure accuracy properly, then you will never learn from experience.
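The inequality V_p(c_E) < V_p(p) is easy to see numerically. A minimal sketch with my own toy numbers, using the proper Brier score:

```python
# Under a proper score, the prior p expects itself to be strictly more
# accurate than any credence function c_E that is certain of E.
# Toy numbers (not from the talk).
worlds = [0, 1, 2, 3]
p = [0.3, 0.2, 0.4, 0.1]
E = {2, 3}

def brier(c, w):
    return -sum((c[j] - (1 if j == w else 0)) ** 2 for j in worlds)

def V_p(c):
    # Expected accuracy by the prior's lights, over *all* worlds.
    return sum(p[w] * brier(c, w) for w in worlds)

p_E = sum(p[w] for w in E)
c_E = [p[w] / p_E if w in E else 0.0 for w in worlds]   # p(- | E)

assert V_p(c_E) < V_p(p)   # by its own lights, the prior prefers staying put
```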


SLIDE 108

Accuracy-first?

  • Why eliminate worlds at stage 1?
      • This is just a brute psychological fact; it is not rationally evaluable.
  • To say this is to deny that it’s irrational to become certain that climate change is a hoax perpetrated by the Chinese after a snowfall.


SLIDE 111

Conditionalization & Accuracy: Take 2

SLIDE 112

Meeting Evidential Constraints

  • Leitgeb & Pettigrew: upon learning that E, you should be disposed to adopt a new credence function which maximizes your expected epistemic value amongst those credence functions consistent with your evidence.

      p_E = arg max_{c : c(E)=1, c(¬E)=0} { ∑_{w∈W} p(w) · V(c, w) }

Theorem 2 (Leitgeb & Pettigrew, 2010): If V = Q, then the solution to the maximization problem above is:

      p(A || E) = p(AE) + (||AE|| / ||E||) · [1 − p(E)]


SLIDE 115

Meeting Evidential Constraints

            L      R                     L      R
      G     3%     1%        →     G     0%     38.5%
     ¬G     72%    24%            ¬G     0%     61.5%
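The transition in the table can be reproduced from Theorem 2, reading ||X|| as the number of worlds in X (an assumption on my part, but it matches the figures above). A sketch:

```python
# Reproducing the table via the Leitgeb-Pettigrew quadratic solution
#   p(A || E) = p(AE) + (|AE| / |E|) * (1 - p(E)),
# where |X| is (I assume) the number of worlds in X.
# Worlds are (row, column) pairs; the evidence E is the R column.
prior = {('G', 'L'): 0.03, ('G', 'R'): 0.01,
         ('notG', 'L'): 0.72, ('notG', 'R'): 0.24}
E = [w for w in prior if w[1] == 'R']
p_E = sum(prior[w] for w in E)

def lp_posterior(w):
    # For a single world w in E: A = {w}, so |AE| = 1.
    return prior[w] + (1 / len(E)) * (1 - p_E)

posterior = {w: (lp_posterior(w) if w in E else 0.0) for w in prior}
assert abs(posterior[('G', 'R')] - 0.385) < 1e-9      # 38.5%
assert abs(posterior[('notG', 'R')] - 0.615) < 1e-9   # 61.5%
```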


SLIDE 118

Meeting Evidential Constraints

  • Levinstein (2012): We should favor the evidential constraint approach, but we should not use the quadratic Q. Instead, we should use the logarithmic

      L′(c, w) ≝ ln[c(w)]

  • Then, it turns out that

      arg max_{c : c(E)=1, c(¬E)=0} { ∑_{w∈W} L′(c, w) · p(w) } = p(− | E)
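Levinstein's claim can be sketched numerically (my toy numbers). Worlds outside E contribute the same, infinitely bad, log-score term to every candidate that is certain of E, so the comparison reduces to the E-worlds; on those, the expected log score is maximized by p(− | E).

```python
import math

# With L'(c, w) = ln c(w) and the constraint c(E) = 1, c(notE) = 0,
# maximizing expected score reduces to maximizing
#   sum_{w in E} p(w) * ln c(w)  subject to  sum_{w in E} c(w) = 1
# (the -infinity contributions from worlds outside E are the same for
# every candidate); the maximizer is p(- | E).
p = [0.3, 0.2, 0.4, 0.1]
E = [2, 3]
p_E = sum(p[w] for w in E)
cond = {w: p[w] / p_E for w in E}   # p(- | E), i.e. {2: 0.8, 3: 0.2}

def restricted_objective(c):
    return sum(p[w] * math.log(c[w]) for w in E)

# A sweep over rival credence splits on E never beats conditionalization.
for x in (i / 100 for i in range(1, 100)):
    rival = {2: x, 3: 1 - x}
    assert restricted_objective(cond) >= restricted_objective(rival)
```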


SLIDE 120

Meeting Evidential Constraints

  • All probability functions are (at least weakly) L′-dominated by non-probability functions.
  • Consider ‘the credulous function’ c†, which gives credence 1 to every world. At every world, this function gets an L′-value of 0, which is as high as L′-value goes.

      ∀w: L′(c†, w) = ln[1] = 0
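The dominance is immediate to verify; a minimal sketch with a toy probability function of my own:

```python
import math

# The credulous function c_dagger gives every world credence 1, so its
# log score ln c(w) is ln 1 = 0 everywhere -- the best L' can do. Any
# probability function over several worlds gives some world credence
# below 1 and so does strictly worse there.
p = [0.3, 0.2, 0.4, 0.1]   # a probability function (toy numbers)
c_dagger = [1.0] * 4       # not a probability function

assert all(math.log(c) == 0.0 for c in c_dagger)   # maximal everywhere
assert all(math.log(q) < 0.0 for q in p)           # strictly worse here
```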


SLIDE 122

Meeting Evidential Constraints

  • A logarithmic epistemic value function which is proper is this:

      L(c, w_i) ≝ ∑_{w_j∈W} ln[ | (1 − δ_ij) − c(w_j) | ]

  • But this epistemic value function no longer vindicates Conditionalization. And what it does vindicate is not epistemically defensible.
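That this L is proper can be checked directly: since L(c, w_i) = ln c(w_i) + ∑_{j≠i} ln(1 − c(w_j)), its p-expectation decomposes as ∑_j [ p_j ln c_j + (1 − p_j) ln(1 − c_j) ], and each summand peaks at c_j = p_j. A numerical sketch with a toy prior and a few rival credence functions:

```python
import math

# L(c, w_i) = ln c(w_i) + sum_{j != i} ln(1 - c(w_j)); its expectation
# under p is maximized at c = p, i.e. L is proper. Toy numbers below.
p = [0.3, 0.2, 0.4, 0.1]
worlds = range(len(p))

def L(c, i):
    return sum(math.log(c[j]) if j == i else math.log(1 - c[j])
               for j in worlds)

def expected_L(c):
    return sum(p[i] * L(c, i) for i in worlds)

for rival in ([0.25, 0.25, 0.25, 0.25],
              [0.1, 0.3, 0.5, 0.1],
              [0.4, 0.2, 0.3, 0.1]):
    assert expected_L(p) >= expected_L(rival)   # p never loses by its own lights
```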


SLIDE 124

Meeting Evidential Constraints

            L      R                     L      R
      G     3%     1%        →     G     0%     ≈ 59%
     ¬G     72%    24%            ¬G     0%     ≈ 41%


SLIDE 127

Epistemic Value Change

SLIDE 128

What went wrong?

  • Leitgeb & Pettigrew give a model of rational belief with three components:
      • a credal state;
      • an epistemic value function; and
      • a dynamical law: rational credences travel in the direction of highest expected accuracy.
  • If the epistemic value function is proper, then this model will always be in equilibrium.
  • So, if there is to be a rational change of belief, then there must be an exogenous change to one of these three components.


SLIDE 134

Exogenous Change to Credal State?

  • Either the change to the credal state is rationally evaluable or it is not.
  • If it is not, then we take on counterintuitive consequences:
      • It is not irrational to become certain that climate change is a hoax perpetrated by the Chinese upon seeing snow.
  • If it is, then the accuracy-first project has failed:
      • There are norms governing changes in credal states which are not and cannot be justified in terms of the single-minded pursuit of accuracy.


SLIDE 139

Exogenous Change to the Dynamics?

  • For instance: while, most of the time, rational believers attempt to maximize the accuracy of their beliefs, sometimes they attempt to meet the constraints placed upon them by their evidence.
  • To say this is to abandon the accuracy-first project of accounting for all evidential norms in terms of the rational pursuit of accuracy.
  • Moreover, the existing implementations of this idea lead to epistemically indefensible recommendations.


SLIDE 142

Exogenous Change to the Epistemic Value Function?

  • In general, an expected accuracy maximizer will not value accuracy at all worlds equally.
      • The accuracy of c at world w, V(c, w), is weighted by your credence that w is actual, p(w).
  • After a learning experience, you come to value accuracy at worlds differently.
      • You will now weight the accuracy of c at world w, V(c, w), by your updated credence that w is actual, p′(w).
  • On the standard way of thinking about things, this change in the degree to which you value accuracy at various worlds is the result of rational learning.


SLIDE 148

Exogenous Change to the Epistemic Value Function?

  • My proposal is to reverse the order of explanation.
  • You don’t rationally stop valuing accuracy at ¬E-possibilities because it is rational for you to become certain of E.
  • Rather, it is rational for you to become certain of E because it is rational for you to stop valuing accuracy at ¬E-possibilities.


SLIDE 151

Experience and Value Change

  • In general, experience can rationalize shifts in value.
      • E.g., your aesthetic values and moral values may rationally change in response to the right kinds of experiences.
  • The proposal is that, just so, a learning experience may rationalize a shift in epistemic value.
      • E.g., an experience of my hand can rationalize not valuing accuracy at worlds where I have no hand.


SLIDE 155

Epistemic Value Change

Conditionalization

SLIDE 156

Rational Value Change and Conditionalization

  • Suppose that learning E rationalizes not caring at all about accuracy at worlds w ∉ E.
  • If ‘V^E’ is the epistemic value function which is rational after learning that E, then

      V^E(c, w) = { V(c, w)   if w ∈ E
                  { κ_w       if w ∉ E

      where κ_w is some constant.

  • So, at w ∉ E, you value accurate credences as much as you value inaccurate ones.
  • That is just to say: you don’t value accuracy at w ∉ E.


SLIDE 161

Rational Value Change and Conditionalization

      V^E(c, w) = { V(c, w)   if w ∈ E
                  { κ_w       if w ∉ E

  • Then,

      V^E_p(c) = ∑_{w∈W} p(w) · V^E(c, w)
               = ∑_{w∈E} p(w) · V^E(c, w) + ∑_{w∉E} p(w) · V^E(c, w)
               = ∑_{w∈E} p(w) · V(c, w) + ∑_{w∉E} p(w) · κ_w

SLIDE 168

Rational Value Change and Conditionalization

      V^E_p(c) = ∑_{w∈E} p(w) · V(c, w) + ∑_{w∉E} p(w) · κ_w

  • Then Theorem 1 assures us that, so long as V is proper, the function V^E_p will be maximized by p(− | E).
  • Note: the updated value function V^E will not be proper.
slide-169
SLIDE 169

Rational Value Change and Conditionalization

VE

p (c) = ∑ w∈E

p(w) · V(c, w) + ∑

wE

p(w) · κw

  • Tien Tieorem 1 assures us that, so long as V is proper, the

function VE

p will be maximized by p(− | E).

  • Note: the updated value function VE will not be proper.

42

SLIDE 170

Epistemic Value Change

Propriety

SLIDE 171

Why Propriety? Epistemic Conservativism

  • P1. For any probability p, there is some evidence you could have that would make it permissible to have p as your credence function.
  • P2. If another credence function c is at least as valuable as your own, then it is permissible to adopt c as your credence function, even without receiving any evidence.
  • P3. It is impermissible to change your credences without receiving evidence.
  • C1. So, epistemic value must be proper.


slide-176
SLIDE 176

Why Propriety? Immodesty

  • P1. For any probability p, there is some evidence you could have that would make it permissible to have p as your credence function.
  • P4. Rationality requires you to think that your own credences are epistemically better than any other credences you could have held instead.
  • C1. So, epistemic value must be proper.

44
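
The link between immodesty and propriety can be checked numerically. Below is a minimal sketch (not from the talk; the function names are illustrative) contrasting the quadratic (Brier) measure, which is proper, with the linear measure, which is not: under the quadratic measure an agent's own credence uniquely minimizes her expected inaccuracy, while under the linear measure an extreme credence looks better.

```python
def brier_inaccuracy(credence, truth):
    # Quadratic (Brier) inaccuracy: squared distance from the truth value
    # (1 = the proposition is true, 0 = it is false).
    return (truth - credence) ** 2

def linear_inaccuracy(credence, truth):
    # Linear (absolute-value) inaccuracy: a standard example of an
    # improper measure.
    return abs(truth - credence)

def expected_inaccuracy(my_credence, candidate, measure):
    # Expected inaccuracy of holding `candidate`, computed from the
    # standpoint of an agent whose credence in the proposition is
    # `my_credence`.
    return (my_credence * measure(candidate, 1)
            + (1 - my_credence) * measure(candidate, 0))

my_credence = 0.7
candidates = [i / 100 for i in range(101)]

# Under the proper quadratic measure, the agent's own credence minimizes
# her expected inaccuracy, vindicating P4 (immodesty).
best = min(candidates,
           key=lambda c: expected_inaccuracy(my_credence, c, brier_inaccuracy))
print(best)  # 0.7

# Under the improper linear measure, an extreme credence has lower expected
# inaccuracy than the agent's own, so she would be licensed to jump to it
# without receiving any evidence.
best_linear = min(candidates,
                  key=lambda c: expected_inaccuracy(my_credence, c, linear_inaccuracy))
print(best_linear)  # 1.0
```

The contrast is exactly what the arguments on the preceding slides turn on: only if the measure is proper does an agent's own credence come out as the expected-inaccuracy minimizer from her own standpoint.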

slide-180
SLIDE 180

Why Propriety?

  • The existing arguments for propriety are invalid.
  • They rely upon the assumption that your epistemic values may not change.
  • So, these arguments do not give us a reason to worry about V_E not being proper.
  • However, neither do they give us a reason for thinking that the ur-prior value function V should be proper.

45

slide-184
SLIDE 184

Why Ur-Propriety?

  • There are arguments for holding that, e.g., the quadratic measure is the uniquely best measure of accuracy (cf. Pettigrew, 2016).
  • These arguments are not shown to be invalid by the current proposal, and could serve its needs.

46

slide-186
SLIDE 186

In Summation

slide-187
SLIDE 187

Daniel & Melissa

Daniel is either not valuing accuracy rationally or not pursuing accuracy rationally. Melissa is valuing accuracy rationally and pursuing it rationally.

47

slide-190
SLIDE 190

Questions?

47