Epistemic Game Theory, Lecture 2, ESSLLI'12, Opole. Eric Pacuit (PowerPoint PPT Presentation)



slide-1
SLIDE 1

Epistemic Game Theory

Lecture 2

ESSLLI’12, Opole

Eric Pacuit, TiLPS, Tilburg University, ai.stanford.edu/~epacuit
Olivier Roy, MCMP, LMU Munich, http://olivier.amonbofis.net

August 7, 2012

Eric Pacuit and Olivier Roy 1

slide-2
SLIDE 2

Plan for the week

  • 1. Monday: Basic Concepts.
  • 2. Tuesday: Epistemics.
    • Relating dominance reasoning with maximizing expected utility.
    • Probabilistic/graded models of beliefs, knowledge and higher-order attitudes.
    • Logical/qualitative models of beliefs, knowledge and higher-order attitudes.
  • 3. Wednesday: Fundamentals of Epistemic Game Theory.
  • 4. Thursday: Puzzles and Paradoxes.
  • 5. Friday: Extensions and New Directions.

Eric Pacuit and Olivier Roy 2


slide-8
SLIDE 8

Dominance vs MEU

The game (Ann chooses rows, Bob columns):

       L     R
  U   3,3   0,0
  D   0,0   1,1

◮ Ann's beliefs: pA ∈ ∆({L, R}) with pA(L) = 1/6.
◮ Bob's beliefs: pB ∈ ∆({U, D}) with pB(U) = 3/4.

EU(U, pA) = pA(L) · uA(U, L) + pA(R) · uA(U, R) = 1/6 · 3 + 5/6 · 0 = 0.5
EU(D, pA) = pA(L) · uA(D, L) + pA(R) · uA(D, R) = 1/6 · 0 + 5/6 · 1 ≈ 0.833
EU(L, pB) = pB(U) · uB(U, L) + pB(D) · uB(D, L) = 3/4 · 3 + 1/4 · 0 = 2.25
EU(R, pB) = pB(U) · uB(U, R) + pB(D) · uB(D, R) = 3/4 · 0 + 1/4 · 1 = 0.25

So D maximizes Ann's expected utility, and L maximizes Bob's.

Eric Pacuit and Olivier Roy 3

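The expected-utility computations on this slide can be checked with a short script (a sketch; the payoff dictionary and belief parameters simply transcribe the slide's game):

```python
# Payoffs for the coordination game above: keys are (Ann's strategy,
# Bob's strategy); values are (Ann's utility, Bob's utility).
payoffs = {
    ("U", "L"): (3, 3), ("U", "R"): (0, 0),
    ("D", "L"): (0, 0), ("D", "R"): (1, 1),
}

def eu_ann(s, p_L):
    """Ann's expected utility of s given her belief P(Bob plays L) = p_L."""
    return p_L * payoffs[(s, "L")][0] + (1 - p_L) * payoffs[(s, "R")][0]

def eu_bob(s, p_U):
    """Bob's expected utility of s given his belief P(Ann plays U) = p_U."""
    return p_U * payoffs[("U", s)][1] + (1 - p_U) * payoffs[("D", s)][1]

print(eu_ann("U", 1/6), eu_ann("D", 1/6))  # 0.5  0.833...
print(eu_bob("L", 3/4), eu_bob("R", 3/4))  # 2.25 0.25
```

As on the slide, Ann maximizes expected utility with D and Bob with L.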


slide-13
SLIDE 13

Dominance vs MEU

Comparing Dominance Reasoning and MEU

G = ⟨N, {Si}i∈N, {ui}i∈N⟩, and X ⊆ S−i (a set of strategy profiles for all players except i).

◮ For s, s′ ∈ Si, s strictly dominates s′ with respect to X provided ∀s−i ∈ X, ui(s, s−i) > ui(s′, s−i).

◮ For s, s′ ∈ Si, s weakly dominates s′ with respect to X provided ∀s−i ∈ X, ui(s, s−i) ≥ ui(s′, s−i) and ∃s−i ∈ X, ui(s, s−i) > ui(s′, s−i).

◮ For p ∈ ∆(X), s is a best response to p with respect to X provided ∀s′ ∈ Si, EU(s, p) ≥ EU(s′, p).

Eric Pacuit and Olivier Roy 4
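The three definitions translate directly into code; a minimal sketch (the function names are mine, and utilities are given as a dict keyed by (own strategy, opponent profile) pairs):

```python
def strictly_dominates(u, s, s2, X):
    """s strictly dominates s2 with respect to opponent profiles X."""
    return all(u[(s, x)] > u[(s2, x)] for x in X)

def weakly_dominates(u, s, s2, X):
    """Weak dominance: never worse on X, strictly better somewhere."""
    return (all(u[(s, x)] >= u[(s2, x)] for x in X)
            and any(u[(s, x)] > u[(s2, x)] for x in X))

def is_best_response(u, s, strategies, p):
    """s is a best response to belief p (a dict over X summing to 1)."""
    eu = lambda st: sum(q * u[(st, x)] for x, q in p.items())
    return all(eu(s) >= eu(s2) for s2 in strategies)

# Ann's utilities in the running coordination game: neither strategy
# dominates the other, and D is a best response to pA(L) = 1/6.
u_ann = {("U", "L"): 3, ("U", "R"): 0, ("D", "L"): 0, ("D", "R"): 1}
assert not strictly_dominates(u_ann, "U", "D", ["L", "R"])
assert not weakly_dominates(u_ann, "D", "U", ["L", "R"])
assert is_best_response(u_ann, "D", ["U", "D"], {"L": 1/6, "R": 5/6})
```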

slide-14
SLIDE 14

Dominance vs MEU

Strict Dominance and MEU

  • Fact. Suppose that G = ⟨N, {Si}i∈N, {ui}i∈N⟩ is a strategic game and X ⊆ S−i. A strategy si ∈ Si is strictly dominated (possibly by a mixed strategy) with respect to X iff there is no probability measure p ∈ ∆(X) such that si is a best response to p.

Eric Pacuit and Olivier Roy 5


slide-19
SLIDE 19

Dominance vs MEU

Suppose that G = ⟨N, {Si}i∈N, {ui}i∈N⟩ is a finite strategic game. Suppose that si ∈ Si is strictly dominated with respect to X:

∃s′i ∈ Si, ∀s−i ∈ X, ui(s′i, s−i) > ui(si, s−i)

Let p ∈ ∆(X) be any probability measure. Then,

∀s−i ∈ X, p(s−i) · ui(s′i, s−i) ≥ p(s−i) · ui(si, s−i)
∃s−i ∈ X, p(s−i) · ui(s′i, s−i) > p(s−i) · ui(si, s−i)

(the strict inequality holds at any s−i with p(s−i) > 0, and some such s−i exists since p is a probability measure). Hence,

Σs−i∈S−i p(s−i) · ui(s′i, s−i) > Σs−i∈S−i p(s−i) · ui(si, s−i)

So, EU(s′i, p) > EU(si, p): si is not a best response to p.

Eric Pacuit and Olivier Roy 6
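A quick numeric illustration of the direction just proved, on a hypothetical two-strategy example (the strategy names and utilities are made up): if s′ strictly dominates s, then s is not a best response to any sampled belief.

```python
import random

# "A" strictly dominates "B" against both opponent choices L and R.
u = {("A", "L"): 2, ("A", "R"): 3, ("B", "L"): 1, ("B", "R"): 2}

def eu(s, p_L):
    """Expected utility of s given probability p_L on the opponent's L."""
    return p_L * u[(s, "L")] + (1 - p_L) * u[(s, "R")]

random.seed(0)
for _ in range(1000):
    p_L = random.random()
    assert eu("A", p_L) > eu("B", p_L)  # "B" is never a best response
```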


slide-23
SLIDE 23

Dominance vs MEU

For the converse direction, we sketch the proof for two-player games where X = S−i.¹

Let G = ⟨S1, S2, u1, u2⟩ be a two-player game. (Let Ui : ∆(S1) × ∆(S2) → R be the expected utility function for player i.)

Suppose that α ∈ ∆(S1) is not a best response to any p ∈ ∆(S2):

∀p ∈ ∆(S2) ∃q ∈ ∆(S1), U1(q, p) > U1(α, p)

So we can define a function b : ∆(S2) → ∆(S1) where, for each p ∈ ∆(S2), U1(b(p), p) > U1(α, p).

¹The proof of the more general statement uses the supporting hyperplane theorem from convex analysis.

Eric Pacuit and Olivier Roy 7


slide-26
SLIDE 26

Dominance vs MEU

Consider the game G′ = ⟨S1, S2, u′1, u′2⟩ where u′1(s1, s2) = u1(s1, s2) − U1(α, s2) and u′2(s1, s2) = −u′1(s1, s2).

G′ is zero-sum, so by the minimax theorem there is a Nash equilibrium (p∗1, p∗2) of G′ such that, for all m ∈ ∆(S2),

U′1(p∗1, m) ≥ U′1(p∗1, p∗2) ≥ U′1(b(p∗2), p∗2)

We now prove that U′1(b(p∗2), p∗2) > 0:

Eric Pacuit and Olivier Roy 8


slide-34
SLIDE 34

Dominance vs MEU

U′1(b(p∗2), p∗2)
  = Σx∈S1 Σy∈S2 b(p∗2)(x) p∗2(y) u′1(x, y)
  = Σx∈S1 Σy∈S2 b(p∗2)(x) p∗2(y) [u1(x, y) − U1(α, y)]
  = Σx∈S1 Σy∈S2 b(p∗2)(x) p∗2(y) u1(x, y) − Σx∈S1 Σy∈S2 b(p∗2)(x) p∗2(y) U1(α, y)
  = U1(b(p∗2), p∗2) − Σx∈S1 Σy∈S2 b(p∗2)(x) p∗2(y) U1(α, y)
  > U1(α, p∗2) − Σx∈S1 Σy∈S2 b(p∗2)(x) p∗2(y) U1(α, y)
  = U1(α, p∗2) − Σx∈S1 b(p∗2)(x) Σy∈S2 p∗2(y) U1(α, y)
  = U1(α, p∗2) − U1(α, p∗2) · Σx∈S1 b(p∗2)(x)
  = U1(α, p∗2) − U1(α, p∗2) = 0

(The strict inequality uses U1(b(p∗2), p∗2) > U1(α, p∗2), which holds by the definition of b; the last steps use Σy∈S2 p∗2(y) U1(α, y) = U1(α, p∗2) and Σx∈S1 b(p∗2)(x) = 1.)

Eric Pacuit and Olivier Roy 9


slide-36
SLIDE 36

Dominance vs MEU

Hence, for all m ∈ ∆(S2) we have

U′1(p∗1, m) ≥ U′1(p∗1, p∗2) ≥ U′1(b(p∗2), p∗2) > 0

which implies, for all m ∈ ∆(S2), U1(p∗1, m) > U1(α, m), and so α is strictly dominated by p∗1.

Eric Pacuit and Olivier Roy 10


slide-40
SLIDE 40

Dominance vs MEU

Important Issue: Correlated Beliefs

Charles chooses the matrix (x, y, or z), Ann the row (u or d), Bob the column (l or r); payoffs are listed as (Ann, Bob, Charles):

  x:     l       r
  u    1,1,3   1,0,3
  d    0,1,0   0,0,0

  y:     l       r
  u    1,1,2   1,0,0
  d    0,1,0   1,1,2

  z:     l       r
  u    1,1,0   1,0,0
  d    0,1,3   0,0,3

◮ Note that y is not strictly dominated for Charles.

◮ It is easy to find a probability measure p ∈ ∆(SA × SB) such that y is a best response to p. Suppose that p(u, l) = p(d, r) = 1/2. Then, EU(x, p) = EU(z, p) = 1.5 while EU(y, p) = 2.

◮ However, there is no probability measure p ∈ ∆(SA × SB) such that y is a best response to p and p(u, l) = p(u) · p(l).

Eric Pacuit and Olivier Roy 11

slide-41
SLIDE 41

Dominance vs MEU

(The game from the previous slide: Charles chooses x, y, or z.)

◮ To see this, suppose that a is the probability assigned to u and b is the probability assigned to l. Then, we have:

  • The expected utility of y is 2ab + 2(1 − a)(1 − b);
  • The expected utility of x is 3ab + 3a(1 − b) = 3a(b + (1 − b)) = 3a; and
  • The expected utility of z is 3(1 − a)b + 3(1 − a)(1 − b) = 3(1 − a)(b + (1 − b)) = 3(1 − a).

Eric Pacuit and Olivier Roy 12
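These computations can be verified mechanically; a sketch (the utility table transcribes Charles's payoffs, the third coordinate of each matrix entry):

```python
# Charles's utilities: his choice selects the matrix; Ann plays u/d, Bob l/r.
u_C = {
    "x": {("u", "l"): 3, ("u", "r"): 3, ("d", "l"): 0, ("d", "r"): 0},
    "y": {("u", "l"): 2, ("u", "r"): 0, ("d", "l"): 0, ("d", "r"): 2},
    "z": {("u", "l"): 0, ("u", "r"): 0, ("d", "l"): 3, ("d", "r"): 3},
}

def eu(matrix, p):
    """Expected utility of a matrix choice under a belief p over (Ann, Bob) profiles."""
    return sum(q * u_C[matrix][profile] for profile, q in p.items())

# Under the correlated belief p(u,l) = p(d,r) = 1/2, y is the unique best response:
p_corr = {("u", "l"): 0.5, ("d", "r"): 0.5}
assert eu("y", p_corr) == 2.0 and eu("x", p_corr) == eu("z", p_corr) == 1.5

# Under independent beliefs p(u) = a, p(l) = b, y is never a best response:
# EU(y) = 2ab + 2(1-a)(1-b) stays below max(EU(x), EU(z)) = max(3a, 3(1-a)).
grid = [i / 100 for i in range(101)]
assert all(2*a*b + 2*(1-a)*(1-b) < max(3*a, 3*(1-a)) for a in grid for b in grid)
```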

slide-42
SLIDE 42

Dominance vs MEU

Weak Dominance and MEU

  • Fact. Suppose that G = ⟨N, {Si}i∈N, {ui}i∈N⟩ is a strategic game and X ⊆ S−i. A strategy si ∈ Si is weakly dominated (possibly by a mixed strategy) with respect to X iff there is no full-support probability measure p ∈ ∆>0(X) such that si is a best response to p.

Eric Pacuit and Olivier Roy 13
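The full-support condition is what does the work here. A small hypothetical example (strategy names and utilities are mine): a weakly dominated strategy can be a best response to a boundary belief, but not to any full-support one.

```python
# "B" is weakly dominated by "A": never better, strictly worse against "R".
u = {("A", "L"): 1, ("A", "R"): 1, ("B", "L"): 1, ("B", "R"): 0}

def eu(s, p):
    """Expected utility of s under belief p over the opponent's choices."""
    return sum(q * u[(s, x)] for x, q in p.items())

def is_best_response(s, p):
    return all(eu(s, p) >= eu(s2, p) for s2 in ("A", "B"))

# Against the boundary belief p(L) = 1, B ties with A, so B is a best response:
assert is_best_response("B", {"L": 1.0, "R": 0.0})
# Against any full-support belief, A does strictly better:
assert not is_best_response("B", {"L": 0.99, "R": 0.01})
```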

slide-43
SLIDE 43

Dominance vs MEU

Some preliminary remarks

Eric Pacuit and Olivier Roy 14


slide-45
SLIDE 45

Dominance vs MEU

Propositional Attitudes

◮ We will talk about so-called propositional attitudes. These are attitudes (like knowledge, beliefs, desires, intentions, etc.) that take propositions as objects.

◮ Propositions will be taken to be elements of a given algebra, e.g. measurable subsets of a state space (a σ-algebra and/or power-set algebra), or formulas in a given language (an abstract Boolean algebra).

Eric Pacuit and Olivier Roy 15


slide-47
SLIDE 47

Dominance vs MEU

All-out vs graded attitudes

◮ A propositional attitude A is all-out when, for any proposition p, the agent can only be in three states of that attitude regarding p:

  • 1. Ap: the agent "believes" that p.
  • 2. A¬p: the agent "disbelieves" that p.
  • 3. ¬Ap ∧ ¬A¬p: the agent "suspends judgment" about p.

◮ A propositional attitude A is graded when, for any proposition p, the states of that attitude that the agent can be in w.r.t. p can be compared according to their strength on a given scale, e.g.:

        P     ¬P
  A    1/8    3/8

Eric Pacuit and Olivier Roy 16

slide-48
SLIDE 48

Knowledge and beliefs in games

Hard and Soft Attitudes

◮ Hard attitudes:

  • Truthful.
  • Unrevisable.
  • Fully introspective.

◮ Soft attitudes:

  • Can be false / mistaken.
  • Revisable / can be reversed.
  • Not fully introspective.

Eric Pacuit and Olivier Roy 17

slide-49
SLIDE 49

Knowledge and beliefs in games

Models of graded beliefs

Eric Pacuit and Olivier Roy 18


slide-56
SLIDE 56

Models of graded beliefs

Harsanyi Type Space

Based on the work of John Harsanyi on games with incomplete information, game theorists have developed an elegant formalism that makes precise talk about beliefs, knowledge and rationality:

◮ A type is everything a player knows privately at the beginning of the game which could affect his beliefs about payoffs and about all other players' possible types.

◮ Each type is assigned a joint probability over the space of types and actions:

λi : Ti → ∆(T−i × S−i)

where Ti is the set of player i's types, ∆(·) is the set of all probability distributions over a set, T−i is the other players' types, and S−i is the other players' choices.

Eric Pacuit and Olivier Roy 19


slide-65
SLIDE 65

Models of graded beliefs

Returning to the Example: A Game Model

The game (Ann chooses rows, Bob columns):

       H     M
  H   3,3   0,0
  M   0,0   1,1

◮ One type for Ann (tA) and two types for Bob (tB, uB).
◮ A state is a tuple of choices and types, e.g. (M, tA, M, uB).
◮ Calculate expected utility in the usual way.

Type beliefs over (opponent choice, opponent type) pairs (blank cells are 0):

  λA(tA):   H     M       λB(tB):   H     M       λB(uB):   H     M
     tB          0.5         tA           1          tA    0.4   0.6
     uB    0.2   0.3

◮ M is rational for Ann (tA): 0 · 0.2 + 1 · 0.8 ≥ 3 · 0.2 + 0 · 0.8.
◮ M is rational for Bob (tB): 0 · 0 + 1 · 1 ≥ 3 · 0 + 0 · 1.
◮ Ann thinks Bob may be irrational: PA(Irrat[B]) = 0.3, PA(Rat[B]) = 0.7.

Eric Pacuit and Olivier Roy 20
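The model above can be encoded directly; a sketch (the belief tables follow my reading of the slide's layout, with blank cells as 0):

```python
# 2x2 coordination game; entries are (Ann's utility, Bob's utility).
payoff = {("H", "H"): (3, 3), ("H", "M"): (0, 0),
          ("M", "H"): (0, 0), ("M", "M"): (1, 1)}

# Each type's belief over (opponent strategy, opponent type) pairs.
belief = {
    "tA": {("M", "tB"): 0.5, ("H", "uB"): 0.2, ("M", "uB"): 0.3},
    "tB": {("M", "tA"): 1.0},
    "uB": {("H", "tA"): 0.4, ("M", "tA"): 0.6},
}

def eu(t, s, me):
    """Expected utility of strategy s for the player (0 = Ann, 1 = Bob) of type t."""
    return sum(q * payoff[(s, s2) if me == 0 else (s2, s)][me]
               for (s2, _), q in belief[t].items())

assert eu("tA", "M", 0) >= eu("tA", "H", 0)  # M rational for Ann's tA (0.8 vs 0.6)
assert eu("tB", "M", 1) >= eu("tB", "H", 1)  # M rational for Bob's tB (1.0 vs 0.0)
assert eu("uB", "H", 1) > eu("uB", "M", 1)   # but M is irrational for Bob's uB

# Ann's probability that Bob chooses rationally: mass on (M,tB) and (H,uB).
p_rat = belief["tA"][("M", "tB")] + belief["tA"][("H", "uB")]
print(p_rat)  # 0.7
```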


slide-68
SLIDE 68

Models of graded beliefs

Rationality

Let G = ⟨N, {Si}i∈N, {ui}i∈N⟩ be a strategic game and T = ⟨{Ti}i∈N, {λi}i∈N, S⟩ a type space for G. For each ti ∈ Ti, we can define a probability measure pti ∈ ∆(S−i):

pti(s−i) = Σt−i∈T−i λi(ti)(s−i, t−i)

The set of states (pairs of strategy profiles and type profiles) where player i chooses rationally is:

Rati := {(si, ti) | si is a best response to pti}

The event that all players are rational is Rat = {(s, t) | for all i, (si, ti) ∈ Rati}.

Eric Pacuit and Olivier Roy 21

slide-69
SLIDE 69

Models of graded beliefs

Common “knowledge” of rationality

In much of this literature, “full belief” or sometimes “knowledge” is identified with probability 1. (This is not a philosophical commitment, but rather a term of art!)

Eric Pacuit and Olivier Roy 22


slide-74
SLIDE 74

Models of graded beliefs

Common knowledge of rationality

Define R^n_i by induction on n:

◮ Let R^1_i = Rat_i.

◮ Suppose that for each i, R^n_i has been defined. Define R^n_{−i} as follows:

R^n_{−i} = {(s, t) | s ∈ S−i, t ∈ T−i, and for each j ≠ i, (sj, tj) ∈ R^n_j}.

◮ For each n ≥ 1,

R^{n+1}_i = {(si, ti) | (si, ti) ∈ R^n_i and λi(ti) assigns probability 1 to R^n_{−i}}.

Common knowledge of rationality is:

(∩n≥1 R^n_1) × (∩n≥1 R^n_2) × · · · × (∩n≥1 R^n_N)

Eric Pacuit and Olivier Roy 23
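The induction can be run on the small type space from the earlier example (the encoding reuses my reading of that slide). Since Ann's type tA assigns only probability 0.7 to Bob choosing rationally, the iteration empties out: rationality holds at some states without common belief of rationality.

```python
payoff = {("H", "H"): (3, 3), ("H", "M"): (0, 0),
          ("M", "H"): (0, 0), ("M", "M"): (1, 1)}
belief = {
    "tA": {("M", "tB"): 0.5, ("H", "uB"): 0.2, ("M", "uB"): 0.3},
    "tB": {("M", "tA"): 1.0},
    "uB": {("H", "tA"): 0.4, ("M", "tA"): 0.6},
}
types = {0: ["tA"], 1: ["tB", "uB"]}   # 0 = Ann, 1 = Bob
strategies = ["H", "M"]

def eu(t, s, me):
    return sum(q * payoff[(s, s2) if me == 0 else (s2, s)][me]
               for (s2, _), q in belief[t].items())

# R^1_i = Rat_i: (strategy, type) pairs where the strategy is a best response.
R = {i: {(s, t) for t in types[i] for s in strategies
         if all(eu(t, s, i) >= eu(t, s2, i) for s2 in strategies)}
     for i in (0, 1)}

# R^{n+1}_i keeps the pairs whose type puts probability 1 on R^n_{-i}.
for n in range(3):
    R = {i: {(s, t) for (s, t) in R[i]
             if sum(q for pair, q in belief[t].items() if pair in R[1 - i]) == 1.0}
         for i in (0, 1)}

print(R)  # both sets empty: no common belief of rationality in this model
```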

slide-75
SLIDE 75

Models of graded beliefs

The game (Ann chooses rows u/d, Bob columns l/r):

       l     r
  u   2,2   0,0
  d   0,0   1,1

◮ Consider the state (d, r, a3, b3). Both a3 and b3 correctly believe that (i.e., assign probability 1 to) the outcome is (d, r).

Type beliefs over (opponent choice, opponent type) pairs (blank cells are 0):

  λA(a1):   l     r       λA(a2):   l     r       λA(a3):   l     r
     b1    0.5   0.5         b1    0.5               b2          0.5
                             b3          0.5         b3          0.5

  λB(b1):   u     d       λB(b2):   u     d       λB(b3):   u     d
     a1    0.5               a1    0.5               a2          0.5
     a2    0.5               a3          0.5         a3          0.5

Eric Pacuit and Olivier Roy 24

slide-76
SLIDE 76

Models of graded beliefs

(Same game and type space as the previous slide.)

◮ This fact is not common knowledge: a3 assigns a 0.5 probability to Bob being of type b2, and type b2 assigns a 0.5 probability to Ann playing u. So Ann does not know that Bob knows that she is playing d.

Eric Pacuit and Olivier Roy 24

slide-77
SLIDE 77

Models of graded beliefs

(Same game and type space as the previous slide.)

◮ Furthermore, while it is true that both Ann and Bob are rational, it is not common knowledge that they are rational.

Eric Pacuit and Olivier Roy 24


slide-80
SLIDE 80

Models of graded beliefs

General Comments

◮ We have suppressed mathematical details about probabilities (σ-algebras, etc.).

◮ "Impossibility" is identified with probability 0, but the distinction between the two is important (especially for infinite games).

◮ We can model "soft" information using conditional probability systems, lexicographic probabilities, or nonstandard probabilities (more on this later).

Eric Pacuit and Olivier Roy 25
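To illustrate the last point: a lexicographic probability system keeps a sequence of measures and conditions on the first one that gives the conditioning event positive mass, so beliefs conditional on probability-0 events stay well defined. A toy Python sketch (encoding and names are ours):

```python
# A lexicographic probability system: a list of measures,
# most important first.  (Toy encoding; names are ours.)
lps = [
    {"L": 1.0, "R": 0.0},   # primary measure: certain of L
    {"L": 0.0, "R": 1.0},   # fallback, used when conditioning on {R}
]

def cond_prob(lps, outcome, event):
    """Condition on the first measure giving the event positive mass."""
    for mu in lps:
        mass = sum(mu[x] for x in event)
        if mass > 0:
            return (mu[outcome] if outcome in event else 0.0) / mass
    raise ValueError("event is null in every measure")

# Well defined even though the primary measure gives R probability 0:
print(cond_prob(lps, "R", {"R"}))   # 1.0
```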

slide-81
SLIDE 81

Models of all-out attitudes.

Eric Pacuit and Olivier Roy 26

slide-82
SLIDE 82

Models of all-out attitudes

Hard Information

Eric Pacuit and Olivier Roy 27

slide-83
SLIDE 83

Models of all-out attitudes

Example

        Bob
          l      r
Ann  u   1,-1   -1,1
     d  -1,1     1,-1

[Figure: an epistemic model with four states w1 = (u, l), w2 = (d, l), w3 = (u, r), w4 = (d, r); edges labeled A link the states Ann cannot distinguish, edges labeled B the states Bob cannot distinguish.]

Eric Pacuit and Olivier Roy 28

slide-88
SLIDE 88

Models of all-out attitudes

Epistemic Model

Suppose that G is a strategic game, S is the set of strategy profiles of G, and Ag is the set of players. An epistemic model based on S and Ag is a triple ⟨W, {Πi}i∈Ag, σ⟩, where W is a nonempty set, for each i ∈ Ag, Πi is a partition² over W, and σ : W → S is a strategy function.

²A partition of W is a pairwise-disjoint collection of subsets of W whose union is all of W. Elements of a partition Π of W are called cells, and for w ∈ W, Π(w) denotes the cell of Π containing w.

Eric Pacuit and Olivier Roy 29
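This definition can be operationalized directly: the knowledge operator maps an event E to {w : Πi(w) ⊆ E}. A minimal Python sketch (the four states follow the earlier example; the partitions, chosen so that each player knows her own strategy, and the helper names are ours):

```python
# An epistemic model: states W, a partition per player, and a
# strategy function sigma mapping states to strategy profiles.
W = {"w1", "w2", "w3", "w4"}
sigma = {"w1": ("u", "l"), "w2": ("d", "l"),
         "w3": ("u", "r"), "w4": ("d", "r")}

# Each cell groups states the player cannot tell apart; here every
# cell is constant on that player's own strategy.
partition = {
    "Ann": [{"w1", "w3"}, {"w2", "w4"}],
    "Bob": [{"w1", "w2"}, {"w3", "w4"}],
}

def cell(i, w):
    """The cell of player i's partition containing state w."""
    return next(c for c in partition[i] if w in c)

def K(i, event):
    """States where player i knows the event: cell contained in it."""
    return {w for w in W if cell(i, w) <= event}

# Ann knows she plays u exactly at the states where she plays u;
# Bob never knows it, since his cells mix u- and d-states.
U_event = {w for w in W if sigma[w][0] == "u"}
print(K("Ann", U_event))   # {'w1', 'w3'}
print(K("Bob", U_event))   # set()
```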


slide-90
SLIDE 90

Models of all-out attitudes

Game models

Game G → Strategy Space → Game Model

[Figure: a game G, its strategy space, and a game model built on top of it.]

Eric Pacuit and Olivier Roy 30

slide-95
SLIDE 95

Models of all-out attitudes

Kripke Model for S5

Let Prop be a given set of atomic propositions and Ag a set of agents. An epistemic model based on Prop and Ag is a triple ⟨W, {Πi}i∈Ag, V⟩, where W is a nonempty set, for each i ∈ Ag, Πi is a partition over W, and V : W → P(Prop) is a valuation function.

Eric Pacuit and Olivier Roy 31

slide-96
SLIDE 96

Models of all-out attitudes

Example

        Bob
          l      r
Ann  u   1,-1   -1,1
     d  -1,1     1,-1

[Figure: the four-state model w1 = (u, l), w2 = (d, l), w3 = (u, r), w4 = (d, r), with each player's indistinguishability relation drawn as labeled edges.]

Eric Pacuit and Olivier Roy 32

slide-97
SLIDE 97

Models of all-out attitudes

Example

[Figure: the four-state model w1 = (u, l), w2 = (d, l), w3 = (u, r), w4 = (d, r), with each player's indistinguishability relation drawn as labeled edges.]

◮ M, w ⊨ Kiϕ iff for all w′ ∈ πi(w), M, w′ ⊨ ϕ.

◮ One assumption, the ex-interim condition:

  • If w′ ∈ πi(w) then σ(w)i = σ(w′)i.

Eric Pacuit and Olivier Roy 33

slide-100
SLIDE 100

Models of all-out attitudes

Hard Information, Axiomatically

  • 1. Closed under known implication (K): Ki(ϕ → ψ) → (Kiϕ → Kiψ)
  • 2. Logical truths are known (NEC): if ⊨ ϕ then ⊨ Kiϕ
  • 3. Truth (T): Kiϕ → ϕ
  • 4. Positive introspection (4): Kiϕ → KiKiϕ
  • 5. Negative introspection (5): ¬Kiϕ → Ki¬Kiϕ

Eric Pacuit and Olivier Roy 34
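Under the partition semantics, T, 4 and 5 correspond to reflexivity, transitivity and euclideanness of the indistinguishability relation, and on a finite model all three can be verified by brute force over every event. A small Python sketch (the model and helper names are ours):

```python
from itertools import chain, combinations

W = ["w1", "w2", "w3", "w4"]
partition = [{"w1", "w3"}, {"w2", "w4"}]  # one player's cells

def cell(w):
    return next(c for c in partition if w in c)

def K(event):
    """States where the player knows the event."""
    return {w for w in W if cell(w) <= set(event)}

def subsets(xs):
    return chain.from_iterable(combinations(xs, r)
                               for r in range(len(xs) + 1))

# Check T (Kϕ → ϕ), 4 (Kϕ → KKϕ) and 5 (¬Kϕ → K¬Kϕ)
# semantically, for every event ϕ ⊆ W:
for phi in subsets(W):
    k = K(phi)
    assert k <= set(phi)        # T: knowledge is truthful
    assert k <= K(k)            # 4: positive introspection
    not_k = set(W) - k
    assert not_k <= K(not_k)    # 5: negative introspection
print("T, 4 and 5 hold on this partition model")
```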

slide-101
SLIDE 101

Models of all-out attitudes

Soft Information

Eric Pacuit and Olivier Roy 35

slide-102
SLIDE 102

Models of all-out attitudes

Modeling soft attitudes

[Figure: two states, w satisfying P and v satisfying ¬P.]

Ann does not know that P, but she believes that ¬P is true to degree r.

Eric Pacuit and Olivier Roy 36


slide-104
SLIDE 104

Models of all-out attitudes

Plausibility Models

Let Prop be a countable set of propositions and Ag a set of agents. A plausibility model M is a tuple ⟨W, {⪯i}i∈Ag, V⟩ where:

◮ W is a non-empty set of states.
◮ for each i ∈ Ag, ⪯i is a well-founded pre-order on W.
◮ V : W → P(Prop) is a valuation function. For all ϕ, write ||ϕ|| for {w | M, w ⊨ ϕ}.

◮ Maximal plausibility in a given set X:

  • Max⪯i(X) = {w ∈ X : for all w′ ∈ X, w′ ⪯i w}

◮ Hard information defined:

  • w ∼i w′ iff w′ ⪯i w or w ⪯i w′.
  • Let πi(w) = {w′ : w ∼i w′}. Then {πi(w) : w ∈ W} is a partition of W.

Eric Pacuit and Olivier Roy 37
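On a finite model, Max⪯i and the induced belief operator ("all maximally plausible states satisfy the proposition") can be computed directly. A Python sketch using a rank encoding, which covers the total pre-order case (the states mirror the Tweety example below; encoding and names are ours):

```python
# Plausibility as a rank function: higher rank = more plausible,
# so w' ⪯ w iff rank[w'] <= rank[w].  (Encoding is ours.)
rank = {"w1": 1, "w2": 0}   # w1 strictly more plausible than w2

def leq(v, w):              # v ⪯ w
    return rank[v] <= rank[w]

def max_plaus(X):
    """Max(X) = {w in X : for all w' in X, w' ⪯ w}."""
    return {w for w in X if all(leq(v, w) for v in X)}

def believes(event, condition):
    """Conditional belief: every maximally plausible state of the
    condition lies in the event."""
    return max_plaus(condition) <= event

# Tweety: w1 = (¬P, b, F), w2 = (P, b, ¬F); F = "Tweety flies".
F = {"w1"}
print(believes(F, {"w1", "w2"}))   # True: believes F by default
print(believes(F, {"w2"}))         # False: not conditional on P
```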

slide-108
SLIDE 108

Models of all-out attitudes

Example - Tweety is a penguin

[Figure: two states, w2 satisfying P, b, ¬F and w1 satisfying ¬P, b, F, ordered by agent 1's plausibility.]

Eric Pacuit and Olivier Roy 38

slide-109
SLIDE 109

Models of all-out attitudes

Example - Tweety is a penguin

[Figure: the two states w2 (P, b, ¬F) and w1 (¬P, b, F), now with plausibility orders for both agents 1 and 2.]

Eric Pacuit and Olivier Roy 39

slide-110
SLIDE 110

Models of all-out attitudes

Example - Tweety is a penguin

[Figure: four states w1 (¬P, b, F), w2 (P, b, ¬F), w3 (P, b, ¬F), w4 (¬P, b, F), with plausibility orders for agents 1 and 2.]

Eric Pacuit and Olivier Roy 40


slide-113
SLIDE 113

Models of all-out attitudes

Final Remarks

◮ Two broad families of models of higher-order information:

  • Probabilistic/graded.
  • Logical/qualitative.

◮ This is not meant to be a sharp distinction! (See the SEP entry.)

◮ In both, the notion of a state is crucial. A state encodes:

  • 1. The “non-epistemic facts”. Here, mostly: what the agents are playing.

  • 2. What the agents know and/or believe about 1.
  • 3. What the agents know and/or believe about 2.
  • 4. ...

◮ Tomorrow: we put all this machinery to work in the context of games.

◮ Tonight: don’t miss the evening lecture.

Eric Pacuit and Olivier Roy 41