Unrestricted Information Acquisition. Tommaso Denti, MIT. June 7, 2016.



SLIDE 1

Unrestricted Information Acquisition

Tommaso Denti

MIT

June 7, 2016

SLIDE 2

Introduction

◮ A theory of information acquisition in games
  ◮ Endogenize assumptions on players' information
  ◮ Common extra layer of strategic interaction
  ◮ Flexible learning about state and what others know
  ◮ ≠ Bergemann & Valimaki 2002, Hellwig & Veldkamp 2009, Myatt & Wallace 2012, Yang 2015, ...
◮ Expose primitive incentives to acquire information
◮ Broad assumptions on cost of information
  ◮ Costly to learn state and what others know
  ◮ Example: Shannon mutual information
◮ Applications
  ◮ Investment games: risk-dominance selection
  ◮ Games on networks: Bonacich centrality
  ◮ Large games: endogenous informational smallness


SLIDE 4

Today

Investment games: bank runs, currency crises, ...

Exogenous information
◮ Common knowledge: multiplicity
  ◮ Diamond & Dybvig 1983, Obstfeld 1996, ...
◮ Global games: risk-dominance selection
  ◮ Carlsson & van Damme 1993, Morris & Shin 1998, ...
◮ Any perturbation: any selection
  ◮ Weinstein & Yildiz 2007

Endogenous information w/ mutual information
◮ Flexible info acquisition about state: multiplicity
  ◮ Yang 2015
◮ Unrestricted info acquisition: risk-dominance selection
  ◮ Extends to potential games

Endogenous information w/out mutual information: next talk

SLIDE 5

Outline

◮ Investment game with incomplete information
  ◮ Basic game: actions, states, utilities
  ◮ Exogenous information structure
  ◮ Recap: common knowledge, global games, ...
◮ Investment game with information acquisition
  ◮ Basic game: actions, states, utilities
  ◮ Information acquisition technology
  ◮ Flexible info acquisition about state: multiplicity
  ◮ Unrestricted info acquisition: risk-dominance selection

SLIDE 6

Basic Game

◮ N = {1, . . . , n}: finite set of players
◮ Ai = {invest, not invest}: set of i's actions
◮ Θ ⊆ R: closed set of states
◮ PΘ ∈ ∆(Θ): state distribution
◮ ρ(ā−i, θ) ∈ R: i's non-decreasing return
  ◮ ā−i: share of opponents who invest
  ◮ ρ integrable in θ w.r.t. PΘ
  ◮ PΘ({θ : ρ(1, θ) < 0}) > 0: dominance region
  ◮ PΘ({θ : ρ(0, θ) > 0}) > 0: dominance region
◮ ui(a, θ) = 1{invest}(ai) ρ(ā−i, θ): i's utility

Today (two-player example):

              invest        not invest
invest        θ, θ          θ − 1, 0
not invest    0, θ − 1      0, 0
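The two-player payoff table above is generated by the return ρ(ā−i, θ) = θ − 1 + ā−i. A minimal Python sketch of the utility function ui(a, θ) = 1{invest}(ai) ρ(ā−i, θ); the function names and this particular ρ are illustrative choices, not from the talk:

```python
# Two-player investment game: u_i = 1{invest}(a_i) * rho(share_of_opponents, theta)
INVEST, NOT = 1, 0

def rho(share_opponents_investing, theta):
    # Illustrative non-decreasing return, chosen so the 2x2 table above obtains.
    return theta - 1 + share_opponents_investing

def u(a_i, a_j, theta):
    # i's utility: the return rho is earned only when i invests.
    return rho(a_j, theta) if a_i == INVEST else 0.0

theta = 0.7
# Reproduce the table: (invest, invest) -> theta; (invest, not invest) -> theta - 1.
assert u(INVEST, INVEST, theta) == theta
assert u(INVEST, NOT, theta) == theta - 1
assert u(NOT, INVEST, theta) == 0.0
assert u(NOT, NOT, theta) == 0.0
```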

SLIDE 7

Exogenous Information Structure

◮ (Ω, F, P): underlying probability space
◮ θ : Ω → Θ: random variable with θ ∼ PΘ
◮ Xi: Polish space of i's messages
◮ xi : Ω → Xi: i's signal, a random variable

SLIDE 8

Recap

Common knowledge: multiplicity
◮ xi = θ for all i
◮ θ ∈ [0, 1]: equilibrium indeterminacy

Global games: risk-dominance selection
◮ xi = θ + λεi for all i
  ◮ λ > 0: scale factor
  ◮ εi: idiosyncratic noise
◮ λ → 0: perturbation of complete information
◮ 1/2: risk-dominance threshold

Any perturbation: any selection
◮ Weinstein & Yildiz 2007

SLIDE 9

Information Acquisition Technology

◮ (Ω, F, P): underlying probability space
◮ θ : Ω → Θ: random variable with θ ∼ PΘ
◮ Xi: Polish space of i's messages
◮ 𝒳i: nonempty set of i's signals xi : Ω → Xi
◮ Ci : ∆(X × Θ) → [0, ∞]: i's cost of information

SLIDE 10

Game with Information Acquisition

Basic game + info technology = strategic-form game:

◮ Set of players: N
◮ i's strategy: signal xi ∈ 𝒳i, contingency plan si ∈ Si
  ◮ Si: set of all measurable si : Xi → Ai
◮ i's payoff: E[ui(s(x), θ)] − λ Ci(P(x,θ))
  ◮ λ > 0: scale factor
  ◮ P(x,θ) ∈ ∆(X × Θ): joint distribution of x and θ

Solution concept: pure-strategy Nash equilibrium.
To ease notation: Ci(x, θ) = Ci(P(x,θ)).
λ → 0: multiplicity/selection of equilibria?

SLIDE 11

Flexible Info Acquisition about State

Yang 2015: for all players i,

Assumption 0. |Xi| ≥ |Ai|. Moreover, if random variable x′i : Ω → Xi is measurable w.r.t. some xi ∈ 𝒳i, then x′i ∈ 𝒳i.

Assumption 1. Take any x ∈ 𝒳. Then (xi ⊥ x−i) | θ.

Assumption 2. Take any PXi×Θ ∈ ∆(Xi × Θ). If θ ∼ margΘ(PXi×Θ), then (xi, θ) ∼ PXi×Θ for some xi ∈ 𝒳i.

Assumption 3. For all x ∈ 𝒳, Ci(x, θ) = I(xi; x−i, θ).

Mutual information: for Xi finite and p the p.m.f. of xi,

I(xi; x−i, θ) = E[ log ( p(xi | x−i, θ) / p(xi) ) ].
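For intuition, mutual information for a finite joint p.m.f. can be computed directly from the definition above. A self-contained sketch (not code from the talk):

```python
import math

def mutual_information(joint):
    # joint: dict mapping (x, y) -> probability. Returns I(x; y) in nats,
    # i.e. E[log p(x|y)/p(x)] = sum_{x,y} p(x,y) log( p(x,y) / (p(x) p(y)) ).
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly informative signal about a fair binary state: I = log 2.
perfect = {(0, 0): 0.5, (1, 1): 0.5}
assert abs(mutual_information(perfect) - math.log(2)) < 1e-12

# Uninformative signal (independent of the state): I = 0.
flat = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
assert abs(mutual_information(flat)) < 1e-12
```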

SLIDE 12

Flexible Info Acquisition about State

Under Assumption 1, (xi ⊥ x−i) | θ, so x−i drops out of the cost:

I(xi; x−i, θ) = I(xi; θ) = E[ log ( p(xi | θ) / p(xi) ) ].

That is, with conditionally independent signals, the cost of information reduces to learning about the state alone.

SLIDE 13

Revelation Principle

Direct technology: Xi = Ai for all i.

Basic game + direct technology = strategic-form game:

◮ Set of players: N
◮ i's strategy: direct signal xi ∈ 𝒳i
◮ i's payoff: E[ui(x, θ)] − λ Ci(x, θ)

Solution concept: pure-strategy Nash equilibrium.

Revelation principle (Yang 2015): "A0-A3 ⇒ w.l.o.g. technology and signals are direct."

SLIDE 14

Multiplicity

Theorem (Yang 2015)
Assume A0-A3. Let PΘ be absolutely continuous w.r.t. Lebesgue measure. Then for every θ̂ ∈ [0, 1] there exist equilibria (xλ : λ > 0) such that for all i ∈ N:

P(xi,λ = invest | θ) →a.s. 1 if θ ≥ θ̂, and →a.s. 0 if θ < θ̂.

Remark. There are also non-monotone equilibria.


SLIDE 21

Monotone Equilibria

λ > 0: [figure: monotone equilibrium cutoffs; image not recovered]

◮ i's primitive incentive: learn the event {ρ(x̄−i, θ) ≥ 0}
◮ {ρ(x̄−i, θ) ≥ 0} = {θ ≥ θ̂} since Var(x̄−i | θ) = 0

SLIDE 22

Unrestricted Info Acquisition

For all players i:

Assumption 0. |Xi| ≥ |Ai|. Moreover, if random variable x′i : Ω → Xi is measurable w.r.t. some xi ∈ 𝒳i, then x′i ∈ 𝒳i.

Assumption 3. For all x ∈ 𝒳, Ci(x, θ) = I(xi; x−i, θ).

Assumption 4. Take any x−i ∈ 𝒳−i and PX×Θ ∈ ∆(X × Θ). If (x−i, θ) ∼ margX−i×Θ(PX×Θ), then (x, θ) ∼ PX×Θ for some xi ∈ 𝒳i.

Finite-game Construction
General Construction


SLIDE 24

Unrestricted Info Acquisition

Revelation principle: "A0, A3, A4 ⇒ w.l.o.g. technology and signals are direct."


SLIDE 26

All Equilibria

λ > 0:

θ̂ = 1/2 − λ log [ P(xi = invest) / P(xi = not invest) ]
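As λ → 0 this endogenous cutoff converges to the risk-dominance threshold 1/2 for any interior investment probability. A quick numeric check (illustrative only):

```python
import math

def cutoff(lam, p_invest):
    # theta_hat = 1/2 - lam * log( P(invest) / P(not invest) ), from the slide.
    return 0.5 - lam * math.log(p_invest / (1.0 - p_invest))

# For any fixed interior P(invest), the cutoff approaches 1/2 as lam -> 0.
for p in (0.1, 0.5, 0.9):
    assert abs(cutoff(1e-9, p) - 0.5) < 1e-6

# For lam > 0 the cutoff shifts down when investing is the more likely action,
# and up when it is the less likely one.
assert cutoff(0.1, 0.9) < 0.5 < cutoff(0.1, 0.1)
```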


SLIDE 31

Risk-dominance Selection

Theorem
Assume A0, A3, and A4. Then for all equilibria (xλ : λ > 0) and all i ∈ N:

P(xi,λ = invest | θ) →a.s. 1 if θ > 1/2, and →a.s. 0 if θ < 1/2.

The theorem extends to potential games.

SLIDE 32

Proof

◮ (xλ): family of equilibria
◮ v : A × Θ → R: potential s.t. for all i, ai, and a′i,
  ui(ai, ·) − ui(a′i, ·) = v(ai, ·) − v(a′i, ·)
◮ Investment games: for all a and θ,
  v(a, θ) = Σ_{m=0}^{|a|−1} ρ(m / (n − 1), θ),
  with |a| the number of players who invest
◮ a∗ risk-dominant at θ: v(a∗, θ) > v(a, θ) for all a ≠ a∗
◮ Info acquisition w/ mutual information, λ > 0:
  ◮ Static 1-player: Csiszar 1974, Matejka & McKay 2016
  ◮ Dynamic 1-player: previous talk
  ◮ Static n-player: next slide
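The potential property can be checked numerically: with ui(a, θ) = 1{invest}(ai) ρ(ā−i, θ), any unilateral deviation changes ui and v by the same amount. A sketch for n = 3 with an arbitrary non-decreasing ρ (here ρ(s, θ) = θ − 1 + s, an illustrative choice):

```python
from itertools import product

N_PLAYERS = 3

def rho(share, theta):
    # Illustrative non-decreasing return (not pinned down by the talk).
    return theta - 1 + share

def u(i, a, theta):
    # i's utility: earns rho(share of opponents investing) only if i invests.
    if a[i] == 0:
        return 0.0
    share = sum(a[j] for j in range(N_PLAYERS) if j != i) / (N_PLAYERS - 1)
    return rho(share, theta)

def v(a, theta):
    # Potential from the slide: sum_{m=0}^{|a|-1} rho(m/(n-1), theta).
    return sum(rho(m / (N_PLAYERS - 1), theta) for m in range(sum(a)))

theta = 0.3
for a in product((0, 1), repeat=N_PLAYERS):
    for i in range(N_PLAYERS):
        b = list(a); b[i] = 1 - b[i]; b = tuple(b)
        # Potential property: u_i(a) - u_i(b) == v(a) - v(b) for every deviation.
        assert abs((u(i, a, theta) - u(i, b, theta))
                   - (v(a, theta) - v(b, theta))) < 1e-12
```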

SLIDE 33

Key Lemma

P(xλ,θ): joint distribution of xλ and θ.

1. Quality. Almost surely,

   dP(xλ,θ) / d(Pθ ×i∈N Pxi,λ) (a, θ) = exp(v(a, θ)/λ) / ∫_A exp(v(a′, θ)/λ) (×i∈N Pxi,λ)(da′).

2. Quantity. Px1,λ, . . . , Pxn,λ form an equilibrium of the potential game V:

   V(PA1, . . . , PAn) = ∫_Θ log [ ∫_A exp(v(a, θ)/λ) (×i∈N PAi)(da) ] PΘ(dθ).

Existence
◮ There is an equilibrium xλ s.t. xi,λ ∼ PAi for all i ∈ N.
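The quality condition says the equilibrium joint distribution is a Gibbs measure with log-density v(a, θ)/λ relative to the product of the marginals; as λ → 0 it concentrates on the potential maximizer, which drives the selection result. A finite-A sketch at a fixed θ (the profiles, potential values, and marginals are made up for illustration):

```python
import math

def gibbs(v_of_a, lam, marginal_weights):
    # "Quality" condition at a fixed theta: density proportional to
    # exp(v(a, theta)/lam) times the product of marginals, normalized over A.
    w = {a: math.exp(val / lam) * marginal_weights[a] for a, val in v_of_a.items()}
    z = sum(w.values())
    return {a: wa / z for a, wa in w.items()}

# Two action profiles; "all invest" has the strictly higher potential,
# so it is the risk-dominant profile at this theta.
v_theta = {"all invest": 1.0, "none invest": 0.6}
marginals = {"all invest": 0.5, "none invest": 0.5}

p_coarse = gibbs(v_theta, 1.0, marginals)
p_sharp = gibbs(v_theta, 0.01, marginals)

# At moderate lam both profiles keep mass; as lam -> 0 the Gibbs measure
# concentrates on the potential maximizer.
assert p_coarse["none invest"] > 0.01
assert p_sharp["all invest"] > 0.999
```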

SLIDE 34

Key lemma: for all θ and a ≠ a∗,

dP(xλ,θ) / d(Pθ ×i∈N Pxi,λ) (a, θ)
  = exp(v(a, θ)/λ) / ∫_A exp(v(a′, θ)/λ) (×i∈N Pxi,λ)(da′)
  ≤ exp(v(a, θ)/λ) / [ exp(v(a∗, θ)/λ) (×i∈N Pxi,λ)({a∗}) ]
  = 1 / [ exp((v(a∗, θ) − v(a, θ))/λ) (×i∈N Pxi,λ)({a∗}) ] → 0.

◮ Recall: v(a∗, θ) > v(a, θ) for all a ≠ a∗
◮ Dominance regions: lim infλ→0 Pxi,λ({a∗i}) > 0 for all i

⇒ P(xλ = a∗ | θ = θ) → 1.

Extension
Infinite N

SLIDE 35

Conclusion

Today: multiplicity/selection of equilibria in coordination games
◮ Exogenous: info players have about others' info
◮ Endogenous: info players want about others' info

Unrestricted info acquisition
◮ Broad assumptions on cost of information: A5, A6
◮ Games on networks: Bonacich centrality
◮ Large games: endogenous informational smallness

⇒ Rich yet tractable language for info acquisition in games

SLIDE 36

Appendix

SLIDE 37

Finite-game Construction

◮ Θ and X finite
◮ (Ω, F, P): nonatomic probability space
◮ 𝒳i: all measurable functions xi : Ω → Xi

Back

[Figure: Ω = [0, 1] partitioned into cells indexed by the realizations of θ, xj, and xi]

SLIDE 40

General Construction

T: uncountable set. (Ω, F, P):
◮ ∃ θ : Ω → Θ with θ ∼ PΘ
◮ ∃ zt : Ω → [0, 1], t ∈ T:
  ◮ θ and zt, t ∈ T, independent
  ◮ zt, t ∈ T, uniformly distributed

For all i ∈ N, xi ∈ 𝒳i iff there is a countable Q ⊂ T such that
◮ xi is measurable w.r.t. θ and zt, t ∈ Q.

Back

SLIDE 41

Existence

◮ Ai, Θ: Polish spaces
◮ v : A × Θ → R: measurable function s.t.
  −∞ < ∫_Θ inf_{a∈A} v(a, θ) PΘ(dθ) < ∫_Θ sup_{a∈A} v(a, θ) PΘ(dθ) < ∞
◮ V : ∆(A1) × · · · × ∆(An) → R:
  V(PA1, . . . , PAn) = ∫_Θ log [ ∫_A exp(v(a, θ)) (×i∈N PAi)(da) ] PΘ(dθ)

Fact
Function V has a maximum if:
◮ Ai is compact for all i ∈ N.
◮ v is upper semi-continuous in a.

Back

SLIDE 42

Unique Selection for Potential Games

◮ N = {1, . . . , n}: finite set of players
◮ Ai: Polish space of i's actions
◮ Θ: Polish space of states
◮ PΘ ∈ ∆(Θ): state distribution
◮ v : A × Θ → R: potential

Integrability: ∫_Θ sup_{a∈A} v(a, θ) PΘ(dθ) < ∞
SLIDE 43

Theorem
Assume A0, A3, and A4. Take any a ∈ A s.t. for every i ∈ N there is Θai ⊆ Θ with:
◮ PΘ(Θai) > 0
◮ inf_{θ∈Θai} inf_{a′i≠ai} inf_{a−i∈A−i} [ v((ai, a−i), θ) − v((a′i, a−i), θ) ] > 0

Then for all equilibria (xλ : λ > 0), almost surely,

v(a, θ) > sup_{a′≠a} v(a′, θ)  ⇒  lim_{λ→0} P(xλ = a | θ = θ) = 1.

Back

SLIDE 44

Infinite N: Multiplicity

Theorem
Assume A0, A3, and A4. Let N = {1, 2, . . .}. Let PΘ be absolutely continuous w.r.t. Lebesgue measure. Then for every θ̂ ∈ [0, 1] there exist equilibria (xλ : λ > 0) such that for all i ∈ N:

◮ (xi ⊥ x−i) | θ
◮ P(xi,λ = invest | θ) →a.s. 1 if θ ≥ θ̂, and →a.s. 0 if θ < θ̂
SLIDE 45

Proof

◮ Assume (xi ⊥ x−i) | θ for all i ∈ N.
◮ Law of large numbers: Var(x̄ | θ) = 0.
◮ Hence, Var(ρ(x̄, θ) | θ) = 0.
◮ Independent information acquisition is optimal.
◮ Multiplicity as in Yang 2015.

Back

SLIDE 46

Assumption 5

Take any i ∈ N, x−i ∈ 𝒳−i, and xi, x′i ∈ 𝒳i.
Assume that (x′i ⊥ (x−i, θ)) | xi.
Then Ci(xi, x−i, θ) ≥ Ci(x′i, x−i, θ).
Equality holds only if (xi ⊥ (x−i, θ)) | x′i.

Back

SLIDE 47

Assumption 6

Take any i ∈ N, x−i ∈ 𝒳−i, and xi, x′i ∈ 𝒳i.
Assume there is a measurable f : X−i × Θ → Z with:
◮ (xi, f(x−i, θ)) ∼ (x′i, f(x−i, θ))
◮ (x′i ⊥ (x−i, θ)) | f(x−i, θ)

Then Ci(xi, x−i, θ) ≥ Ci(x′i, x−i, θ).
Equality holds only if (xi ⊥ (x−i, θ)) | f(x−i, θ).

Back

Proof
Set z = f(x−i, θ):

I(x′i; x−i, θ) = I(x′i; z) = I(xi; z) ≤ I(xi; x−i, θ).
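The last inequality is the data-processing inequality: measured against a garbling f(x−i, θ) of the target, a signal can only carry weakly less information. A small numeric illustration (the joint distribution and the garbling f are made up for the example):

```python
import math

def mi(joint):
    # I(x; y) in nats from a finite joint p.m.f. {(x, y): prob}.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# x observes y perfectly; z = f(y) coarsens y by parity.
joint_xy = {(0, 0): 0.25, (1, 1): 0.25, (2, 2): 0.25, (3, 3): 0.25}
f = lambda y: y % 2
joint_xz = {}
for (x, y), p in joint_xy.items():
    key = (x, f(y))
    joint_xz[key] = joint_xz.get(key, 0.0) + p

# Data processing: I(x; f(y)) <= I(x; y).
assert mi(joint_xz) <= mi(joint_xy) + 1e-12
assert abs(mi(joint_xy) - math.log(4)) < 1e-12
assert abs(mi(joint_xz) - math.log(2)) < 1e-12
```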