Unrestricted Information Acquisition
Tommaso Denti
MIT
June 7, 2016
Introduction

◮ A theory of information acquisition in games
  ◮ Endogenize assumptions on players’ information
  ◮ Common extra layer of strategic interaction
◮ Flexible learning about state and what others know
  ◮ ≠ Bergemann & Valimaki 2002, Hellwig & Veldkamp
◮ Expose primitive incentives to acquire information
◮ Broad assumptions on cost of information
  ◮ Costly to learn state and what others know
  ◮ Example: Shannon mutual information
◮ Applications
  ◮ Investment games: risk-dominance selection
  ◮ Games on networks: Bonacich centrality
  ◮ Large games: endogenous informational smallness

1 / 19
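The Shannon mutual information mentioned as an example cost can be illustrated numerically. A minimal sketch (not from the talk): for a uniform binary state θ and a binary signal x that reports θ correctly with probability 1 − e, the cost I(x; θ) falls from 1 bit (perfect learning) to 0 (pure noise).

```python
import math

def mutual_information(joint):
    """I(x; theta) in bits, for a joint distribution given as {(x, th): p}."""
    px, pth = {}, {}
    for (x, th), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        pth[th] = pth.get(th, 0.0) + p
    return sum(p * math.log2(p / (px[x] * pth[th]))
               for (x, th), p in joint.items() if p > 0)

def binary_signal(e):
    """theta uniform on {0, 1}; signal equals theta with probability 1 - e."""
    return {(x, th): 0.5 * ((1 - e) if x == th else e)
            for x in (0, 1) for th in (0, 1)}

# the information cost is decreasing in the error rate e
for e in (0.0, 0.1, 0.3, 0.5):
    print(e, round(mutual_information(binary_signal(e)), 4))
```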
◮ Common knowledge: multiplicity
◮ Diamond & Dybvig 1983, Obstfeld 1996,...
◮ Global games: risk-dominance selection
◮ Carlsson & van Damme 1993, Morris & Shin 1998,...
◮ Any perturbation: any selection
◮ Weinstein & Yildiz 2007
◮ Flexible info acquisition about state: multiplicity
◮ Yang 2015
◮ Unrestricted info acquisition: risk-dominance selection
◮ Extend to potential games
2 / 19
◮ Investment game with incomplete information
  ◮ Basic game: actions, states, utilities
  ◮ Exogenous information structure
◮ Recap: common knowledge, global games,...
◮ Investment game with information acquisition
  ◮ Basic game: actions, states, utilities
  ◮ Information acquisition technology
◮ Flexible info acquisition about state: multiplicity
◮ Unrestricted info acquisition: risk-dominance selection

3 / 19
◮ N = {1, . . . , n}: finite set of players
◮ Ai = {invest, not invest}: set of i’s actions
◮ Θ ⊆ R: closed set of states
◮ PΘ ∈ ∆(Θ): state distribution
◮ ρ(ā, θ): return from investing when a fraction ā of opponents invests
  ◮ ā = (1/(n−1)) Σ_{j≠i} 1{invest}(aj): fraction of opponents who invest
  ◮ ρ integrable in θ w.r.t. PΘ
  ◮ PΘ({θ : ρ(1, θ) < 0}) > 0: dominance region (not invest)
  ◮ PΘ({θ : ρ(0, θ) > 0}) > 0: dominance region (invest)
◮ ui(a, θ) = 1{invest}(ai) ρ(ā, θ)

4 / 19
◮ (Ω, F, P): underlying probability space
◮ θ : Ω → Θ: random variable with θ ∼ PΘ
◮ Xi: Polish space of i’s messages
◮ xi : Ω → Xi: i’s signal, a random variable

5 / 19
◮ Common knowledge: xi = θ for all i
  ◮ θ ∈ [0, 1]: equilibrium indeterminacy
◮ Global games: xi = θ + λεi for all i
  ◮ λ > 0: scale factor
  ◮ εi: idiosyncratic noise
  ◮ λ → 0: perturbation of complete information
  ◮ 1/2: risk-dominance threshold
◮ Any perturbation: any selection
  ◮ Weinstein & Yildiz 2007

6 / 19
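The 1/2 threshold can be made concrete. A sketch under an assumed example payoff ρ(ā, θ) = θ + ā − 1 (this ρ is not from the talk): invest is risk-dominant exactly when ρ(1/2, θ) > 0, and bisecting on θ recovers the cutoff, which here equals 1/2.

```python
def rho(abar, theta):
    # hypothetical example payoff: investing pays theta + abar - 1,
    # so both dominance regions exist (theta < 0 and theta > 1)
    return theta + abar - 1.0

def risk_dominance_threshold(lo=0.0, hi=1.0, tol=1e-10):
    """Bisect for the theta solving rho(1/2, theta) = 0: invest is
    risk-dominant above this cutoff, not-invest below it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho(0.5, mid) > 0:   # rho increases in theta, so move down
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(risk_dominance_threshold())  # ≈ 0.5 for this example payoff
```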
◮ (Ω, F, P): underlying probability space
◮ θ : Ω → Θ: random variable with θ ∼ PΘ
◮ Xi: Polish space of i’s messages
◮ 𝒳i: nonempty set of i’s signals xi : Ω → Xi
◮ Ci : ∆(X × Θ) → [0, ∞]: i’s cost of information

7 / 19
◮ Set of players: N
◮ i’s strategy: signal xi ∈ 𝒳i, contingency plan si ∈ Si
  ◮ Si: set of all measurable si : Xi → Ai
◮ i’s payoff: E[ui(s(x), θ)] − λCi(P(x,θ))
  ◮ λ > 0: scale factor
  ◮ P(x,θ) ∈ ∆(X × Θ): joint distribution of x and θ

8 / 19
◮ If x′i : Ω → Xi is measurable w.r.t. some xi ∈ 𝒳i, then x′i ∈ 𝒳i.

9 / 19
◮ Set of players: N
◮ i’s strategy: direct signal xi ∈ 𝒳i
◮ i’s payoff: E[ui(x, θ)] − λCi(P(x,θ))

10 / 19
◮ i’s primitive incentive: learn the event {ρ(ā, θ) ≥ 0}
  ◮ {ρ(ā, θ) ≥ 0}: investing is profitable

12 / 19
◮ If x′i : Ω → Xi is measurable w.r.t. some xi ∈ 𝒳i, then x′i ∈ 𝒳i.

Finite-game Construction · General Construction

13 / 19
14 / 19
◮ As λ → 0: P(xi,λ = invest | θ) → 1 a.s. if ρ(1/2, θ) > 0
◮ As λ → 0: P(xi,λ = invest | θ) → 0 a.s. if ρ(1/2, θ) < 0

15 / 19
◮ (xλ): family of equilibria
◮ v : A × Θ → R: potential s.t. for all i, ai, and a′i
  ui(ai, a−i, θ) − ui(a′i, a−i, θ) = v(ai, a−i, θ) − v(a′i, a−i, θ)
◮ Investment games: for all a and θ
  v(a, θ) = Σ_{k=0}^{|a|−1} ρ(k/(n−1), θ)
◮ a∗ risk-dominant at θ: v(a∗, θ) > v(a, θ) for all a ≠ a∗
◮ Info acquisition w/ mutual information, λ > 0:
  ◮ Static 1-player: Csiszar 1974, Matejka & McKay 2016
  ◮ Dynamic 1-player: previous talk
  ◮ Static n-player: next slide

16 / 19
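For the investment game, the potential can be taken to be v(a, θ) = Σ_{k=0}^{|a|−1} ρ(k/(n−1), θ), with |a| the number of investors. A brute-force check of the potential property (a sketch with an arbitrary test payoff ρ, not code from the talk): every unilateral deviation changes ui and v by the same amount.

```python
from itertools import product

n = 3
rho = lambda abar, theta: theta + 2.0 * abar - 1.0   # arbitrary test payoff

def u(i, a, theta):
    # i's payoff: rho evaluated at the fraction of opponents who invest
    others = sum(a) - a[i]
    return a[i] * rho(others / (n - 1), theta)

def v(a, theta):
    # candidate potential: sum of rho at k/(n-1) for k = 0, ..., |a|-1
    return sum(rho(k / (n - 1), theta) for k in range(sum(a)))

# potential property: unilateral deviations change u_i and v equally
for theta in (-0.5, 0.3, 1.2):
    for a in product((0, 1), repeat=n):
        for i in range(n):
            b = list(a); b[i] = 1 - a[i]; b = tuple(b)
            assert abs((u(i, a, theta) - u(i, b, theta))
                       - (v(a, theta) - v(b, theta))) < 1e-12
print("potential property verified")
```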
◮ Equilibrium conditional distribution:
  P(x = a | θ) = e^{v(a,θ)/λ} (×i∈N PAi)(a) / ∫_A e^{v(a′,θ)/λ} (×i∈N PAi)(da′)
◮ Equilibrium marginals (PAi)i∈N maximize
  ∫_Θ log ∫_A e^{v(a,θ)/λ} (×i∈N PAi)(da) PΘ(dθ)

Existence

◮ There is an equilibrium xλ s.t. xi,λ ∼ PAi for all i ∈ N.

17 / 19
◮ For each a ≠ a∗:
  P(xλ = a | θ) / P(xλ = a∗ | θ) = [(×i∈N PAi)(a) / (×i∈N PAi)(a∗)] e^{(v(a,θ)−v(a∗,θ))/λ} → 0
◮ Recall: v(a∗, θ) > v(a, θ) for all a ≠ a∗
◮ Dominance regions: lim infλ→0 Pxi,λ({a∗i}) > 0 for all i

Extension: Infinite N

18 / 19
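The selection argument can be checked numerically. A sketch (assumed two-player example, not the talk's code): with fixed full-support marginals, the conditional distribution P(x = a | θ) proportional to (×i PAi)(a) e^{v(a,θ)/λ} puts vanishing mass on any a with v(a, θ) < v(a∗, θ) as λ → 0, since the odds against a∗ scale as e^{(v(a,θ)−v(a∗,θ))/λ}.

```python
import math
from itertools import product

def gibbs(v, marginals, lam):
    """Conditional action distribution P(x = a | theta) proportional to
    (product of marginals) * exp(v(a) / lam), for a fixed theta."""
    profiles = list(product(*[range(len(m)) for m in marginals]))
    w = {a: math.prod(m[ai] for m, ai in zip(marginals, a))
            * math.exp(v[a] / lam)
         for a in profiles}
    z = sum(w.values())
    return {a: w[a] / z for a in profiles}

# 2-player binary example potential at a fixed theta; (1, 1) maximizes v
v = {(0, 0): 0.0, (0, 1): -0.2, (1, 0): -0.2, (1, 1): 0.4}
marginals = [[0.5, 0.5], [0.5, 0.5]]   # assumed full-support marginals

for lam in (1.0, 0.1, 0.01):
    print(lam, round(gibbs(v, marginals, lam)[(1, 1)], 4))
# mass on the potential maximizer (1, 1) tends to 1 as lam -> 0
```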
◮ Exogenous: info players have about others’ info
◮ Endogenous: info players want about others’ info
◮ Broad assumptions on cost of information: A5, A6
◮ Games on networks: Bonacich centrality
◮ Large games: endogenous informational smallness

19 / 19
◮ Θ and X finite
◮ (Ω, F, P): nonatomic probability space
◮ 𝒳i: all measurable functions xi : Ω → Xi

Back
◮ ∃ θ : Ω → Θ with θ ∼ PΘ
◮ ∃ zt : Ω → [0, 1], t ∈ T:
  ◮ θ and zt, t ∈ T, independent
  ◮ zt, t ∈ T, uniformly distributed
◮ xi measurable w.r.t. θ and zt, t ∈ Q.

Back
◮ Ai, Θ: Polish spaces
◮ v : A × Θ → R: measurable function s.t.
  ∫_Θ sup_{a∈A} v(a, θ) PΘ(dθ) < ∞ and ∫_Θ inf_{a∈A} v(a, θ) PΘ(dθ) > −∞
◮ V : ∆(A1) × · · · × ∆(An) → R:
  V(PA1, . . . , PAn) = ∫_Θ log ∫_A e^{v(a,θ)/λ} (×i∈N PAi)(da) PΘ(dθ)
◮ Ai is compact for all i ∈ N.
◮ v is upper semi-continuous in a.

Back
◮ N = {1, . . . , n}: finite set of players
◮ Ai: Polish space of i’s actions
◮ Θ: Polish space of states
◮ PΘ ∈ ∆(Θ): state distribution
◮ v : A × Θ → R: potential with ∫_Θ sup_{a∈A} v(a, θ) PΘ(dθ) < ∞
◮ Dominance regions: for all i and ai,
  ◮ PΘ(Θai) > 0
  ◮ inf_{θ∈Θai} inf_{a′i≠ai} inf_{a−i∈A−i} [v(ai, a−i, θ) − v(a′i, a−i, θ)] > 0
◮ If v(a, θ) > v(a′, θ) for all a′ ≠ a, then
  lim_{λ→0} P(xλ = a | θ = θ) = 1.

Back
◮ Assume (xi ⊥ x−i) | θ for all i ∈ N.
◮ Law of large numbers: Var(ā | θ) ≈ 0, i.e. ā ≈ (1/n) Σi P(xi,λ = invest | θ) a.s.
◮ Hence Var(ρ(ā, θ) | θ) ≈ 0.
◮ Independent information acquisition is optimal.
◮ Multiplicity as in Yang 2015.

Back
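The law-of-large-numbers step can be illustrated by simulation (a sketch with an assumed conditional invest probability, not the talk's code): under conditional independence, Var(ā | θ) = p(1 − p)/n shrinks at rate 1/n, so with many players ā, and hence ρ(ā, θ), is almost deterministic given θ.

```python
import random

random.seed(0)

def abar_samples(p, n, draws=2000):
    """Simulate the fraction of investors when, conditional on theta,
    each of n players invests independently with probability p."""
    return [sum(random.random() < p for _ in range(n)) / n
            for _ in range(draws)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Var(abar | theta) = p(1-p)/n shrinks as n grows
for n in (10, 100, 1000):
    print(n, round(variance(abar_samples(0.5, n)), 5))
```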
◮ Take x′i ∈ 𝒳i s.t. (x′i ⊥ (x−i, θ)) | xi.
◮ Then I(x′i; x−i, θ) ≤ I(xi; x−i, θ).

Back
◮ Take x′i ∈ 𝒳i s.t.
  ◮ (xi, f(x−i, θ)) ∼ (x′i, f(x−i, θ))
  ◮ (x′i ⊥ (x−i, θ)) | f(x−i, θ)
◮ With z = f(x−i, θ):
  I(x′i; x−i, θ) = I(x′i; z) = I(xi; z) ≤ I(xi; x−i, θ).

Back
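The final inequality is an instance of the data-processing inequality: garbling a signal cannot raise its mutual information with what it is about. A numeric sketch (assumed example, not from the talk): a binary signal of accuracy 0.9 about a uniform z, further flipped with probability 0.2, carries strictly less information about z.

```python
import math

def mi(joint):
    """Mutual information in bits from a joint distribution {(u, z): p}."""
    pu, pz = {}, {}
    for (u, z), p in joint.items():
        pu[u] = pu.get(u, 0.0) + p
        pz[z] = pz.get(z, 0.0) + p
    return sum(p * math.log2(p / (pu[u] * pz[z]))
               for (u, z), p in joint.items() if p > 0)

def channel(acc):
    """z uniform on {0, 1}; output matches z with probability acc."""
    return {(u, z): 0.5 * (acc if u == z else 1 - acc)
            for u in (0, 1) for z in (0, 1)}

acc_x = 0.9      # accuracy of the original signal x_i about z
flip = 0.2       # garbling: x_i' flips x_i with probability 0.2
acc_xp = acc_x * (1 - flip) + (1 - acc_x) * flip   # accuracy of x_i'

print(mi(channel(acc_x)), mi(channel(acc_xp)))
# data processing: I(x_i'; z) <= I(x_i; z)
```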