Uniform Folk Theorems in Repeated Anonymous Random Matching Games

Joyee Deb¹, Julio González-Díaz², Jérôme Renault³
¹ Stern School of Business, New York University   ² Dept. of Statistics & Operations Research, University of …


What do we do in this paper? Preview of results:
• Three key results about cooperation in this setting:
  – A folk theorem for two communities.
  – A folk theorem for more than two communities.
  – A folk theorem for two communities with imperfect monitoring.
• Show that it is possible for some players to get payoffs outside the feasible, individually rational payoff set.
• Highlight the opportunity for correlated punishments.
• A byproduct of the analysis is a result about uniform equilibrium and strongly uniform equilibrium in general games.

Outline
1. Introduction
2. Model and Preliminaries
3. Results
4. Conclusion

Model: Basics
• Finite set of communities C = {1, ..., |C|}, each with a finite set of players M = {1, ..., |M|}.
• Finite action sets A_1, ..., A_|C|, with A = ∏_{c ∈ C} A_c.
• Stage-game utilities u_1, ..., u_|I|.
• Per-period utility: g_i : A^M → ℝ denotes the expected (across all possible matchings) payoff of i, given the profile a ∈ A^M (see the sketch below). R is an upper bound on the payoffs g_i.
• Repeated-game utility: γ^T_1, ..., γ^T_|C| are the (expected) undiscounted average utilities up to period T; γ^δ_1, ..., γ^δ_|I| are the (expected) discounted average utilities.
• We consider symmetric strategies.
• Feasible set: F = conv(u(A)).
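
A minimal sketch of the per-period payoff g_i for the two-community case (my own illustration; the names u1, a1, a2 are hypothetical, not from the talk): under uniform matching, community-1 player i faces each of the M community-2 players with probability 1/M, so g_i is just an average over possible partners.

```python
# Sketch of g_i for two communities (illustration only, not code from the paper).
# a1[m], a2[m]: actions of the m-th players of communities 1 and 2;
# u1(x, y): stage payoff of a community-1 player playing x against y.
def g_i(i, a1, a2, u1):
    """Expected per-period payoff of community-1 player i under uniform matching."""
    M = len(a2)
    return sum(u1(a1[i], a2[m]) for m in range(M)) / M

# Example: prisoner's-dilemma stage payoffs for community 1, half the opponents defect.
u1 = lambda x, y: {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}[(x, y)]
print(g_i(0, ["C", "C"], ["C", "D"], u1))  # (2 + (-1)) / 2 = 0.5
```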

Model: Basics
• In each period, players are randomly matched (one from each community) to play the stage game.
• Matching is independent and uniform over time.
• A player observes only the action profiles in the games she is personally engaged in. In particular, she:
  – does not observe the identities of her opponents;
  – does not observe the action profiles of any other pair of players.

Model: Minmax Payoffs
• Independent minmax: For each i ∈ N,
  v_i = min_{s_{-i} ∈ ∏_{j ∈ I∖{i_c}} Δ(A_j)} max_{a_i ∈ A_i} u_i(a_i, s_{-i}),
  IR = {x ∈ ℝ^|C| : x ≥ v} and F_IR = F ∩ IR.
• Correlated minmax (illustrated below):
  w_i = min_{s_{-i} ∈ Δ(A_{-i})} max_{a_i ∈ A_i} Σ_{a_{-i} ∈ A_{-i}} s_{-i}(a_{-i}) u_i(a_i, a_{-i}),
  IRC = {x ∈ ℝ^|C| : x ≥ w} and F_IRC = F ∩ IRC.
• Repeated-game minmax:
  – Player i ∈ N can be forced to x_i ∈ ℝ if, for each ε > 0, there exist σ_{-i} ∈ Σ_{-i} and T_0 ∈ ℕ such that, for each τ_i ∈ Σ_i and T ≥ T_0, γ^T_i(τ_i, σ_{-i}) ≤ x_i + ε.
  – For each i ∈ N, v^∞_i = inf{x_i : i can be forced to x_i},
    IR^∞ = {x ∈ ℝ^|C| : x ≥ v^∞} and F_IR^∞ = F ∩ IR^∞.
• Clearly, w_i ≤ v^∞_i ≤ v_i. With two communities, w_i = v^∞_i = v_i.
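
As a purely illustrative companion to these definitions (not code from the paper; it assumes numpy and scipy are available and all names are my own): the correlated minmax w_i is the value of a small linear program. With two communities the opponent side is a single player, so this number also equals v_i and v^∞_i.

```python
# Correlated minmax w_i via a linear program (illustration only).
import numpy as np
from scipy.optimize import linprog

def correlated_minmax(U):
    """U[a_i, a_opp]: stage payoff of player i.
    Returns (w_i, s_opp): the minmax value and a minimizing distribution on A_{-i}."""
    n_i, n_opp = U.shape
    # Variables x = (t, s_1, ..., s_{n_opp}); minimize t.
    c = np.concatenate(([1.0], np.zeros(n_opp)))
    # For every a_i:  sum_a U[a_i, a] * s_a - t <= 0, i.e. max_{a_i} E_s[u_i] <= t.
    A_ub = np.hstack((-np.ones((n_i, 1)), U))
    b_ub = np.zeros(n_i)
    # s is a probability distribution.
    A_eq = np.concatenate(([0.0], np.ones(n_opp))).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, 1.0)] * n_opp
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.fun, res.x[1:]

# Matching pennies: the opponent holds player i to 0 by mixing 50/50.
U = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
print(correlated_minmax(U))  # approx (0.0, [0.5, 0.5])
```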

Solution Concept: Strongly Uniform Equilibrium

Definition. A strategy profile σ is a strongly uniform equilibrium (SUE) with payoff x ∈ ℝ^N if lim_{T→∞} γ^T(σ) = x and, for each t̄ ∈ ℕ, each ĥ ∈ Ĥ_t̄, and each i ∈ N, the following hold:
1. lim_{T→∞} (1/T) Σ_{t=t̄}^{t̄+T} g_i(a_t) = x_i, P_σ(·|ĥ)-a.s.
2. For each τ_i ∈ Σ_i, lim sup_{T→∞} (1/T) Σ_{t=t̄}^{t̄+T} g_i(a_t) ≤ x_i, P_{τ_i,σ_{-i}}(·|ĥ)-a.s.
3. For each ε > 0, there is T_0 ∈ ℕ such that, for each T ≥ T_0 and each τ_i ∈ Σ_i,
   γ^T_i(τ_i, σ_{-i} | ĥ) ≤ γ^T_i(σ | ĥ) + ε.

Solution Concept: Strongly Uniform Equilibrium
• Uniformity of σ is independent of the length of the game.
• Strong global stability:
  – As T → ∞, x is not only the expected payoff but also, almost surely, the realized payoff.
  – The realized continuation payoff from any history on is x, a.s.
• After every history, all deviations are almost surely non-profitable (as T goes to infinity).
• Strong notion of ε-sequential rationality: at any history h, for any ε > 0 and any deviation, if T is large enough, the expected profit is no larger than ε (regardless of beliefs).

Solution Concept: Discounting and Moral Costs
• Stronger than Uniform Equilibrium (UE).
• Stronger than what we get by introducing ε-best replies in a discounted repeated game.
• If a strategy profile is an SUE, it is a sequential equilibrium in a setting with discounting and moral costs.

Folk Theorem for Two Communities

Theorem. Suppose |I| = 2. Then E = F ∩ IR^∞.

• We construct an SUE.
• The equilibrium strategy profile is simple.

How do we sustain cooperation?

Equilibrium Strategies

Time (t = 0, 1, 2, …) is divided into blocks B_3, B_4, …, B_l, B_{l+1}, …, of l^4 and (l+1)^4 stages respectively.

• Blocks of increasing length: for each l, block B_l has l^4 stages.
• "Grim trigger" within a block; "restart" at the start of the next block:
  – Let (ā_t)_t be a sequence of pure action profiles approximating the target equilibrium payoff x, i.e., lim_{T→∞} (1/T) Σ_{t=1}^T g(ā_t) = x.
  – At the start of each block, play according to the sequence (ā_t)_t.
  – If you observe a deviation, minmax your opponent until the end of the block.
  – In any block, a player is in one of two states: on-path (uninfected) or off-path (infected).
• Community enforcement in blocks.
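
The following is a schematic sketch of this block strategy from one player's point of view (my own simplification, not the paper's formal construction): it assumes the prescribed profile ā is indexed by the stage within the current block, that a deviation is detected by comparing the observed opponent action with the prescription, and that a fixed action stands in for the (possibly mixed) minmax action.

```python
# Schematic sketch of the two-community block strategy (illustration only).
# a_bar_own(t), a_bar_opp(t): prescribed own/opponent actions at stage t of a block.
class BlockStrategy:
    def __init__(self, a_bar_own, a_bar_opp, minmax_action, first_block=3):
        self.a_bar_own = a_bar_own
        self.a_bar_opp = a_bar_opp
        self.minmax_action = minmax_action
        self.l = first_block          # blocks are indexed l = 3, 4, ...
        self.stage = 0                # current stage within block B_l
        self.infected = False         # off-path after observing a deviation

    def action(self):
        # "Grim trigger" within the block: once infected, minmax until the block ends.
        return self.minmax_action if self.infected else self.a_bar_own(self.stage)

    def observe(self, opponent_action):
        # Become infected if the observed action differs from the prescription.
        if opponent_action != self.a_bar_opp(self.stage):
            self.infected = True
        self.stage += 1
        if self.stage == self.l ** 4:  # end of block B_l: "restart", forget the past
            self.l += 1
            self.stage = 0
            self.infected = False
```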

Sketch of Proof: Global Stability

Assume (in this talk) that the minmax is in mixed actions for both communities.

• It is easy to see that lim_{T→∞} γ^T(σ) = x.
• Now consider any period t̄ ∈ ℕ, history ĥ ∈ Ĥ_t̄, and player i.
• Play "restarts" at the start of every block regardless of history. So we have strong global stability:
  lim_{T→∞} (1/T) Σ_{t=t̄}^{t̄+T} g_i(a_t) = x_i, P_σ(·|ĥ)-a.s.

Sketch of Proof: No Profitable Deviations

• It is more subtle to check the incentive to deviate after any history. Suppose player 1 deviates.
• Player 2 observes a deviation and reverts to the minmax action. This starts contagion (simulated below).
• α := the maximum probability assigned to any pure action by the minmax (mixed) action.
• If a player is infected, the probability that a new player gets infected in the current period is at least (1 − α)/M (unless all players in the other community are already infected).
• The probability that any player remains uninfected after l^2 stages can be bounded above using a binomial distribution and Tchebychev's inequality:
  P(#infected players < 2M − 2) ≤ 1/(6 l^5).
• By definition of (ā_t)_t, given any ε > 0, there is T_0 such that, for all T ≥ T_0, |(1/T) Σ_{t=1}^T g(ā_t) − x| ≤ ε. Consider l ≥ T_0/ε.
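
To get a feel for how fast this contagion spreads, here is a rough Monte Carlo sketch (my own illustration, not the paper's argument; the slide's bound is analytic). It assumes an uninfected player detects the deviation, and becomes infected herself, with probability exactly 1 − α in each match with an infected opponent.

```python
# Rough Monte Carlo sketch of contagion speed under uniform random matching
# between two communities of M players each (illustration only).
import random

def periods_until_all_infected(M=10, alpha=0.5, seed=0):
    rng = random.Random(seed)
    infected = [{0}, set()]               # one community-1 player deviates at t = 0
    t = 0
    while len(infected[0]) < M or len(infected[1]) < M:
        t += 1
        partner = list(range(M))
        rng.shuffle(partner)              # community-1 player p meets community-2 player partner[p]
        newly = [set(), set()]
        for p1 in range(M):
            p2 = partner[p1]
            for src, dst, q_src, q_dst in ((0, 1, p1, p2), (1, 0, p2, p1)):
                if q_src in infected[src] and q_dst not in infected[dst]:
                    if rng.random() > alpha:   # deviation detected in this match
                        newly[dst].add(q_dst)
        infected[0] |= newly[0]
        infected[1] |= newly[1]
    return t

samples = [periods_until_all_infected(M=10, alpha=0.5, seed=s) for s in range(200)]
print(sum(samples) / len(samples))  # typically far fewer than l**2 periods for moderate l
```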

Sketch of Proof: No Profitable Deviations

• Divide each block B_l into three sub-blocks B_{l,1}, B_{l,2} and B_{l,3}, with l^3, (l^4 − 2l^3) and l^3 stages respectively.
• We consider three cases: deviations in B_{l,1}, B_{l,2} and B_{l,3}.

Late (first) deviation in B_{l,3}:
• The realized average payoff over the first (l^4 − l^3) periods of the block is at most ε away from x.
• The player can gain at most R (an upper bound on payoffs) in each of the at most l^3 remaining periods of the block.
• Therefore late deviations cannot give more than a (R + 1)ε gain.
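
A rough version of the arithmetic behind the last bullet (my own filling-in, under the assumptions that payoffs lie in [0, R] and that 1/l ≤ ε): the block average after a late deviation is at most

$$\frac{1}{l^4}\Big[(l^4 - l^3)(x_1 + \varepsilon) + l^3 R\Big] \;\le\; x_1 + \varepsilon + \frac{R}{l} \;\le\; x_1 + (R+1)\,\varepsilon .$$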

Sketch of Proof: No Profitable Deviations

Deviation in an intermediate period t̃ in B_{l,2}:
• Within l^2 stages, with probability at least 1 − 1/(6 l^5), every player will be infected.
• Moreover, the punishment lasts at least l^3 − l^2 stages: so, with high probability (at least 1 − R/(ε^2 (l^3 − l^2))), the expected average payoff in those stages is no more than ε away from v_1.
• Therefore, except with probability at most 1/(6 l^5) + R/(ε^2 (l^3 − l^2)), the deviation gives a gain of no more than (R + 1)ε.

Sketch of Proof: No Profitable Deviations

Deviation in an early period t̃ in B_{l,1}:
• In this case too, we have l^2 stages of contagion, followed by more than l^3 stages of punishment.
• Therefore we again have: except with probability at most 1/(6 l^5) + R/(ε^2 (l^3 − l^2)), the deviation gives a gain of no more than (R + 1)ε.

Sketch of Proof: No Profitable Deviations

• Player 1's gain from deviating exceeds (R + 1)ε with probability at most 1/(6 l^5) + R^2/(ε^2 (l^3 − l^2)).
• Since these probabilities are summable over l, by Borel–Cantelli, with probability 1 there is l_0 such that, for each l ≥ l_0, the payoff to player 1 in block l is smaller than x_1 + (R + 1)ε.
• Since this is true for any ε, we must have lim_{T→∞} (1/T) Σ_{t=1}^T g_1(a_t) = x_1, P_{τ_1,σ_2}-a.s.

Sketch of Proof: ε-Sequential Rationality

• Finally, we need to establish that for each ε > 0, there is T_0 ∈ ℕ such that, for each T ≥ T_0 and each τ_i ∈ Σ_i,
  γ^T_i(τ_i, σ_{-i} | h) ≤ γ^T_i(σ | h) + ε.
• This follows quite easily from the previous analysis:
  – For each block, in the case of a late deviation, the payoff is essentially given by the sequence (ā_t)_t.
  – The payoff in the case of an intermediate deviation is given by (ā_t)_t and the minmax phase.
  – For an early deviation, the payoff is given by the minmax phase.
• If the minmax is in pure actions...
  – If the sequence (ā_t)_t plays the pure minmax action very often, then, after a deviation, contagion may spread too slowly.
  – The construction needs to be modified slightly.

UE and SUE Payoffs

We characterized the SUE payoff set for two communities. However, SUE strategies are delicate to construct, because of the sequential rationality requirement at every history.
• SUE is a refinement of Uniform Equilibrium (UE).
• But it turns out that the refinement from UE to SUE is obtained "at no cost."

Theorem. For any infinitely repeated game with finitely many players, actions and signals, the set of SUE payoffs is exactly the set of UE payoffs.

• This holds not just in the RARMG setting.
• Implication in our setting: from now on, we can construct a UE and conclude that there exists an SUE with the same payoff.

Folk Theorem for More than Two Communities

Theorem. Suppose |I| > 2. Then E = F ∩ IR^∞.

• Proof by construction.
• Construct a UE that achieves the target payoff x. If x is a UE payoff, then it is also an SUE payoff.

Main challenge: for punishments to be effective, we need to detect deviations and identify the deviator.
• Grim trigger spreads the information that a deviation has occurred, but we also need a way to spread the information about the identity of the deviator.
• We do this with "communication blocks."

Equilibrium Strategies

Time is divided into L blocks of L^2 + 2L|I| stages each: one "payoff sub-block" B_l of L^2 stages, followed by two identical "communication sub-blocks" C_{l,1}, C_{l,2} of L|I| stages each.

• In the first block, in the payoff sub-block B_1 (of length L^2):
  – Play according to a sequence (a_t)_t that approximates x.
  – If L is large enough, then |(1/L^2) Σ_{t ∈ B_1} g_i(a_t) − x_i| ≤ ε for all i ∈ N.
  – At the end of B_1, each player i is in a state s(i,j) ∈ {0,1} for each community j:
    s(i,j) = 1 ⟺ i saw a deviation by someone in community j during B_1.

Equilibrium Strategies

• In the first block, in the communication sub-blocks C_{1,1}, C_{1,2}:
  – L stages target each community, one community at a time.
  – When the target is community j, each player i plays a_i if s(i,j) = 0 and â_i if s(i,j) = 1; playing â_i means accusing someone in community j of having deviated.
  – At the end of the L stages targeting community j, each player i updates her state s(i,j): set s(i,j) = 1 if she started block B_l with s(i,j) = 0 and observed players of at least 2 different communities accusing community j; otherwise s(i,j) is unchanged (sketched in code below).
  – If s(i,j) = 1 and s(i,j′) = 0 for all j′ ≠ j, then we say i is in the "punish community j" phase.
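
A minimal sketch of this update rule in code (my own reading of the slide; the function and variable names are hypothetical):

```python
# s_ij: player i's state about community j at the start of the current block.
# accusing_communities: set of communities from which player i observed the
# accusation action a_hat while community j was the target.
def update_state(s_ij, accusing_communities):
    if s_ij == 0 and len(accusing_communities) >= 2:
        return 1              # corroborated accusations (>= 2 communities) are believed
    return s_ij               # unilateral accusations are ignored; a 1 never reverts to 0

print(update_state(0, {3}))       # -> 0: a single accusing community is ignored
print(update_state(0, {2, 4}))    # -> 1: i enters the "punish community j" phase
```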

Equilibrium Strategies

• In any later payoff sub-block B_l, l ≥ 2:
  – A player i who starts in the "punish community j" phase plays a strategy σ̄^j that punishes community j in the L^2-period game, i.e., for every player i′ of community j, γ^{L^2}_{i′}(τ_{i′}, σ̄^j_{-i′}) ≤ v^{L^2}_{i′} for any τ_{i′}.
  – Otherwise, players play as in B_1.
• In any later communication sub-block C_{l,1}, C_{l,2}, l ≥ 2:
  – A player i who starts in the "punish community j" phase communicates her state but does not change her state. Other players communicate and update their states.
  – At other histories, actions are prescribed arbitrarily.

Sketch of Equilibrium Proof

Lemma. Let s_l(i,j) be the state of i regarding community j at the end of B_l. For each i ∈ N and each strategy τ_i:
1. False accusations are ignored:
   P_{τ_i,σ_{-i}}(∀ i′ ≠ i, ∀ j ≠ i_c, s_L(i′,j) = 0) = 1, and
   P_{τ_i,σ_{-i}}(∀ i′ ≠ i, s_l(i′,i_c) = 0 | i has not deviated during B_l) = 1.
2. Deviations are detected: there is f(L,|M|) with lim_{L→∞} f(L,|M|) = 1 such that
   P_{τ_i,σ_{-i}}(∀ i′ ≠ i, s_l(i′,i_c) = 1 | i has deviated during B_l) ≥ f(L,|M|).

• Accusations by only one community are unsuccessful.
• If player i deviates in B_l, then |I| − 1 ≥ 2 players from different communities observe this deviation and report it in sub-blocks C_{l,1} and C_{l,2}. Thus, given |M|, the probability that s_l(i′,i_c) = 1 for all i′ ≠ i goes to 1 as L goes to infinity.

Sketch of Equilibrium Proof

• The strategy σ is played for T = L(L^2 + 2L|I|) stages.
• Take L large enough so that:
  – f(L,|M|) ≥ 1 − ε (deviations are detected);
  – for each player i, v^{L^2}_i ≤ v^∞_i + ε (the punishment is harsh enough);
  – 2L|I| ≤ ε L^2 (the communication blocks are negligible; see the check below);
  – R ≤ ε L.
• If players play the equilibrium strategy, it is easy to compute that the payoffs are x_i.
• There is no incentive to deviate in the communication blocks, since unilateral accusations are ignored.
• There is no incentive to deviate in a payoff block: we obtain a bound on the deviation payoff.
  – With high probability (at least 1 − ε), player i's deviation is detected in the communication sub-blocks C_{l̄,1}, C_{l̄,2} of the block where it occurs by all other players, so her continuation payoff falls to her minmax.
  – There are three situations in which player i may get high payoffs, each contributing little: (i) the current payoff block; (ii) the communication blocks (proportion of periods ≤ ε); (iii) the event that her deviation is not detected (prob. ≤ ε).
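
As a quick check (my own filling-in, not on the slide), the "communication blocks are negligible" condition is just a lower bound on L: the fraction of communication stages in each block is

$$\frac{2L|I|}{L^2 + 2L|I|} \;\le\; \frac{2L|I|}{L^2} \;=\; \frac{2|I|}{L},$$

so 2L|I| ≤ ε L^2 holds as soon as L ≥ 2|I|/ε.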

Imperfect Monitoring within a Match

• Are these results about cooperation robust to imperfections in monitoring?
• Consider two communities and allow imperfect monitoring.
• There are sets of signals Y_1, Y_2 for communities 1 and 2 respectively, and a function π : A → Δ(Y), where Y = Y_1 × Y_2. Within a match, if the action profile is (a_1, a_2), then a joint signal (y_1, y_2) ∈ Y is drawn according to π(a_1, a_2); the player from community 1 observes y_1 and the player from community 2 observes y_2.
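
A toy example of such a monitoring structure (my own illustration; the signal sets, the noise level EPS and the independence of the two signal components are assumptions, not taken from the talk):

```python
# Toy monitoring structure pi : A -> Delta(Y) for a single match (illustration only).
import random

Y1 = Y2 = ["c", "d"]   # each player observes a noisy report of her partner's action
EPS = 0.1              # probability that a signal misreports the action

def pi(a1, a2):
    """Distribution of the joint signal (y1, y2) given the match profile (a1, a2)."""
    dist = {}
    for y1 in Y1:
        for y2 in Y2:
            p1 = 1 - EPS if y1 == a2 else EPS   # player 1's signal about a2
            p2 = 1 - EPS if y2 == a1 else EPS   # player 2's signal about a1
            dist[(y1, y2)] = p1 * p2
    return dist

def draw_signal(a1, a2):
    dist = pi(a1, a2)
    signals, probs = zip(*dist.items())
    return random.choices(signals, weights=probs, k=1)[0]

print(draw_signal("c", "d"))  # e.g. ('d', 'c') with probability (1 - EPS) ** 2
```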
