SLIDE 1 Renegotiation and Coordination with Private Values
Yuval Heller (Bar-Ilan) and Christoph Kuzmics (Graz). Haifa University, March 2020.
Heller & Kuzmics Renegotiation & Coordination March 2020 1 / 33
SLIDE 2
Outline
1. Introduction
2. Model
3. Main result
4. Efficiency and stability
5. Extensions & discussion
SLIDE 3 Illustrating Example (Goffman, 1971, Relations in Public)
Quote
Two pedestrians approach a narrow pass from opposite sides. The agents do not want to bump into each other.
Each has a private preference for passing on the right or on the left.
Stylized facts:
- Unlike cars, there is no uniform norm of always passing on the right.
- Simple behavior that relies on fast non-verbal communication.
- Agents almost never miscoordinate.
- The coordinated outcome depends on the agents' preferred directions.
SLIDE 5 Additional Motivating Examples
Important, yet understudied, interactions: agents gain from coordination, have private preferences over the coordinated outcomes, and engage in pre-play cheap talk.
Market-sharing (implicit) agreements in oligopolistic markets:
- 1997 spectrum auction: use of trailing digits to report preferred areas.
- Cramton and Schwartz (2000), Belleflamme and Bloch (2004), Motta (2004).
Research joint ventures (Katz, 1986; Vonortas, 2012):
- Private preferences regarding goals, methods, and knowledge sharing.
- E.g., a joint paper: LaTeX or Word? To which journal to submit? ...
SLIDE 8 Brief Overview of Model & Results
Coordination games, private values, pre-play communication.
A novel, simple family of equilibria:
1. Agents always coordinate.
2. Each agent states his preferred outcome (and nothing else).
3. Agents coordinate on a mutually preferred outcome (if one exists).
We show that this family satisfies various appealing properties:
- A strategy is in this family iff it is a renegotiation-proof equilibrium.
- Equilibrium behavior does not depend on the distribution of preferences.
- Interim Pareto efficiency and a high ex-ante payoff.
- Robustness to various perturbations (evolutionary & continuous stability).
SLIDE 10
Outline
1. Introduction
2. Model
   - Types, actions & payoffs
   - Equilibrium strategies and their properties
3. Main result
4. Efficiency and stability
5. Extensions & discussion
SLIDE 11 Types, Rounds and Actions
2 players; the private type of each player is a number u ∈ U = [0,1]. Continuous distribution of types: cdf F, supp(F) = [0,1], density f.
1. Stage 1: each agent simultaneously sends a message m ∈ M (4 ≤ |M| < ∞). All results in the paper extend to k rounds of communication.
2. Stage 2: each agent chooses L or R.
SLIDE 13 Payoffs
Player of type u obtains
- 1−u for coordinating on L;
- u for coordinating on R; and
- 0 for miscoordination.

            Type v
Type u      L            R
L           1−u, 1−v     0, 0
R           0, 0         u, v
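The payoff matrix can be sketched in a few lines of Python (our illustration, not the authors' code):

```python
# Minimal sketch of the payoff structure: type u values coordinating
# on L at 1 - u, coordinating on R at u, and miscoordination at 0.

def payoff(u, own_action, other_action):
    """Payoff of a type-u player, given both players' actions ('L'/'R')."""
    if own_action != other_action:
        return 0.0                      # miscoordination
    return 1 - u if own_action == "L" else u

# A type u = 0.3 prefers L: coordinating on L pays 0.7, on R only 0.3.
assert abs(payoff(0.3, "L", "L") - 0.7) < 1e-12
assert abs(payoff(0.3, "R", "R") - 0.3) < 1e-12
assert payoff(0.3, "L", "R") == 0.0
```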
SLIDE 14 Strategy
(Behavior) strategy is a pair σ = (µ,ξ), where:
- µ is the message function, µ : U → ∆(M): type u chooses a random message according to µu ∈ ∆(M).
- ξ is the action (threshold) function, ξ : M × M → U: ξ(m,m′) is the maximal type that chooses L.
The restriction to threshold action functions is WLOG: any action function is dominated by a threshold action function.
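To see why thresholds are without loss: if the opponent plays L with probability q, type u earns (1−u)q from L and u(1−q) from R, so L is a best reply exactly when u ≤ q. A minimal sketch of this arithmetic (our illustration):

```python
# Why best replies are threshold rules: against an opponent who plays L
# with probability q, type u compares (1 - u) * q against u * (1 - q).

def best_reply(u, q):
    expected_L = (1 - u) * q     # coordinate on L only if the opponent plays L
    expected_R = u * (1 - q)     # coordinate on R only if the opponent plays R
    return "L" if expected_L >= expected_R else "R"

# (1 - u) q >= u (1 - q)  <=>  u <= q, so the cutoff type is q itself.
assert best_reply(0.59, 0.6) == "L"
assert best_reply(0.61, 0.6) == "R"
```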
SLIDE 16 Symmetric Bayes-Nash Equilibrium (Equilibrium Strategy)
σ is a symmetric Bayes-Nash equilibrium (abbr., an equilibrium strategy) if no player, after learning his type, can gain by deviating to a different strategy (given that the opponent, whose type is unknown, follows σ). There are many Bayes-Nash equilibria. Examples...
SLIDE 17 Examples of Equilibrium Strategies (1)
1. Babbling equilibria (communication is ignored):
- Uniform norm: always-L or always-R.
- Inefficient interior threshold: play L if u is sufficiently small (u ≤ 0.5 for a symmetric F).
SLIDE 18 Examples of Equilibrium Strategies (2)
2. Coordination based only on ordinal preferences:
- Same preferred outcome: coordinate on the jointly preferred outcome.
- Different preferred outcomes: play a "fallback" norm:
  σL: play L. σR: play R. σC: a symmetric joint lottery determines the coordinated outcome.
How to implement a joint lottery with communication:
- Each agent simultaneously sends a random "bit" (0 / 1).
- Agents play L if they sent the same bits, and play R otherwise.
- This can be extended to any rational p; see Aumann & Maschler (1968).
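The joint-lottery device can be sketched as follows (the bit protocol is from the slide; the extension to a rational p = k/n, and the function names, are our illustration in the spirit of Aumann & Maschler, 1968):

```python
import random

def joint_lottery_half(rng):
    """Each agent sends a uniform random bit; equal bits -> L, unequal -> R."""
    bit1, bit2 = rng.randint(0, 1), rng.randint(0, 1)
    return "L" if bit1 == bit2 else "R"

def joint_lottery_rational(k, n, rng):
    """L with probability k/n: each agent sends a uniform digit in
    {0, ..., n-1}; play L iff the digits' sum mod n falls in {0, ..., k-1}.
    Conditional on either agent's own digit, the sum mod n stays uniform,
    so neither agent can bias the lottery unilaterally."""
    d1, d2 = rng.randrange(n), rng.randrange(n)
    return "L" if (d1 + d2) % n < k else "R"

rng = random.Random(0)
draws = [joint_lottery_rational(1, 3, rng) for _ in range(30_000)]
assert abs(draws.count("L") / len(draws) - 1 / 3) < 0.02  # near p = 1/3
```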
SLIDE 20 Examples of Equilibrium Strategies (3)
3. Equilibrium that depends on cardinal preferences:
- Assume two rounds of communication.
- Round 1: each agent sends moderate or extreme (uniform distribution: moderate iff u ∈ (0.25, 0.75)).
- Round 2 and actions:
  - Two extremists: babble; each agent plays his preferred direction.
  - Extremist & moderate: coordinate on the extremist's preferred outcome.
  - Two moderates: report preferred directions; if they disagree, the coordinated outcome is chosen by a joint lottery.
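This two-round equilibrium can be simulated for uniform types (our sketch; function names are ours). Note that miscoordination arises only between two opposing extremists, an event of probability 2 · 0.25 · 0.25 = 0.125:

```python
import random

def outcome(u, v, rng):
    """Pair of actions played in the cardinal (moderate/extreme) equilibrium."""
    mod_u, mod_v = 0.25 < u < 0.75, 0.25 < v < 0.75
    pref_u = "L" if u < 0.5 else "R"
    pref_v = "L" if v < 0.5 else "R"
    if not mod_u and not mod_v:          # two extremists: babble, play own preference
        return pref_u, pref_v
    if mod_u != mod_v:                   # extremist & moderate: extremist's preference
        a = pref_v if mod_u else pref_u
        return a, a
    if pref_u == pref_v:                 # two moderates who agree
        return pref_u, pref_u
    a = rng.choice("LR")                 # two moderates who disagree: joint lottery
    return a, a

rng = random.Random(1)
mis = sum(a != b for a, b in (outcome(rng.random(), rng.random(), rng)
                              for _ in range(20_000)))
assert abs(mis / 20_000 - 0.125) < 0.02   # only opposing extremists miscoordinate
```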
SLIDE 22 Three Properties of Equilibria (Satisfied by σL, σR, σC)
1. Coordinated: players never miscoordinate (i.e., never play (L,R)).
2. Mutual-preference consistent: if both agents prefer the same outcome, they always play it.
3. (Essentially) binary communication:
- The message of any type u < 0.5 has the same impact: maximizing the probability that the opponent plays L.
- The message of any type u > 0.5 has the same impact: minimizing the probability that the opponent plays L.
SLIDE 25 1-Dimensional Set of Strategies Satisfying the 3 Properties
Definition
The left-tendency of a strategy: the probability of coordinating on L conditional on the players having different preferred outcomes.

Any strategy satisfying the above three properties is characterized by its left-tendency α (e.g., α = 1 ↔ σL, α = 0 ↔ σR, α = 0.5 ↔ σC):
- Two types u, v < 0.5: play L.
- Two types u, v > 0.5: play R.
- Two opposing types u < 0.5 < v: a joint lottery coordinates on L with probability α and on R with probability 1−α.
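The one-parameter family can be sketched directly (our illustration): only the joint lottery for opposing types depends on α.

```python
import random

def play(u, v, alpha, rng):
    """Coordinated action under the strategy with left-tendency alpha."""
    if u < 0.5 and v < 0.5:
        return "L"                                  # both prefer L
    if u > 0.5 and v > 0.5:
        return "R"                                  # both prefer R
    return "L" if rng.random() < alpha else "R"     # opposing types: joint lottery

rng = random.Random(2)
# alpha = 1 recovers sigma_L: opposing pairs always resolve to L.
assert all(play(0.2, 0.8, 1.0, rng) == "L" for _ in range(100))
# alpha = 0.5 (sigma_C): empirical L-frequency near one half for opposing pairs.
freq = sum(play(0.2, 0.8, 0.5, rng) == "L" for _ in range(20_000)) / 20_000
assert abs(freq - 0.5) < 0.02
```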
SLIDE 28 Renegotiation-Proofness (Informal Definition)
Players may further communicate & renegotiate to a Pareto-better equilibrium after observing the pair of messages sent in the first round.
Renegotiation-proofness: there is no other equilibrium that Pareto-dominates the existing equilibrium.
Motivation: otherwise, agents renegotiate to the better equilibrium (Holmström & Myerson, 1983; Maskin & Farrell, 1989; Benoit & Krishna, 1993).
Evolutionary motivation: otherwise, a self-enforcing secret handshake (Hamilton, 1964; Dawkins, 1976; Robson, 1990; equilibrium entrants: Swinkels, 1992; collaboration: Newton, 2017).
SLIDE 31 Induced Games with Additional Communication
Assume that both players send first-stage messages according to µ.
- Each message m ∈ supp(µ) induces a posterior distribution Fm over the player's type, conditional on the player sending message m.
- Each pair m, m′ ∈ supp(µ) induces a coordination game without communication, Γ(Fm, Fm′), in which the players' types are distributed according to Fm and Fm′.
- Let Γ(Fm, Fm′, M) be the induced game modified to allow the players an additional round of communication (the induced game with additional communication).
SLIDE 34 Renegotiation-Proofness (Formal Definition)
Definition
Equilibrium (σm, σm′) of Γ(Fm, Fm′, M) Pareto-dominates equilibrium (x, x′) of Γ(Fm, Fm′) if the former induces a weakly higher payoff than the latter for each type of each player in the support of Fm (resp., Fm′), with a strict inequality for some types.
Definition
Equilibrium strategy σ = (µ,ξ) is renegotiation-proof (RP) if for each pair of messages m,m′ ∈ supp (µ), the equilibrium (ξ (m,m′),ξ (m′,m)) of Γ(Fm,Fm′) is not Pareto-dominated by any equilibrium of Γ(Fm,Fm′,M).
SLIDE 36
Outline
1. Introduction
2. Model
3. Main result
4. Efficiency and stability
5. Extensions & discussion
SLIDE 37 Main Result
Theorem
Strategy σ∗ is a renegotiation-proof equilibrium strategy iff it
1. is coordinated,
2. is mutual-preference consistent, and
3. uses (essentially) binary communication.
SLIDE 38 Intuition for the Main Result (RP ⇒ 3 Key Properties)
1. Coordinated: any miscoordinated equilibrium of an induced coordination game Γ(Fm, Fm′) can be Pareto-improved by either σL, σR, or σC.
2. Mutual-preference consistent: if not, it can be Pareto-improved by σL/σR.
3. Binary communication:
- Coordinated ⇒ an agent cares only about the partner's average probability of playing L.
- An agent with u ≤ 0.5 sends m ∈ ML ⇒ the message m maximizes this probability.
- Any m, m′ ∈ ML induce the same probability that a partner who prefers R plays L.
SLIDE 43 Intuition for the Main Result (3 Key Properties ⇒ RP )
Showing that σ = (µ,ξ) satisfying the 3 properties is an equilibrium strategy:
- Coordinated ⇒ best reply in stage 2 = matching the partner's pure action = following ξ.
- Binary communication ⇒ best reply in stage 1 = maximizing the probability of coordinating on the preferred outcome = following µ.
Showing that σ = (µ,ξ) satisfies renegotiation-proofness:
- After communicating, at least one of the players gets his maximal feasible payoff ⇒ the equilibrium of the induced game is not Pareto-dominated.
SLIDE 46
Outline
1. Introduction
2. Model
3. Main result
4. Efficiency and stability
5. Extensions & discussion
SLIDE 47 On Ex-Ante Optimality of σL / σR
The first-best (play L iff u + v < 1) is not equilibrium behavior: each agent would claim to have an extreme type.
Non-RP equilibria with miscoordination allow the outcome to depend on the intensity of preferences. For some distributions of types, this may yield a higher ex-ante payoff.
Proposition
Either σL or σR:
- improves the ex-ante payoff relative to all no-communication equilibria;
- maximizes the ex-ante payoff among all coordinated equilibria.
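A Monte Carlo back-of-the-envelope check for uniform types (our illustration, not the paper's computation): the first-best rule yields a per-player ex-ante payoff of 2/3, while σL yields 5/8, quantifying the gap the proposition leaves open.

```python
import random

def pair_payoff(u, v, action):
    """Sum of the two players' payoffs when both play `action`."""
    return (1 - u) + (1 - v) if action == "L" else u + v

rng = random.Random(3)
n = 200_000
first_best = sigma_L = 0.0
for _ in range(n):
    u, v = rng.random(), rng.random()
    # First-best: play L iff u + v < 1.
    first_best += pair_payoff(u, v, "L" if u + v < 1 else "R")
    # sigma_L: coordinate on a mutual preference; fall back to L otherwise.
    both_prefer_R = u > 0.5 and v > 0.5
    sigma_L += pair_payoff(u, v, "R" if both_prefer_R else "L")

# Analytically (uniform types): first-best = 2/3 and sigma_L = 5/8 per player.
assert abs(first_best / (2 * n) - 2 / 3) < 0.01
assert abs(sigma_L / (2 * n) - 5 / 8) < 0.01
```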
SLIDE 50 Interim Pareto-Optimality of RP Equilibrium Strategies
The definition of RP requires only a mild notion of Pareto efficiency: post-communication Pareto efficiency WRT equilibrium strategies.
Proposition
Any RP equilibrium strategy satisfies interim (pre/post-communication) Pareto efficiency WRT all strategy profiles.
Intuition: any payoff increase to a positive measure of "left" types must decrease the payoff of a positive measure of "right" types.
SLIDE 52
Ex-ante & Pre-Communication Renegotiation-Proofness
Our definition of RP is "post-communication": agents are allowed to renegotiate only after the original communication. Alternative definitions:
- Pre-communication RP: agents can renegotiate also before communicating.
- Ex-ante RP: agents can renegotiate also before knowing their own types.
Proposition
- The set of pre-communication RP strategies = the set of post-communication RP strategies.
- Either σL or σR is the unique ex-ante RP equilibrium strategy (unless both σL and σR induce the same ex-ante payoff).
SLIDE 57 Any RP Eq. Strategy (µ,ξ) is Robust to Perturbations:
1. Neutral stability (Maynard Smith & Price, 1973) ⇒ robustness to a few experimenting agents.
2. µ is weakly dominant, given the second-stage behavior ξ ⇒ robustness to any perturbation that changes the first-stage behavior.
3. ξ is a neighborhood invader strategy in each induced game (Cressman, 2010; a refinement of CSS à la Eshel & Motro, 1981) ⇒ robustness to any sufficiently small perturbation in the second-stage behavior.
Some non-RP strategies may satisfy (1)-(3): e.g., the uniform norm always-L satisfies (1) and (2), and for some distributions of types it also satisfies (3).
SLIDE 61
Outline
1 Introduction
2 Model
3 Main result
4 Efficiency and stability
5 Extensions & discussion
SLIDE 62 Variants and Extensions
All results hold in the following extensions:
1 Multiple rounds of communication.
2 n > 2 players (positive payoff iff everyone coordinates on the same action).
3 Any (possibly asymmetric) coordination game in which the payoff-dominant action is risk dominant (i.e., U11 > U22 ⇒ 0.5(U11 + U12) > 0.5(U21 + U22)).
Extreme types with dominant actions: essentially a unique RP eq. strategy σα∗ (= σC if the distribution of types is symmetric).
More than two actions: (1) σC is an RP equilibrium strategy; and (2) any RP equilibrium strategy must be same-message coordinated and mutual-preference consistent.
Heller & Kuzmics Renegotiation & Coordination March 2020 29 / 33
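Extension 3's requirement can be sketched numerically. A minimal check of the condition, assuming the usual symmetric 2x2 notation (diagonal payoffs U11, U22, off-diagonal U12, U21); the mirrored branch for U22 > U11 and the example payoff matrices are assumptions of this sketch, not from the slides:

```python
# Condition of extension 3: the payoff-dominant action is also risk dominant,
# i.e., U11 > U22 implies 0.5*(U11 + U12) > 0.5*(U21 + U22).

def payoff_dominant_is_risk_dominant(U11, U12, U21, U22):
    if U11 > U22:      # action 1 is payoff dominant
        return 0.5 * (U11 + U12) > 0.5 * (U21 + U22)
    if U22 > U11:      # action 2 is payoff dominant (mirrored condition)
        return 0.5 * (U22 + U21) > 0.5 * (U12 + U11)
    return True        # equal diagonal payoffs: the condition is vacuous

# A pure coordination game (zero off-diagonal payoffs) always qualifies...
assert payoff_dominant_is_risk_dominant(2, 0, 0, 1)
# ...while a stag-hunt-like game does not: action 1 is payoff dominant,
# but 0.5*(2+0) = 1 < 1.25 = 0.5*(1.5+1), so the safe action is risk dominant.
assert not payoff_dominant_is_risk_dominant(2, 0, 1.5, 1)
```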
SLIDE 66 Economic Insights
A little communication (1-2 bits) significantly alters the predicted play: it is easy to credibly reveal ordinal preferences, but players cannot credibly reveal the intensity of their preferences.
Implications for antitrust policy:
Successful collusion often depends on the firms' private preferences (e.g., which markets each firm prefers to serve under a market-sharing agreement). Our findings strengthen the case against allowing even a brief form of explicit communication between oligopolistic competitors, e.g., in the 1997 series of FCC ascending auctions (Cramton & Schwartz, 2000).
Heller & Kuzmics Renegotiation & Coordination March 2020 30 / 33
SLIDE 68 Related Literature (1)
Other applications of renegotiation-proofness
Repeated games (complete information), contracts with moral hazard.
Hart & Tirole (1988); van Damme (1989); Bernheim & Ray (1989); Evans & Maskin (1989); Forges (1994); Wen (1996); Maestri (2017); Strulovici (2017).
Private values in coordination games without communication
Stylized result: inefficient interior equilibria are stable if the density of types is U-shaped.
Kreps & Fudenberg (1993); Ellison & Fudenberg (2000); Sandholm (2007); Jelnov, Tauman & Zhao (2018).
Heller & Kuzmics Renegotiation & Coordination March 2020 31 / 33
SLIDE 70 Related Literature (2)
Communication in coordination games with public values
Stylized result: the Pareto-dominant outcome is selected if there are unused messages (the secret-handshake argument).
Wärneryd (1992); Schlag (1993); Sobel (1993); Kim & Sobel (1995); Bhaskar (1998); Banerjee & Weibull (2000); Hurkens & Schlag (2003).
Communication in stag-hunt games with private values
Cheap talk allows the Pareto-dominant outcome to be played with high probability (Baliga & Sjöström, 2004).
Heller & Kuzmics Renegotiation & Coordination March 2020 32 / 33
SLIDE 72 Conclusion
A novel, simple family of equilibria in coordination games with private values & cheap talk: (1) agents always coordinate; (2) each agent states his preferred outcome (& nothing else); (3) agents coordinate on a mutually preferred outcome (if one exists). We show that this family satisfies various appealing properties:
1 Main result: a strategy is in the family iff it is a renegotiation-proof equilibrium.
2 Behavior is independent of the distribution of types.
3 Interim Pareto efficiency.
4 Ex-ante payoff: best among coordinated eq.; improves on babbling equilibria.
5 Stable WRT various perturbations.
Heller & Kuzmics Renegotiation & Coordination March 2020 33 / 33
SLIDE 73 Backup Slides
Heller & Kuzmics Renegotiation & Coordination March 2020 34 / 33
SLIDE 74 Pedestrian Traffic (Erving Goffman, 1971, Relations in Public, Ch. 1, P. 6)
"Take, for example, techniques that pedestrians employ in order to avoid bumping into one another. These seem of little significance. However, there are an appreciable number of such devices; they are constantly in use and they cast a pattern on street behavior. Street traffic would be a shambles without them."
Heller & Kuzmics Renegotiation & Coordination March 2020 35 / 33
SLIDE 75 Formal Definition of σL = (µ∗,ξL) and σR = (µ∗,ξR)
Fix mL,mR ∈ M . Stage 1: Each agent states his preferred outcome: µ∗ (u) = mL u ≤ 0.5 mR u > 0.5 Stage 2:
ξL: Play L iff at least one player prefers L. ξR: Play R iff at least one player prefers R. ξL (m,m′) = R m = m′ = mR L
ξR (m,m′) = L m = m′ = mL R
Heller & Kuzmics Renegotiation & Coordination March 2020 36 / 33
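The two-stage definition above can be sketched in a few lines of Python (the message labels and function names are illustrative, not from the paper):

```python
# Strategies sigma_L = (mu*, xi_L) and sigma_R = (mu*, xi_R): each agent
# announces his preferred outcome, then the pair coordinates, breaking
# disagreements toward L (xi_L) or toward R (xi_R).

def mu_star(u: float) -> str:
    """Stage 1: state the preferred outcome (cutoff type u = 0.5 sends mL)."""
    return "mL" if u <= 0.5 else "mR"

def xi_L(m: str, m_prime: str) -> str:
    """Stage 2 of sigma_L: play R only if both players announced mR."""
    return "R" if m == m_prime == "mR" else "L"

def xi_R(m: str, m_prime: str) -> str:
    """Stage 2 of sigma_R: play L only if both players announced mL."""
    return "L" if m == m_prime == "mL" else "R"

def play(u1: float, u2: float, xi) -> tuple:
    """Outcome of a match when both agents follow (mu*, xi)."""
    m1, m2 = mu_star(u1), mu_star(u2)
    return xi(m1, m2), xi(m2, m1)

# Agents always coordinate, and the outcome tracks the announced preferences:
assert play(0.2, 0.3, xi_L) == ("L", "L")   # both prefer L
assert play(0.8, 0.9, xi_L) == ("R", "R")   # both prefer R -> renegotiate to R
assert play(0.2, 0.9, xi_L) == ("L", "L")   # disagreement -> xi_L breaks toward L
assert play(0.2, 0.9, xi_R) == ("R", "R")   # disagreement -> xi_R breaks toward R
```

Note that mis-coordination is impossible by construction: both players apply the same deterministic rule to the same message pair.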
SLIDE 77 Binary Communication – Formal Definition
Definition
Let βσ(m) be the expected probability that a player (who follows strategy σ) plays L conditional on the opponent sending message m.
Definition (Binary communication)
Communication is binary if βσ(m) ∈ {β̲, β̄}, where βσ(m) = β̄ if any type u < 0.5 sends message m, and βσ(m) = β̲ if any type u > 0.5 sends message m.
Heller & Kuzmics Renegotiation & Coordination March 2020 37 / 33
SLIDE 79 Neutral Stability
Definition (Maynard Smith & Price, 1973)
An equilibrium strategy σ is neutrally stable iff it achieves a weakly higher payoff against any best-replying strategy σ′, i.e., π(σ′,σ) = π(σ,σ) ⇒ π(σ,σ′) ≥ π(σ′,σ′).
Proposition
Any renegotiation-proof equilibrium strategy σ = (µ,ξ) is neutrally stable.
Sketch of Proof.
σ′ = (µ′,ξ′) is a best reply against σ ⇒ ξ′ = ξ and µ′ ≈ µ: µ and µ′ may differ only WRT equivalent messages or the behavior of the cutoff type u = 0.5, i.e., µu(ML) = µ′u(ML) and µu(MR) = µ′u(MR) for any u ≠ 0.5.
Heller & Kuzmics Renegotiation & Coordination March 2020 38 / 33
SLIDE 82 Message Function µ is Dominant (Given ξ)
Proposition
Let (µ,ξ) be an RP equilibrium strategy. Then µ is a weakly dominant action function, given ξ, i.e., π((µ,ξ),(µ′,ξ)) ≥ π((µ̃,ξ),(µ′,ξ)) for any µ′, µ̃ ∈ ∆(M).
Sketch of proof.
The only impact of an agent's message is its effect on the probability that the pair coordinates on L. µu maximizes this probability for any u < 0.5 and minimizes it for any u > 0.5 ⇒ µ is a weakly dominant action function.
Heller & Kuzmics Renegotiation & Coordination March 2020 39 / 33
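The dominance logic can be verified numerically under σL. A minimal sketch, assuming the cutoff-at-0.5 payoff convention (type u earns 1−u from coordinating on L and u from coordinating on R); this convention and the probability grid are assumptions of the sketch:

```python
# Under xi_L, the pair coordinates on R iff both messages are mR. Let p_mR be
# the probability the opponent (following mu*) sends mR. Then sending mL
# forces L, while sending mR yields R with probability p_mR.

def expected_payoff(u: float, my_msg: str, p_mR: float) -> float:
    if my_msg == "mR":
        return p_mR * u + (1.0 - p_mR) * (1.0 - u)
    return 1.0 - u  # sending mL forces coordination on L under xi_L

# The truthful message mu*(u) is weakly better for every type and every p_mR:
for p_mR in (0.1, 0.5, 0.9):
    for u in (0.0, 0.25, 0.5, 0.75, 1.0):
        truthful = "mL" if u <= 0.5 else "mR"
        other = "mR" if truthful == "mL" else "mL"
        assert expected_payoff(u, truthful, p_mR) >= expected_payoff(u, other, p_mR) - 1e-12
```

The payoff gap from truth-telling is p_mR·|1−2u|, which vanishes exactly at the indifferent type u = 0.5, matching the "weakly" in the proposition.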
SLIDE 84 ξ is a Neighborhood Invader Strategy (Given µ)
Definition (Cressman, 2010)
A strict equilibrium (x1,x2) of an asymmetric game is a neighborhood invader strategy iff there is ε > 0 such that for each x′1 ≠ x1 and x′2 ≠ x2 within distance ε, either π1(x1,x′2) > π1(x′1,x′2) or π2(x′1,x2) > π2(x′1,x′2).
Cressman (2010) adapts to asymmetric games Apaloo's (1997) notion of NIS (which refines the notion of CSS, Eshel & Motro, 1981).
Proposition
Let (µ,ξ) be an RP eq. strategy, and let m,m′ ∈ supp(µ). Then (ξ(m,m′),ξ(m′,m)) is a neighborhood invader strict equilibrium in Γ(Fm,Fm′).
Heller & Kuzmics Renegotiation & Coordination March 2020 40 / 33
SLIDE 86 Coordination Games with no Communication
Analysis of the no-communication case is a special case of Sandholm (2007).
All equilibria are based on "fixed-point" cutoffs: each fixed point x∗ = F(x∗) induces a cutoff equilibrium: play L iff x ≤ x∗.
These include always-L (x∗ = 1), always-R (x∗ = 0), and possibly interior cutoffs x∗ ∈ (0,1).
An equilibrium is dynamically stable iff f(x∗) < 1.
If f(0), f(1) > 1, then only the inefficient interior cutoff equilibria are dynamically stable.
Heller & Kuzmics Renegotiation & Coordination March 2020 41 / 33
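The fixed-point and stability tests above are easy to reproduce for a concrete U-shaped density. A sketch using a Beta(1/2, 1/2) distribution of types (an illustrative choice, not taken from the slides):

```python
# Cutoff equilibria solve x* = F(x*); stability requires f(x*) < 1.
# Beta(1/2, 1/2) has a U-shaped density exploding at both endpoints.
import math

def F(x: float) -> float:
    """CDF of Beta(1/2, 1/2): F(x) = (2/pi) * arcsin(sqrt(x))."""
    return (2.0 / math.pi) * math.asin(math.sqrt(x))

def f(x: float) -> float:
    """Density of Beta(1/2, 1/2): 1 / (pi * sqrt(x*(1-x)))."""
    return 1.0 / (math.pi * math.sqrt(x * (1.0 - x)))

def interior_fixed_point(lo=0.25, hi=0.75, tol=1e-12) -> float:
    """Bisection on g(x) = F(x) - x; here g(0.25) > 0 > g(0.75)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_star = interior_fixed_point()
assert abs(x_star - 0.5) < 1e-9         # symmetric F: interior cutoff at 0.5
assert f(x_star) < 1                    # f(0.5) = 2/pi < 1: dynamically stable
assert f(0.01) > 1 and f(0.99) > 1      # density large near the corners, so
                                        # always-L / always-R are unstable
```

This matches the stylized result: with a U-shaped density, only the inefficient interior cutoff equilibrium survives the stability test.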
SLIDE 89 Example
With communication: π(σR,σR) = π(σL,σL) = 1.27.
Heller & Kuzmics Renegotiation & Coordination March 2020 43 / 33
SLIDE 90 Any equilibrium (x1,x2) of Γ(Fm1,Fm2) can be Pareto-improved
x1,x2 ≤ 0.5 ⇒ σR Pareto dominates (x1,x2): a type ui > 0.5 gains because he gets his maximal feasible payoff in σR; a type ui < 0.5 gains because there is a higher probability of the partner playing L & a higher probability of coordination.
x1,x2 ≥ 0.5 ⇒ σL Pareto dominates (x1,x2) (analogous argument).
x1 < 0.5 < x2:
The cutoff type x1 < 0.5 is indifferent between the two actions ⇒ agent 2 usually plays R. The cutoff type x2 > 0.5 is indifferent between the two actions ⇒ agent 1 usually plays L. Coordination probability < 0.5 ⇒ payoff of (x1,x2) < 0.5 ⇒ σC Pareto dominates (x1,x2).
SLIDE 94 Sketch of Proof of Main Result (2)
If m,m′ ∈ ML, agents coordinate on L; if m,m′ ∈ MR, agents coordinate on R. Otherwise, players renegotiate to the preferred outcome:
Agents always coordinate after observing m ∈ ML and m′ ∈ MR: the payoff of an interior cutoff equilibrium (x∗,y∗) when one player prefers L and the opponent prefers R is at most 0.5.
Agents renegotiate to coordinate on each outcome with probability 0.5.
Agents with u ≤ 0.5 are indifferent between m,m′ ∈ ML ⇒ the average probability of coordinating on L is the same after sending m and m′.
SLIDE 97 Illustration: Stable Pair of Thresholds
If ξ(m1,m2) and ξ(m2,m1) are slightly perturbed, then best-reply dynamics induce agents to converge back to the equilibrium thresholds. (Picture adapted from Sandholm, 2010.)
SLIDE 98 General non-Stag-Hunt Coordination Games
A type is a tuple (uLL,uLR,uRL,uRR) describing the payoff matrix. Feasible types: min(uLL,uRR) > max(uRL,uLR) (any non-stag-hunt coordination game).
Asymmetric games are allowed: different distributions of types F1 and F2.
Adaptation of RP strategies: each agent reports whether uLL > uRR or uLL < uRR.
For asymmetric games, the set of RP eq. strategies is 2-dimensional: σα1,α2, where αi ∈ [0,1] denotes the probability of coordinating on L when the agent of population i prefers L and the other agent prefers R.
Heller & Kuzmics Renegotiation & Coordination March 2020 47 / 33
SLIDE 100 Extreme Types with Dominant Actions
The set of feasible types is [a,b], where a < 0 and b > 1. Extreme types: u < 0 (L is dominant) or u > 1 (R is dominant).
Assumption: extreme types are a minority: F(0) < 0.5·F(0.5) and 1−F(1) < 0.5·(1−F(0.5)).
Essentially unique RP equilibrium strategy σα, where α ≡ F(0) / (F(0) + (1−F(1))).
σC is renegotiation-proof in the symmetric case (F(0) = 1−F(1)).
Heller & Kuzmics Renegotiation & Coordination March 2020 48 / 33
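To fix ideas, a quick numeric sketch of the α formula; the uniform type distributions below are illustrative assumptions, not from the slides:

```python
# alpha = F(0) / (F(0) + (1 - F(1))): the probability of coordinating on L
# when players disagree, pinned down by the masses of the extreme types.

def alpha(F0: float, F1: float) -> float:
    """F0 = F(0) = mass with L dominant; 1 - F1 = mass with R dominant."""
    return F0 / (F0 + (1.0 - F1))

F = lambda x, a, b: (x - a) / (b - a)   # uniform CDF on [a, b]

# Types uniform on [-0.1, 1.1]: symmetric extremes, so alpha = 0.5 (sigma_C).
assert abs(alpha(F(0, -0.1, 1.1), F(1, -0.1, 1.1)) - 0.5) < 1e-12

# Types uniform on [-0.1, 1.2]: twice as much mass has R dominant, so
# alpha = 1/3 and disagreements are resolved toward R more often.
assert abs(alpha(F(0, -0.1, 1.2), F(1, -0.1, 1.2)) - 1.0 / 3.0) < 1e-9
```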
SLIDE 102 More Than Two Actions
Adaptations to the model:
k ≥ 2 actions. An agent's type (u1,...,uk) determines his payoff ui ∈ [0,1] if players coordinate on the i-th action. Continuous distribution F with full support on [0,1]^k.
Any RP equilibrium strategy must be coordinated, mutual-preference consistent, and ordinal-preference revealing.
We can no longer show that the strategy ignores the intensity of preferences.
The simple adaptations of σL / σR are no longer equilibria, but σC is a renegotiation-proof equilibrium strategy.
Heller & Kuzmics Renegotiation & Coordination March 2020 49 / 33
SLIDE 105 Example: Equilibrium Inducing Higher Ex-Ante Payoff
Each agent is randomly endowed with one of the values u ∈ {0.11, 0.49, 0.51, 0.89}.
Always coordinating on L (or on R) induces an ex-ante payoff of 0.5.
σL & σR induce an ex-ante payoff of 0.6 (the 1st-best payoff is 0.65).
Consider the following strategy, which induces an ex-ante payoff of 0.63:
Most matches: coordinate on the outcome maximizing the sum of payoffs.
(0.49, 0.51): coordinate on each outcome with probability 50%.
(0.11, 0.89): with prob. 1/3 coordinate on L, with prob. 1/3 coordinate on R, and with prob. 1/3 play the inefficient interior cutoff equilibrium.
One can show that this is an equilibrium, but it is not renegotiation-proof.
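The first payoff figures in the example can be checked directly. A sketch assuming the cutoff-at-0.5 convention (type u earns 1−u from coordinating on L and u from coordinating on R); this convention is an assumption of the sketch:

```python
# Verify the ex-ante payoffs of always-L, sigma_L, and the first best for
# types drawn uniformly from {0.11, 0.49, 0.51, 0.89}.
from itertools import product

types = [0.11, 0.49, 0.51, 0.89]   # each drawn with probability 1/4

def payoff(u, outcome):
    return 1.0 - u if outcome == "L" else u

# Always coordinating on L: ex-ante payoff E[1-u] = 1 - 0.5 = 0.5.
always_L = sum(payoff(u, "L") for u in types) / len(types)
assert abs(always_L - 0.5) < 1e-9

# sigma_L: coordinate on L unless both types prefer R (u > 0.5).
def sigma_L_payoff(u, v):
    outcome = "R" if (u > 0.5 and v > 0.5) else "L"
    return payoff(u, outcome)

ex_ante = sum(sigma_L_payoff(u, v) for u, v in product(types, repeat=2)) / 16
assert abs(ex_ante - 0.6) < 1e-9

# First best: each pair coordinates on the outcome maximizing the payoff sum
# (per-agent average over the 16 ordered pairs, 2 agents per pair).
fb = sum(max((1 - u) + (1 - v), u + v) for u, v in product(types, repeat=2)) / 32
assert abs(fb - 0.6475) < 1e-9   # ~0.65, as reported on the slide
```

The 0.63 figure additionally depends on the payoff of the inefficient interior cutoff equilibrium in the (0.11, 0.89) match, which the slide does not specify, so it is not reproduced here.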