
Minimally Invasive Mechanism Design: Distributed Covering with Carefully Chosen Advice (PowerPoint PPT presentation)


  1. Minimally Invasive Mechanism Design: Distributed Covering with Carefully Chosen Advice
     Nina Balcan, Sara Krehbiel, Georgios Piliouras, Jinwoo Shin
     Georgia Institute of Technology, December 11, 2012

  2. Outline: Games, Dynamics models, Results

  3. General game G = ⟨N, S, (cost_i)_{i∈N}⟩
     - N: set of agents
     - S = (S_i)_{i∈N}: strategy sets for each agent
     - cost_i : S → R: cost function for each agent
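A minimal sketch of this abstraction in Python (the class and field names are my own, not from the slides): a game bundles the agent set N, per-agent strategy sets S_i, and per-agent cost functions over joint strategy profiles.

```python
from typing import Callable, Dict, List

# A strategy profile assigns one strategy to every agent.
Profile = Dict[int, str]

class Game:
    """General cost game G = <N, S, (cost_i)_{i in N}> (illustrative sketch)."""
    def __init__(self,
                 agents: List[int],
                 strategies: Dict[int, List[str]],
                 cost: Callable[[int, Profile], float]):
        self.agents = agents          # N
        self.strategies = strategies  # S_i for each agent i
        self.cost = cost              # cost(i, s) = cost_i(s)
```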

  4. Covering game
     - Agents: elements [n] of a hypergraph
     - Strategies: {on, off}
     - Incentives: agent i pays c_i if on; if off, it pays w_σ for each adjacent uncovered hyperedge σ

  5. Covering game (continued)
     - Best response dynamics: BR_i(s_{-i}) = argmin_{a ∈ S_i} cost_i(a, s_{-i})

  10. Covering game (continued)
      - Nash equilibrium: a profile s such that s_i = BR_i(s_{-i}) for all i ∈ N
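A hedged, self-contained Python sketch of the covering game and its best response dynamics (the instance at the bottom and all function names are illustrative, not taken from the slides): an agent compares c_i against the total weight of hyperedges it would leave uncovered, and agents keep best responding until nobody wants to switch, i.e., until a pure Nash equilibrium.

```python
from typing import Dict, List, Set

def agent_cost(i: int, state: Dict[int, str],
               c: Dict[int, float],
               hyperedges: List[Set[int]],
               w: List[float]) -> float:
    """cost_i(s): c_i if on, else the total weight of adjacent uncovered hyperedges."""
    if state[i] == "on":
        return c[i]
    return sum(w[e] for e, edge in enumerate(hyperedges)
               if i in edge and not any(state[j] == "on" for j in edge))

def best_response(i: int, state: Dict[int, str], c, hyperedges, w) -> str:
    """BR_i(s_{-i}): the strategy minimizing cost_i given everyone else's play."""
    costs = {}
    for a in ("on", "off"):
        trial = dict(state)
        trial[i] = a
        costs[a] = agent_cost(i, trial, c, hyperedges, w)
    return min(costs, key=costs.get)

def run_best_response(state, c, hyperedges, w, max_rounds=1000):
    """Iterate best responses; in this potential game the loop reaches a pure NE."""
    for _ in range(max_rounds):
        changed = False
        for i in state:
            br = best_response(i, state, c, hyperedges, w)
            if br != state[i]:
                state[i] = br
                changed = True
        if not changed:          # no agent wants to deviate: Nash equilibrium
            return state
    return state

# Tiny illustrative instance: 3 elements, 2 hyperedges.
c = {0: 1.0, 1: 1.0, 2: 3.0}
hyperedges = [{0, 1}, {1, 2}]
w = [2.0, 2.0]
print(run_best_response({i: "off" for i in c}, c, hyperedges, w))
```

On this toy instance the dynamics settle on the profile where only agent 1 is on, covering both hyperedges at social cost 1.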

  11. cost_i(s) = c_i if i is on, or Σ_{uncovered adjacent σ} w_σ if i is off
      Good news: the game is a potential game (ΔΦ = Δcost_i), with
      Φ(s) = Σ_{i on} c_i + Σ_{uncovered σ} w_σ
      Best response dynamics always converge to a pure Nash equilibrium in a potential game.
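To make the potential-game claim concrete, here is the one-step check (my own fill-in of the step the slide asserts) that Φ is an exact potential: a unilateral switch by agent i changes Φ by exactly the change in cost_i.

```latex
% Fix s_{-i} and let agent i switch from off to on. Only two kinds of terms in
% \Phi can change: the c_i term for i itself, and the w_\sigma terms of
% hyperedges adjacent to i that were uncovered before the switch.
\begin{align*}
\Phi(\text{on}, s_{-i}) - \Phi(\text{off}, s_{-i})
  &= c_i - \sum_{\sigma \ni i,\ \sigma\ \text{uncovered under}\ (\text{off},\, s_{-i})} w_\sigma \\
  &= \mathrm{cost}_i(\text{on}, s_{-i}) - \mathrm{cost}_i(\text{off}, s_{-i}),
\end{align*}
% so every unilateral deviation changes \Phi by exactly the deviator's change in
% cost, which is the defining property of an exact potential game.
```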

  12. cost_i(s) = c_i if i is on, or Σ_{uncovered adjacent σ} w_σ if i is off
      Bad news: Nash equilibria can vary dramatically in social cost
      cost(s) = Σ_{i on} c_i + Σ_{uncovered σ} |σ| · w_σ
      PoA/PoS = Θ(n)
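A standard worked instance (illustrative, not from the slides) showing how equilibrium costs can differ by a factor of Θ(n): a star whose hub can cover every hyperedge by itself.

```latex
% Star instance: hub h, leaves 1,...,n, hyperedges \sigma_j = \{h, j\} with
% w_{\sigma_j} = 1, and on-costs c_h = c_j = 1 for every agent.
\begin{itemize}
  \item Good equilibrium: only $h$ is on. Social cost $1$. The hub would pay $n$
        by turning off; each leaf would pay $1$ instead of $0$ by turning on.
  \item Bad equilibrium: every leaf on, $h$ off. Social cost $n$. A leaf turning
        off would still pay $w_{\sigma_j} = 1$, so there is no strict gain; the hub
        pays $0$ and has no incentive to turn on.
\end{itemize}
% The two pure Nash equilibria differ by a factor of $n$, consistent with a
% $\Theta(n)$ gap between worst and best equilibria.
```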

  13. The system designer has a range of options:
      - Have faith in best response (e.g., [Campos-Náñez-Garcia-Li-08])
      - Enforce the outcome of a central optimization

  14. Both! Self-interest plus globally-aware advice. Result: low-cost Nash equilibria.

  15. Public Service Advertising (PSA) [Balcan-Blum-Mansour-09]
      1. With independent constant probability α, each agent plays the advertised strategy throughout Phase 1; the others play best response.
      2. Everyone plays best response.

  16. Learn-Then-Decide (LTD) [Balcan-Blum-Mansour-10]
      1. T* rounds of random updates: agent i plays s_i^ad with probability p_i ≥ β and best responds otherwise.
      2. Agents commit arbitrarily to s_i^ad or BR; the best responders converge to a partial Nash equilibrium.
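A hedged simulation sketch of the PSA dynamics in Python (α, the round caps, and the interface are illustrative choices; the slides only specify the two phases): in Phase 1 a random α-fraction of agents follows the advertised profile while the rest best respond, and in Phase 2 everyone best responds.

```python
import random
from typing import Callable, Dict

Profile = Dict[int, str]

def psa(s0: Profile,
        s_ad: Profile,
        best_response: Callable[[int, Profile], str],
        alpha: float = 0.3,
        phase1_rounds: int = 50,
        phase2_rounds: int = 1000,
        seed: int = 0) -> Profile:
    """Two-phase Public Service Advertising dynamics (sketch, not the paper's pseudocode).

    Phase 1: each agent independently follows the advertised strategy s_ad
    with probability alpha and best responds otherwise.
    Phase 2: everyone best responds until no agent wants to switch.
    """
    rng = random.Random(seed)
    state = dict(s0)
    followers = {i for i in state if rng.random() < alpha}

    for _ in range(phase1_rounds):                # Phase 1
        for i in state:
            state[i] = s_ad[i] if i in followers else best_response(i, state)

    for _ in range(phase2_rounds):                # Phase 2
        changed = False
        for i in state:
            br = best_response(i, state)
            if br != state[i]:
                state[i] = br
                changed = True
        if not changed:                           # pure Nash equilibrium reached
            return state
    return state
```

It can be driven by the covering-game helper from the earlier sketch, e.g. best_response=lambda i, s: best_response(i, s, c, hyperedges, w).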

  17. Our results (informal)
      1. Arbitrary s^ad ⟹ expected cost after PSA or LTD is O(cost(s^ad)^2) in general, O(cost(s^ad)) for non-hypergraphs.
      2. Particular s^ad ⟹ cost after PSA is O(log n) · OPT w.h.p.

  18. Compare:
      - Best response alone may converge to cost Ω(n) · OPT
      - Centralized set cover cannot guarantee better than ln n · OPT

  19. PSA model:
      1. Some agents play the advertised strategy; the others play best response.
      2. Everyone plays best response.

  20. Theorem (Arbitrary advertising in PSA). For any advertising strategy s^ad, the expected cost at the end of Phase 2 of PSA is O(cost(s^ad)^2), assuming c_i, w_σ, F_max, Δ_2 ∈ Θ(1).
      Proof overview:
      - It is enough to bound the cost at the end of Phase 1
      - Cost at the end of Phase 1 ≤ cost(s^ad) + Σ_{bad sets σ} w_σ + Σ_{bad vertices i} c_i

  21. Theorem (Carefully chosen advertising in PSA). For an advertising strategy s^ad of a particular efficient form, the cost at the end of Phase 2 of PSA is O(cost(s^ad)) w.h.p., assuming c_i, w_σ, F_max, Δ_2 ∈ Θ(1).
      Proof idea:
      - Each on agent uniquely covers many sets in s^ad
      - W.h.p. all of these agents must turn on in Phase 1

  22. Corollary: there exists a PSA advertising strategy such that the expected cost at the end of Phase 2 is O(log n) · OPT.
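The deck does not spell out the "particular efficient form" of s^ad behind the corollary; one natural candidate (my assumption, not the authors' stated construction) is a greedy weighted-cover solution, which is within an O(log n) factor of the cheapest full cover and could then be advertised as s^ad. A minimal greedy sketch in the instance format used earlier:

```python
from typing import Dict, List, Set

def greedy_cover(c: Dict[int, float], hyperedges: List[Set[int]]) -> Dict[int, str]:
    """Greedy weighted cover (candidate advertised profile, an assumption):
    repeatedly turn on the element with the best cost-per-newly-covered-hyperedge
    ratio until every hyperedge is covered."""
    uncovered = set(range(len(hyperedges)))
    profile = {i: "off" for i in c}
    while uncovered:
        def ratio(i: int) -> float:
            newly = sum(1 for e in uncovered if i in hyperedges[e])
            return c[i] / newly if newly else float("inf")
        best = min(c, key=ratio)
        if ratio(best) == float("inf"):   # some hyperedge has no adjacent element
            break
        profile[best] = "on"
        uncovered = {e for e in uncovered if best not in hyperedges[e]}
    return profile

# Example: s_ad = greedy_cover(c, hyperedges) could serve as the advertised profile in PSA.
```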

  23. Two contributions:
      1. Extend [BBM09, BBM10] to a natural set cover game
      2. Show how to improve results with carefully chosen advice
      Future work:
      - Give results for broader classes of games (e.g., all potential games) in the PSA and LTD models
      - Enhance the models to allow for heterogeneity of strategies for different agents

  24. Thank you!
