
CS599: Algorithm Design in Strategic Settings, Fall 2012
Lecture 9: Prior-Free Multi-Parameter Mechanism Design (Continued)
Instructor: Shaddin Dughmi

Administrivia: HW2 is out, due in two weeks; projects (meetings, partners); mini homeworks graded.


1-4. Relax-Solve-Round Framework
Given an optimization problem over some discrete set Ω:
1. Relax to a linear or convex program over a polytope P.
2. Solve the relaxed problem.
3. Round the fractional solution to an integral one using a (randomized) rounding scheme r : P → Ω.
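For concreteness, a minimal generic sketch of this pipeline (not from the slides; every name below is a placeholder for a problem-specific ingredient):

```python
# Generic relax-solve-round skeleton; relax, solve, round_scheme, and
# fractional_welfare are placeholders, not objects defined in the lecture.
def relax_solve_round(problem, relax, solve, round_scheme, fractional_welfare):
    P = relax(problem)                               # 1. relax to an LP/convex program over polytope P
    x_star = solve(P, objective=fractional_welfare)  # 2. solve the relaxed problem
    return round_scheme(x_star)                      # 3. round x* to an integral solution in Omega
```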

5-7. Example of Relax-Solve-Round: CA
The relaxation:
    maximize    ∑_{i,A} min(1, ∑_{j covers A} x_{ij})
    subject to  ∑_i x_{ij} ≤ 1   for all items j
                x_{ij} ≥ 0       for all players i and items j
[Figure: a player's capability space with capabilities A, B, C and fractional item assignments such as 0.25 and 0.5]
Observe: the objective is concave, so this is a convex optimization problem solvable in polynomial time via the ellipsoid method.
But the resulting optimal solution x* may, in general, be fractional.
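As an illustration, here is a hedged sketch of this relaxation for coverage valuations, using cvxpy as a stand-in convex solver (the lecture only needs some polynomial-time method, e.g. the ellipsoid algorithm); the `capabilities[i]` data format is an assumption made for the example:

```python
import cvxpy as cp

def solve_relaxation(n_players, n_items, capabilities):
    # capabilities[i] = list of capabilities of player i; each capability A is
    # a collection of item indices, and item j "covers" A iff j is in A.
    x = cp.Variable((n_players, n_items), nonneg=True)
    welfare = 0
    for i in range(n_players):
        for A in capabilities[i]:
            # fractional value of capability A: min(1, sum_{j covers A} x_ij)
            welfare += cp.minimum(1, cp.sum(x[i, list(A)]))
    constraints = [cp.sum(x[:, j]) <= 1 for j in range(n_items)]  # each item sold at most once
    cp.Problem(cp.Maximize(welfare), constraints).solve()
    return x.value  # the (possibly fractional) optimum x*
```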

8-10. Example of Relax-Solve-Round: CA (continued)
Classical independent rounding algorithm: independently for each item j, give item j to player i with probability x*_{ij}.
[Figure: the fractional solution (0.25, 0.25, 0.5, ...) and one realization of the rounded, integral allocation]
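A minimal sketch of classical independent rounding as stated above; since ∑_i x*_{ij} ≤ 1, each item can be given to at most one player (or to nobody) with the stated marginal probabilities:

```python
import random

def independent_rounding(x, rng=random.Random()):
    # x[i][j]: fractional assignment with sum_i x[i][j] <= 1 for every item j.
    n_players, n_items = len(x), len(x[0])
    allocation = [set() for _ in range(n_players)]
    for j in range(n_items):
        u = rng.random()
        cumulative = 0.0
        for i in range(n_players):
            cumulative += x[i][j]        # player i wins item j w.p. x[i][j]
            if u < cumulative:
                allocation[i].add(j)
                break                    # at most one player receives item j
    return allocation
```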

11-13. Fact: classical independent rounding of the optimal fractional solution gives a (1 − 1/e)-approximation algorithm for welfare maximization.
Proof sketch: fix the fractional solution x and a player i, and write x_j for x_{ij}. It suffices to show that each capability A is covered with probability at least (1 − 1/e) · min(1, ∑_{j covers A} x_j). Indeed,
    Pr[cover A] = 1 − ∏_{j covers A} (1 − x_j)
                ≥ 1 − ∏_{j covers A} e^{−x_j}
                = 1 − exp(−∑_{j covers A} x_j)
                ≥ (1 − 1/e) · min(1, ∑_{j covers A} x_j),
using 1 − x ≤ e^{−x} and the concavity of 1 − e^{−s}.
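A small, self-contained numeric spot-check (illustrative only, not part of the lecture) of the two inequalities used in this chain, 1 − ∏(1 − x_j) ≥ 1 − exp(−∑ x_j) ≥ (1 − 1/e)·min(1, ∑ x_j):

```python
import itertools, math

def chain_holds(xs, tol=1e-12):
    cover = 1 - math.prod(1 - x for x in xs)  # Pr[cover A] under independent rounding
    s = sum(xs)
    return cover >= 1 - math.exp(-s) >= (1 - 1 / math.e) * min(1, s) - tol

grid = [k / 4 for k in range(5)]              # x_j in {0, 0.25, 0.5, 0.75, 1}
assert all(chain_holds(xs) for r in range(1, 4)
           for xs in itertools.product(grid, repeat=r))
```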

14-15. Approximation and Truthfulness
Difficulty: most approximation algorithms in this framework are not MIDR, and hence cannot be made truthful. This is due to a "lack of structure" in the rounding step.
Another difficulty: the Lavi-Swamy approach does not seem to apply here.
- Welfare is non-linear in the encoding of solutions.
- Interpreting a fractional solution as a distribution over integer solutions (i.e., rounding) is no longer lossless.
- Optimizing over the set P of fractional solutions is no longer equivalent to optimizing over the corresponding distributions { D_x : x ∈ P }.

16-19. Proposal: Anticipate the Rounding Algorithm
1. Relax: maximize E[welfare(r(x))] (rather than welfare(x)) subject to x ∈ P.
2. Solve: let x* be the optimal solution of the relaxation.
3. Round: output r(x*).
Usually we solve the relaxation and then round the fractional solution. As we discussed, the rounding "disconnects" the fractional optimization problem over P from the MIDR optimization problem over { r(x) : x ∈ P }. Instead, incorporate the rounding into the objective: find the fractional solution with the best rounded image.
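A sketch of how the pipeline changes (placeholders as before); the only difference from the earlier relax_solve_round skeleton is the objective handed to the solver:

```python
# The objective is now the expected welfare of the *rounded* point, so the
# optimum x* is already best within the range {distribution of r(x) : x in P}.
def anticipated_relax_solve_round(problem, relax, solve, round_scheme,
                                  expected_rounded_welfare):
    P = relax(problem)
    x_star = solve(P, objective=expected_rounded_welfare)  # max E[welfare(r(x))] over P
    return round_scheme(x_star)                            # output r(x*): an MIDR allocation rule
```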


25-26. Lemma: for any rounding scheme r, the above algorithm (maximize E[welfare(r(x))] over x ∈ P, then output r(x*)) is maximal in distributional range: it maximizes expected welfare over the range of the rounding scheme r, namely { D_x : x ∈ P } where D_x is the distribution of r(x).
Difficulty: for most traditional rounding schemes r, this optimization is NP-hard.

27-30. NP-Hardness of Anticipating Classical Independent Rounding
- r(x) = x for every integer solution x.
- Hence the distributional range { r(x) : x ∈ P } includes all integer solutions.
- So implementing the MIDR allocation rule requires solving the integral welfare-maximization problem, which is NP-hard.
Next up: a rounding algorithm which is easier to anticipate!

31-39. Rounding Algorithms for CA
[Figure: fractional assignments x_{ij} (e.g. 0.25, 0.5) alongside the corresponding Poisson rounding probabilities 1 − e^{−x_{ij}} (e.g. 0.22, 0.39)]
Classical independent rounding(x): independently for each item j, give item j to player i with probability x_{ij}. Optimizing E[welfare(r(x))] over all x ∈ P is NP-hard.
Poisson rounding(x): independently for each item j, give item j to player i with probability 1 − e^{−x_{ij}}. Here E[welfare(r(x))] can be optimized over x ∈ P in polynomial time!
Note: (1 − 1/e) x ≤ 1 − e^{−x} ≤ x.
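A minimal sketch of the Poisson rounding scheme exactly as stated above; because 1 − e^{−x} ≤ x, the per-item winning probabilities sum to at most 1, so each item goes to at most one player (possibly to nobody):

```python
import math, random

def poisson_rounding(x, rng=random.Random()):
    # x[i][j]: fractional assignment with sum_i x[i][j] <= 1 for every item j.
    n_players, n_items = len(x), len(x[0])
    allocation = [set() for _ in range(n_players)]
    for j in range(n_items):
        u = rng.random()
        cumulative = 0.0
        for i in range(n_players):
            cumulative += 1 - math.exp(-x[i][j])   # player i wins item j w.p. 1 - e^{-x_ij}
            if u < cumulative:
                allocation[i].add(j)
                break                              # at most one player receives item j
    return allocation
```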

40-42. Proof Overview
Theorem (Dughmi, Roughgarden, and Yan '11): there is a polynomial-time, (1 − 1/e)-approximate, MIDR algorithm for combinatorial auctions with coverage valuations.
Lemma (polynomial-time solvability): the expected welfare of rounding x ∈ P is a concave function of x. This implies that finding the rounding-optimal fractional solution is a convex optimization problem, solvable in polynomial time*.
Lemma (approximation): for every set of coverage valuations and every integer solution y ∈ P, E[welfare(r(y))] ≥ (1 − 1/e) welfare(y). This implies that optimizing the expected welfare of the rounded solution over P gives a (1 − 1/e)-approximation algorithm.

43-46. Proof: Polynomial-time Solvability
Proof. Fix a fractional solution { x_ij }, where x_ij is the fraction of item j given to player i. Poisson rounding gives item j to player i with probability 1 − e^{−x_ij}. Let the random variable S_i denote the set of items given to player i; we want to show that E[∑_i v_i(S_i)] is concave in the variables x_ij. By linearity of expectation, and since concavity is preserved under addition, it suffices to show that E[v_i(S_i)] is concave for each fixed player i.

47-50. Proof: Polynomial-time Solvability (continued)
[Figure: player i's capability space with capabilities A, B, C; item 1 covers A and C, item 2 covers B and C; fractions x_1, x_2 are rounded with probabilities 1 − e^{−x_1}, 1 − e^{−x_2}]
E[v_i(S_i)] = Pr[cover A] + Pr[cover B] + Pr[cover C], so it suffices to show each term is concave:
    Pr[cover A] = 1 − e^{−x_1},   Pr[cover B] = 1 − e^{−x_2},   Pr[cover C] = 1 − e^{−(x_1 + x_2)}.
In general,
    Pr[cover D] = 1 − ∏_{j covers D} e^{−x_j} = 1 − exp(−∑_{j covers D} x_j),
which is a concave function of x.
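Putting the pieces together, a hedged cvxpy sketch of the anticipated relaxation, i.e. the concave program the MIDR algorithm solves (cvxpy and the `capabilities` format are the same assumptions as in the earlier relaxation sketch; solving needs a solver that handles the exponential cone, e.g. SCS):

```python
import cvxpy as cp

def solve_anticipated_relaxation(n_players, n_items, capabilities):
    x = cp.Variable((n_players, n_items), nonneg=True)
    expected_welfare = 0
    for i in range(n_players):
        for A in capabilities[i]:
            # Pr[player i covers A under Poisson rounding] = 1 - exp(-sum_{j in A} x_ij),
            # a concave function of x, so the whole objective is concave.
            expected_welfare += 1 - cp.exp(-cp.sum(x[i, list(A)]))
    constraints = [cp.sum(x[:, j]) <= 1 for j in range(n_items)]
    cp.Problem(cp.Maximize(expected_welfare), constraints).solve()
    return x.value  # x*; outputting poisson_rounding(x*) gives the MIDR allocation
```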

51-54. Proof: Approximation
Fix a player i and an integer solution y. It suffices to show that each capability A covered in y is covered with probability at least 1 − 1/e in r(y). Since A is covered in y, there is an item j covering A with y_ij = 1, and Poisson rounding gives item j to player i with probability 1 − e^{−y_ij} = 1 − 1/e.

55. Proof Overview (recap of the theorem and the two lemmas from slides 40-42).

56. Relation to Lavi/Swamy
Lavi-Swamy can be interpreted as rounding anticipation for a "simple" convex rounding algorithm:
- The rounding algorithm r rounds a fractional point x of the LP to a distribution D_x with expectation x/α.
- By linearity, the LP objective v^T x and the expected welfare of the rounded solution, E[v^T r(x)] = v^T x / α, are the same up to the universal scaling factor α.
- Therefore, solving the LP optimizes over the range of distributions produced by the rounding algorithm r.

57. Outline
1. Review
2. Rounding Anticipation
3. Characterizations of Incentive Compatibility
   - Direct Characterization
   - Characterizing the Allocation Rule
4. Lower Bounds in Prior-Free AMD

58. Characterizing Incentive Compatible Mechanisms
Recall the monotonicity characterization of truthful mechanisms for single-parameter problems. There are characterizations for general (non-single-parameter) mechanism design problems as well; however, they are more complex and nuanced. Nevertheless, they are useful for lower bounds.

59-62. Taxation Principle
For each player i and fixed reports v_{−i} of the other players:
- A truthful mechanism fixes a menu of distributions over allocations, with associated prices.
  [Figure: the others' reports V_2, V_3 fix a menu of two lotteries priced at $15 and $10; player 1's report V_1 selects an entry]
- When player i reports v_i, the mechanism chooses the distribution/price pair (D, p) maximizing E_{ω∼D}[v_i(ω)] − p, allocates a sample ω ∼ D, and charges player i the price p.
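An illustrative sketch of the choice rule described by the taxation principle (the menu encoding below is an assumption made for the example, not from the lecture):

```python
import random

def taxation_choice(menu, v_i, rng=random.Random()):
    # menu: list of (distribution, price) pairs fixed by the reports v_{-i};
    # a distribution is a list of (outcome, probability) pairs, and v_i maps
    # an outcome to player i's value for it.
    def expected_utility(entry):
        distribution, price = entry
        return sum(p * v_i(outcome) for outcome, p in distribution) - price
    D, price = max(menu, key=expected_utility)              # utility-maximizing menu entry
    outcomes, probs = zip(*D)
    outcome = rng.choices(outcomes, weights=probs, k=1)[0]  # sample omega ~ D
    return outcome, price                                   # allocate omega, charge price p
```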

63. Cycle Monotonicity
The most general characterization of dominant-strategy implementable allocation rules.
Cycle monotonicity: an allocation rule f is cycle monotone if for every player i, every valuation profile v_{−i} ∈ V_{−i} of the other players, every integer k ≥ 0, and every sequence v_i^1, ..., v_i^k ∈ V_i of k valuations for player i, the following holds:
    ∑_{j=1}^{k} [ v_i^j(ω_j) − v_i^j(ω_{j+1}) ] ≥ 0,
where ω_j denotes f(v_i^j, v_{−i}) for all j ∈ {1, ..., k}, and ω_{k+1} = ω_1.
Theorem: for every mechanism design problem, an allocation rule f is dominant-strategy implementable if and only if it is cycle monotone.
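For intuition, a brute-force check of this condition on a finite set of candidate valuations for player i, with v_{−i} fixed (all names are illustrative; a finite spot-check, not a proof of implementability):

```python
import itertools

def is_cycle_monotone(f, candidate_valuations, max_cycle_length=4):
    # f maps a valuation of player i (a function from outcomes to reals, with
    # v_{-i} held fixed) to the chosen outcome omega = f(v_i, v_{-i}).
    for k in range(2, max_cycle_length + 1):
        for cycle in itertools.product(candidate_valuations, repeat=k):
            outcomes = [f(v) for v in cycle]
            total = sum(cycle[j](outcomes[j]) - cycle[j](outcomes[(j + 1) % k])
                        for j in range(k))
            if total < -1e-9:                 # tolerance for floating point
                return False
    return True
```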

64. Weak Monotonicity
The special case of cycle monotonicity for cycles of length 2.
Weak monotonicity: an allocation rule f is weakly monotone if for every player i, every valuation profile v_{−i} ∈ V_{−i} of the other players, and every pair of valuations v_i, v_i' ∈ V_i of player i, the following holds:
    v_i(ω) − v_i(ω') ≥ v_i'(ω) − v_i'(ω'),
where ω = f(v_i, v_{−i}) and ω' = f(v_i', v_{−i}).
This is necessary for all mechanism design problems. For problems with a convex domain, it is also sufficient.
Theorem [Saks, Yu]: for every mechanism design problem where each V_i ⊆ R^Ω is a convex set of functions, an allocation rule f is dominant-strategy implementable if and only if it is weakly monotone.
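The corresponding 2-cycle check, with the same illustrative conventions as the cycle-monotonicity sketch above:

```python
def is_weakly_monotone(f, candidate_valuations):
    for v in candidate_valuations:
        for v_prime in candidate_valuations:
            omega, omega_prime = f(v), f(v_prime)
            # require v(omega) - v(omega') >= v'(omega) - v'(omega')
            if v(omega) - v(omega_prime) < v_prime(omega) - v_prime(omega_prime) - 1e-9:
                return False
    return True
```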
