
Gross Substitutes Tutorial
Part I: Combinatorial structure and algorithms (Renato Paes Leme, Google)
Part II: Economics and the boundaries of substitutability (Inbal Talgam-Cohen, Hebrew University)

Three seemingly-independent problems


  1. Walrasian tatonnement • This process always ends; otherwise prices go to infinity. • When it ends (in the limit ε → 0): S_i ∈ D(v_i; p). • What else? The only condition left is that ∪_i S_i = [n]. • For that we need: S_i ⊆ X_i ∈ D(v_i; p). • Definition: A valuation satisfies gross substitutes if for all prices p ≤ p′ and S ∈ D(v; p) there is X ∈ D(v; p′) s.t. S ∩ {i : p_i = p′_i} ⊆ X. • With the new definition, the algorithm always keeps a partition.
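The price-adjustment process above can be sketched concretely for unit-demand buyers (one of the GS classes discussed below). This is a minimal illustration of mine, with made-up item values and step size, not code from the talk:

```python
def demand(values, prices):
    # Unit-demand buyer: pick the single item maximizing values[i] - prices[i],
    # or nothing if every item yields negative utility.
    best, best_u = None, 0.0
    for i, v in enumerate(values):
        if v - prices[i] > best_u:
            best, best_u = i, v - prices[i]
    return best

def tatonnement(buyers, n_items, eps=0.01, max_rounds=10**6):
    # Raise the prices of over-demanded items until the demanded sets
    # form a (partial) partition of the items.
    prices = [0.0] * n_items
    for _ in range(max_rounds):
        demanded = [demand(v, prices) for v in buyers]
        over = [sum(1 for d in demanded if d == j) for j in range(n_items)]
        if all(c <= 1 for c in over):
            return prices, demanded
        for j in range(n_items):
            if over[j] > 1:
                prices[j] += eps
    raise RuntimeError("tatonnement did not clear the market")
```

With buyers valuing the two items at [3, 1] and [2, 1], both initially demand item 0; its price rises until one buyer switches to item 1, and the process stops with the two buyers demanding distinct items.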

  10. Walrasian equilibrium • Theorem [Kelso-Crawford]: If all agents have GS valuations, then a Walrasian equilibrium always exists. • Some examples of GS: • additive functions: v(S) = Σ_{i∈S} v(i) • unit-demand: v(S) = max_{i∈S} v(i) • matching valuations: v(S) = max matching from S • matroid-matching • Open: GS ?= matroid-matching

  11. Walrasian equilibrium • Theorem [Kelso-Crawford]: If all agents have GS valuations, then a Walrasian equilibrium always exists. • Theorem [Gul-Stacchetti]: If a class of valuations C contains all unit-demand valuations and a Walrasian equilibrium always exists for profiles of valuations from C, then C ⊆ GS.

  12. Valuated Matroids • Given vectors v_1, …, v_m ∈ Q^n, define ψ_p(v_1, …, v_n) = n if det(v_1, …, v_n) = p^(−n) · a/b, where p is prime and a, b ∈ Z are not divisible by p. • Question in algebra: choose v_i ∈ V minimizing ψ_p(v_1, …, v_n) s.t. det(v_1, …, v_n) ≠ 0. • Solution is a greedy algorithm: start with any non-degenerate set, go over each item, and replace it by the one that minimizes ψ_p(v_1, …, v_n). • [Dress-Wenzel]: the Grassmann-Plücker relations look like a matroid condition.

  13. Valuated Matroids • Definition: a function v : ([n] choose k) → R is a valuated matroid if “greedy is optimal”.


  15. Matroidal maps • Definition: a function v : 2^[n] → R is a matroidal map if for every p ∈ R^n, a set in D(v; p) can be obtained by the greedy algorithm: S_0 = ∅ and S_t = S_{t−1} ∪ {i_t} for i_t ∈ argmax_i v_p(i | S_{t−1}). • Definition: a subset system M ⊆ 2^[n] is a matroid if for every p ∈ R^n the problem max_{S ∈ M} p(S) can be solved by the greedy algorithm.
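As an illustration (my own sketch, not code from the talk): for the weighted rank of a uniform matroid — "sum of the k most valuable items in S", a standard GS valuation — the greedy rule above recovers an optimal demand set, which we can confirm against brute force. The specific values, prices, and k are made-up:

```python
from itertools import combinations

def v(S, values, k=2):
    # Weighted uniform-matroid rank: sum of the k largest values in S.
    return sum(sorted((values[i] for i in S), reverse=True)[:k])

def greedy_demand(values, prices, k=2):
    # Greedy construction of a set in D(v; p): start from the empty set and
    # repeatedly add the item with the largest positive marginal v_p(i | S).
    S = set()
    while True:
        best_i, best_gain = None, 0.0
        for i in range(len(values)):
            if i not in S:
                gain = v(S | {i}, values, k) - v(S, values, k) - prices[i]
                if gain > best_gain:
                    best_i, best_gain = i, gain
        if best_i is None:
            return S
        S.add(best_i)

def brute_force_demand(values, prices, k=2):
    # Cross-check: exhaustive search over all subsets.
    n = len(values)
    best_S, best_u = set(), 0.0
    for r in range(n + 1):
        for T in combinations(range(n), r):
            u = v(T, values, k) - sum(prices[i] for i in T)
            if u > best_u:
                best_S, best_u = set(T), u
    return best_S
```

For GS valuations the greedy and brute-force answers coincide; for a non-GS valuation the greedy run can get stuck in a suboptimal set.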

  16. Discrete Concavity • A function f : R^n → R is convex iff for all p ∈ R^n, any local minimum of f_p(x) = f(x) − ⟨p, x⟩ is a global minimum. • Also, gradient descent converges for convex functions. • We want to extend this notion to functions v : 2^[n] → R on the hypercube (or v : Z^[n] → R on the lattice, or other discrete sets such as the bases of a matroid).


  19. Discrete Concavity • A function v : 2^[n] → R is discrete concave if for all p ∈ R^n, all local maxima of v_p are global maxima. I.e., if
  v_p(S) ≥ v_p(S ∪ i), ∀ i ∉ S
  v_p(S) ≥ v_p(S \ j), ∀ j ∈ S
  v_p(S) ≥ v_p(S ∪ i \ j), ∀ i ∉ S, j ∈ S
 then v_p(S) ≥ v_p(T), ∀ T ⊆ [n]. In particular, local search always converges. • [Murota ’96] M-concave functions (generalize valuated matroids); [Murota-Shioura ’99] M♮-concave functions.


  22. Equivalence • [Fujishige-Yang] A function v : 2^[n] → R is gross substitutes iff it is a matroidal map iff it is discrete concave.
  • valuated matroids [Dress-Wenzel ’91] — generalize Grassmann-Plücker relations
  • gross substitutes [Kelso-Crawford ’82] — necessary/“sufficient” condition for price adjustment to converge
  • M-discrete concave [Murota-Shioura ’99] — generalize convexity to discrete domains
  • matroidal maps
 • In particular, some S ∈ D(v; p) can be computed in poly-time. • Proof through discrete differential equations.


  24. Discrete Differential Equations • Given a function v : 2^[n] → R we define the discrete derivative with respect to i ∈ [n] as the function ∂_i v : 2^([n]\i) → R given by:
  ∂_i v(S) = v(S ∪ i) − v(S) (another name for the marginal)
 • If we apply it twice we get:
  ∂_ij v(S) := ∂_j ∂_i v(S) = v(S ∪ ij) − v(S ∪ i) − v(S ∪ j) + v(S)
 • Submodularity: ∂_ij v(S) ≤ 0.

  25. Discrete Differential Equations • [Reijnierse, van Gellekom, Potters] A function v : 2^[n] → R is in gross substitutes iff it satisfies:
  ∂_ij v(S) ≤ max(∂_ik v(S), ∂_kj v(S)) ≤ 0
 a condition on the discrete Hessian. • Idea: a function is in GS iff there is no price p such that D(v; p) = {S, S ∪ ij} or D(v; p) = {S ∪ k, S ∪ ij}. If v is not submodular, we can construct a price of the first type. If ∂_ij v(S) > max(∂_ik v(S), ∂_kj v(S)), then we can find a certificate of the second type.

  26. Algorithmic Problems • Welfare problem: given m agents with valuations v_1, …, v_m : 2^[n] → R, find a partition S_1, …, S_m of [n] maximizing Σ_i v_i(S_i). • Verification problem: given a partition S_1, …, S_m, decide whether it is optimal. • Walrasian prices: given the optimal partition S*_1, …, S*_m, find a price p such that S*_i ∈ argmax_S v_i(S) − p(S).
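For intuition, the welfare problem can be stated directly as code. This brute-force sketch of mine (exponential in the number of items, so only for tiny instances) enumerates all assignments of items to agents:

```python
from itertools import product

def welfare_opt(valuations, n_items):
    # Try all m^n ways to assign each item to one of the m agents.
    m = len(valuations)
    best_w, best_alloc = float("-inf"), None
    for assign in product(range(m), repeat=n_items):
        bundles = [frozenset(j for j in range(n_items) if assign[j] == i)
                   for i in range(m)]
        w = sum(v(S) for v, S in zip(valuations, bundles))
        if w > best_w:
            best_w, best_alloc = w, bundles
    return best_w, best_alloc
```

The algorithmic question of the tutorial is how to match this on GS valuations in polynomial time with oracle access to the v_i.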

  27. Algorithmic Problems • Techniques:
  • Tatonnement
  • Linear Programming
  • Gradient Descent
  • Cutting Plane Methods
  • Combinatorial Algorithms

  28. Linear Programming • [Nisan-Segal] Formulate the welfare problem as an LP, relaxing x_iS ∈ {0, 1} to x_iS ∈ [0, 1]:

  primal:  max Σ_i Σ_S v_i(S) · x_iS
           s.t. Σ_S x_iS = 1, ∀ i ∈ [m]
                Σ_i Σ_{S ∋ j} x_iS = 1, ∀ j ∈ [n]
                x_iS ∈ [0, 1]

  dual:    min Σ_i u_i + Σ_j p_j
           s.t. u_i ≥ v_i(S) − Σ_{j ∈ S} p_j, ∀ i, S
                p_j ≥ 0, u_i ≥ 0

  31. Linear Programming • For GS, the LP is integral: W_IP ≤ W_LP = W_D-LP. • Consider a Walrasian equilibrium, with p the Walrasian prices and u the agent utilities. Then (u, p) is a feasible solution to the dual, so: W_D-LP ≤ W_eq = W_IP.


  33. Linear Programming • In general, a Walrasian equilibrium exists iff the LP has an integral optimal solution.

  34. Linear Programming • Separation oracle for the dual: checking u_i ≥ max_S [v_i(S) − p(S)] is exactly the demand oracle problem.


  36. Linear Programming • Walrasian equilibrium exists + demand oracle in poly-time ⇒ welfare problem in poly-time. • [Roughgarden, Talgam-Cohen] Use complexity theory to show non-existence of equilibrium, e.g. for budget-additive valuations.

  37. Gradient Descent • We can Lagrangify the dual constraints and obtain the following convex potential function:
  φ(p) = Σ_i max_S [v_i(S) − p(S)] + Σ_j p_j
 • Theorem: the set of Walrasian prices (when they exist) is the set of minimizers of φ. • Subgradient: ∂_j φ(p) = 1 − Σ_i 1[j ∈ S_i], for S_i ∈ D(v_i; p). • Gradient descent: increase the price of over-demanded items and decrease the price of under-demanded items. • Tatonnement: p_j ← p_j − ε · sgn(∂_j φ(p)).
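A tiny sanity check of the potential (my own illustration, on a made-up two-item, two-buyer unit-demand market): evaluating φ on a coarse price grid, its minimum value equals the optimal welfare and is attained at Walrasian prices.

```python
from itertools import product

buyers = [[3.0, 1.0], [2.0, 1.0]]  # unit-demand values: v_i(S) = max_{j in S} v_i(j)

def phi(p):
    # phi(p) = sum_i max_S [v_i(S) - p(S)] + sum_j p_j.
    # For a unit-demand buyer the inner max is max(0, max_j v_i(j) - p_j).
    total = sum(p)
    for vals in buyers:
        total += max(0.0, max(v - q for v, q in zip(vals, p)))
    return total

grid = [i * 0.25 for i in range(13)]           # prices in {0, 0.25, ..., 3}
best = min(product(grid, repeat=2), key=phi)   # grid minimizer of phi
```

Here the optimal welfare is 4 (buyer 1 takes item 1, buyer 2 takes item 2), so min φ = 4, attained on the lattice of Walrasian price vectors; the subgradient formula above is what tatonnement follows to reach such a minimizer.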



  42. How to access the input • Value oracle: given i and S, query v_i(S). • Demand oracle: given i and p, query some S ∈ D(v_i, p). • Aggregate demand oracle: given p, query the aggregate demand Σ_i S_i, for S_i ∈ D(v_i, p).


  45. Comparing Methods

  method          | oracle         | running-time
  tatonnement/GD  | aggreg demand  | pseudo-poly
  linear program  | demand/value   | weakly-poly
  cutting plane   | aggreg demand  | weakly-poly

 • [PL-Wong]: We can compute an exact equilibrium with Õ(n) calls to an aggregate demand oracle.


  48. Comparing Methods

  method          | oracle         | running-time
  tatonnement/GD  | aggreg demand  | pseudo-poly
  linear program  | demand/value   | weakly-poly
  cutting plane   | aggreg demand  | weakly-poly
  combinatorial   | value          | strongly-poly

 • [Murota]: We can compute an exact equilibrium for gross substitutes in Õ((mn + n³) · T_V) time, where T_V is the cost of a value query.

  49. Algorithmic Problems • Welfare problem: given m agents with valuations v_1, …, v_m : 2^[n] → R, find a partition S_1, …, S_m of [n] maximizing Σ_i v_i(S_i). • Verification problem: given a partition S_1, …, S_m, decide whether it is optimal. • Walrasian prices: given the optimal partition S*_1, …, S*_m, find a price p such that S*_i ∈ argmax_S v_i(S) − p(S).

  50. Computing Walrasian prices • Given a partition S_1, …, S_m we want to find prices p such that S_i ∈ argmax_S v_i(S) − p(S). • For GS, we only need to check that no buyer wants to add, remove or swap items. This is captured by an exchange graph with edge weights:
  w_{jk} = v_i(S_i) − v_i(S_i ∪ k \ j)   (agent i swaps j ∈ S_i for k)
  w_{φ_i k} = v_i(S_i) − v_i(S_i ∪ k)   (agent i adds k)
  w_{j φ_{i′}} = v_i(S_i) − v_i(S_i \ j)   (agent i removes j)
  w_{φ_i φ_{i′}} = 0


  60. Computing Walrasian prices • Theorem: the allocation is optimal iff the exchange graph has no negative cycle. • Proof: if there are no negative cycles, the distance dist(φ, ·) is well defined. So let p_j = −dist(φ, j); then dist(φ, k) ≤ dist(φ, j) + w_jk, i.e. v_i(S_i) ≥ v_i(S_i ∪ k \ j) − p_k + p_j. And since S_i is locally optimal, it is globally optimal. Conversely: Walrasian prices are a dual certificate showing that no negative cycle exists. • Nice consequence: Walrasian prices form a lattice.
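The shortest-path construction can be sketched directly (my own illustration with Bellman-Ford and unit-demand buyers; the encoding with a single dummy node φ is an assumption about the exchange graph, and the allocation below is made-up):

```python
def walrasian_prices(valuations, alloc, n_items):
    # Exchange graph: item nodes 0..n-1 plus a dummy node PHI.
    # For agent i holding S_i: swap edges j -> k, add edges PHI -> k,
    # drop edges j -> PHI, each weighted by the loss in v_i.
    PHI = n_items
    edges = []
    for v, S in zip(valuations, alloc):
        base = v(S)
        for j in S:
            edges.append((j, PHI, base - v(S - {j})))                # drop j
            for k in range(n_items):
                if k not in S:
                    edges.append((j, k, base - v((S - {j}) | {k})))  # swap j for k
        for k in range(n_items):
            if k not in S:
                edges.append((PHI, k, base - v(S | {k})))            # add k
    # Bellman-Ford from PHI; then p_j = -dist(PHI, j).
    dist = [float("inf")] * (n_items + 1)
    dist[PHI] = 0.0
    for _ in range(n_items + 1):
        for a, b, w in edges:
            if dist[a] + w < dist[b]:
                dist[b] = dist[a] + w
    for a, b, w in edges:          # a further relaxation means a negative
        if dist[a] + w < dist[b]:  # cycle: the allocation is not optimal
            return None
    return [-dist[j] for j in range(n_items)]

def unit_demand(values):
    return lambda S: max((values[j] for j in S), default=0)
```

For the two-buyer market with values [3, 1] and [2, 1] and the optimal allocation ({0}, {1}), this returns the Walrasian prices (1, 0); for the suboptimal allocation ({1}, {0}) it finds a negative cycle and reports failure.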

  61. Algorithmic Problems • Welfare problem: given m agents with valuations v_1, …, v_m : 2^[n] → R, find a partition S_1, …, S_m of [n] maximizing Σ_i v_i(S_i). • Verification problem: given a partition S_1, …, S_m, decide whether it is optimal. • Walrasian prices: given the optimal partition S*_1, …, S*_m, find a price p such that S*_i ∈ argmax_S v_i(S) − p(S).


  64. Incremental Algorithm • For each t = 1..n we solve problem W_t: find the optimal allocation of the items [t] = {1..t} to the m buyers. • Problem W_1 is easy. • Assume now we solved W_t, getting an allocation S_1, …, S_m and a certificate p = maximal Walrasian prices. The price-adjusted exchange-graph weights are:
  w_{jk} = v_i(S_i) − v_i(S_i ∪ k \ j) + p_k − p_j
  w_{j φ_{i′}} = v_i(S_i) − v_i(S_i \ j) − p_j
  w_{φ_i k} = v_i(S_i) − v_i(S_i ∪ k) + p_k


  67. Incremental Algorithm • Algorithm: compute a shortest path from φ to the new item t + 1. • Update the allocation by implementing the swaps along the path.

