Optimization of Submodular Functions Tutorial - Lecture II


  1. Optimization of Submodular Functions Tutorial - Lecture II. Jan Vondrák (IBM Almaden Research Center, San Jose, CA). Slide 1 / 24.

  2. Outline. Lecture I: (1) Submodular functions: what and why? (2) Convex aspects: submodular minimization. (3) Concave aspects: submodular maximization. Lecture II: (1) Hardness of constrained submodular minimization. (2) Unconstrained submodular maximization. (3) Hardness more generally: the symmetry gap. Slide 2 / 24.

  4. Hardness of constrained submodular minimization. We saw: submodular minimization is in P (without constraints, and also under "parity type" constraints). However, minimization is brittle and can become very hard to approximate under simple constraints: Ω(√(n / log n))-hardness for min { f(S) : |S| ≥ k }, Submodular Load Balancing, and Submodular Sparsest Cut [Svitkina,Fleischer ’09]; n^{Ω(1)}-hardness for Submodular Spanning Tree, Submodular Perfect Matching, and Submodular Shortest Path [Goel,Karande,Tripathi,Wang ’09]. These hardness results assume the value oracle model: the only access to f is through value queries, f(S) = ? Slide 3 / 24.

  5. Superconstant hardness for submodular minimization. Problem: min { f(S) : |S| ≥ k }. Construction of [Goemans,Harvey,Iwata,Mirrokni ’09]: let A be a random (hidden) set of size k = √n, and define f(S) = min { √n, |S \ A| + min { log n, |S ∩ A| } }. Analysis: with high probability, a value query gives no information about A, so an algorithm will return a set of value √n, while the optimum S = A has value log n. Slide 4 / 24.
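The gap in this construction can be checked numerically. The sketch below is illustrative only: the helper name `make_hard_instance` and the concrete parameters are not from the slides.

```python
import math
import random

def make_hard_instance(n, seed=0):
    """Sketch of the [Goemans,Harvey,Iwata,Mirrokni '09] construction:
    a hidden random set A of size k = sqrt(n), and
    f(S) = min( sqrt(n), |S \\ A| + min(log n, |S ∩ A|) )."""
    rng = random.Random(seed)
    k = math.isqrt(n)
    A = set(rng.sample(range(n), k))
    cap, inner_cap = math.sqrt(n), math.log(n)

    def f(S):
        S = set(S)
        return min(cap, len(S - A) + min(inner_cap, len(S & A)))

    return f, A, k

n = 10_000
f, A, k = make_hard_instance(n)
print(f(A))              # the optimum S = A has value log n ≈ 9.21
print(f(set(range(k))))  # a "blind" feasible set of size k: value sqrt(n) = 100 (w.h.p.)
```

Any set of size k that an algorithm finds without locating A overlaps A in at most log n elements with overwhelming probability, so its value is exactly √n, matching the Ω(√(n / log n)) gap.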

  6. Overview of submodular minimization. CONSTRAINED SUBMODULAR MINIMIZATION:

  Constraint            Approximation   Hardness       Reference
  Vertex cover          2               2 [UGC]        Khot,Regev ’03
  k-unif. hitting set   k               k [UGC]        Khot,Regev ’03
  k-way partition       2 − 2/k         2 − 2/k        Ene,V.,Wu ’12
  Facility location     log n           log n          Svitkina,Tardos ’07
  Set cover             n               n / log² n     Iwata,Nagano ’09
  |S| ≥ k               Õ(√n)           Ω̃(√n)          Svitkina,Fleischer ’09
  Sparsest Cut          Õ(√n)           Ω̃(√n)          Svitkina,Fleischer ’09
  Load Balancing        Õ(√n)           Ω̃(√n)          Svitkina,Fleischer ’09
  Shortest path         O(n^{2/3})      Ω(n^{2/3})     GKTW ’09
  Spanning tree         O(n)            Ω(n)           GKTW ’09

  Slide 5 / 24.



  9. Maximization of a nonnegative submodular function. We saw: maximizing a submodular function is NP-hard (Max Cut). Unconstrained submodular maximization: given a submodular function f : 2^N → R_+, how well can we approximate the maximum? Special case, Max Cut: there is a polynomial-time 0.878-approximation [Goemans,Williamson ’95], best possible assuming the Unique Games Conjecture [Khot,Kindler,Mossel,O’Donnell ’04; Mossel,O’Donnell,Oleszkiewicz ’05]. Slide 7 / 24.
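The Max Cut special case can be made concrete: the cut function of a graph is the canonical nonnegative, non-monotone submodular function, and on tiny instances its maximum can be verified by brute force. A minimal sketch (the graph and function names are illustrative):

```python
from itertools import combinations

def cut_value(edges, S):
    """Number of edges with exactly one endpoint in S.
    Cut functions are nonnegative submodular (and non-monotone)."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

# A 4-cycle; it is bipartite, so some cut contains all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = max(cut_value(edges, S)
           for r in range(5)
           for S in combinations(range(4), r))
print(best)  # 4, achieved e.g. by S = {0, 2}
```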

  10. Optimal approximation for submodular maximization. Unconstrained submodular maximization, max_{S ⊆ N} f(S), has been resolved recently: there is a (randomized) 1/2-approximation [Buchbinder,Feldman,Naor,Schwartz ’12]; a (1/2 + ε)-approximation in the value oracle model would require exponentially many queries [Feige,Mirrokni,V. ’07]; a (1/2 + ε)-approximation for certain explicitly represented submodular functions would imply NP = RP [Dobzinski,V. ’12]. Slide 8 / 24.

  11. 1/2-approximation for submodular maximization [Buchbinder,Feldman,Naor,Schwartz ’12]. A double-greedy algorithm with two evolving solutions: initialize A = ∅ and B = N (everything). In each step, grow A or shrink B. Invariant: A ⊆ B.

  While A ≠ B {
    Pick i ∈ B \ A;
    Let α = max { f(A + i) − f(A), 0 }, β = max { f(B − i) − f(B), 0 };
    With probability α / (α + β), include i in A;
    With probability β / (α + β), remove i from B;
  }

  Slide 9 / 24.
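The double-greedy loop above translates almost line for line into code. This is a sketch, not the authors' implementation: it assumes the tie α = β = 0 may be broken arbitrarily, and uses a cut function as a stand-in objective.

```python
import random

def double_greedy(f, ground, rng=None):
    """Randomized double greedy [Buchbinder,Feldman,Naor,Schwartz '12]:
    maintain A ⊆ B; each element is either added to A or removed from B.
    Guarantee: E[f(A)] >= OPT / 2 for nonnegative submodular f."""
    rng = rng or random.Random(0)
    A, B = set(), set(ground)
    for i in ground:                      # each i ∈ B \ A is decided exactly once
        alpha = max(f(A | {i}) - f(A), 0.0)
        beta = max(f(B - {i}) - f(B), 0.0)
        # Convention (assumption): if alpha = beta = 0, either move is safe; include i.
        if alpha + beta == 0 or rng.random() < alpha / (alpha + beta):
            A = A | {i}
        else:
            B = B - {i}
    return A                              # at this point A = B

# Illustrative objective: cut function of a 4-cycle (OPT = 4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
def cut(S):
    return float(sum((u in S) != (v in S) for u, v in edges))

S = double_greedy(cut, range(4))
print(cut(S))  # guarantee: E[f(A)] >= OPT/2 = 2
```

On this toy instance the random choices barely matter: whichever side the first coin flip takes, the remaining marginals force one side of the bipartition, so the algorithm returns an optimal cut of value 4.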

