

1. Low-Degree Hardness of Random Optimization Problems. Alex Wein, Courant Institute, New York University. Joint work with David Gamarnik (MIT) and Aukosh Jagannath (Waterloo). 1 / 18

2. Random Optimization Problems. Examples:
◮ Max clique in a random graph
◮ Max-k-SAT on a random formula
◮ Maximizing a random degree-p polynomial over the sphere
Note: no planted solution.
Q: What is the typical value of the optimum (OPT)?
Q: What objective value can be reached algorithmically (ALG)?
Q: In cases where it seems hard to reach a particular objective value, can we understand why? In a unified way? 2 / 18

3. Max Independent Set. Example (max independent set): given a sparse graph G(n, d/n), solve
max_{S ⊆ [n]} |S|  s.t.  S independent.
OPT = (2 log d / d) n  [Frieze '90]
ALG = (log d / d) n  [Karp '76]
Is there a better algorithm? Structural evidence suggests no! [Achlioptas, Coja-Oghlan '08; Coja-Oghlan, Efthymiou '10]
Local algorithms achieve value ALG and no better [Gamarnik, Sudan '13; Rahman, Virág '14]. (A greedy sketch follows this slide.) 3 / 18
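To make the ALG line concrete, here is a minimal Python sketch, not from the talk, of the standard sequential greedy heuristic behind that bound; the graph sampling via networkx, the parameter choices, and the comparison line are all illustrative assumptions, and the (log d / d) n prediction is asymptotic in d.

```python
# Hedged sketch (not from the talk): sequential greedy independent set on G(n, d/n).
# The classical analysis behind the ALG line says greedy keeps roughly (log d / d) * n
# vertices, about half of OPT ~ (2 log d / d) * n, asymptotically in d.
import math
import random
import networkx as nx  # assumption: networkx is available for sampling G(n, d/n)

def greedy_independent_set(G, rng):
    """Scan vertices in random order; keep v if none of its neighbors was already kept."""
    kept = set()
    order = list(G.nodes())
    rng.shuffle(order)
    for v in order:
        if all(u not in kept for u in G.neighbors(v)):
            kept.add(v)
    return kept

n, d = 20_000, 25  # illustrative sizes
G = nx.gnp_random_graph(n, d / n, seed=1)
S = greedy_independent_set(G, random.Random(1))
print(f"greedy: {len(S)},  (log d / d) * n ≈ {n * math.log(d) / d:.0f}")
```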

4. Spherical Spin Glass. Example (spherical p-spin model): for Y ∈ (R^n)^{⊗p} with i.i.d. N(0,1) entries, solve
max_{‖v‖=1} (1/√n) ⟨Y, v^{⊗p}⟩.
OPT = Θ(1)  [Auffinger, Ben Arous, Černý '13]
ALG = Θ(1)  [Subag '18]
ALG < OPT (for p ≥ 3)
Approximate message passing (AMP) algorithms achieve value ALG and no better [El Alaoui, Montanari, Sellke '20]. (A sketch of the objective follows this slide.) 4 / 18
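For concreteness, a minimal numpy sketch, not from the talk, of the objective on this slide: draw a p-tensor Y with i.i.d. N(0,1) entries and evaluate (1/√n)⟨Y, v^{⊗p}⟩ at a random unit vector. The value at a typical point is of order n^{-1/2}; the statement OPT = Θ(1) concerns the maximum over the whole sphere. Function names and parameter choices below are illustrative.

```python
# Hedged sketch (not from the talk): evaluate the spherical p-spin objective at a point.
import numpy as np

def pspin_objective(Y, v):
    """(1/sqrt(n)) * <Y, v^{tensor p}> for a tensor Y of shape (n,)*p and a unit vector v."""
    n = v.shape[0]
    val = Y
    for _ in range(Y.ndim):
        val = np.tensordot(val, v, axes=([0], [0]))  # contract one tensor index with v
    return float(val) / np.sqrt(n)

rng = np.random.default_rng(0)
n, p = 100, 3
Y = rng.standard_normal((n,) * p)   # i.i.d. N(0,1) entries
v = rng.standard_normal(n)
v /= np.linalg.norm(v)              # random point on the unit sphere
print(pspin_objective(Y, v))        # typically O(n^{-1/2}); the max over the sphere is Theta(1)
```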

5. What's Missing? How to give the best "evidence" that there are no better algorithms?
Prior work rules out certain classes of algorithms (local, AMP), but do we expect these to be optimal?
◮ AMP is not optimal for tensor PCA [Montanari, Richard '14]
Would like a unified framework for lower bounds:
◮ Local algorithms only make sense on sparse graphs
Solution: lower bounds against a larger class of algorithms (low-degree polynomials) that contains both local and AMP algorithms. 5 / 18

6. The Low-Degree Polynomial Framework. Study a restricted class of algorithms: low-degree polynomials.
◮ Multivariate polynomial f : R^M → R^N
◮ Input: e.g. graph Y ∈ {0,1}^(n choose 2)
◮ Output: e.g. b ∈ {0,1} or v ∈ R^n
◮ "Low" means O(log n), where n is the dimension
Examples of low-degree algorithms (input Y ∈ R^{n×n}):
◮ Power iteration: Y^k 1, with 1 the all-ones vector; each coordinate is a polynomial of degree k in the entries of Y. (A sketch follows this slide.) 6 / 18
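As a concrete instance of the last bullet, a minimal numpy sketch, not from the talk: applied to the all-ones vector, k steps of unnormalized power iteration give a map whose coordinates are degree-k polynomials in the entries of Y, so taking k = O(log n) keeps it inside the low-degree class. The matrix scaling and parameter choices below are illustrative assumptions.

```python
# Hedged sketch (not from the talk): power iteration as a degree-k polynomial map.
import numpy as np

def power_iteration_poly(Y, k):
    """Return Y^k applied to the all-ones vector; unnormalized, so each coordinate
    is a polynomial of degree k in the entries of Y."""
    v = np.ones(Y.shape[0])
    for _ in range(k):
        v = Y @ v
    return v

rng = np.random.default_rng(0)
n = 500
Y = rng.standard_normal((n, n))
Y = (Y + Y.T) / np.sqrt(2 * n)           # symmetric random matrix with top eigenvalue ~ 2
k = int(np.ceil(np.log(n)))              # "low" degree: O(log n)
v = power_iteration_poly(Y, k)
v /= np.linalg.norm(v)                   # normalize afterward to read off an eigenvector estimate
print("Rayleigh quotient:", v @ Y @ v)   # moves toward the top eigenvalue as k grows
```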
