On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms
  1. On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms. Neal Young (Dartmouth College) and Phil Klein (Brown University)

  2. simple multicommodity flow problem: P = { s_i → t_i paths }. Route at least 1 total unit of flow,

  3. respecting capacity constraint c (0 < c < 1).

  4. simple multicommodity flow algorithm: P = { s_i → t_i paths }. Repeat for T iterations:

  5. Route 1/T units of flow on the min-cost path in P, where the cost of edge e is (1 + ε)^(T · flow(e)).

  6. performance guarantee. THM: If a flow of congestion c exists, then the algorithm returns a flow of congestion c(1 + ε), provided T ≥ 3 ln(m) / (c ε²).
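The routing loop of slides 4–6 can be sketched as follows. This is a minimal illustration, assuming a directed graph given as an adjacency list with reachable terminals; the function names `dijkstra` and `route` and the toy data layout are mine, not the talk's:

```python
import heapq
import math

def dijkstra(adj, cost, s, t):
    """Min-cost s -> t path as a list of edge ids (assumes t is reachable)."""
    dist, prev, seen = {s: 0.0}, {}, set()
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == t:
            break
        for v, e in adj.get(u, []):
            nd = d + cost[e]
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, (u, e)
                heapq.heappush(pq, (nd, v))
    path, node = [], t
    while node != s:
        u, e = prev[node]
        path.append(e)
        node = u
    return path[::-1]

def route(adj, edges, pairs, eps, T):
    """Repeat for T iterations: route 1/T units on the min-cost path in P,
    where cost(e) = (1 + eps) ** (T * flow(e))  (slide 5)."""
    flow = {e: 0.0 for e in edges}
    for _ in range(T):
        cost = {e: (1 + eps) ** (T * flow[e]) for e in edges}
        # P is the union of s_i -> t_i paths over all commodities
        best = min((dijkstra(adj, cost, s, t) for s, t in pairs),
                   key=lambda p: sum(cost[e] for e in p))
        for e in best:
            flow[e] += 1.0 / T
    return flow
```

On two parallel edges from s to t with a single commodity, the exponential costs make the loop alternate between the edges, so each ends with congestion 1/2, consistent with the guarantee for c = 1/2.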

  7. generic packing problem. input: real matrix A, vector b, generic polytope P. output: x ∈ P such that Ax ≤ b.

  8. generic packing problem. input: real matrix A, vector b, generic polytope P. oracle for P: given a vector c, returns argmin_{x ∈ P} c · x. ε-approximate solution: x ∈ P such that Ax ≤ (1 + ε)b.
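One oracle-based iteration scheme for this problem, in the Dantzig-Wolfe / Plotkin-Shmoys-Tardos style, keeps a multiplicative weight per packing constraint and calls the oracle on the weighted combination of the rows. This is a hedged sketch, not the talk's exact method; `packing_mwu` and its update rule are illustrative:

```python
def packing_mwu(A, b, oracle, eps, T):
    """Weight each constraint exponentially in its accumulated load,
    then call the oracle on the weighted row combination; return the
    average of the oracle's answers after T iterations."""
    m, n = len(A), len(A[0])
    load = [0.0] * m              # sum over iterations of (A x)_i
    x_sum = [0.0] * n
    for _ in range(T):
        w = [(1 + eps) ** (load[i] / b[i]) for i in range(m)]
        c = [sum(w[i] * A[i][j] for i in range(m)) for j in range(n)]
        x = oracle(c)             # oracle for P: argmin_{x in P} c . x
        x_sum = [x_sum[j] + x[j] for j in range(n)]
        load = [load[i] + sum(A[i][j] * x[j] for j in range(n))
                for i in range(m)]
    return [v / T for v in x_sum]
```

With P the simplex, the oracle just returns the unit vector minimizing c, and each oracle call is one iteration in the sense counted by the lower bound on slide 11.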

  9. Lagrangian relaxation. idea: replace constraints by costs. 1950: Ford, Fulkerson reduce multicommodity flow to iterated min-cost flow. 1960: Dantzig, Wolfe generalize to the generic packing problem. 1970: Held, Karp reduce the TSP lower bound to iterated minimum 1-spanning-tree. 1990: Shahrokhi, Matula solve multicommodity flow with a guaranteed convergence rate. 1995: Plotkin, Shmoys, Tardos generalize to the generic packing problem.

  10. Number of iterations proportional to ρ ln(m)/ε², where ρ = “width” and m = number of constraints.

  11. main result: a lower bound. THM: Any ε-approximation algorithm for the generic packing problem requires a number of iterations proportional to T = ρ ln(m)/ε², for sufficiently large m.

  12. proof idea (ρ = 2): reduce to a question about two-player zero-sum matrix games. value(M) = min_{x ∈ P} max_j (Mx)_j, where P = { x ≥ 0 : Σ_i x_i = 1 }. THM: Let M be a random matrix in {0,1}^(m × √m). With high probability, every m × T submatrix B of M has value(B) > (1 + ε) value(M), where T = Ω(ln(m)/ε²). COROLLARY: At least T oracle calls are needed to know value(M) within a factor of 1 + ε.
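The game value in slide 12 can be checked numerically on a toy two-column matrix. This brute-force grid scan is purely illustrative (`game_value` is not from the talk):

```python
def game_value(M, grid=10001):
    """value(M) = min over x = (p, 1-p) in the simplex of max_i (Mx)_i,
    approximated by scanning p on a uniform grid (two-column M only)."""
    best = float('inf')
    for k in range(grid):
        p = k / (grid - 1)
        best = min(best, max(row[0] * p + row[1] * (1 - p) for row in M))
    return best
```

For M = [[1, 0], [0, 1]], value(M) = 1/2, while restricting play to either single column gives value 1. Dropping columns of M can only raise the value, which is the shape of the submatrix lower bound: too few oracle calls (columns) cannot certify value(M) within 1 + ε.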

  13. underlying idea: show that an m × T submatrix has high value with high probability, via:

  14. Discrepancy theory.
