High performance computational techniques for the simplex method



  1. High performance computational techniques for the simplex method
     Julian Hall, School of Mathematics, University of Edinburgh, jajhall@ed.ac.uk
     CO@Work 2020, 16 September 2020

  2. Overview
     Revision of LP theory and simplex algorithms
     Computational techniques for serial simplex
       ◦ Bound-flipping ratio test (dual simplex)
       ◦ Hyper-sparsity
       ◦ Cost perturbation (dual simplex)
     Computational techniques for parallel simplex
       ◦ Structured LP problems
       ◦ General LP problems
     Julian Hall, High performance computational techniques for the simplex method, 2 / 47

  3. Solving LP problems: Background
     minimize f = c^T x subject to Ax = b, x ≥ 0
     Fundamental model in optimal decision-making
     Solution techniques
       ◦ Simplex method (1947)
       ◦ Interior point methods (1984)
     Large problems have
       ◦ 10^3–10^8 variables
       ◦ 10^3–10^8 constraints
     Matrix A is usually sparse and may be structured
     [Figure: STAIR, 356 rows, 467 columns and 3856 nonzeros]

  4. Solving LP problems: Background
     minimize f = c^T x subject to Ax = b, x ≥ 0
     A vertex of the feasible region K ⊂ R^n has
       ◦ m basic components, i ∈ B
       ◦ n − m zero nonbasic components, j ∈ N
     The equations and x are partitioned according to B ∪ N:
       B x_B + N x_N = b  ⇒  x_B = B^{-1}(b − N x_N) = b̂ − N̂ x_N
     since the basis matrix B is nonsingular
     Reduced objective is then f = f̂ + ĉ_N^T x_N, where f̂ = c_B^T b̂ and ĉ_N^T = c_N^T − c_B^T B^{-1} N
     For x_N = 0, the partition yields an optimal solution if there is
       ◦ Primal feasibility: b̂ ≥ 0
       ◦ Dual feasibility: ĉ_N ≥ 0
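The basic/nonbasic partition above can be checked numerically. The sketch below builds b̂, ĉ_N and the two feasibility tests for a tiny made-up LP (the matrix, bounds and basis choice are illustrative, not from the slides):

```python
import numpy as np

# Tiny example LP in standard form: minimize c^T x  s.t.  A x = b, x >= 0.
# Data and basis choice are illustrative only.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basic, nonbasic = [2, 3], [0, 1]          # B holds the slack columns here
B, N = A[:, basic], A[:, nonbasic]

b_hat = np.linalg.solve(B, b)             # b̂ = B^{-1} b  (x_B when x_N = 0)
c_hat = c[nonbasic] - c[basic] @ np.linalg.solve(B, N)  # ĉ_N^T = c_N^T - c_B^T B^{-1} N
f_hat = c[basic] @ b_hat                  # objective value at this basis

primal_feasible = bool(np.all(b_hat >= 0))  # b̂ >= 0
dual_feasible = bool(np.all(c_hat >= 0))    # ĉ_N >= 0; optimal iff both hold
print(b_hat, c_hat, primal_feasible, dual_feasible)
```

Here the slack basis is primal feasible but not dual feasible, so a primal simplex iteration would be the natural next step.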

  5. Solving dual LP problems: Optimality and the dual simplex algorithm
     Consider the dual problem
       maximize f_D = b^T y subject to A^T y + s = c, s ≥ 0
     Equations, s and c are partitioned according to B ∪ N as
       B^T y + s_B = c_B,  N^T y + s_N = c_N  ⇒  y = B^{-T}(c_B − s_B),  s_N = ĉ_N + N̂^T s_B
     Reduced objective is f_D = f̂ − b̂^T s_B
     For s_B = 0, the partition yields an optimal solution if there is
       ◦ Primal feasibility: b̂ ≥ 0
       ◦ Dual feasibility: ĉ_N ≥ 0
     The dual simplex algorithm for an LP is the primal algorithm applied to the dual problem
     The structure of the dual equations allows the dual simplex algorithm to be applied to the primal simplex tableau
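The identity s_N = ĉ_N + N̂^T s_B, which is what lets the dual algorithm run on the primal tableau, can be verified numerically on random data (the dimensions and random instance below are illustrative):

```python
import numpy as np

# Check the dual partition identity s_N = ĉ_N + N̂^T s_B on random data.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # well-conditioned basis matrix
N = rng.standard_normal((3, 2))
c_B, c_N = rng.standard_normal(3), rng.standard_normal(2)
s_B = rng.standard_normal(3)

y = np.linalg.solve(B.T, c_B - s_B)               # y = B^{-T}(c_B - s_B)
s_N = c_N - N.T @ y                               # from N^T y + s_N = c_N

N_hat = np.linalg.solve(B, N)                     # N̂ = B^{-1} N
c_hat_N = c_N - N_hat.T @ c_B                     # ĉ_N = c_N - N^T B^{-T} c_B
print(np.allclose(s_N, c_hat_N + N_hat.T @ s_B))
```

The check follows directly by expanding s_N = c_N − N^T B^{-T}(c_B − s_B) and regrouping terms.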

  6. Primal simplex algorithm
     Assume b̂ ≥ 0; seek ĉ_N ≥ 0
     Scan ĉ_j < 0 for q to leave N
     Scan b̂_i/â_iq for â_iq > 0 to choose p to leave B
     [Tableau diagram highlighting the pivotal column â_q, pivotal row â_p^T, pivot â_pq, RHS b̂ and reduced costs ĉ_N^T]
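The two scans above can be sketched on a small dense tableau. The instance below is illustrative; it picks the entering column by most negative reduced cost and the leaving row by the minimum ratio:

```python
import numpy as np

# Primal simplex choices on illustrative tableau data:
# pick q with ĉ_q < 0, then the ratio test over â_iq > 0 picks p.
B_inv_N = np.array([[ 1.0, 2.0],
                    [-1.0, 3.0]])
b_hat = np.array([4.0, 6.0])     # b̂ >= 0: primal feasible
c_hat = np.array([-5.0, 1.0])    # ĉ_N has a negative entry: not yet optimal

q = int(np.argmin(c_hat))        # most negative reduced cost enters
col = B_inv_N[:, q]              # pivotal column â_q
pos = np.where(col > 0)[0]       # rows with â_iq > 0 are ratio-test candidates
p = int(pos[np.argmin(b_hat[pos] / col[pos])])   # minimum ratio b̂_i / â_iq
print(q, p)
```

The ratio test guards primal feasibility: only rows with â_iq > 0 limit the step, and the smallest ratio identifies the basic variable that hits zero first.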

  7. Dual simplex algorithm: Choose a row
     Assume ĉ_N ≥ 0; seek b̂ ≥ 0
     Scan b̂_i < 0 for p to leave B
     [Tableau diagram highlighting the RHS b̂ and entry b̂_p]

  8. Dual simplex algorithm: Choose a column
     Assume ĉ_N ≥ 0; seek b̂ ≥ 0
     Scan b̂_i < 0 for p to leave B
     Scan ĉ_j/â_pj for â_pj < 0 to choose q to leave N
     [Tableau diagram highlighting the pivotal row â_p^T, pivot â_pq and reduced cost ĉ_q]

  9. Dual simplex algorithm: Update reduced costs and RHS
     Assume ĉ_N ≥ 0; seek b̂ ≥ 0
     Scan b̂_i < 0 for p to leave B
     Scan ĉ_j/â_pj for â_pj < 0 to choose q to leave N
     Update: exchange p and q between B and N
       ◦ Update b̂ := b̂ − α_P â_q, where α_P = b̂_p/â_pq
       ◦ Update ĉ_N^T := ĉ_N^T + α_D â_p^T, where α_D = −ĉ_q/â_pq

  10. Dual simplex algorithm: Data required
      Assume ĉ_N ≥ 0; seek b̂ ≥ 0
      Scan b̂_i < 0 for p to leave B
      Scan ĉ_j/â_pj for â_pj < 0 to choose q to leave N
      Update: exchange p and q between B and N
        ◦ Update b̂ := b̂ − α_P â_q, where α_P = b̂_p/â_pq
        ◦ Update ĉ_N^T := ĉ_N^T + α_D â_p^T, where α_D = −ĉ_q/â_pq
      Data required
        ◦ Pivotal row â_p^T = e_p^T B^{-1} N
        ◦ Pivotal column â_q = B^{-1} a_q

  11. Dual simplex algorithm
      Assume ĉ_N ≥ 0; seek b̂ ≥ 0
      Scan b̂_i < 0 for p to leave B
      Scan ĉ_j/â_pj for â_pj < 0 to choose q to leave N
      Update: exchange p and q between B and N
        ◦ Update b̂ := b̂ − α_P â_q, where α_P = b̂_p/â_pq
        ◦ Update ĉ_N^T := ĉ_N^T + α_D â_p^T, where α_D = −ĉ_q/â_pq
      Data required
        ◦ Pivotal row â_p^T = e_p^T B^{-1} N
        ◦ Pivotal column â_q = B^{-1} a_q
      Why does it work? The objective improves by b̂_p ĉ_q / â_pq each iteration
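The row choice, column ratio test, steps α_P and α_D, and both updates from slides 7–11 can be assembled into one dense iteration. The tableau data below are a made-up instance (dual feasible, primal infeasible), and a full implementation would also swap the tableau columns for p and q afterwards:

```python
import numpy as np

# One dual simplex iteration on illustrative dense tableau data.
B_inv_N = np.array([[-1.0, -2.0],
                    [ 1.0,  3.0]])
b_hat = np.array([-2.0, 5.0])    # b̂ has a negative entry: primal infeasible
c_hat = np.array([3.0, 4.0])     # ĉ_N >= 0: dual feasible

# Choose row: a p with b̂_p < 0 (here the most negative entry).
p = int(np.argmin(b_hat))
assert b_hat[p] < 0

# Choose column: dual ratio test over â_pj < 0, minimising ĉ_j / (-â_pj)
# so that the updated ĉ_N stays nonnegative.
row = B_inv_N[p]                 # pivotal row â_p^T
neg = np.where(row < 0)[0]
q = int(neg[np.argmin(c_hat[neg] / -row[neg])])

a_pq = row[q]
alpha_P = b_hat[p] / a_pq        # primal step  b̂_p / â_pq
alpha_D = -c_hat[q] / a_pq       # dual step   -ĉ_q / â_pq
improvement = b_hat[p] * c_hat[q] / a_pq   # objective gain (slide 11)

# Updates from slide 9: b̂ := b̂ - α_P â_q and ĉ_N^T := ĉ_N^T + α_D â_p^T.
b_hat = b_hat - alpha_P * B_inv_N[:, q]    # entry p becomes 0; the entering
c_hat = c_hat + alpha_D * row              # variable takes value α_P there
print(p, q, alpha_P, alpha_D, improvement, b_hat, c_hat)
```

After the update, b̂_p and ĉ_q are both zeroed, and the improvement b̂_p ĉ_q / â_pq is positive because b̂_p < 0, ĉ_q ≥ 0 and â_pq < 0.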

  12. Solving LP problems: Primal or dual simplex?
      Primal simplex algorithm
        ◦ Traditional variant
        ◦ Solution generally not primal feasible when the (primal) LP is tightened
      Dual simplex algorithm
        ◦ Preferred variant
        ◦ Easier to get dual feasibility
        ◦ More progress in many iterations
        ◦ Solution dual feasible when the primal LP is tightened

  13. Simplex method: Computation
      Standard simplex method (SSM): Major computational component
        ◦ Update of tableau: N̂ := N̂ − (1/â_pq) â_q â_p^T, where N̂ = B^{-1} N
        ◦ Hopelessly inefficient for sparse LP problems
        ◦ Prohibitively expensive for large LP problems
      Revised simplex method (RSM): Major computational components
        ◦ Pivotal row via B^T π_p = e_p (BTRAN) and â_p^T = π_p^T N (PRICE)
        ◦ Pivotal column via B â_q = a_q (FTRAN)
        ◦ Represent B^{-1} (INVERT)
        ◦ Update B^{-1} exploiting B̄ = B + (a_q − B e_p) e_p^T (UPDATE)
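The point of the RSM is that only the pivotal row and column are formed, via solves with B, rather than maintaining all of B^{-1}N. In the sketch below, np.linalg.solve stands in for the FTRAN/BTRAN solves with the LU factors, and the small dense matrices are illustrative:

```python
import numpy as np

# Revised simplex data access: form only the pivotal row and column
# from solves with the basis matrix B. Data are illustrative.
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
N = np.array([[1.0, 3.0],
              [2.0, 1.0]])
p, q = 0, 1                          # chosen pivotal row / column indices

e_p = np.zeros(2); e_p[p] = 1.0
pi_p = np.linalg.solve(B.T, e_p)     # BTRAN:  B^T π_p = e_p
row_p = pi_p @ N                     # PRICE:  â_p^T = π_p^T N
col_q = np.linalg.solve(B, N[:, q])  # FTRAN:  B â_q = a_q

# Consistency check against the full (expensive) tableau B^{-1} N.
full = np.linalg.solve(B, N)
print(row_p, col_q, np.allclose(row_p, full[p]), np.allclose(col_q, full[:, q]))
```

For sparse problems the two solves and the PRICE product touch far less data than the SSM's rank-one update of the whole tableau.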

  14. Simplex method: Computation
      Representing B^{-1}: INVERT
        ◦ Form B = LU using the sparsity-exploiting Markowitz technique
        ◦ L unit lower triangular; U upper triangular
      Representing B^{-1}: UPDATE
        ◦ Exploit B̄ = B + (a_q − B e_p) e_p^T to limit refactorization
        ◦ Many schemes: the simplest is the product form
            B̄ = B + (a_q − B e_p) e_p^T = B[I + (â_q − e_p) e_p^T] = BE
          where E is easily invertible
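A minimal sketch of the product-form update: since B̄ = BE with E = I + (â_q − e_p) e_p^T, a solve with B̄ reuses the factors of B plus one cheap "eta" solve with E. The data and the helper `eta_solve` are illustrative:

```python
import numpy as np

def eta_solve(a_hat_q, p, y):
    """Solve E x = y for E = I + (â_q - e_p) e_p^T."""
    x = y.copy()
    x[p] = y[p] / a_hat_q[p]            # row p gives â_pq x_p = y_p
    for i in range(len(x)):
        if i != p:
            x[i] -= a_hat_q[i] * x[p]   # x_i = y_i - â_iq x_p
    return x

B = np.array([[2.0, 0.0],
              [0.0, 1.0]])
a_q = np.array([1.0, 3.0])              # entering column
p = 1                                   # leaving row
a_hat_q = np.linalg.solve(B, a_q)       # FTRAN: â_q = B^{-1} a_q

B_bar = B.copy()
B_bar[:, p] = a_q                       # basis matrix after the exchange

r = np.array([4.0, 6.0])
y = np.linalg.solve(B, r)               # solve with the old factors of B
x = eta_solve(a_hat_q, p, y)            # then one eta solve with E
print(x, np.linalg.solve(B_bar, r))     # the two solves agree
```

Each simplex iteration appends one such eta term, so solves grow slowly in cost until a fresh INVERT refactorizes B from scratch.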

  15. Simplex method: Mittelmann test set
      Industry standard set of 40 LP problems

                   Rows     Cols   Nonzeros   Rows/Cols   Nonzeros/(Rows×Cols)   Nonzeros/max(Rows,Cols)
      Min           960     1560      38304       1/255        0.0005%                 2.2
      Geomean     54256    72442     910993        0.75        0.02%                   6.5
      Max        986069  1259121   11279748          85        16%                   218.0

      Mittelmann solution time measure
        ◦ Unsolved problems are given a "timeout" solution time
        ◦ Shift all solution times up by 10s
        ◦ Compute the mean of the logs of the shifted times
        ◦ The solution time measure is the exponential of this mean, shifted down by 10s
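The measure described above is a shifted geometric mean. A minimal sketch, with illustrative timings (the 10s shift is from the slide; the sample times are made up):

```python
import math

# Shifted geometric mean: shift times up by `shift` seconds, take the
# geometric mean (exp of the mean of logs), then shift back down.
def shifted_geomean(times, shift=10.0):
    logs = [math.log(t + shift) for t in times]
    return math.exp(sum(logs) / len(logs)) - shift

times = [0.5, 2.0, 40.0, 3600.0]   # e.g. 3600.0 could be a timeout value
print(round(shifted_geomean(times), 2))
```

The shift damps the influence of very small timings, so a solver cannot win the benchmark purely on trivial instances, while timeouts still pull the measure up strongly.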

  16. Dual simplex: Bound-flipping Ratio Test (BFRT)

  17. Dual simplex: Bound-flipping Ratio Test (BFRT)
      General bounded equality problem is
        minimize f = c^T x subject to [A I]x = b, l ≤ x ≤ u
      At a vertex, the nonbasic variables x_N take values v_N at their lower or upper bounds
      Equations and x are partitioned according to B ∪ N as
        B x_B + N x_N = b  ⇒  x_B = B^{-1}(b − N x_N) = b̂ − N̂ δ_N
      where x_N = v_N + δ_N and b̂ = B^{-1}(b − N v_N)
      For δ_N = 0, the partition yields an optimal solution if there is
        ◦ Primal feasibility: l_B ≤ b̂ ≤ u_B
        ◦ Dual feasibility: for j ∈ N, ĉ_j ≥ 0 if x_j = l_j and ĉ_j ≤ 0 if x_j = u_j

  18. Dual simplex: Bound-flipping Ratio Test (BFRT)
      Reduced objective is f_D = f̂ − (b̂ − l)^T s_B^+ − (u − b̂)^T s_B^−
      Suppose p ∈ B is chosen such that b̂_p < l_p, so s_p is increased from zero
      As f_D increases, some s_j, j ∈ N, is zeroed
        ◦ If x_j "flips bound" then b̂_p increases and s_j changes sign
        ◦ If we still have b̂_p < l_p, then s_p can be increased further
      In general
        ◦ Find {α_1, α_2, ...} (easily)
        ◦ Sort the breakpoints as {α_{j1}, α_{j2}, ...}
        ◦ Analyse d_1 = l_p − b̂_p > 0 and (by recurrence) {d_2, d_3, ...} for a sign change
      The "progress" of multiple iterations, with only one basis change and one set of B/FTRANs
      [Figure: piecewise-linear dual objective with slopes d_1, ..., d_5 and breakpoints α_{j1}, ..., α_{j4}]
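The breakpoint pass can be sketched as follows. This is an assumption-laden reading of the slide: the slope recurrence used here (each bound flip of variable j reduces the slope by |â_pj|(u_j − l_j)) and all numerical data are illustrative, not taken from the talk:

```python
# BFRT breakpoint pass sketch: walk the breakpoints α_j = ĉ_j / â_pj in
# increasing order; the dual objective's slope starts at d_1 = l_p - b̂_p > 0
# and (assumed recurrence) drops by |â_pj| (u_j - l_j) at each bound flip.
# Stop at the first sign change; only that final variable enters the basis.
def bfrt(d1, breakpoints):
    """breakpoints: iterable of (alpha_j, abs_a_pj, bound_range_j) tuples."""
    d = d1
    chosen = None
    flips = []                           # variables that merely flip bound
    for alpha, abs_a, rng in sorted(breakpoints):
        chosen = alpha
        d -= abs_a * rng                 # assumed: d_{k+1} = d_k - |â_pj|(u_j - l_j)
        if d <= 0:                       # slope changed sign: ratio test ends
            break
        flips.append(alpha)
    return chosen, flips

step, flipped = bfrt(
    d1=3.0,
    breakpoints=[(0.5, 1.0, 1.0), (1.2, 2.0, 1.0), (2.0, 1.0, 4.0)],
)
print(step, flipped)
```

The payoff matches the slide: several breakpoints' worth of progress is made, yet only the final breakpoint triggers a basis change and its B/FTRANs; the earlier variables simply flip between their bounds.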

  19. Simplex method: Exploiting hyper-sparsity

