

  1. MATH529 – Fundamentals of Optimization. Fundamentals of Constrained Optimization V: Linear Programming and the Simplex Method. Marco A. Montes de Oca, Mathematical Sciences, University of Delaware, USA.

  2. Methods for Constrained Optimization. Optimization algorithms exploit the structure of the problem. Linear programs: the Simplex Method, interior-point methods, ... Nonlinear programs: sequential quadratic programming, penalty methods, ...

  3. Linear Programming. Definition: a linear program is one with a linear objective function and linear constraints, which may include equality and inequality constraints. Example:

    min −4x1 + 2x2 − 2x3 + 13x4
    subject to: x1 + x2 ≥ 3,
                x1 − x3 + x4 = 3,
                x1, x2, x3, x4 ≥ 0

  4. Linear Programming. Appeal: It is often easier (and more appropriate) to model problems as linear programs. The KKT conditions are valid. The feasible set is a polytope (a convex, connected set with flat, polygonal faces). The solution is found at extreme points, which reduces the search to specific regions of the feasible set. A local optimum is also a global optimum.

  5. Linear Programming: Toward a solution method. Interior, boundary, and extreme points. [Figure: a convex set with an interior point, a boundary point, a point not in the set, and an extreme point; an extreme point lies on no line segment between points belonging to the set.]

  6. Linear Programming: Toward a solution method. Supporting hyperplanes: a supporting hyperplane H has one or more points in common with a convex set F, but F lies completely on one side of H. [Figure: a hyperplane H touching the convex set F.]

  7. Linear Programming: Toward a solution method. Theorem 1: Given u, a boundary point of a closed convex set, there is at least one supporting hyperplane at u. Theorem 2: For a closed convex set bounded from below, there is at least one extreme point in every supporting hyperplane. Since in linear programming the objective function is linear, the hyperplane corresponding to the optimal value of the function will be a supporting hyperplane of the feasible set.

  8. Linear Programming: Toward a solution method. We can therefore pay attention only to extreme points!

  9. Linear Programming: Toward a solution method. Example:

    max 40x + 30y
    subject to: x + 2y ≤ 24,
                0 ≤ x ≤ 16,
                0 ≤ y ≤ 8

  10. Linear Programming: Toward a solution method. [Figure: the feasible region of the example in the (x, y) plane.]

  11. Linear Programming: Toward a solution method. Dummy variables: slacks and surpluses. [Figure: the feasible region in the (x, y) plane.]

  12. Linear Programming: Toward a solution method. Dummy variables: slacks and surpluses. When a constraint is satisfied inexactly (not at equality), there is a slack (for ≤) or a surplus (for ≥) associated with it.

  13. Linear Programming: Toward a solution method. Step 1: Transforming the program:

    max p = 40x + 30y + 0s1 + 0s2 + 0s3 + 0s4 + 0s5
    subject to: x + 2y + s1 = 24,
                x + s2 = 16,
                y + s3 = 8,
                x − s4 = 0,*
                y − s5 = 0,*
                x, y, s1, s2, s3, s4, s5 ≥ 0

    *For surpluses, we add −si, with si ≥ 0.

  14. Linear Programming: Toward a solution method. Step 1: Transforming the program:

    max p = 40x + 30y
    subject to: x + 2y + s1 = 24,
                x + s2 = 16,
                y + s3 = 8,
                x, y, s1, s2, s3 ≥ 0
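The Step 1 transformation is mechanical, so it can be sketched in code. The function name and data layout below are illustrative assumptions, not from the slides: each ≤ constraint gets a +1 slack column, and each ≥ constraint a −1 surplus column.

```python
# Sketch (not from the slides): put an LP's constraints in equality form
# by appending one slack/surplus column per inequality row.

def to_equality_form(A, senses, b):
    """A: list of constraint rows; senses: '<=', '>=' or '=' per row.
    Returns the augmented matrix [A | S] and b, where S holds one
    +1 (slack) or -1 (surplus) column per inequality row."""
    ineq_rows = [i for i, s in enumerate(senses) if s != '=']
    k = len(ineq_rows)
    aug = []
    for i, row in enumerate(A):
        extra = [0] * k
        if senses[i] != '=':
            j = ineq_rows.index(i)
            extra[j] = 1 if senses[i] == '<=' else -1  # slack vs. surplus
        aug.append(list(row) + extra)
    return aug, list(b)

# The example: x + 2y <= 24, x <= 16, y <= 8 (variables x, y)
A = [[1, 2], [1, 0], [0, 1]]
Aeq, beq = to_equality_form(A, ['<=', '<=', '<='], [24, 16, 8])
print(Aeq)  # [[1, 2, 1, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]]
```

The augmented matrix is exactly the coefficient matrix used in Step 2 on the next slide.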

  15. Linear Programming: Toward a solution method. Step 2: Generating extreme points:

    [ 1 2 1 0 0 ] [ x  ]   [ 24 ]
    [ 1 0 0 1 0 ] [ y  ] = [ 16 ]
    [ 0 1 0 0 1 ] [ s1 ]   [ 8  ]
                  [ s2 ]
                  [ s3 ]

  16. Linear Programming: Toward a solution method. Step 2: Generating extreme points. Let x = y = 0; then

    [ 1 0 0 ] [ s1 ]   [ 24 ]
    [ 0 1 0 ] [ s2 ] = [ 16 ]
    [ 0 0 1 ] [ s3 ]   [ 8  ]

That is, s1 = 24, s2 = 16, and s3 = 8.

  17. Linear Programming: Toward a solution method. [Figure: the extreme point (0, 0) marked on the feasible region.]

  18. Linear Programming: Toward a solution method. Step 2: Generating extreme points. Let x = s1 = 0; then

    [ 2 0 0 ] [ y  ]   [ 24 ]
    [ 0 1 0 ] [ s2 ] = [ 16 ]
    [ 1 0 1 ] [ s3 ]   [ 8  ]

That is, y = 12, s2 = 16, and s3 = −4. (Violates the nonnegativity restrictions!)

  19. Linear Programming: Toward a solution method. Step 2: Generating extreme points. Let x = s2 = 0; then

    [ 2 1 0 ] [ y  ]   [ 24 ]
    [ 0 0 0 ] [ s1 ] = [ 16 ]
    [ 1 0 1 ] [ s3 ]   [ 8  ]

Invalid system! (The second row reads 0 = 16.)

  20. Linear Programming: Toward a solution method. Step 2: Generating extreme points. Let x = s3 = 0; then

    [ 2 1 0 ] [ y  ]   [ 24 ]
    [ 0 0 1 ] [ s1 ] = [ 16 ]
    [ 1 0 0 ] [ s2 ]   [ 8  ]

That is, y = 8, s1 = 8, and s2 = 16.

  21. Linear Programming: Toward a solution method. [Figure: the extreme point (0, 8) marked on the feasible region.]

  22. Linear Programming: Toward a solution method. Step 2: Generating extreme points. Let s1 = s3 = 0; then

    [ 1 2 0 ] [ x  ]   [ 24 ]
    [ 1 0 1 ] [ y  ] = [ 16 ]
    [ 0 1 0 ] [ s2 ]   [ 8  ]

That is, x = 8, y = 8, and s2 = 8.

  23. Linear Programming: Toward a solution method. [Figure: the extreme point (8, 8) marked on the feasible region.]
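Step 2 — fixing two of the five variables at zero and solving the remaining 3×3 system — can be carried out exhaustively. The following pure-Python sketch (the helper `solve3` and all names are my own, not from the slides) tries all C(5,3) = 10 choices of basic columns, discards singular ("invalid") systems, and keeps the nonnegative solutions, which are the extreme points of the feasible region.

```python
# Sketch: enumerate all basic solutions of the equality system
# x + 2y + s1 = 24, x + s2 = 16, y + s3 = 8, keeping feasible ones.
from itertools import combinations

A = [[1, 2, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [0, 1, 0, 0, 1]]
b = [24, 16, 8]
names = ['x', 'y', 's1', 's2', 's3']

def solve3(M, rhs):
    """Gaussian elimination with partial pivoting; None if singular."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    n = len(M)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None                        # "invalid system"
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

extreme = []
for basis in combinations(range(5), 3):        # columns kept in the basis
    B = [[row[j] for j in basis] for row in A]
    sol = solve3(B, b)
    if sol is None:
        continue
    if all(v >= -1e-9 for v in sol):           # nonnegativity check
        extreme.append({names[j]: v for j, v in zip(basis, sol)})
for p in extreme:
    print(p)
```

Of the ten candidate systems, two are singular, three violate nonnegativity, and five survive: the extreme points (0, 0), (0, 8), (8, 8), (16, 4), and (16, 0) of the example's feasible region.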

  24. Linear Programming: Toward a solution method. We now have a method to systematically generate extreme points, but a few questions remain: How do we select which columns to eliminate so that we generate as few extreme points as possible? How can we use the objective function to guide the search? How do we avoid generating invalid moves?

  25. The Simplex Method

  26. George B. Dantzig

  27. The idea behind the Simplex Method

  28. The Simplex Method. Simplex tableau:

    p     x     y    s1   s2   s3 | Constant
    1   -40   -30     0    0    0 |     0
    0     1     2     1    0    0 |    24
    0     1     0     0    1    0 |    16
    0     0     1     0    0    1 |     8

Basis at (0, 0), columns s1, s2, s3:

    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

  29. The Simplex Method. Pivoting: choose the pivot column, then the pivot element.

    p     x     y    s1   s2   s3 | Constant
    1   -40   -30     0    0    0 |     0
    0     1     2     1    0    0 |    24
    0     1     0     0    1    0 |    16
    0     0     1     0    0    1 |     8

  30. The Simplex Method. Pivoting: choose the pivot column, then the pivot element. Choose the column associated with the negative entry of largest absolute value:

    p     x     y    s1   s2   s3 | Constant
    1   -40   -30     0    0    0 |     0
    0     1     2     1    0    0 |    24
    0     1     0     0    1    0 |    16
    0     0     1     0    0    1 |     8
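The pivot-column rule is a one-liner. A tiny illustration (variable names are my own):

```python
# Pick the pivot column: the most negative entry of the objective row.
obj_row = [-40, -30, 0, 0, 0]           # coefficients for x, y, s1, s2, s3
names = ['x', 'y', 's1', 's2', 's3']
pivot_col = names[obj_row.index(min(obj_row))]
print(pivot_col)  # 'x'
```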

  31. The Simplex Method. Pivoting: choose the pivot column, then the pivot element. If we choose to replace s1:

    p     x     y    s1   s2   s3 | Constant
    1   -40   -30     0    0    0 |     0      ← 40 R2 + R1
    0     1     2     1    0    0 |    24
    0     1     0     0    1    0 |    16      ← R2 − R3
    0     0     1     0    0    1 |     8

  32. The Simplex Method. Pivoting: if we choose to replace s1:

    p     x     y    s1   s2   s3 | Constant
    1     0    50    40    0    0 |   960
    0     1     2     1    0    0 |    24
    0     0     2     1   -1    0 |     8
    0     0     1     0    0    1 |     8

New basis, columns x, s2, s3:

    [ 1  0  0 ]
    [ 0 -1  0 ]
    [ 0  0  1 ]

Thus x = 24, y = 0, s1 = 0, s2 = −8, s3 = 8.

  33. The Simplex Method. Since s2 = −8 violates nonnegativity, this is an invalid move!
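The invalid move can be checked numerically. This sketch performs the standard pivot on the x column using the s1 row (equivalent, up to row signs, to the row operations on the slide) and reads off the infeasible basic solution. The layout is [p, x, y, s1, s2, s3 | constant]; the names are my own.

```python
# Pivot the starting tableau on the x column, s1 row, and observe the
# resulting basic solution: x = 24, s2 = -8 (infeasible).
T = [[1, -40, -30, 0, 0, 0, 0],    # objective row
     [0,   1,   2, 1, 0, 0, 24],   # s1 row
     [0,   1,   0, 0, 1, 0, 16],   # s2 row
     [0,   0,   1, 0, 0, 1, 8]]    # s3 row

def pivot(T, row, col):
    """Standard pivot: normalize the pivot row, clear the column elsewhere."""
    p = T[row][col]
    T[row] = [v / p for v in T[row]]
    for r in range(len(T)):
        if r != row:
            f = T[r][col]
            T[r] = [a - f * b for a, b in zip(T[r], T[row])]

pivot(T, 1, 1)            # bring x into the basis in place of s1
# The s2 row now reads -2y - s1 + s2 = -8, i.e. s2 = -8 at y = s1 = 0.
print(T[2])               # [0.0, 0.0, -2.0, -1.0, 1.0, 0.0, -8.0]
```

The objective row becomes [1, 0, 50, 40, 0, 0 | 960], matching the slide; the negative basic value of s2 is what makes the move invalid.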

  34. The Simplex Method. Pivoting: choose the pivot column, then the pivot element. If we choose to replace s2:

    p     x     y    s1   s2   s3 | Constant
    1   -40   -30     0    0    0 |     0      ← 40 R3 + R1
    0     1     2     1    0    0 |    24      ← R2 − R3
    0     1     0     0    1    0 |    16
    0     0     1     0    0    1 |     8

  35. The Simplex Method. Pivoting: if we choose to replace s2:

    p     x     y    s1   s2   s3 | Constant
    1     0   -30     0   40    0 |   640
    0     0     2     1   -1    0 |     8
    0     1     0     0    1    0 |    16
    0     0     1     0    0    1 |     8

New basis, columns s1, x, s3:

    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

Thus x = 16, y = 0, s1 = 8, s2 = 0, s3 = 8.

  36. The Simplex Method. Pivoting: choose the pivot column, then the pivot element. So how do we choose the pivot element? Pick the positive entries in the pivot column; divide the corresponding entries of the constant column by them; and select as pivot the element corresponding to the smallest quotient.
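Putting the two rules together — most negative objective entry for the column, smallest quotient over positive entries for the row — gives a minimal tableau simplex. This is a sketch for the running example only (no degeneracy or anti-cycling handling), with names of my own choosing:

```python
# Minimal tableau simplex: Dantzig's column rule plus the ratio test.
# Rows are [p, x1..xn, s1..sm | constant]; T[0] is the objective row.

def simplex(T):
    while True:
        obj = T[0][1:-1]
        if min(obj) >= 0:                      # no negative entry: optimal
            return T
        col = 1 + obj.index(min(obj))          # most negative coefficient
        ratios = [(T[r][-1] / T[r][col], r)
                  for r in range(1, len(T)) if T[r][col] > 0]
        if not ratios:
            raise ValueError("unbounded")      # no positive entry in column
        _, row = min(ratios)                   # smallest quotient wins
        p = T[row][col]
        T[row] = [v / p for v in T[row]]       # normalize the pivot row
        for r in range(len(T)):
            if r != row:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[row])]

T = [[1, -40, -30, 0, 0, 0, 0],
     [0,   1,   2, 1, 0, 0, 24],
     [0,   1,   0, 0, 1, 0, 16],
     [0,   0,   1, 0, 0, 1, 8]]
simplex(T)
print(T[0][-1])    # optimal value: 760.0, attained at x = 16, y = 4
```

Starting from the tableau of slide 28, the method first brings x into the basis in place of s2 (quotient 16/1 beats 24/1), then y in place of s1 (quotient 8/2 beats 8/1), and stops at the optimum p = 760 with x = 16, y = 4.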
