Basics on Linear Programming - Combinatorial Problem Solving (CPS) - PowerPoint PPT Presentation

SLIDE 1

Basics on Linear Programming

Combinatorial Problem Solving (CPS)

Javier Larrosa, Albert Oliveras, Enric Rodríguez-Carbonell

May 6, 2020


SLIDE 3

Linear Programs (LPs)

A linear program is an optimization problem of the form

    min  c^T x
    s.t. A1 x ≤ b1
         A2 x = b2
         A3 x ≥ b3
         x ∈ R^n

where c ∈ R^n, b_i ∈ R^{m_i}, A_i ∈ R^{m_i × n}, i = 1, 2, 3.

x is the vector of variables

c^T x is the cost or objective function

A1 x ≤ b1, A2 x = b2 and A3 x ≥ b3 are the constraints

Example:

    min  x + y + z
    s.t. x + y = 3
         0 ≤ x ≤ 2
         0 ≤ y ≤ 2

SLIDE 4

Notes on the Definition of LP

Solving minimization or maximization is equivalent:

    max { f(x) | x ∈ S } = − min { −f(x) | x ∈ S }

Satisfiability problems are a particular case: take an arbitrary cost function, e.g., c = 0
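The max/min equivalence above can be sanity-checked numerically; here is a minimal sketch over a small finite feasible set (the set S and the objective are illustrative, not taken from the slides):

```python
# Check max{ f(x) | x in S } = -min{ -f(x) | x in S } on a toy finite set S.
S = [(0, 3), (1, 2), (2, 1)]       # illustrative feasible points (x, y)
f = lambda p: p[0] + 2 * p[1]      # illustrative objective f(x, y) = x + 2y

max_val = max(f(p) for p in S)     # maximize f over S
min_neg = min(-f(p) for p in S)    # minimize -f over S
assert max_val == -min_neg         # the two problems have opposite optima
```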

SLIDE 5

Equivalent Forms of LPs (1)

This form is not the most convenient for algorithms. WLOG we can transform such a problem as follows.

1. Split = constraints into ≥ and ≤ constraints:

    min  c^T x                min  c^T x
    s.t. A1 x ≤ b1            s.t. A1 x ≤ b1
         A2 x = b2     =⇒          A2 x ≤ b2
         A3 x ≥ b3                 A2 x ≥ b2
                                   A3 x ≥ b3

Now all constraints are ≤ or ≥

SLIDE 6

Equivalent Forms of LPs (2)

Example of step 1:

    min  x + y + z            min  x + y + z
    s.t. x + y = 3            s.t. x + y ≤ 3
         0 ≤ x ≤ 2     =⇒          x + y ≥ 3
         0 ≤ y ≤ 2                 0 ≤ x ≤ 2
                                   0 ≤ y ≤ 2

SLIDE 7

Equivalent Forms of LPs (3)

2. Transform ≥ constraints into ≤ constraints by multiplying by −1:

    min  c^T x                min  c^T x
    s.t. A1 x ≤ b1     =⇒     s.t. A1 x ≤ b1
         A2 x ≥ b2                 −A2 x ≤ −b2

Now all constraints are ≤

SLIDE 8

Equivalent Forms of LPs (4)

Example of step 2:

    min  x + y + z            min  x + y + z
    s.t. x + y ≤ 3            s.t. x + y ≤ 3
         x + y ≥ 3     =⇒          −x − y ≤ −3
         0 ≤ x ≤ 2                 0 ≤ x ≤ 2
         0 ≤ y ≤ 2                 0 ≤ y ≤ 2

SLIDE 9

Equivalent Forms of LPs (5)

3. Replace variables x by y − z, where y, z are vectors of fresh variables, and add constraints y ≥ 0, z ≥ 0:

    min  c^T x         =⇒     min  c^T y − c^T z
    s.t. A x ≤ b              s.t. A y − A z ≤ b
                                   y, z ≥ 0

Now all constraints are ≤ and all variables have to be ≥ 0

SLIDE 10

Equivalent Forms of LPs (6)

Actually this is only needed for variables which are not already non-negative (in the example, only z, which is replaced by u − v).

Example of step 3:

    min  x + y + z            min  x + y + u − v
    s.t. x + y ≤ 3            s.t. x + y ≤ 3
         −x − y ≤ −3   =⇒          −x − y ≤ −3
         0 ≤ x ≤ 2                 0 ≤ x ≤ 2
         0 ≤ y ≤ 2                 0 ≤ y ≤ 2
                                   u, v ≥ 0

SLIDE 11

Equivalent Forms of LPs (7)

4. Add a slack variable to each ≤ constraint to convert it into =:

    min  c^T x                min  c^T x
    s.t. A x ≤ b       =⇒     s.t. A x + s = b
         x ≥ 0                     x, s ≥ 0

Now all constraints are = and all variables have to be ≥ 0

SLIDE 12

Equivalent Forms of LPs (8)

Example of step 4:

    min  x + y + u − v            min  x + y + u − v
    s.t. x + y ≤ 3                s.t. x + y + s1 = 3
         −x − y ≤ −3       =⇒          −x − y + s2 = −3
         0 ≤ x ≤ 2                     x + s3 = 2
         0 ≤ y ≤ 2                     y + s4 = 2
         u, v ≥ 0                      x, y, u, v, s1, s2, s3, s4 ≥ 0

SLIDE 13

Equivalent Forms of LPs (9)

Altogether:

    min  x + y + z            min  x + y + u − v
    s.t. x + y = 3            s.t. x + y + s1 = 3
         0 ≤ x ≤ 2     =⇒          −x − y + s2 = −3
         0 ≤ y ≤ 2                 x + s3 = 2
                                   y + s4 = 2
                                   x, y, u, v, s1, s2, s3, s4 ≥ 0

SLIDE 14

Equivalent Forms of LPs (10)

In the end we get a problem in standard form:

    min  c^T x
    s.t. A x = b
         x ≥ 0

where c ∈ R^n, b ∈ R^m, A ∈ R^{m × n}, n ≥ m, rank(A) = m.

These transformations are not strictly necessary (they increase the number of constraints and variables), but are convenient in a first formulation of the algorithms

Often variables are identified with columns of the matrix, and constraints are identified with rows
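Steps 1-4 above can be sketched as a single NumPy routine. This is a simplified sketch under an assumption the slides do not make: every variable is treated as unrestricted and split into u − v, whereas the slides split only the variables not already known to be non-negative. The function name `to_standard_form` and the argument layout are illustrative.

```python
import numpy as np

def to_standard_form(c, A1, b1, A2, b2, A3, b3):
    """min c^T x s.t. A1 x <= b1, A2 x = b2, A3 x >= b3 (x unrestricted)
    -> min c'^T x' s.t. A' x' = b', x' >= 0."""
    # Steps 1 and 2: turn each '=' into a '<=' / '>=' pair, then flip '>=' rows.
    A_le = np.vstack([A1, A2, -A2, -A3])
    b_le = np.concatenate([b1, b2, -b2, -b3])
    m = A_le.shape[0]
    # Step 3: replace x by u - v with u, v >= 0 (splits *all* variables).
    A_uv = np.hstack([A_le, -A_le])
    c_uv = np.concatenate([c, -c])
    # Step 4: one slack variable per '<=' row turns it into '='.
    A_std = np.hstack([A_uv, np.eye(m)])
    c_std = np.concatenate([c_uv, np.zeros(m)])
    return c_std, A_std, b_le

# Running example: min x + y + z, x + y = 3, 0 <= x <= 2, 0 <= y <= 2.
c = np.array([1., 1., 1.])
A1 = np.array([[1., 0., 0.], [0., 1., 0.]]); b1 = np.array([2., 2.])   # x <= 2, y <= 2
A2 = np.array([[1., 1., 0.]]);               b2 = np.array([3.])      # x + y = 3
A3 = np.array([[1., 0., 0.], [0., 1., 0.]]); b3 = np.array([0., 0.])  # x >= 0, y >= 0

c_std, A_std, b_std = to_standard_form(c, A1, b1, A2, b2, A3, b3)
# 6 equality rows; 2*3 split variables + 6 slacks = 12 columns.
```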

SLIDE 15

Methods for Solving LPs

Simplex algorithms

Interior-point algorithms


SLIDE 17

Basic Definitions (1)

    min  c^T x
    s.t. A x = b
         x ≥ 0

Any vector x such that Ax = b is called a solution

A solution x satisfying x ≥ 0 is called a feasible solution

An LP with feasible solutions is called feasible; otherwise it is called infeasible

A feasible solution x* is called optimal if c^T x* ≤ c^T x for all feasible solutions x

A feasible LP with no optimal solution is unbounded

SLIDE 18

Basic Definitions (2)

    max  x + 2y
    s.t. x + y + s1 = 3
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

(x, y, s1, s2, s3) = (−1, −1, 5, 3, 3) is a solution but not feasible

(x, y, s1, s2, s3) = (1, 1, 1, 1, 1) is a feasible solution
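Both claims can be verified directly with NumPy; a minimal sketch, with A and b encoding the slide's constraints and variables ordered x, y, s1, s2, s3:

```python
import numpy as np

# Constraints of the example in standard form: A x = b, x >= 0.
A = np.array([[1, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)
b = np.array([3, 2, 2], dtype=float)

def is_solution(x):            # satisfies A x = b
    return np.allclose(A @ x, b)

def is_feasible(x):            # satisfies A x = b and x >= 0
    return is_solution(x) and bool(np.all(np.asarray(x) >= 0))

x1 = np.array([-1, -1, 5, 3, 3], dtype=float)  # a solution, but not feasible
x2 = np.array([1, 1, 1, 1, 1], dtype=float)    # a feasible solution
```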


SLIDE 20

Basic Definitions (3)

    max  x + βy
    s.t. x + y + s1 = α
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

If α = −1 the LP is infeasible

If α = 3, β = 2 then (x, y, s1, s2, s3) = (1, 2, 0, 1, 0) is the (only) optimal solution

There may be more than one optimal solution: if α = 3 and β = 1 then (1, 2, 0, 1, 0), (2, 1, 0, 0, 1) and (3/2, 3/2, 0, 1/2, 1/2) are all optimal

SLIDE 21

Basic Definitions (4)

[Figure: feasible region given by x ≥ 0, y ≥ 0, x ≤ 2, y ≤ 2, x + y ≤ 3; the objective max x + y is optimized at the vertex (2, 1), and max x + 2y at the vertex (1, 2).]

SLIDE 22

Basic Definitions (5)

    min  x + 2y
    s.t. x + y ≤ 3
         0 ≤ x ≤ 2
         y ≤ 2

Unbounded LP: y can decrease without limit, so the objective is unbounded below.

[Figure: the feasible region defined by x ≥ 0, x ≤ 2, y ≤ 2, x + y ≤ 3, extending downwards without bound.]

SLIDE 23

Basic Definitions (6)

    max  x + 2y
    s.t. x + y ≤ 3
         0 ≤ x ≤ 2
         y ≤ 2

The LP is bounded, but the set of feasible solutions is not

[Figure: the same unbounded feasible region; max x + 2y is attained at (1, 2).]

SLIDE 24

Bases (1)

Let us denote by a1, ..., an the columns of A. Recall that n ≥ m and rank(A) = m.

A matrix of m columns (a_k1, ..., a_km) is a basis if the columns are linearly independent

Note that a basis is a square matrix!

If (a_k1, ..., a_km) is a basis, then the variables (x_k1, ..., x_km) are called basic

We usually denote:
  • by B the list of indices (k1, ..., km), and by R the list of indices (1, 2, ..., n) − B
  • by B the matrix (a_i | i ∈ B), and by R the matrix (a_i | i ∈ R)
  • by x_B the basic variables, and by x_R the non-basic ones

SLIDE 25

Bases (2)

    max  x + 2y
    s.t. x + y + s1 = 3
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

         x  y  s1 s2 s3
    A = [1  1  1  0  0]
        [1  0  0  1  0]
        [0  1  0  0  1]

(x, s1, s2) do not form a basis: the columns of

    [1  1  0]
    [1  0  1]
    [0  0  0]

are not linearly independent

(s1, s2, s3) form a basis, with x_B = (s1, s2, s3), x_R = (x, y):

    B = [1  0  0]    R = [1  1]
        [0  1  0]        [1  0]
        [0  0  1]        [0  1]

SLIDE 26

Bases (3)

If B is a basis, then the following holds:

    B x_B + R x_R = b

Hence:

    x_B = B^{-1} b − B^{-1} R x_R

Non-basic variables determine the values of basic ones

If non-basic variables are set to 0, we get the solution x_R = 0, x_B = B^{-1} b. Such a solution is called a basic solution

If a basic solution satisfies x_B ≥ 0 then it is called a basic feasible solution, and the basis is feasible
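A basic solution can be computed exactly as the definition says, x_B = B^{-1} b; here is a minimal NumPy sketch on the running example, anticipating the bases worked out on the following slides:

```python
import numpy as np

# Running example (variables ordered x, y, s1, s2, s3): A x = b.
A = np.array([[1, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)
b = np.array([3, 2, 2], dtype=float)

def basic_solution(basis):
    """Given column indices `basis`, return x_B = B^{-1} b (non-basics set to 0)."""
    B = A[:, basis]
    return np.linalg.solve(B, b)

# Basis (s1, s2, s3): columns 2, 3, 4 -> x_B = (3, 2, 2) >= 0, so feasible.
xB_feasible = basic_solution([2, 3, 4])

# Basis (x, y, s1): columns 0, 1, 2 -> x_B = (2, 2, -1), so not feasible.
xB_infeasible = basic_solution([0, 1, 2])
```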

SLIDE 27

Bases (4)

Consider the basis (s1, s2, s3) for

    max  x + 2y
    s.t. x + y + s1 = 3
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

    B = [1  0  0]    R = [1  1]
        [0  1  0]        [1  0]
        [0  0  1]        [0  1]

Equations x_B = B^{-1} b − B^{-1} R x_R are

    s1 = 3 − x − y
    s2 = 2 − x
    s3 = 2 − y

Basic solution is σ_B = (3, 2, 2), σ_R = (0, 0). Since σ_B ≥ 0, basis (s1, s2, s3) is feasible
SLIDE 28

Bases (5)

Basis (x, y, s1) is not feasible for

    max  x + 2y
    s.t. x + y + s1 = 3
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

    B = [1  1  1]    R = [0  0]
        [1  0  0]        [1  0]
        [0  1  0]        [0  1]

    x = 2 − s2
    y = 2 − s3
    s1 = −1 + s2 + s3

    σ_B = (2, 2, −1), σ_R = (0, 0)

SLIDE 29

Bases (6)

A basis is called degenerate when at least one component of its basic solution x_B is null. For example, basis (x, y, s2) in

    max  x + 2y
    s.t. x + y + s1 = 4
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

    B = [1  1  0]    R = [1  0]
        [1  0  1]        [0  0]
        [0  1  0]        [0  1]

    x = 2 + s3 − s1
    y = 2 − s3
    s2 = s1 − s3

    σ_B = (2, 2, 0), σ_R = (0, 0)
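Degeneracy can be checked mechanically: solve B x_B = b and look for a null component. A minimal sketch on this slide's example:

```python
import numpy as np

# This slide's example: right-hand side 4 in the first equation
# (variables ordered x, y, s1, s2, s3).
A = np.array([[1, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)
b = np.array([4, 2, 2], dtype=float)

basis = [0, 1, 3]                         # columns of x, y, s2
xB = np.linalg.solve(A[:, basis], b)      # basic solution x_B = B^{-1} b
degenerate = bool(np.any(np.isclose(xB, 0)))  # some basic variable is null
```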

SLIDE 30

Geometry of LPs (1)

Set of feasible solutions of an LP is a convex polyhedron

Basic feasible solutions are vertices of the convex polyhedron

SLIDE 31

Geometry of LPs (2)

    max  x + 2y
    s.t. x + y + s1 = 3
         x + s2 = 2
         y + s3 = 2
         x, y, s1, s2, s3 ≥ 0

The feasible bases:

x_B1 = (y, s1, s2)

x_B2 = (x, y, s2)

x_B3 = (x, y, s3)

x_B4 = (x, s1, s3)

x_B5 = (s1, s2, s3)

[Figure: feasible region given by x ≥ 0, y ≥ 0, x ≤ 2, y ≤ 2, x + y ≤ 3, with vertices (0, 0), (2, 0), (2, 1), (1, 2), (0, 2), each corresponding to one of the bases above.]
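The correspondence between basic feasible solutions and vertices can be checked by brute force: enumerate all m-column subsets of A, keep those that form a feasible basis, and project the basic solutions onto (x, y). A minimal sketch for this slide's example:

```python
import numpy as np
from itertools import combinations

# Running example (variables ordered x, y, s1, s2, s3): A x = b, x >= 0.
A = np.array([[1, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)
b = np.array([3, 2, 2], dtype=float)
m, n = A.shape

vertices = set()
for basis in combinations(range(n), m):
    B = A[:, list(basis)]
    if abs(np.linalg.det(B)) < 1e-9:       # columns not linearly independent
        continue
    xB = np.linalg.solve(B, b)
    if np.all(xB >= -1e-9):                # basic feasible solution
        x = np.zeros(n)
        x[list(basis)] = xB
        vertices.add((float(round(x[0], 6)), float(round(x[1], 6))))
```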

SLIDE 32

Geometry of LPs (3)

Theorem (Minkowski-Weyl). Let P be an LP. A point x is a feasible solution to P iff there exist basic feasible solutions v1, ..., vr ∈ R^n and vectors r1, ..., rs ∈ R^n such that

    x = Σ_{i=1}^{r} λi vi + Σ_{j=1}^{s} μj rj

for certain λi, μj such that Σ_{i=1}^{r} λi = 1 and λi, μj ≥ 0.

SLIDE 33

Possible Outcomes of an LP (1)

Theorem (Fundamental Theorem of Linear Programming). Let P be an LP. Then exactly one of the following holds:

1. P is infeasible

2. P is unbounded

3. P has an optimal basic feasible solution

It is sufficient to investigate basic feasible solutions!
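Since it suffices to investigate basic feasible solutions, a (hopelessly inefficient but correct) solver can simply evaluate the objective at every basic feasible solution. A minimal sketch on the running example max x + 2y, written as min −x − 2y in standard form:

```python
import numpy as np
from itertools import combinations

# max x + 2y over the running example, i.e. min -x - 2y in standard form.
A = np.array([[1, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)
b = np.array([3, 2, 2], dtype=float)
c = np.array([-1, -2, 0, 0, 0], dtype=float)   # min c^T x  <=>  max x + 2y
m, n = A.shape

best_val, best_x = np.inf, None
for basis in combinations(range(n), m):
    B = A[:, list(basis)]
    if abs(np.linalg.det(B)) < 1e-9:           # not a basis
        continue
    xB = np.linalg.solve(B, b)
    if np.all(xB >= -1e-9):                    # basic feasible solution
        x = np.zeros(n)
        x[list(basis)] = xB
        if c @ x < best_val:
            best_val, best_x = c @ x, x
```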


SLIDE 37

Possible Outcomes of an LP (2)

Proof: Assume P is feasible and has an optimal solution x*. Let us see that we can find a basic feasible solution as good as x*. By the Minkowski-Weyl theorem we can write

    x* = Σ_{i=1}^{r} λi* vi + Σ_{j=1}^{s} μj* rj

where Σ_{i=1}^{r} λi* = 1 and λi*, μj* ≥ 0. Then

    c^T x* = Σ_{i=1}^{r} λi* c^T vi + Σ_{j=1}^{s} μj* c^T rj

If there is j such that c^T rj < 0 then the objective value can be decreased by taking μj* larger. Contradiction!

Otherwise c^T rj ≥ 0 for all j. Assume c^T x* < c^T vi for all i. Then

    c^T x* ≥ Σ_{i=1}^{r} λi* c^T vi > Σ_{i=1}^{r} λi* c^T x* = c^T x* Σ_{i=1}^{r} λi* = c^T x*

Contradiction! Thus there is i such that c^T x* ≥ c^T vi; in fact, c^T x* = c^T vi by the optimality of x*.