  1. CSCI 1951-G – Optimization Methods in Finance. Part 03: (Mixed) Integer (Linear) Programming. February 9, 2018

  2. Roadmap: 1 Introduce Integer Programming and the various subclasses of IP 2 Study the feasible region for IPs 3 Show the computational complexity of IP 4 Comment on the importance of modeling and formulation 5 Present different algorithms and strategies to solve IPs: • Branch and bound • Cutting planes (Gomory cuts) • Branch and cut

  3. Motivation example: Combinatorial auctions • Auction: a set of items M = {1, ..., m} is available for sale from a seller, and n bidders compete to buy some of them. • Each bidder j makes a bid B_j = (S_j, p_j), where S_j ⊆ M and p_j > 0, to buy the set S_j of items at the price p_j, 1 ≤ j ≤ n. • The seller wants to maximize her profits. Which bids should she accept? Let's model the variables, objective function, and constraints. Combinatorial auction: max Σ_{j=1}^{n} p_j x_j subject to Σ_{j : i ∈ S_j} x_j ≤ 1 for i = 1, ..., m, and x_j ∈ {0, 1} for j = 1, ..., n. Looks like a Linear Program... but the variables are discrete. It is an integer linear program.
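A minimal sketch (not from the slides) of this model in Python: it enumerates every 0-1 assignment for a tiny, hypothetical auction instance and keeps the best one that sells each item at most once, i.e., it checks exactly the constraints Σ_{j : i ∈ S_j} x_j ≤ 1. The bids and prices are made up for illustration.

```python
from itertools import product

# Hypothetical instance: items M = {1, 2, 3}, bids B_j = (S_j, p_j).
bids = [({1, 2}, 5.0), ({2, 3}, 7.0), ({3}, 3.0)]

best_value, best_x = 0.0, None
# Enumerate every x in {0,1}^n (fine for tiny n; the point is the model, not speed).
for x in product((0, 1), repeat=len(bids)):
    # Feasibility: each item i belongs to at most one accepted bid.
    items_sold = [i for (S, _), xj in zip(bids, x) if xj for i in S]
    if len(items_sold) != len(set(items_sold)):
        continue  # some item would be sold twice
    value = sum(p * xj for (_, p), xj in zip(bids, x))  # objective: sum_j p_j x_j
    if value > best_value:
        best_value, best_x = value, x

print(best_x, best_value)  # accepts bids 1 and 3: (1, 0, 1) with value 8.0
```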

  4. Integer Programming. Integer Program: some or all the variables are restricted to Z. Integer Linear Program: an integer program with linear constraints and a linear objective function. Mixed/Pure (Linear) Integer Program: an IP (or ILP) s.t. some/all variables are restricted to Z, the others to R. 0-1 (Mixed/Pure) (Linear) Integer Program: an MIP/PIP/MILP/PILP s.t. all integer variables are restricted to {0, 1}. We will mostly restrict to (0-1) MILPs.

  5. Feasible region. Consider the ILP: min c^T x s.t. Ax = b, x ≥ 0, x ∈ Z. Feasible “region”: the integer points inside a polyhedron (image by Ted Ralphs). Finding the optimal LP solution takes polynomial time. Finding the optimal ILP solution?

  6. Complexity of solving 0-1 PILP. Theorem (Karp 1972): 0-1 PILP is NP-Complete. Proof: Reduction from minimum vertex cover: given a graph G = (V, E), find the smallest set S ⊆ V such that each e ∈ E is incident to at least one vertex in S. 0-1 PILP formulation: min Σ_{v ∈ V} x_v subject to x_u + x_v ≥ 1 for each (u, v) ∈ E, and x_v ∈ {0, 1} for each v ∈ V.
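As an illustration (mine, not the slides'), the reduction can be written as a few lines of Python that build the 0-1 PILP data from an edge list. The constraints are stored in the "≤" form −x_u − x_v ≤ −1 that LP/ILP solvers typically expect; the example graph is hypothetical.

```python
def vertex_cover_ilp(n_vertices, edges):
    """Build (c, A_ub, b_ub) for the 0-1 PILP: min c^T x, A_ub x <= b_ub, x_v in {0, 1}.

    Each edge (u, v) contributes the row -x_u - x_v <= -1, i.e. x_u + x_v >= 1.
    """
    c = [1.0] * n_vertices          # objective: sum over v of x_v
    A_ub, b_ub = [], []
    for (u, v) in edges:
        row = [0.0] * n_vertices
        row[u], row[v] = -1.0, -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    return c, A_ub, b_ub

# Hypothetical example: a triangle on vertices {0, 1, 2}.
c, A_ub, b_ub = vertex_cover_ilp(3, [(0, 1), (1, 2), (0, 2)])
```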

  7. LP Relaxation. The LP defined on the polyhedron containing the feasible region of the ILP is the LP Relaxation (LPR) of the ILP. x_L: optimal solution to the LPR, with obj. value z_L. x_I: optimal solution to the ILP, with obj. value z_I. What's the relationship between z_L and z_I? z_L ≤ z_I. z_L = z_I iff... x_L is integral. Can we use x_L to obtain x_I? No, rounding the components of x_L does not give x_I. Can we say something about min over roundings of (c^T rounding(x_L) − z_I)? No: any rounding of x_L may be arbitrarily far from any optimal solution for the ILP, and the change in objective value may be huge (i.e., rounding has no approximation guarantee!).
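To make z_L ≤ z_I concrete, here is a hedged sketch (my own example, assuming scipy is available) using the vertex-cover ILP from the previous slide on a triangle graph: the LPR optimum sets every x_v = 0.5, so z_L = 1.5, while the integer optimum is z_I = 2; rounding the fractional solution up gives a feasible cover of size 3, which is not optimal in this instance.

```python
from itertools import product
from scipy.optimize import linprog

# Minimum vertex cover on a triangle: min c^T x, A_ub x <= b_ub, 0 <= x <= 1.
edges = [(0, 1), (1, 2), (0, 2)]
c = [1, 1, 1]
A_ub = [[-1, -1, 0], [0, -1, -1], [-1, 0, -1]]   # each row encodes x_u + x_v >= 1
b_ub = [-1, -1, -1]

# LP relaxation: drop integrality, keep 0 <= x_v <= 1.
lpr = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")
print(lpr.x, lpr.fun)    # x_L = (0.5, 0.5, 0.5), z_L = 1.5

# Integer optimum z_I by brute force over {0,1}^3.
z_I = min(sum(x) for x in product((0, 1), repeat=3)
          if all(x[u] + x[v] >= 1 for u, v in edges))
print(z_I)               # z_I = 2, so z_L < z_I here; rounding 0.5 -> 1 gives a cover of size 3
```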

  8. Models and formulation. Two ways to specify that a variable is binary: • x ∈ {0, 1} • x^2 = x. They lead to optimization problems of different classes, with different algorithms to solve them, and different complexity. Question: Does the formulation/model impact ⌈x_L⌉ − x_I? Different formulations may result in different polyhedra that better/worse approximate the convex hull of the feasible solutions of the ILP. (Homework?)

  9. Partitioning the feasible region. Partition the feasible region S of the ILP into non-overlapping subsets S_1, ..., S_k. z*: optimal obj. value of the ILP; z*_i: optimal obj. value of the ILP restricted to S_i, 1 ≤ i ≤ k (ILP_i). Can we express z* in terms of the z*_i? z* = min_{1 ≤ i ≤ k} z*_i. Equivalently: min_{x ∈ S} c^T x = min_{1 ≤ i ≤ k} min_{x ∈ S_i} c^T x. Let x* be such that c^T x* = z*. Then x* must be optimal for some S_i. Idea for finding the optimal solution to the ILP: find the optimal solution in each S_i (i.e., solve subproblems).
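A tiny sanity check of z* = min_{1 ≤ i ≤ k} z*_i (my own, hypothetical instance): enumerate the integer feasible points of a small region, split them into two parts by the value of x_1, and confirm the global minimum equals the minimum over the parts.

```python
from itertools import product

c = (-1, -1)                                    # minimize c^T x = -x1 - x2
def obj(x): return sum(ci * xi for ci, xi in zip(c, x))

# Hypothetical feasible region S: integer points with -x1 + x2 <= 2, 8*x1 + 2*x2 <= 19, x >= 0.
S = [x for x in product(range(4), repeat=2)
     if -x[0] + x[1] <= 2 and 8 * x[0] + 2 * x[1] <= 19]

# Partition S into S1 = {x in S : x1 <= 1} and S2 = {x in S : x1 >= 2}.
S1 = [x for x in S if x[0] <= 1]
S2 = [x for x in S if x[0] >= 2]

z_star = min(obj(x) for x in S)
z_parts = min(min(obj(x) for x in S1), min(obj(x) for x in S2))
assert z_star == z_parts == -4                  # z* = min_i z*_i
```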

  10. Branch and bound. Branching: the process of dividing the original problem into subproblems. • We can branch recursively: this creates a hierarchical partitioning of the feasible region. • The partitioning is a tree whose nodes are subsets of the feasible region (equivalently: ILPs defined on those subsets). • The root is the whole feasible region, i.e., the original ILP. • The ILP in a node at level i > 0 is obtained by adding constraints to its parent at level i − 1, according to a branching rule. What happens if we keep branching? Complete enumeration of all solutions. Good? No, let's avoid it. How? By pruning subtrees using upper bounds to z*.

  11. Lower and upper bounds. We saw a Lower Bound (LB) to z*: ...the optimal obj. value of the LPR. Are the opt. obj. values z̃_i of the LPRs of ILP_i, 1 ≤ i ≤ k, LBs to z*? No, each z̃_i is in general only a LB to z*_i. Can z̃_i be an Upper Bound (UB) to z*? Yes, if the optimal LPR solution achieving z̃_i is integral (then it is feasible for the ILP, so z* ≤ z̃_i).

  12. Lower and upper bounds (cont.) Let u be an UB to z* such that a feasible solution x* for the ILP with c^T x* = u exists and is known. Let z̃_i be the optimal obj. value of the LPR of ILP_i, for a specific i. • If z̃_i ≥ u, then S_i ...cannot contain a feasible solution to the ILP with lower obj. value than x* (why?). We can prune the node for ILP_i. • If z̃_i < u, then we must branch recursively inside S_i. But if the solution corresponding to z̃_i is integral, then... z̃_i is a better UB to z* than u.

  13. Initial subproblem. Start by solving the LPR of the original ILP (i.e., on S). 1 The LPR is infeasible ⇒ the ILP is infeasible. 2 The optimal solution of the LPR is integral ⇒ it is also the optimal solution for the ILP. 3 The optimal solution of the LPR is not integral ⇒ its obj. value is a lower bound to the optimal obj. value of the ILP (useful by itself? no). • In cases 1 and 2, we are done. • In case 3, we must branch: create subproblems by partitioning S, and solve the subproblems recursively.
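These three cases can be read off an LP solver's output. Below is a hedged Python sketch (the function name and tolerance are my own choices) that solves the LPR of min c^T x s.t. A_ub x ≤ b_ub, x ≥ 0 with scipy.optimize.linprog and reports which case applies.

```python
from scipy.optimize import linprog

def solve_root_lpr(c, A_ub, b_ub, tol=1e-6):
    """Solve the LP relaxation of min c^T x, A_ub x <= b_ub, x >= 0 and classify the outcome."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c), method="highs")
    if not res.success:                                   # case 1: LPR infeasible => ILP infeasible
        return "infeasible", None, None
    if all(abs(xi - round(xi)) <= tol for xi in res.x):   # case 2: integral => optimal for the ILP
        return "optimal", res.x, res.fun
    return "fractional: branch", res.x, res.fun           # case 3: only a lower bound

# Hypothetical data (the same numbers as the branching example on the next slide):
print(solve_root_lpr([-1, -1], [[-1, 1], [8, 2]], [2, 19]))
# case 3 here: x = (1.5, 3.5) is fractional, and -5.0 is only a lower bound
```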

  14. Branching rule (Brainstorm) 1 Select a single variable x_i whose value v_i is fractional in the optimal LPR solution of the node to branch. 2 Create two subproblems (nodes): • In one subproblem, add the constraint x_i ≤ ⌊v_i⌋. • In the other subproblem, add the constraint x_i ≥ ⌈v_i⌉.
  Original LP rel.: min −x_1 − x_2 s.t. −x_1 + x_2 ≤ 2, 8x_1 + 2x_2 ≤ 19, x_1, x_2 ≥ 0. Optimal solution is x_1 = 1.5, x_2 = 3.5, obj. val. −5.
  1st Modified ILP: min −x_1 − x_2 s.t. −x_1 + x_2 ≤ 2, 8x_1 + 2x_2 ≤ 19, x_1 ≤ 1, x_1, x_2 ≥ 0, x_1, x_2 ∈ Z.
  2nd Modified ILP: min −x_1 − x_2 s.t. −x_1 + x_2 ≤ 2, 8x_1 + 2x_2 ≤ 19, x_1 ≥ 2, x_1, x_2 ≥ 0, x_1, x_2 ∈ Z.
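A hedged sketch reproducing these numbers with scipy.optimize.linprog (assuming scipy; the branching constraints x_1 ≤ 1 and x_1 ≥ 2 are imposed here as variable bounds):

```python
from scipy.optimize import linprog

c = [-1, -1]                     # min -x1 - x2
A_ub = [[-1, 1], [8, 2]]         # -x1 + x2 <= 2,  8*x1 + 2*x2 <= 19
b_ub = [2, 19]

def solve_lpr(x1_bounds):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[x1_bounds, (0, None)], method="highs")
    return res.x, res.fun

print(solve_lpr((0, None)))   # original LP rel.:  x = (1.5, 3.5), obj = -5
print(solve_lpr((0, 1)))      # add x1 <= 1:       x = (1.0, 3.0), obj = -4   (integral)
print(solve_lpr((2, None)))   # add x1 >= 2:       x = (2.0, 1.5), obj = -3.5
```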

  15. Impact on the feasible region. [Figure: the feasible regions after branching. This is not the feasible region for the problems in the previous slide.]

  16. After branching. After branching, we solve each subproblem recursively. We stop when either: • the optimal obj. value of the LPR of the current problem is at least the current upper bound to the optimal obj. value of the original ILP; or • the LPR is infeasible. Branch-and-bound: we avoid exploring (branching into) subsets of the feasible region because the bounds tell us that they do not contain any better solution than what we have already found.

  17. LP-based Branch-and-Bound algorithm. L: data structure, will contain pairs (P, r) (P is an ILP, r is a number).
  1 L ← {(P_0, +∞)}, where P_0 is the original ILP.
  2 Upper bound u ← +∞, vector x* ← infeasible.
  3 Pop an ILP P from L and solve its LPR. Let z̃_P be the optimal obj. value and x̃_P the optimal solution of the LPR.
    • If the LPR is infeasible or z̃_P ≥ u, the node for P ...can be pruned (go to Step 4).
    • Else if z̃_P < u and x̃_P is integral, then ... u ← z̃_P, x* ← x̃_P, and ...remove all pairs (X, r) from L s.t. r ≥ u (go to Step 4).
    • Else, branch: create subproblems P_1 and P_2, and add (P_1, z̃_P) and (P_2, z̃_P) to L.
  4 If L ≠ ∅, go to Step 3. Else, terminate with output (x*, u).
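Here is a minimal, hedged implementation of this algorithm in Python (my own sketch, not code from the course): it uses scipy.optimize.linprog for the LPRs, represents the branching constraints x_i ≤ ⌊v_i⌋ / x_i ≥ ⌈v_i⌉ as variable bounds, and assumes a pure ILP of the form min c^T x s.t. A_ub x ≤ b_ub, x ≥ 0, x integer. The FIFO node order, tolerance, and names are arbitrary choices, not prescribed by the slides.

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, tol=1e-6):
    """LP-based branch and bound for: min c^T x  s.t.  A_ub x <= b_ub, x >= 0, x integer."""
    n = len(c)
    root_bounds = [(0, None)] * n               # P0: only x >= 0
    L = [(root_bounds, -math.inf)]              # pairs (P, r): node plus its parent's LPR bound
    u, x_star = math.inf, None                  # incumbent upper bound and solution

    while L:
        bounds, r = L.pop(0)                    # Step 3: pop a subproblem (FIFO order)
        if r >= u:                              # the parent's bound already rules this node out
            continue
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success or res.fun >= u:     # prune: LPR infeasible or z~_P >= u
            continue
        frac = [i for i, xi in enumerate(res.x) if abs(xi - round(xi)) > tol]
        if not frac:                            # integral LPR solution: new incumbent
            u, x_star = res.fun, [round(xi) for xi in res.x]
            L = [(b, r2) for (b, r2) in L if r2 < u]   # drop pairs with r >= u
            continue
        i, v = frac[0], res.x[frac[0]]          # branch on the first fractional variable
        lo, hi = bounds[i]
        down = list(bounds); down[i] = (lo, math.floor(v))   # add x_i <= floor(v)
        up = list(bounds);   up[i] = (math.ceil(v), hi)      # add x_i >= ceil(v)
        L.append((down, res.fun))
        L.append((up, res.fun))
    return x_star, u

# Example: the ILP from the branching slide.
print(branch_and_bound([-1, -1], [[-1, 1], [8, 2]], [2, 19]))   # optimal: x* = [1, 3], u = -4
```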

  18. Details to discuss. Details of the algorithm that are left open: • How do we select which variable to branch on if there are multiple fractional ones? • How do we select the next candidate subproblem to process? Remark: the trade-off space for the above choices is extremely large. “Optimizing” branch-and-bound is, in itself, a difficult optimization problem, and is still the subject of a lot of research. We also want to obtain good upper bounds as fast as possible (why?). How? (Obtaining good upper bounds faster allows us to prune larger regions, hence to terminate earlier.)

  19. Branching. In the optimal solution of an LPR, many variables may be fractional. Question: How do we select which variable to branch on? Question: What do we look for in a variable to branch on? Brainstorm. Answer: We want the two resulting subproblems to both give much better lower bounds than the subproblem we are branching on. We will present heuristics to achieve this goal.

  20. Branching (cont.) Most Infeasible Branching: choose the variable with the fractional value closest to 0.5. Reason: the closest feasible solutions for the LPRs of the subproblems will both have large distances from the optimal solution to the current relaxation. Underlying assumption: large distance implies large difference in objective value. Yes/No? It depends on the objective function!
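A short sketch of this rule (the helper name is mine): given the fractional LPR solution, return the index whose fractional part is closest to 0.5, or None if the solution is already (numerically) integral.

```python
def most_infeasible_branching(x, tol=1e-6):
    """Index of the variable whose fractional part is closest to 0.5 (None if x is integral)."""
    candidates = [(abs((xi % 1.0) - 0.5), i) for i, xi in enumerate(x)
                  if tol < xi % 1.0 < 1.0 - tol]
    return min(candidates)[1] if candidates else None

print(most_infeasible_branching([1.5, 3.9, 2.0]))   # -> 0 (fractional part 0.5 beats 0.9)
```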
