The point of all this
What you should remember about this talk before dozing off

Suppose you have a C code with:

  int a[10];
  int i;
  // various operations on a and i
  a[i] = 0;

If i ∉ [0, 9] during execution, you might have a bug.
How do you make sure i ∈ [0, 9]?
Find the smallest interval [ℓ, u] such that i ∈ [ℓ, u] during execution, without actually running the program.

Thanks to Volker Kaibel and Achill Schürmann for suggesting this "abstract"
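A minimal sketch of how such an interval for i can be computed without executing the program (assumptions: the "various operations" are condensed into a hypothetical assignment i = 2*j + 1 with j ∈ [0, 3], and the Interval class below is purely illustrative):

```python
# Illustrative interval analysis of an index expression, without running the code.
# Hypothetical assumption: the "various operations" amount to i = 2*j + 1, j in [0, 3].

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

j = Interval(0, 3)                        # known range of j
i = Interval(2, 2) * j + Interval(1, 1)   # abstract evaluation of i = 2*j + 1
print(i)                                  # [1, 7]
print(0 <= i.lo and i.hi <= 9)            # True: a[i] = 0 stays within a[10]
```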
Mathematical Programming as a formal language: Turing-Completeness and Applications
Leo Liberti, LIX, École Polytechnique, France
Acknowledgments: P. Belotti, S. Bosio, S. Cafieri, E. Goubault, J. Leconte, S. Le Roux, J. Lee, F. Marinelli
At the interface of two disciplines
[Diagram: two parallel chains. Foundations of Optimization: Mathematical Programming → MINLP → sBB → Bounds Tightening → FBBT. Computer Science: Programming Languages → Semantics → Abstract Interpretation → Algorithmic Invariants → Fixed Point Equations. Cross-links: Turing-completeness (application: MP representation for algorithms) and Fixed Point Equations / FBBT (application: Bounds Tightening in sBB).]
The left branch
[The same diagram, highlighting the Foundations of Optimization chain: Mathematical Programming → MINLP → sBB → Bounds Tightening → FBBT.]
MINLP

  min_x  f(x)
  s.t.   g(x) ≤ 0              [MINLP]
         x ∈ [x^L, x^U]
         x_j ∈ ℤ   ∀ j ∈ Z

x: a vector of n decision variables
f: ℝ^n → ℝ, g: ℝ^n → ℝ^m: might involve nonlinear terms and define nonconvex functions/sets
x^L, x^U: vectors of n known values, with x^L ≤ x^U
Z: a subset of {1, ..., n}
Globally optimal obj. fun. value f*, attained at the global optimum x*
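For concreteness, here is a small instance of this form (an illustrative example, not taken from the talk): min x_1 + x_2 s.t. 1 − x_1 x_2 ≤ 0, x ∈ [0, 4]^2, Z = {1} (so x_1 ∈ ℤ). The constraint function is bilinear, hence nonconvex, and f* = 2 is attained at x* = (1, 1).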
sBB

[Figure: a nonconvex objective f(x) over its range, split into subregion 1 and subregion 2, with convex relaxations in the whole space and in each subregion.
 a: solution of convex relaxation in the whole space
 b: local solution of the problem in the whole space, and local solution in subregion 2
 c: solution of convex relaxation in subregion 2
 d: solution of convex relaxation in subregion 1
 e: local solution of the problem in subregion 1
 Since h(c) > f(e), subregion 2 is discarded.]

Global phase:
1. Initialise the list L to the original problem
2. If L is empty, terminate
3. Select a problem P from L and remove it
4. Compute LB and UB for P
5. If UB − LB > ε, generate subproblems and insert them in L; otherwise P is solved: record its solution if best yet

- Finiteness not guaranteed if ε = 0
- Convergence depends on convergence of LB to the global optimum
- Judicious pruning improves the search
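A minimal sketch of this global phase (the compute_bounds and branch callables are placeholders for the relaxation, local-solution and branching machinery, not any solver's actual API):

```python
# Sketch of the sBB global phase (steps 1-5 above).
# compute_bounds(P) -> (lower bound, upper bound, feasible point) for subproblem P
# branch(P)         -> list of subproblems partitioning P's box

def sbb_global_phase(original_problem, compute_bounds, branch, eps=1e-6):
    L = [original_problem]               # 1. initialise list L to the original problem
    best_val, best_pt = float("inf"), None
    while L:                             # 2. if L is empty, terminate
        P = L.pop()                      # 3. select a problem P from L and remove it
        lb, ub, pt = compute_bounds(P)   # 4. compute LB and UB for P
        if lb >= best_val:               # judicious pruning: node cannot improve incumbent
            continue
        if ub < best_val:                # record solution if best yet
            best_val, best_pt = ub, pt
        if ub - lb > eps:                # 5. gap still larger than epsilon: branch
            L.extend(branch(P))
    return best_val, best_pt
```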
Bounds tightening
At each node, restrict the MINLP to the current box [x^L, x^U]
Convex relaxation R(x^L, x^U) with optimum f̄
sBB convergence property: |x^U − x^L| → 0 ⇒ f̄ → f*
Tighten variable bounds ⇒ get a tighter relaxation

OPTIMIZATION BASED BOUNDS TIGHTENING (OBBT)
1. Let F̄(x^L, x^U) be the feasible region of R(x^L, x^U)
2. ∀ j ≤ n: x̄_j = max { x_j : x ∈ F̄(x^L, x^U) }
3. ∀ j ≤ n: x̲_j = min { x_j : x ∈ F̄(x^L, x^U) }
4. If [x^L, x^U] = [x^L, x^U] ∩ [x̲, x̄], return [x^L, x^U] and exit; otherwise [x^L, x^U] ← [x^L, x^U] ∩ [x̲, x̄]
5. Recompute R(x^L, x^U) and repeat from Step 1

- OBBT might not be finitely convergent
- It is independent of the order on {x_1, ..., x_n} [Caprara & Locatelli, MPA, in press]
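A minimal sketch of this loop (solve_relaxation is a placeholder that minimises or maximises a single variable over the feasible region F̄ of the current relaxation; it stands in for an LP/convex solve, not for any specific solver interface):

```python
# Sketch of OBBT: optimise each variable over the convex relaxation to tighten
# its bounds, intersect with the current box, and repeat until the box is stable.
# solve_relaxation(bounds, j, sense) -> optimal value of x_j over F̄(x^L, x^U).

def obbt(bounds, solve_relaxation, tol=1e-9):
    while True:
        new_bounds = []
        for j, (lo, hi) in enumerate(bounds):
            xbar = solve_relaxation(bounds, j, "max")          # step 2: upper bound x̄_j
            xlow = solve_relaxation(bounds, j, "min")          # step 3: lower bound x̲_j
            new_bounds.append((max(lo, xlow), min(hi, xbar)))  # step 4: intersect boxes
        if all(abs(n[0] - o[0]) <= tol and abs(n[1] - o[1]) <= tol
               for n, o in zip(new_bounds, bounds)):
            return new_bounds            # box unchanged (up to tol): exit
        bounds = new_bounds              # step 5: recompute relaxation and repeat
```

Note the tolerance: as the slide points out, the exact procedure need not converge finitely, so an implementation stops when the change drops below tol.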
FBBT – Expression DAGs
FEASIBILITY BASED BOUNDS TIGHTENING (FBBT): propagation of variable bounds up and down the constraints, tightening via the constraint restriction
1. Represent functions by a DAG whose leaf nodes are constants/variables
   [Example DAG: operator nodes −, +, log, ÷, ×, ×, × over the leaves z_1, y_1, z_2, y_2, z_3, y_3, representing ∑_{i=1}^3 z_i y_i − log(z_3 / y_3)]
2. Associate the interval [x^L_j, x^U_j] to the leaf node of variable x_j
3. Perform interval arithmetic (IA) from the leaves up to the root
4. Intersect the root node interval with the constraint restriction
5. Perform inverse IA from the root down to the leaves
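A minimal sketch of one such up/down pass on the single constraint a·x_1 − x_2 = 0 used in the example on the next slides (the interval helpers are purely illustrative; a real implementation works on general expression DAGs and handles signs, division by intervals containing zero, etc.):

```python
# Sketch of one FBBT pass (steps 2-5) on the constraint a*x1 - x2 = 0, a > 0.
# Upward pass: interval arithmetic from the leaves to the root, then intersect
# the root with the constraint restriction [0, 0]; downward pass: inverse
# interval arithmetic from the root back to the leaves.

def intersect(I, J):
    return (max(I[0], J[0]), min(I[1], J[1]))

def fbbt_pass(x1, x2, a):
    ax1 = (a * x1[0], a * x1[1])                              # up: a*x1
    root = (ax1[0] - x2[1], ax1[1] - x2[0])                   # up: a*x1 - x2
    root = intersect(root, (0.0, 0.0))                        # constraint restriction: = 0
    ax1 = intersect(ax1, (root[0] + x2[0], root[1] + x2[1]))  # down: a*x1 = root + x2
    x2 = intersect(x2, (ax1[0] - root[1], ax1[1] - root[0]))  # down: x2 = a*x1 - root
    x1 = intersect(x1, (ax1[0] / a, ax1[1] / a))              # down: x1 = (a*x1) / a
    return x1, x2

a = 2.0
x1, x2 = (0.0, 1.0), (0.0, 1.0)
print(fbbt_pass(x1, x2, a))   # ((0.0, 0.5), (0.0, 1.0)): x1 tightened to [0, 1/a]
```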
FBBT – Example
a·x_1 − x_2 = 0,   x_1 ∈ [0, 1],   x_2 ∈ [0, 1],   a > 1
FBBT – Example (upward pass)
DAG for a·x_1 − x_2: leaves x_1 ∈ [0, 1]↑, a = [a, a], x_2 ∈ [0, 1]↑; root interval set to the constraint restriction [0, 0].
Product node: [a, a] · [0, 1] = [0, a]↑.
Root: [0, a] − [0, 1] = [−1, a]↑, intersected with the restriction: [−1, a] ∩ [0, 0] = [0, 0].

FBBT – Example (downward pass)
Product node: [0, 0] + [0, 1] = [0, 1]↓.
x_2: [0, 1]↓ (unchanged).
x_1: [0, 1] / [a, a] = [0, 1/a]↓.

Further iterations do not change the intervals ⇒ convergence
FBBT – Nonconvergence
a·x_1 − x_2 = 0
x_1 − a·x_2 = 0
x_1 ∈ [0, 1],   x_2 ∈ [0, 1],   a > 1
FBBT – Nonconvergence
Two DAGs, one per constraint (a·x_1 − x_2 = 0 and x_1 − a·x_2 = 0), sharing the variables x_1 and x_2, both initially in [0, 1].
First constraint (as in the previous example): upward pass [0, a] − [0, 1] = [−1, a], intersected with [0, 0]; downward pass leaves x_2 ∈ [0, 1]↓ and tightens x_1 to [0, 1/a]↓.
Second constraint: with the new interval x_1 ∈ [0, 1/a], the upward pass gives [0, 1/a] − [0, a] = [−a, 1/a], intersected with [0, 0]; the downward pass gives a·x_2 ∈ [0, 1/a]↓ and hence x_2 ∈ [0, 1/a²]↓.
Repeating the process k times yields the sequences x_1 ∈ [0, 1/a^(2k−1)], x_2 ∈ [0, 1/a^(2k)]: NONCONVERGENCE
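A tiny numerical illustration of this shrinking behaviour (illustrative only, with a = 2): each sweep divides the upper bound of x_1 and then of x_2 by a, so the iteration approaches the fixed point ([0, 0], [0, 0]) but never reaches it finitely.

```python
# Illustrative check of FBBT nonconvergence on a*x1 - x2 = 0, x1 - a*x2 = 0,
# starting from x1, x2 in [0, 1] with a > 1 (lower bounds stay at 0 throughout).

a = 2.0
u1, u2 = 1.0, 1.0              # upper bounds of x1, x2
for k in range(1, 6):
    u1 = min(u1, u2 / a)       # from a*x1 = x2: x1 <= u2 / a
    u2 = min(u2, u1 / a)       # from x1 = a*x2: x2 <= u1 / a
    print(k, u1, u2)           # u1 = 1/a**(2k-1), u2 = 1/a**(2k)
```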
Motivation
During a 2007 meeting at IBM about COUENNE's conception and design, having realized FBBT's nonconvergent behaviour, the following question was put forward:
Can we define an optimization problem whose optimal solution is the FBBT limit?
After hours of unsuccessful attempts, we moved over to Andreas' biscuit tin and forgot all about this.
Can we get a finitely convergent FBBT?
Right branch and connections
[The same two-chain diagram, now focusing on the Computer Science side and its connections: Programming Languages → Semantics → Abstract Interpretation → Algorithmic Invariants → Fixed Point Equations, linked to MINLP/sBB/FBBT on the optimization side.]
Programming languages
Syntax: verifying whether a string of symbols belongs to a language defined formally over a given alphabet; this establishes whether a formula is valid ((x + y) is valid, +)x(y isn't)
Semantics: assignment of sets to the variable symbols appearing in valid formulæ; this defines the meaning of a valid formula
Turing-completeness: can the language be used to simulate a Universal Turing Machine (UTM)? This establishes the "expressive power" of a language
First connection
Is MP Turing-complete?