
Algebraic-Geometric ideas in Discrete Optimization
Jesús A. De Loera, UC Davis
New results from several papers, joint work with (subsets of): M. Köppe & J. Lee (IBM), U. Rothblum & S. Onn (Technion, Haifa), R. Hemmecke (T.Univ.)


Integer Linear Programming: The state of the art
Traditional algorithms:
- Dual (polyhedral) techniques: cutting-plane algorithms, based on polyhedral theory.
- Enumeration: branch-and-bound.
- Ad hoc methods for special structure (e.g. network problems, matroids, etc.).
- Mathematical modelling: strong initial IP formulations.
[Figure: maximizing c^⊤ x over a polytope in the (x_1, x_2)-plane, with the optimum x_0 at a vertex.]

OUR WISH: We want to handle more complicated constraints and objective functions.

Example: Non-linear transportation polytopes
1. In the traditional transportation problem the cost on an edge is a constant, so we optimize a linear function.
2. But due to congestion, heavy traffic, or heavy communication load, the transportation cost on an edge can be a non-linear function of the flow on that edge. For example, the cost on edge (i, j) is f_ij(x_ij) = c_ij |x_ij|^{a_ij} for suitable constants a_ij.
3. This results in a non-linear objective ∑ f_ij, which is much harder to minimize.
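To make the shape of this objective concrete, here is a minimal sketch (my own toy data and made-up edge parameters, not from the talk) that evaluates the congestion-type cost for a candidate transportation plan:

```python
# Toy illustration of congestion-type edge costs f_ij(x_ij) = c_ij * |x_ij|**a_ij:
# evaluate the resulting non-linear objective for a candidate transportation plan x.
def nonlinear_cost(x, c, a):
    """x, c, a are dicts keyed by edge (i, j); returns sum of c_ij * |x_ij|**a_ij."""
    return sum(c[e] * abs(x[e]) ** a[e] for e in x)

x = {(0, 0): 3, (0, 1): 2, (1, 0): 1, (1, 1): 4}   # a made-up 2x2 transportation plan
c = {e: 1.0 for e in x}
a = {e: 1.5 for e in x}                            # exponents a_ij > 1 model congestion
print(nonlinear_cost(x, c, a))                     # ~17.02, vs. linear cost 10 when all a_ij = 1
```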

Reality is NON-LINEAR and worse!!
The problem is Non-linear Discrete Optimization:
max/min f(x_1, ..., x_d) subject to g_j(x_1, ..., x_d) ≤ 0 for j = 1, ..., s, with the x_i integer and f, g_j non-linear.
BAD NEWS: this is INCREDIBLY HARD.
Theorem: The problem is UNDECIDABLE already when f and the g_j are arbitrary polynomials (Jeroslow, 1979).
EVEN WORSE. Theorem: It is undecidable even with the number of variables fixed at 10 (Matiyasevich and Davis, 1982).
WHAT CAN BE DONE IN THIS GENERAL CONTEXT?? Prove good theorems? Are there efficient algorithms?
THERE IS HOPE with good structure. Theorem: For a fixed number of variables AND convex polynomials f, g_i, the problem can be solved in polynomial time (Khachiyan and Porkolab, 2000).

How about polyhedral constraints and a non-linear objective??
Let f be a multivariate polynomial function. Three special programs:
- max f(x) s.t. Ax ≤ b — Hard (NP-hard).
- max f(x) s.t. Ax ≤ b, all x_i integer — Hard (NP-hard).
- max f(x) s.t. Ax ≤ b, all x_i integer, and the matrix A is SPECIAL — ???
We study TWO special cases.

Algebraic Geometric Ideas in Optimization

Special Assumption I: FIXED DIMENSION
Problem type: max f(x_1, ..., x_d) subject to (x_1, ..., x_d) ∈ P ∩ Z^d, where
- P is a polytope (bounded polyhedron) given by linear constraints,
- f is a (multivariate) polynomial function, non-negative over P ∩ Z^d,
- the dimension d is fixed.
Prior work:
- Integer Linear Programming can be solved in polynomial time (H. W. Lenstra Jr., 1983).
- Convex polynomials f can be minimized in polynomial time (Khachiyan and Porkolab, 2000).
- Optimizing an arbitrary degree-4 polynomial f for d = 2 is NP-hard.
WHAT CAN BE PROVED IN THIS CASE??

Applications of Barvinok's Algorithms

Idea: New Representation of Lattice Points
Given K ⊂ R^n, we define the formal power series
  f(K) = ∑_{α ∈ K ∩ Z^n} z_1^{α_1} z_2^{α_2} ⋯ z_n^{α_n}.
Think of the lattice points as monomials!!! EXAMPLE: (7, 4, −3) becomes z_1^7 z_2^4 z_3^{−3}.
Theorem (see R. Stanley, Enumerative Combinatorics, Vol. 1): Given K = {x ∈ R^n | Ax = b, Bx ≤ b'} where A, B are integral matrices and b, b' are integral vectors, the generating function f(K) can be encoded as a rational function.
GOOD NEWS: ALL the lattice points of the polyhedron K can be encoded in a sum of rational functions efficiently!!!
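To see the encoding in the smallest possible setting, the following sympy sketch (my own toy triangle, not from the slides) builds f(K) literally as a sum of monomials and recovers the point count by substituting z = (1, 1):

```python
# Small sympy illustration of "lattice points as monomials":
# the generating function f(K) of the triangle K = {(x, y) : x, y >= 0, x + y <= 3}.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
points = [(x, y) for x in range(4) for y in range(4) if x + y <= 3]
fK = sum(z1**a * z2**b for a, b in points)

print(fK)                          # 10 monomials, one per lattice point of K
print(fK.subs({z1: 1, z2: 1}))     # evaluating at z = (1, 1) counts the points: 10
```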

Barvinok's short rational generating functions
Generating functions: g_P(z) = z^0 + z^1 + z^2 + z^3 + ⋯ + z^M = (1 − z^{M+1})/(1 − z) for z ≠ 1.
Theorem (Alexander Barvinok, 1994): Let the dimension d be fixed. There is a polynomial-time algorithm for computing a representation of the generating function
  g_P(z_1, ..., z_d) = ∑_{(α_1, ..., α_d) ∈ P ∩ Z^d} z_1^{α_1} ⋯ z_d^{α_d} = ∑_{α ∈ P ∩ Z^d} z^α
of the integer points P ∩ Z^d of a polyhedron P ⊂ R^d (given by rational inequalities) in the form of a rational function.
Corollary: In particular, N = |P ∩ Z^d| = g_P(1) can be computed in polynomial time (in fixed dimension).

Example: Let P be the square with vertices V_1 = (0, 0), V_2 = (5000, 0), V_3 = (5000, 5000), and V_4 = (0, 5000). The generating function f(P) has over 25,000,000 monomials:
  f(P) = 1 + z_1 + z_2 + z_1 z_2^2 + z_1^2 z_2 + ⋯ + z_1^{5000} z_2^{5000}.

But it can be written using only four rational functions:
  f(P) = 1/((1 − z_1)(1 − z_2)) + z_1^{5000}/((1 − z_1^{−1})(1 − z_2)) + z_2^{5000}/((1 − z_2^{−1})(1 − z_1)) + z_1^{5000} z_2^{5000}/((1 − z_1^{−1})(1 − z_2^{−1})).
Also, f(tP, z) is
  1/((1 − z_1)(1 − z_2)) + z_1^{5000·t}/((1 − z_1^{−1})(1 − z_2)) + z_2^{5000·t}/((1 − z_2^{−1})(1 − z_1)) + z_1^{5000·t} z_2^{5000·t}/((1 − z_1^{−1})(1 − z_2^{−1})).
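As a sanity check of this four-cone formula, here is a small sympy sketch (mine; the square is shrunk to side M = 3 so the brute-force expansion stays tiny) confirming that the sum of the four rational functions equals the full monomial sum:

```python
# Sanity check with sympy: the sum of the four vertex-cone rational functions
# for the square [0, M]^2 equals the full monomial expansion (here M = 3).
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
M = 3

brute = sum(z1**i * z2**j for i in range(M + 1) for j in range(M + 1))

cones = (1 / ((1 - z1) * (1 - z2))
         + z1**M / ((1 - 1/z1) * (1 - z2))
         + z2**M / ((1 - 1/z2) * (1 - z1))
         + z1**M * z2**M / ((1 - 1/z1) * (1 - 1/z2)))

print(sp.cancel(cones - brute))    # 0: the four rational functions encode all 16 points
```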

Rational Function of a pointed Cone
EXAMPLE: take d = 2 and the cone K generated by c_1 = (1, 2), c_2 = (4, −1). We have:
  f(K) = (z_1^4 z_2 + z_1^3 z_2 + z_1^2 z_2 + z_1 z_2 + z_1^4 + z_1^3 + z_1^2 + z_1 + 1) / ((1 − z_1 z_2^2)(1 − z_1^4 z_2^{−1})).
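The numerator above lists the lattice points of the half-open fundamental parallelepiped of the cone, following the standard construction. A brute-force sketch (mine, not from the slide) recovers those nine points:

```python
# Brute-force check that the numerator of f(K) lists exactly the lattice points of the
# half-open fundamental parallelepiped {s*c1 + t*c2 : 0 <= s, t < 1} of the cone.
from fractions import Fraction
from itertools import product

c1, c2 = (1, 2), (4, -1)
det = c1[0] * c2[1] - c1[1] * c2[0]                  # -9, so |det| = 9 points expected

points = []
for x, y in product(range(-1, 6), range(-2, 3)):     # a box containing the parallelepiped
    # (s, t) solves s*c1 + t*c2 = (x, y)
    s = Fraction(x * c2[1] - y * c2[0], det)
    t = Fraction(y * c1[0] - x * c1[1], det)
    if 0 <= s < 1 and 0 <= t < 1:
        points.append((x, y))

print(len(points), sorted(points))
# 9 points: (0,0),(1,0),(1,1),(2,0),(2,1),(3,0),(3,1),(4,0),(4,1) -- the numerator monomials
```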


Theorem (FPTAS for Integer Polynomial Maximization)
Let the dimension d be fixed. There exists an algorithm whose input data are a polytope P ⊂ R^d, given by rational linear inequalities, and a polynomial f ∈ Z[x_1, ..., x_d] with integer coefficients and maximum total degree D that is non-negative on P ∩ Z^d, with the following properties.
1. For a given k, it computes, in running time polynomial in k, the encoding size of P and f, and D, lower and upper bounds L_k ≤ f(x_max) ≤ U_k satisfying
   U_k − L_k ≤ (|P ∩ Z^d|^{1/k} − 1) · f(x_max).
2. For k = (1 + 1/ε) log(|P ∩ Z^d|), the bounds satisfy U_k − L_k ≤ ε f(x_max), and they can be computed in time polynomial in the input size, the total degree D, and 1/ε.
3. By iterated bisection of P ∩ Z^d, it constructs a feasible solution x_ε ∈ P ∩ Z^d with |f(x_ε) − f(x_max)| ≤ ε f(x_max).
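To illustrate how the L_k / U_k bounds behave, here is a toy sketch (entirely my own data; it enumerates the lattice points directly, whereas the theorem's algorithm obtains the needed power sums from Barvinok rational functions without enumeration):

```python
# Illustration only: the L_k / U_k bounds from the theorem, computed here by
# brute-force enumeration of P ∩ Z^d rather than via Barvinok rational functions.
def fptas_bounds(lattice_points, f, k):
    """Return (L_k, U_k) with L_k <= max f <= U_k, assuming f >= 0 on the points."""
    s_k = sum(f(p) ** k for p in lattice_points)      # sum of k-th powers of f
    n = len(lattice_points)
    U = s_k ** (1.0 / k)                              # max f <= (sum f^k)^(1/k)
    L = (s_k / n) ** (1.0 / k)                        # max f >= (mean of f^k)^(1/k)
    return L, U                                       # so U - L <= (n^(1/k) - 1) * max f

points = [(x, y) for x in range(11) for y in range(11)]   # P = [0,10]^2
f = lambda p: (p[0] - 3) ** 2 + p[1]                      # max over P ∩ Z^2 is 59
for k in (1, 2, 4, 8, 16):
    L, U = fptas_bounds(points, f, k)
    print(k, round(L, 2), round(U, 2))                # the bracket tightens around 59
```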

Differential operators on generating functions
The Euler differential operator z d/dz maps
  g(z) = ∑_{j=0}^{D} g_j z^j  ↦  (z d/dz) g(z) = ∑_{j=0}^{D} (j · g_j) z^j.
Example: g_P(z) = z^0 + z^1 + z^2 + z^3 + z^4 = 1/(1 − z) − z^5/(1 − z).
Apply the differential operator:
  (z d/dz) g_P(z) = z + 2 z^2 + 3 z^3 + 4 z^4 = (z − 5 z^5 + 4 z^6)/(1 − z)^2.
Apply the differential operator again:
  (z d/dz)(z d/dz) g_P(z) = z + 4 z^2 + 9 z^3 + 16 z^4 = (z + z^2 − 25 z^5 + 39 z^6 − 16 z^7)/(1 − z)^3.
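A quick sympy check of these identities (my addition), applying z d/dz directly to the rational-function form:

```python
# Quick sympy check of the Euler operator z d/dz acting on
# g_P(z) = 1 + z + z^2 + z^3 + z^4, kept in rational-function form.
import sympy as sp

z = sp.symbols('z')
g = (1 - z**5) / (1 - z)

euler = lambda h: sp.cancel(z * sp.diff(h, z))       # the operator z d/dz

g1 = euler(g)                                        # coefficients become 0,1,2,3,4
g2 = euler(g1)                                       # coefficients become 0,1,4,9,16

print(sp.cancel(g1 - sum(j * z**j for j in range(5))))       # 0
print(sp.cancel(g2 - sum(j**2 * z**j for j in range(5))))    # 0
```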

Differential operators on generating functions
Lemma: Any polynomial f(x_1, ..., x_d) = ∑_β c_β x^β ∈ Z[x_1, ..., x_d] can be converted to a differential operator
  D_f = f(z_1 ∂/∂z_1, ..., z_d ∂/∂z_d) = ∑_β c_β (z_1 ∂/∂z_1)^{β_1} ⋯ (z_d ∂/∂z_d)^{β_d}
which maps
  g(z) = ∑_{α ∈ S} z^α  ↦  (D_f g)(z) = ∑_{α ∈ S} f(α) z^α.
Theorem: Let g_P(z) be the Barvinok generating function of the lattice points of P, and let f be a polynomial in Z[x_1, ..., x_d] of maximum total degree D. We can compute, in time polynomial in D and the size of the input data, a Barvinok rational function representation g_{P,f}(z) for ∑_{α ∈ P ∩ Z^d} f(α) z^α.
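A small two-variable instance of the lemma (my own choice of f and of the finite set S), checking that D_f weights each monomial z^α by f(α):

```python
# Sketch of the lemma for d = 2: the operator D_f built from f(x1, x2) = x1*x2 + 3*x1
# sends a sum of z^alpha over a finite set S to the sum of f(alpha) * z^alpha.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
S = [(0, 0), (1, 2), (2, 1), (3, 3)]                 # a made-up finite exponent set S
g = sum(z1**a * z2**b for a, b in S)

# D_f = (z1 d/dz1)(z2 d/dz2) + 3 (z1 d/dz1), matching f(x1, x2) = x1*x2 + 3*x1
Df = lambda h: z1 * sp.diff(z2 * sp.diff(h, z2), z1) + 3 * z1 * sp.diff(h, z1)

expected = sum((a * b + 3 * a) * z1**a * z2**b for a, b in S)
print(sp.expand(Df(g) - expected))                   # 0
```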

Graver Bases

Graver Bases Algorithms
We are interested in optimizing a convex function over {x ∈ Z^n : Ax = b, x ≥ 0}. We will use basic algebraic geometry.
For the lattice L(A) = {x ∈ Z^n : Ax = 0}, introduce a natural partial order on the lattice vectors: for u, v ∈ Z^n, u is conformally smaller than v, denoted u ❁ v, if |u_i| ≤ |v_i| and u_i v_i ≥ 0 for i = 1, ..., n.
E.g., (3, −2, −8, 0, 8) ❁ (4, −3, −9, 0, 9), but it is incomparable to (−4, −3, 9, 1, −8).
Computing the Graver basis is equivalent to several Hilbert basis computations.
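A one-line checker for this partial order (mine), reproducing the two comparisons above:

```python
# A quick checker for the conformal order defined above.
def conformally_smaller(u, v):
    """True iff |u_i| <= |v_i| and u_i * v_i >= 0 for every coordinate i."""
    return all(abs(ui) <= abs(vi) and ui * vi >= 0 for ui, vi in zip(u, v))

print(conformally_smaller((3, -2, -8, 0, 8), (4, -3, -9, 0, 9)))    # True
print(conformally_smaller((3, -2, -8, 0, 8), (-4, -3, 9, 1, -8)))   # False: sign patterns clash
```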

The Graver basis of an integer matrix A is the set of conformal-minimal nonzero integer dependencies on A.
Example: If A = [1 2 1], then its Graver basis is ±{ [2, −1, 0], [0, −1, 2], [1, 0, −1], [1, −1, 1] }.
The fastest algorithm to compute Graver bases is based on a completion and project-and-lift method (Got Gröbner bases?). It is implemented in 4ti2 (by R. Hemmecke and P. Malkin).
Graver bases contain, and generalize, the LP test set given by the circuits of the matrix A. Circuits contain all possible edges of the polyhedra in the family P(b) := {x | Ax = b, x ≥ 0}.
Theorem: The Graver basis contains all edge directions of the integer hulls conv({x | Ax = b, x ≥ 0, x ∈ Z^n}) as b changes.
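The small example can be verified by brute force. The following sketch (mine; it only searches a small coordinate box, which happens to suffice here but is not a proof in general) recomputes the Graver basis of A = [1 2 1] as the conformal-minimal nonzero kernel vectors:

```python
# Brute-force sanity check (restricted to a small coordinate box) of the Graver basis
# of A = [1 2 1]: nonzero kernel vectors that are conformal-minimal.
from itertools import product

def conformally_smaller(u, v):
    return all(abs(a) <= abs(b) and a * b >= 0 for a, b in zip(u, v))

A = (1, 2, 1)
box = range(-2, 3)
kernel = [u for u in product(box, repeat=3)
          if any(u) and sum(ai * ui for ai, ui in zip(A, u)) == 0]

graver = [u for u in kernel
          if not any(v != u and conformally_smaller(v, u) for v in kernel)]
print(sorted(graver))    # +/- of (2,-1,0), (0,-1,2), (1,0,-1), (1,-1,1), as on the slide
```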

For a fixed cost vector c, we can visualize the Graver basis of an integer program by creating a graph. Here is how to construct it: consider L(b) := {x | Ax = b, x ≥ 0, x ∈ Z^n}. The nodes are the lattice points in L(b), and the Graver basis elements give directed edges departing from each lattice point u ∈ L(b).

GOOD NEWS: Test Sets and Augmentation Method
A TEST SET is a finite collection of integral vectors with the property that every feasible non-optimal solution of an integer program can be improved by adding a vector in the test set.
Theorem [J. Graver, 1975]: Graver bases for A can be used to solve the augmentation problem: given A ∈ Z^{m×n}, x ∈ N^n and c ∈ Z^n, either find an improving direction g ∈ Z^n, namely one with x − g ∈ {y ∈ N^n : Ay = Ax} and cg > 0, or assert that no such g exists.
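Here is a minimal sketch of the augmentation idea (my own; it uses the sign convention x → x + g with improvement c·g < 0 for minimization, a made-up cost vector, and the Graver basis of A = [1 2 1] from the earlier example):

```python
# Sketch of the augmentation method with a Graver basis as test set, written with the
# sign convention x -> x + g and improvement c.g < 0 (minimization over Ax = b, x >= 0).
def augment(x, graver, c):
    """Greedily apply improving Graver moves that keep x >= 0 until none exists."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for g in graver:
            step = [xi + gi for xi, gi in zip(x, g)]
            if all(s >= 0 for s in step) and sum(ci * gi for ci, gi in zip(c, g)) < 0:
                x, improved = step, True
                break
    return x

half = [(2, -1, 0), (0, -1, 2), (1, 0, -1), (1, -1, 1)]
graver = half + [tuple(-t for t in g) for g in half]
c = (1, 1, 3)                              # minimize c.x over {x in N^3 : x1 + 2*x2 + x3 = 6}
print(augment([0, 0, 6], graver, c))       # reaches the optimum [0, 3, 0]
```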

BAD NEWS!!
Graver bases contain Gröbner bases and Hilbert bases (work by Hosten, Graver, Scarf, Sturmfels, Sullivant, Thomas, Weismantel, and many others).
Graver test sets can be exponentially large even in fixed dimension! They are very hard to compute, so you do not want to do this too often. The test set is typically stored as a plain list, which one then has to search through. It is NP-complete to decide whether a list of vectors is a complete Graver basis.
New results: There are useful cases where Graver bases become very manageable and efficient. BUT WE NEED HIGHLY STRUCTURED MATRICES!!

Special Assumption II: Highly structured Matrices
Fix any pair of integer matrices A and B with the same number of columns, of dimensions r × q and s × q, respectively. The n-fold matrix of the ordered pair A, B is the following (s + nr) × nq matrix:

  [A, B]^(n) := (1_n ⊗ B) ⊕ (I_n ⊗ A) =
    [ B B B ··· B ]
    [ A 0 0 ··· 0 ]
    [ 0 A 0 ··· 0 ]
    [ ⋮ ⋮ ⋮  ⋱  ⋮ ]
    [ 0 0 0 ··· A ]

N-fold systems DO appear in applications! Yes: transportation problems with a fixed number of suppliers!
Theorem: Fix any integer matrices A, B of sizes r × q and s × q, respectively. Then there is a polynomial-time algorithm that, given any n, an integer vector b, a cost vector c, and a convex function f, solves the corresponding n-fold integer programming problem
  max { f(cx) : [A, B]^(n) x = b, x ∈ N^{nq} }.
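A short helper showing how the block structure is assembled (my own code, using numpy Kronecker products; the function name n_fold is mine):

```python
# Helper that builds the n-fold matrix [A, B]^(n) with numpy, following the block
# structure above; A is r x q and B is s x q.
import numpy as np

def n_fold(A, B, n):
    A, B = np.asarray(A), np.asarray(B)
    top = np.kron(np.ones((1, n), dtype=int), B)     # [B B ... B], size s x nq
    bottom = np.kron(np.eye(n, dtype=int), A)        # block-diagonal copies of A, size nr x nq
    return np.vstack([top, bottom])

# The pair used later in "Proof by Example": A = [1 1], B = I_2, n = 2.
print(n_fold([[1, 1]], np.eye(2, dtype=int), 2))
```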

Key Lemma: Fix any pair of integer matrices A ∈ Z^{r×q} and B ∈ Z^{s×q}. Then there is a polynomial-time algorithm that, given n, computes the Graver basis G([A, B]^(n)) of the n-fold matrix [A, B]^(n). In particular, the cardinality and the bit size of G([A, B]^(n)) are bounded by a polynomial function of n.
Key Idea (from Algebraic Geometry) [Aoki–Takemura, Santos–Sturmfels, Hosten–Sullivant]: For every pair of integer matrices A ∈ Z^{r×q} and B ∈ Z^{s×q}, there exists a constant g(A, B) such that, for all n, the Graver basis of [A, B]^(n) consists of vectors with at most g(A, B) nonzero components (where each component is one of the n bricks in Z^q). The smallest such constant g(A, B) is the Graver complexity of A, B.

Proof by Example
Consider the matrices A = [1 1] and B = I_2. The Graver complexity of the pair A, B is g(A, B) = 2.

  [A, B]^(2) =
    [ 1 0 1 0 ]
    [ 0 1 0 1 ]
    [ 1 1 0 0 ]
    [ 0 0 1 1 ],        G([A, B]^(2)) = ± { (1, −1, −1, 1) }.

By our theorem, the Graver basis of the 4-fold matrix

  [A, B]^(4) =
    [ 1 0 1 0 1 0 1 0 ]
    [ 0 1 0 1 0 1 0 1 ]
    [ 1 1 0 0 0 0 0 0 ]
    [ 0 0 1 1 0 0 0 0 ]
    [ 0 0 0 0 1 1 0 0 ]
    [ 0 0 0 0 0 0 1 1 ]

is

  G([A, B]^(4)) = ±
    [ 1 −1 −1  1  0  0  0  0 ]
    [ 1 −1  0  0 −1  1  0  0 ]
    [ 1 −1  0  0  0  0 −1  1 ]
    [ 0  0  1 −1 −1  1  0  0 ]
    [ 0  0  1 −1  0  0 −1  1 ]
    [ 0  0  0  0  1 −1 −1  1 ].
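A self-contained numpy check (mine) that the listed generators really lie in the kernel of [A, B]^(4) and that each uses at most g(A, B) = 2 nonzero bricks:

```python
# Check that the listed generators of G([A, B]^(4)) lie in the integer kernel of the
# 4-fold matrix built from A = [1 1], B = I_2, and that each uses at most 2 bricks.
import numpy as np

def n_fold(A, B, n):
    A, B = np.asarray(A), np.asarray(B)
    return np.vstack([np.kron(np.ones((1, n), dtype=int), B),
                      np.kron(np.eye(n, dtype=int), A)])

M4 = n_fold([[1, 1]], np.eye(2, dtype=int), 4)

G = np.array([[1, -1, -1, 1, 0, 0, 0, 0],
              [1, -1, 0, 0, -1, 1, 0, 0],
              [1, -1, 0, 0, 0, 0, -1, 1],
              [0, 0, 1, -1, -1, 1, 0, 0],
              [0, 0, 1, -1, 0, 0, -1, 1],
              [0, 0, 0, 0, 1, -1, -1, 1]])

print(np.all(M4 @ G.T == 0))                              # True: all lie in ker [A,B]^(4)
bricks = (np.abs(G).reshape(6, 4, 2).sum(axis=2) > 0).sum(axis=1)
print(bricks.max())                                       # 2 = the Graver complexity g(A, B)
```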

Conclusions and Future work
LINEAR methods are not sufficient to solve all current integer optimization models, even the simple linear ones! There is demand to solve NON-LINEAR optimization problems, not just to model things linearly anymore.
In fact, NON-LINEAR ideas can be applied to classical problems too! (ASK ME about them!):
- Hilbert's Nullstellensatz algorithm in graph optimization problems
- Central paths of interior-point methods as algebraic curves
- Santos' topological thinking for the Hirsch conjecture
Tools from Algebra, Number Theory, Functional Analysis, Probability, and Convex Geometry are bound to play a stronger role in the foundations of new algorithmic tools! Not only do the foundations need to be studied; new software that uses all these ideas is beginning to appear: 4ti2, LattE.
