
Empirical Performance Models - B.A. Rachunok, School of Industrial Engineering (PowerPoint PPT Presentation)



  1. Empirical Performance Models
     B.A. Rachunok, School of Industrial Engineering, Purdue University
     IE590, September 10, 2016

  2. History
     ◮ Early work by Cheeseman et al. (1991) → varied parameters and examined the results
     ◮ Fink (1998) → used regression to predict which of three algorithms would work best
     ◮ Leyton-Brown et al. (2003) → predicted runtimes for several solvers
     ◮ I read the major papers by Leyton-Brown et al., and their work will be the focus of this presentation

  3. EPH Motivation
     ◮ We already have complexity theory to analyze algorithm performance, so why do we need this too? Consider:
     ◮ TSP is O(n!) if solved via brute force
     ◮ Yet I can solve instances with 5000 points on my gross old laptop
     ◮ Now imagine an algorithm with O(n^4 · e^10000) performance...
     ◮ ...still polynomial

  4. Motivation Pt 2
     ◮ Knowing how a particular problem instance will behave gives us some options
     ◮ Potentially select the "best" predicted solver for a given instance (more on this later; a sketch follows this slide)
     ◮ Provide some insight into how solvers handle instances (e.g., the 4.3 clause-to-variable ratio)
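
A rough sketch of what the second bullet means in practice: given one fitted performance model per solver, per-instance selection just queries every model on the instance's feature vector and picks the solver with the smallest predicted runtime. The dictionary of models and the scikit-learn-style predict interface below are assumptions for illustration, not anything specified in the slides.

    # Sketch: per-instance solver selection from per-solver performance models.
    # `models` maps solver name -> already-fitted regressor with a predict() method;
    # `features` is one instance's feature vector.
    import numpy as np

    def select_solver(models, features):
        x = np.asarray(features, dtype=float).reshape(1, -1)
        # Predicted (log) runtime of each candidate solver on this instance.
        predicted = {name: float(model.predict(x)[0]) for name, model in models.items()}
        return min(predicted, key=predicted.get)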

  5. Clause-to-Variable Ratio
     Graph taken from Mitchell, Selman, and Levesque (1992)

  6. Marker Time
     To the board

  7. Types of Models
     ◮ Ridge regression
     ◮ Neural networks
     ◮ Gaussian process regression
     ◮ Regression trees
     ◮ INSERT WHATEVER I USED HERE
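
To make the list concrete, here is a minimal sketch of fitting one of these, ridge regression, as an empirical performance model. The feature matrix and runtimes are generated randomly as stand-ins for real instance data, scikit-learn is assumed as the library, and, following the Leyton-Brown et al. line of work, the target is the log of runtime with a quadratic basis-function expansion of the features.

    # Sketch: ridge-regression empirical performance model on hypothetical data.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.random((200, 10))            # hypothetical instance features
    y = np.exp(rng.normal(size=200))     # hypothetical runtimes in seconds

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Quadratic basis-function expansion, then ridge regression on log runtime.
    model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
    model.fit(X_train, np.log(y_train))

    pred_log = model.predict(X_test)
    print("mean absolute log-runtime error:", np.mean(np.abs(pred_log - np.log(y_test))))

Swapping Ridge for a Gaussian process or tree regressor changes only the final pipeline step.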

  8. Types of Features for MIP
     ◮ Number of constraints and variables
     ◮ Variable types
     ◮ ≤, ≥, or = for the RHS
     ◮ Mean of the objective function coefficients
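
A sketch of how features like these might be computed. The plain-dictionary instance layout (objective, senses, var_types) is an assumed representation for illustration, not any particular solver's API.

    # Sketch: syntactic features of a MIP instance from an assumed dict layout.
    import numpy as np

    def mip_features(instance):
        senses = instance["senses"]        # "<=", ">=", or "=" per constraint
        var_types = instance["var_types"]  # e.g. "continuous", "integer", "binary"
        obj = np.asarray(instance["objective"], dtype=float)
        return {
            "n_constraints": len(senses),
            "n_variables": len(var_types),
            "frac_integer_vars": float(np.mean([t != "continuous" for t in var_types])),
            "frac_le": senses.count("<=") / len(senses),
            "frac_ge": senses.count(">=") / len(senses),
            "frac_eq": senses.count("=") / len(senses),
            "mean_obj_coeff": float(obj.mean()),
        }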

  9. Types of Features for TSP
     ◮ Number of nodes
     ◮ Cluster distance
     ◮ Area spanned by the nodes
     ◮ Centroid of the points
     ◮ Nearest-neighbor path length
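
And a corresponding sketch for the TSP features, taking an (n, 2) array of city coordinates. "Cluster distance" is approximated here as the mean nearest-neighbor distance, which may differ from the exact definition used in the original papers.

    # Sketch: simple geometric features of a Euclidean TSP instance.
    import numpy as np

    def tsp_features(points):
        points = np.asarray(points, dtype=float)
        n = len(points)
        dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

        # Greedy nearest-neighbor tour length starting from node 0.
        unvisited, current, tour_len = set(range(1, n)), 0, 0.0
        while unvisited:
            nxt = min(unvisited, key=lambda j: dist[current, j])
            tour_len += dist[current, nxt]
            unvisited.remove(nxt)
            current = nxt
        tour_len += dist[current, 0]      # return to the start

        np.fill_diagonal(dist, np.inf)
        width, height = points.max(axis=0) - points.min(axis=0)
        return {
            "n_nodes": n,
            "area_spanned": float(width * height),
            "centroid_x": float(points[:, 0].mean()),
            "centroid_y": float(points[:, 1].mean()),
            "nearest_neighbor_tour_length": float(tour_len),
            "mean_nearest_neighbor_distance": float(dist.min(axis=1).mean()),
        }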
