  1. An Efficient Evolutionary Algorithm for Solving Incrementally Structured Problems
     Jason Ansel, Maciej Pacula, Saman Amarasinghe, Una-May O'Reilly
     MIT CSAIL, July 14, 2011
     Jason Ansel (MIT), PetaBricks, July 14, 2011, 1 / 30

  2-4. Who are we?
     I do research in programming languages (PL) and compilers.
     The PetaBricks language is a collaboration between:
       - a PL / compiler research group
       - an evolutionary algorithms research group
       - an applied mathematics research group
     Our goal is to make programs run faster.
     We use evolutionary algorithms to search for faster programs.
     The PetaBricks language defines search spaces of algorithmic choices.

  5-8. A motivating example
     How would you write a fast sorting algorithm?
       - Insertion sort, quick sort, merge sort, radix sort
       - Binary tree sort, bitonic sort, bubble sort, bucket sort, burstsort,
         cocktail sort, comb sort, counting sort, distribution sort, flashsort,
         heapsort, introsort, library sort, odd-even sort, postman sort,
         samplesort, selection sort, shell sort, stooge sort, strand sort, timsort?
       - Poly-algorithms
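The poly-algorithm idea above can be sketched in ordinary Python (PetaBricks itself is a different language; the function names and the cutoff value here are illustrative, not the talk's): a merge sort that falls back to insertion sort below a tunable cutoff.

```python
CUTOFF = 16  # hypothetical value; the talk shows the best choice is machine-dependent

def insertion_sort(xs):
    # classic in-place insertion sort: cheap for small inputs
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def hybrid_sort(xs):
    if len(xs) < CUTOFF:                 # algorithmic choice: small inputs
        return insertion_sort(list(xs))
    mid = len(xs) // 2                   # otherwise: 2-way merge sort
    left, right = hybrid_sort(xs[:mid]), hybrid_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The one constant `CUTOFF` is exactly the kind of parameter the rest of the talk is about tuning automatically.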

  9-10. std::stable_sort
     /usr/include/c++/4.5.2/bits/stl_algo.h, lines 3350-3367
     (the code shown falls back to insertion sort below a cutoff of 15 elements)

  11-13. Is 15 the right number?
     The best cutoff (CO) changes. It depends on competing costs:
       - cost of computation (the < operator, call overhead, etc.)
       - cost of communication (swaps)
       - cache behavior (misses, prefetcher, locality)
     Sorting 100,000 doubles with std::stable_sort:
       - CO ≈ 200 optimal on a Phenom 905e   (15% speedup over CO = 15)
       - CO ≈ 400 optimal on an Opteron 6168 (15% speedup over CO = 15)
       - CO ≈ 500 optimal on a Xeon E5320    (34% speedup over CO = 15)
       - CO ≈ 700 optimal on a Xeon X5460    (25% speedup over CO = 15)
     If the best cutoff has changed, perhaps the best algorithm has also changed.

  14-15. Algorithmic choices
     Language:
        either { InsertionSort(out, in); }
        or     { QuickSort(out, in); }
        or     { MergeSort(out, in); }
        or     { RadixSort(out, in); }
     Representation: a decision tree synthesized by our evolutionary algorithm.

  16. Decision trees
     Optimized for a Xeon E7340 (8 cores):
       N < 600:  Insertion Sort
       N < 1420: Quick Sort
       else:     Merge Sort (2-way)
     Text notation (will be used later): I 600 Q 1420 M 2

  17. Decision trees
     Optimized for a Sun Fire T200 Niagara (8 cores):
       N < 75:   Merge Sort (16-way)
       N < 1461: Merge Sort (8-way)
       N < 2400: Merge Sort (4-way)
       else:     Merge Sort (2-way)
     Text notation: M 16 75 M 8 1461 M 4 2400 M 2
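The text notation on these two slides can be read mechanically. Below is a small sketch of an interpreter for it; the grammar is inferred from the two examples only ("M" carries a way-count, every other letter stands alone, and each rule except the last is guarded by a cutoff), so treat it as an illustration rather than the talk's actual format definition.

```python
def parse(notation):
    # e.g. "I 600 Q 1420 M 2" -> [(600, "I"), (1420, "Q"), (None, "M2-way")]
    toks = notation.split()
    rules, i = [], 0
    while i < len(toks):
        alg = toks[i]; i += 1
        if alg == "M" and i < len(toks):   # merge sort carries a way-count
            alg = f"M{toks[i]}-way"; i += 1
        cutoff = None
        if i < len(toks):                  # cutoff guarding this rule
            cutoff = int(toks[i]); i += 1
        rules.append((cutoff, alg))
    return rules

def choose(rules, n):
    # walk the rules in order; the last rule (cutoff None) is the default
    for cutoff, alg in rules:
        if cutoff is None or n < cutoff:
            return alg
    return rules[-1][1]
```

For the Xeon tree, `choose(parse("I 600 Q 1420 M 2"), 1000)` selects quick sort; for the Niagara tree, every input size maps to some way-count of merge sort.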

  18. The configuration encoded by the genome
       - decision trees
       - algorithm parameters (integers, floats)
       - parallel scheduling / blocking parameters (integers)
       - synthesized scalar functions (not used in the benchmarks shown)
     The average PetaBricks benchmark's genome has:
       - 1.9 decision trees
       - 10.1 algorithm/parallelism/blocking parameters
       - 0.6 synthesized scalar functions
       - 2^3107 possible configurations
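A configuration with these components might look like the following in code. This is a sketch only: the field names and the mutation operator are illustrative assumptions, not PetaBricks' actual genome layout.

```python
import random
from dataclasses import dataclass

@dataclass
class Genome:
    decision_trees: dict   # name -> tree in the slides' text notation
    parameters: dict       # tunable ints/floats: cutoffs, blocking sizes, ...

def mutate(g, rng=random):
    # one plausible domain-informed mutation: halve or double a random
    # integer parameter, which keeps the search log-scale friendly
    key = rng.choice(sorted(g.parameters))
    g.parameters[key] = max(1, int(g.parameters[key] * rng.choice([0.5, 2.0])))
    return g

# example genome mirroring slide 16's tree plus two tunables
g = Genome(decision_trees={"sort": "I 600 Q 1420 M 2"},
           parameters={"cutoff": 64, "threads": 8})
```

Even this toy genome shows why the space is huge: every integer parameter and every tree node multiplies the number of reachable configurations.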

  19. Outline
     1. PetaBricks Language
     2. Autotuning Problem
     3. INCREA
     4. Evaluation
     5. Conclusions

  20-22. PetaBricks programs at runtime
     Request -> Program -> Response
     Configuration: a point in a ~100-dimensional space
     Measurement: performance and accuracy (QoS)
     Offline autotuning closes the loop: it sets the configuration and
     observes the resulting measurements.

  23. The challenges
     Evaluating the objective function is expensive:
       - must run the program (at least once)
       - more expensive for unfit solutions
       - scales poorly with larger problem sizes
     Fitness is noisy:
       - randomness from parallel races and system noise
       - testing each candidate only once often produces a worse algorithm
       - running many trials is expensive
     Decision tree structures are complex:
       - theoretically infinite size
       - we artificially bound them to 736 bits (23 ints) each
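One standard way to cope with noisy fitness, consistent with the slide's point that a single trial is unreliable, is to run each candidate several times and compare a robust statistic such as the median. A minimal sketch (the noise model below is invented for illustration):

```python
import random
import statistics

def fitness(run_once, trials=5):
    # evaluate a candidate several times; the median damps outliers
    # caused by parallel races and system noise
    return statistics.median(run_once() for _ in range(trials))

def noisy_runtime(true_cost, rng, noise=0.3):
    # stand-in for actually running the program: the measured time is the
    # true cost inflated by up to `noise` (a made-up noise model)
    return true_cost * (1 + rng.uniform(0, noise))

rng = random.Random(0)
fast = fitness(lambda: noisy_runtime(1.0, rng))
slow = fitness(lambda: noisy_runtime(2.0, rng))
```

The trade-off the slide names is visible here: more trials give a more trustworthy comparison, but each trial is itself a full (expensive) program run.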

  24. Contrast two evolutionary approaches
     GPEA: General Purpose Evolutionary Algorithm
       - used as a baseline
     INCREA: Incremental Evolutionary Algorithm
       - bottom-up approach
       - noisy fitness evaluation strategy
       - domain-informed mutation operators
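The "bottom-up" bullet can be sketched as follows: tune on small (cheap) inputs first, then repeatedly double the input size, seeding each round with the survivors of the previous one. This is a heavily simplified reading of the slide, not the actual INCREA algorithm; the size schedule, selection rule, and toy fitness are all assumptions.

```python
import random

rng = random.Random(0)

def incremental_tune(evaluate, mutate, seed_pop, start_n=64, final_n=4096):
    # evaluate(cfg, n): cost of configuration cfg on an input of size n
    pop, n = list(seed_pop), start_n
    while n <= final_n:
        scored = sorted(pop, key=lambda cfg: evaluate(cfg, n))
        parents = scored[: max(1, len(scored) // 2)]   # keep the fitter half
        pop = parents + [mutate(p) for p in parents]   # refill by mutation
        n *= 2                                         # grow the problem
    return min(pop, key=lambda cfg: evaluate(cfg, final_n))

def evaluate(cfg, n):
    # toy stand-in for measured runtime: distance from a fictitious
    # optimal cutoff of 200 (a real fitness would depend on n)
    return abs(cfg - 200)

def mutate(cfg):
    return max(1, int(cfg * rng.choice([0.5, 2.0])))

best = incremental_tune(evaluate, mutate, [16, 1024])
```

The appeal is that early generations, where most candidates are unfit, run on small inputs where evaluation is cheap.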

  25-34. General purpose evolutionary algorithm (GPEA)
     (animation: each candidate in a generation is run in turn, and its
     runtime is added to the total autotuning cost)
     Initial population:  72.7s  10.5s  4.1s  31.2s   Cost = 118.5
     Generation 2:         4.2s   5.1s  2.6s  13.2s   Cost =  25.1
     Generation 3:         2.8s   0.1s  3.8s   2.3s   Cost =   9.0
     Generation 4:            ?      ?     ?      ?   Cost =   0 (not yet run)
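The loop the animation steps through can be sketched like this. The selection scheme (keep the fitter half, refill by mutation) and the toy fitness are illustrative assumptions; the point carried over from the slides is that every evaluation is a full program run, so the tuning cost is the sum of all measured runtimes.

```python
import random

rng = random.Random(0)

def gpea(fitness, mutate, init_pop, generations=3):
    pop, cost = list(init_pop), 0.0
    for _ in range(generations):
        scores = [fitness(c) for c in pop]     # every candidate must be run
        cost += sum(scores)                    # autotuning cost accumulates
        ranked = [c for _, c in sorted(zip(scores, pop))]
        elite = ranked[: max(1, len(pop) // 2)]
        pop = elite + [mutate(c) for c in elite]
    return min(pop, key=fitness), cost

def toy_runtime(c):
    # pretend measured runtime, minimized at c = 7
    return abs(c - 7) + 1.0

def tweak(c):
    return c + rng.choice([-2, -1, 1, 2])

best, cost = gpea(toy_runtime, tweak, [0, 20, 40, 60])
```

Because unfit candidates take the longest to run, the early generations (like the 118.5s initial population above) dominate the total cost, which is the inefficiency INCREA's bottom-up strategy targets.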
