

  1. PGO and LLVM: Status and Current Work (Bob Wilson, Diego Novillo, Chandler Carruth)

  2-5. PGO: What Is It?
    • PGO = Profile Guided Optimization
    • More information -> better optimization
    • Profile data
      • Control flow: e.g., execution counts
      • Future extensions: object types, etc.

  6-11. What Is It Good For?
    • Some examples:
      • Block layout
      • Spill placement
      • Inlining heuristics
      • Hot/cold partitioning
    • Can significantly improve performance

  12. What’s the Catch?
    • Assumes program behavior is always the same
    • PGO may hurt performance if behavior changes
    • May require some extra build steps

  13-16. History of PGO in LLVM
    • Instrumentation, profile info and block placement (2004, Chris Lattner)
    • Branch weights and block frequencies (2011, Jakub Staszak)
    • Setting branch weights from execution counts (2012, Alastair Murray)

  17. Outline
    • Front-end instrumentation
    • Profiles from sampling
    • Using profile info in the optimizer and back-end

  18-20. Profiling with Instrumentation
    • Pros:
      • Detailed information
      • Predictability
      • Resilient against changes
    • Cons:
      • Need to build instrumented version
      • Running with instrumentation is slower
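
A typical instrumentation-based PGO cycle with clang looks roughly like this; the flag and tool names below follow current clang/LLVM releases and may differ from what existed at the time of this talk:

    $ clang -O2 -fprofile-instr-generate prog.c -o prog            # instrumented build
    $ ./prog                                                       # training run writes default.profraw
    $ llvm-profdata merge -output=prog.profdata default.profraw    # index the raw profile
    $ clang -O2 -fprofile-instr-use=prog.profdata prog.c -o prog   # optimized rebuild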

  21-25. Design Goals
    • Degrade gracefully when code changes
    • Profile data not tied to specific compiler version
    • Minimize instrumentation overhead
    • Execution counts accurately mapped to source

  26-28. Dealing with Change
    • Project source code changes
      • Detect functions that have changed
      • Ignore profile data for those functions only
    • Some changes are OK
      • Minimum requirement: same control-flow structure (illustrated below)
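
For instance, here is a hypothetical sketch of which edits keep a function’s profile usable under that rule:

    /* OK: same control-flow structure; only a straight-line
       expression changed, so the old counters still line up. */
    int scale(int x) {
      if (x > 0)
        return x * 2;        /* was: return x + 1; */
      return 0;
    }

    /* Not OK: a new IfStmt changes the set of counters, so the
       stale profile for this function would be ignored. */
    int scale_v2(int x) {
      if (x > 0) {
        if (x > 100)         /* new branch */
          return 200;
        return x * 2;
      }
      return 0;
    }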

  29. Compiler Changes
    • Compiler updates should not invalidate profiles
    • LLVM IR generated by the front-end often changes
    • Associating profiles with IR can be a problem

  30. Source-level Accuracy
    • PGO vs. code coverage testing
    • Should only have one profile format for both
    • Profile data for PGO should be viewable
    • Requires profiles to map accurately to source

  31. Use the Source
    • Solution: associate profile data with clang ASTs
    • Compiler changes are (almost) irrelevant
    • Provides info to detect source changes
    • Independent of optimization and debug info

  32. Counters on ASTs
    • Walk through ASTs in program order
    • Assign counters to control-flow constructs
    • Compare number of counters to detect changes
    • Can add a hash of ASTs to be more sensitive

  33-36. Example
    [AST diagram, built up over four slides: a CompoundStmt containing a WhileStmt, whose Cond is an Expr and whose Body holds an IfStmt with a Then Stmt; counters C0 through C4 are attached to the control-flow nodes as the walk visits them]
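
In source form, the AST in this example corresponds to something like the code below. The counter placement in the comments is a plausible reading of the diagram, not an exact transcription of clang’s scheme, and the helper functions are hypothetical:

    int cond(void); void expr(void); int test(void); void stmt(void);

    void example(void) {     /* C0: counter on the function body */
      while (cond()) {       /* a counter on the WhileStmt, e.g. C1 */
        expr();
        if (test())          /* a counter on the IfStmt, e.g. C2 */
          stmt();
      }
      /* the remaining counters in the diagram (C3, C4) attach to the
         other control-flow nodes visited by the AST walk */
    }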

  37. Minimizing Overhead
    • Not every block needs a counter
    • CFG-based approach: compute a spanning tree
    • Can often do as well by following AST structure

  38-40. Example
    [AST diagram, built up over three slides: a CompoundStmt containing an IfStmt with Then and Else branches (each a Stmt) followed by another Stmt; only two counters are placed, C0 on the CompoundStmt and C1 on one branch of the IfStmt]
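
An entry counter plus one branch counter is enough here because the missing counts can be derived, as in this sketch (helper names hypothetical):

    int test(void); void then_stmt(void); void else_stmt(void); void last_stmt(void);

    void example2(void) {    /* C0: times the function body runs */
      if (test())            /* C1: times the then-branch runs */
        then_stmt();
      else
        else_stmt();         /* no counter needed: runs C0 - C1 times */
      last_stmt();           /* no counter needed: runs C0 times */
    }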

  41. No-Return Calls
    • Important for code coverage
    • Not an issue for PGO (we don’t have a “likely no-return” attribute)
    • A counter after every call would be expensive
    • Can we get away with ignoring this?
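
Why this matters for coverage, in a hypothetical example: when a call never returns, the code after it runs less often than the derived counts suggest, because the derivation assumes both paths of the branch rejoin:

    void fatal(int err);     /* hypothetical: never returns */
    void log_ok(void);

    void report(int err) {   /* C0: entry count */
      if (err)               /* C1: then-branch count */
        fatal(err);          /* never falls through */
      log_ok();              /* really runs C0 - C1 times, but a count
                                derived from C0 and C1 alone reports C0;
                                exact coverage would need a counter
                                right after the call */
    }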

  42. Instrumentation Overhead: Compile Time
    [bar chart: percent compile-time slowdown of PGO vs. GCOV instrumentation on 400.perlbench, 401.bzip2, 429.mcf, 445.gobmk, 456.hmmer, 458.sjeng, 462.libquantum, 471.omnetpp, 473.astar, and 483.xalancbmk; worst case roughly 68%]

  43. Instrumentation Overhead: Execution Time
    [bar chart: percent run-time slowdown of PGO vs. GCOV instrumentation on the same benchmarks; worst case roughly 239%]

  44. PGO with External Profiling (Diego Novillo)

  45. External Profilers
    • No changes needed to user application
    • Binary runs under control of profiler
      • binary instrumentation (valgrind, cachegrind)
      • hardware counters (perf, oprofile)
    • Profilers using HW counters → low overhead
    • Profiler saves profile results in a file
      • Used as input to analysis tools
      • Why not use it as input to the compiler?

  46. $ perf annotate -l
      [ … ]
               :  for (int i = 0; i < N; i++) {
               :    A *= i / 32;
      /home/dnovillo/prog.cc:5
       9.18%   :  400520: mov      %eax,%ecx
       0.00%   :  400522: sar      $0x1f,%ecx
       0.00%   :  400525: shr      $0x1b,%ecx
       0.00%   :  400528: add      %eax,%ecx
       7.89%   :  40052a: sar      $0x5,%ecx
       0.00%   :  40052d: xorps    %xmm0,%xmm0
       0.00%   :  400530: cvtsi2sd %ecx,%xmm0
       8.23%   :  400534: mulsd    0x200aec(%rip),%xmm0   # 601028 <A>
      66.10%   :  40053c: movsd    %xmm0,0x200ae4(%rip)   # 601028 <A>
      [ … ]
      GOAL: Use all the collected runtime knowledge as input to the optimizers

  47. Why External Profiler?
    • No need for instrumented builds
    • Simplifies build rules for user application
    • No build time overhead

  48. Why External Profiler?
    • Very low runtime overhead (< 1%)
    • Profiles can be collected in production environments
    • Profile data is more representative
      • Training is done on actual production loads

  49. Why External Profiler?
    • Allows application-specific profilers
      • e.g., game engines
    • Anything that can be converted into hints to the compiler

  50. User Model
    1. Build a base optimized binary: -O2 -gline-tables-only
    2. Execute under the profiler (low overhead) to produce a profile
    3. Build the peak optimized binary: -O2 -fprofile-sample-use -gline-tables-only
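
Concretely, the cycle might look like the following. The perf invocation and the converter (create_llvm_prof, from the AutoFDO project) are assumptions about the surrounding tooling, and their exact flags may differ:

    $ clang++ -O2 -gline-tables-only prog.cc -o prog       # base optimized binary
    $ perf record -b ./prog                                # sample with hardware counters
    $ create_llvm_prof --binary=./prog --out=prog.prof     # convert samples to LLVM's format
    $ clang++ -O2 -gline-tables-only -fprofile-sample-use=prog.prof prog.cc -o prog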

  51. Design
    • Profile data often needs conversion
      • Samples are associated with processor instructions
      • External tool converts into mapping to source LOCs
      • Profile kind dictates representation
    • Scalar pass incorporates profile into IR
      • Source locations mapped to IR instructions
      • Optimizers query via standard analysis pass API
      • Analysis routines fall back on static heuristics
    • Bad/stale/missing profiles
      • Never affect correctness
      • Only affect performance

  52. Current Implementation
    1. Conversion tool for Linux Perf (sample-based profiles)
    2. Samples converted to branch weights
    3. Profile pass simply annotates the IR
    4. Analysis uses IR metadata for estimates
    5. Optimizers automatically adjust cost models (provided they use the Analysis API properly; work is needed in this area)
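
Steps 2 and 3 amount to attaching weight metadata to branch instructions. In LLVM IR this looks roughly like the sketch below; the metadata syntax shown is that of recent LLVM releases (it has changed over time), and the weights are invented for the example:

    define void @foo(i32 %x) {
    entry:
      %cmp = icmp sgt i32 %x, 100
      br i1 %cmp, label %hot, label %cold, !prof !0
    hot:                     ; ~990 samples took this edge
      ret void
    cold:                    ; ~10 samples took this edge
      ret void
    }
    !0 = !{!"branch_weights", i32 990, i32 10}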

  53. Limitations & Restrictions
    • Program behaviour must coincide with profile
      • Stale profiles degrade performance (significantly)
      • Non-representative runs mislead optimizers
    • Profile says “LIAR!”: who do we listen to?
      • Warn the user?
      • Silently override?
      • Is the profile representative?

    foo(int x) {
      if (__builtin_expect(x > 100, 1))
        hot();
      else
        cold();
    }

    main() {
      while (true)
        foo(rand() % 100);
    }

  54. Limitations & Restrictions
    • HW counters → IR mapping is lossy
      • Many instructions on the same line of code
      • Line 2 below is HOT according to the profile, but we need to know where in the line:
        • column numbers
        • DWARF discriminators
    • Requires good line table information

    1 foo(int x) {
    2   if (x < 100) hot(); else cold();
    3 }
    4
    5 main() {
    6   while (true) foo(rand() % 100);
    7 }

  55. Limitations & Restrictions
    • The optimizer must use profiles!
      • Notably, the inliner
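
As a purely illustrative toy (not LLVM’s actual inliner logic or API), a profile-aware cost model might scale its inlining threshold by call-site hotness:

    /* All names and numbers here are invented for the illustration. */
    int inline_threshold(long callsite_samples, long total_samples) {
      int base = 100;                        /* hypothetical default */
      if (callsite_samples == 0)
        return base / 4;                     /* cold call: favor code size */
      if (callsite_samples * 100 > total_samples)
        return base * 4;                     /* over 1% of samples: hot */
      return base;
    }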

  56-57. Early Results
    [charts: early performance results over two slides; the slides warn that the y-axis is NOT 0-BASED]
