  1. Energy-aware Software Development for Massive-Scale Systems. Torsten Hoefler, with input from Marc Snir, Bill Gropp, and Wen-mei Hwu. Keynote at EnA-HPC, Sept. 9, 2011, Hamburg, Germany.

  2. Outline
     • The HPC Energy Crisis
     • Computer Architecture Speculations
     • Algorithmic Power Estimates
     • Network Power Consumption
     • Power-aware Programming
     • Quick Primer on Power Modeling
     • This is not an Exascale talk! But it’s fun to look at!
     • All images used in this talk belong to their respective owners.

  3. Some Ammunition for Politics
     • US EPA Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431
       • Data centers consumed 61 billion kilowatt-hours (kWh) in 2006 (1.5% of total U.S. electricity consumption)
       • Electricity cost of $4.5 billion (~15 power plants)
       • Consumption doubled from 2000 to 2006
     • Koomey’s report (Jul. 2011)
       • Only a 56% increase from 2006 to 2011, though
       • Attributed to virtualization and the economic crisis of 2008
     • Well, we’re still on an exponential curve!
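
As a quick check on the growth figures above, a minimal sketch that annualizes them (the doubling and the 56% increase are from the slide; the per-year rates are derived, and the program itself is illustrative, not from the talk):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Doubling over 2000-2006 (6 years) and +56% over 2006-2011 (5 years). */
        double r_2000_2006 = pow(2.00, 1.0 / 6.0) - 1.0;
        double r_2006_2011 = pow(1.56, 1.0 / 5.0) - 1.0;
        printf("2000-2006: %.1f%% per year\n", 100.0 * r_2000_2006);  /* ~12.2%/yr */
        printf("2006-2011: %.1f%% per year\n", 100.0 * r_2006_2011);  /* ~9.3%/yr  */
        return 0;
    }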

  4. Development and Projection of Energy Costs
     • Exponential requirements times linear cost growth
     • Source: T. Hoefler, Software and Hardware Techniques for Power-Efficient HPC Networking
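
To make the “exponential times linear” point concrete, here is a small sketch of such a projection; the starting power, electricity price, and growth rates are illustrative assumptions, not numbers from the talk:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double power_mw  = 5.0;    /* assumed system power in year 0 (MW) */
        double price_kwh = 0.10;   /* assumed electricity price ($/kWh)   */
        for (int year = 0; year <= 10; year++) {
            double p     = power_mw * pow(1.3, year);    /* ~30%/yr demand growth */
            double price = price_kwh + 0.005 * year;     /* linear price increase */
            double cost  = p * 1000.0 * 8760.0 * price;  /* MW -> kW, hours/year  */
            printf("year %2d: %7.1f MW, annual cost $%6.1fM\n", year, p, cost / 1e6);
        }
        return 0;
    }

Because an exponential demand curve is multiplied by a (slowly) rising price, the cost curve grows even faster than the demand curve.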

  5. What is this “Energy Crisis”? (Short Story)
     • Expectation: double performance every 18 months at roughly equal costs (including energy)
     • Realization: explicit parallelism at all levels
       • Instruction (out-of-order execution comes to an end)
       • Memory (implicit caching and HW prefetch end)
       • Thread (simple tasking may not be efficient)
       • Process (oversubscription overheads unaffordable?)
     • Architectural trend: SMP → MPP → Many Core → Many Thread
     • Not only parallelism → more parallelism!

  6. System Power Breakdown Today (Longer Story)
     • CPU 56% (inefficient!), Network 33%, Memory 9%
     • Source: Kogge et al., Exascale Computing Study

  7. CPU Power Consumption Prediction (56%)
     [Chart: energy per operation (0-2500 scale) for “Now”, “Scaled”, “Ideal”, and “Localized” designs, split into Local, On-Chip, Off-Chip, and Op Overhead; today’s designs carry huge overheads. Source: Bill Dally, 2011]
     • Overhead: branch prediction, register renaming, speculative execution, ILP, decoding (x86), caches, …

  8. Current Commodity Architectural Solutions
     • Commodity desktop CPU: superscalar, OOO issue, high power, high perf., cheap
     • Server CPU: superscalar, OOO issue, high power, high perf., expensive
     • GPGPU: multi-threaded, vector pipes, shared units, many registers, parallel/pipelined memory, many cores, med. power, high perf., cheap
     • “Cell phone” processor: specialized, VLIW/EPIC?, very low power, low perf., very cheap

  9. Future Power-aware Architectures?
     • Overheads are too large!
       • Especially complex logic inside the CPU
       • Too complex instruction decode (esp. x86)
       • OOO moves data needlessly
     • Architectures are simplified
       • E.g., Cell, SCC
       • Small or no OOO fetch and instruction window
       • Emphasize vector operations
       • Fix as much as possible during compile time
       • VLIW/EPIC comeback?

  10. (V)LIW/EPIC to the Rescue?
     • (Very) Long Instruction Word ((V)LIW)
       • No dynamic operation scheduling (as in superscalar designs)
       • Static scheduling, simple decode logic
     • Explicitly Parallel Instruction Computing (EPIC)
       • Groups of operations (bundles)
       • A stop bit indicates whether a bundle depends on previous bundles
       • Complexity moved to the compiler
     • Very popular in low-power devices (AMD/ATI GPUs)
     • But non-deterministic memory/cache latencies make static scheduling hard! (See the sketch below.)
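
A minimal C sketch of what the compiler has to reason about when it schedules statically; the “bundle” grouping in the comments is illustrative, not real EPIC assembly:

    /* The three adds are independent, so a VLIW/EPIC compiler can place them
     * in one bundle and issue them in parallel.  The last statement depends
     * on all three, so it must go into a later bundle (conceptually, a stop
     * bit ends the first group). */
    void bundle_demo(const double *a, const double *b, double *out) {
        double x = a[0] + b[0];   /* bundle 1, slot 0 */
        double y = a[1] + b[1];   /* bundle 1, slot 1 */
        double z = a[2] + b[2];   /* bundle 1, slot 2 */
        out[0] = x * y + z;       /* bundle 2: depends on x, y, z */
    }

If one of the loads misses in cache, the statically scheduled bundle stalls as a whole, which is exactly the scheduling problem the slide points out.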

  11. Trends in Algorithms (Towards Co-Design)
     • Most early HPC applications used regular grids
       • Simple implementation and execution, structured
       • However, often not efficient: needs to compute all grid points at full precision
     • Adaptive methods
       • Fewer FLOPs, more science!
       • Semi-structured
     • Data-driven methods
       • “Informatics” applications
       • Completely unstructured

  12. The Full Spectrum of Algorithms (Structured → Unstructured)
     • Structured, scalar:
         for (int i = 0; i < N; i++) C[i] = A[i] + B[i];
     • Structured, vectorized (VEC):
         for (int i = 0; i < N; i += s) vec_add(A[i], B[i], C[i]);
     • Structured, multithreaded (MT):
         for (int i = 0; i < N; i++) spawn(A[i] = B[i] + C[i]);
     • Unstructured, scalar:
         while (v = Q.pop())
             for (int i = 0; i < v.enum(); i++) { u = v.edges[i]; /* mark u */ Q.push(u); }
     • Unstructured, vectorized (VEC/VLIW):
         while (v = Q.pop())
             for (int i = 0; i < v.enum(); i += s) { vec_load(u, v.edges[i]); vec_store(Q.end(), u); }
     • Unstructured, multithreaded (MT):
         while (spawn(Q.pop()))
             for (int i = 0; i < v.enum(); i += s) spawn(update(v.edges[i], Q));
     [Figure: a VLIW instruction word with INT/FP/BR issue slots, placed along the “less regular” axis.]

  13. General Architectural Observations
     • Superscalar, RISC, wide OOO: outside of the power budget
       • Maybe “small/simple” versions
     • VLIW/EPIC and vector: very power-efficient
       • Performs best for static applications (e.g., graphics)
       • Problems with scheduling memory accesses
       • Limited performance for irregular applications with complex dependencies
     • Multithreaded: versatile and efficient
       • Simple logic, low overhead for thread state
       • Good for irregular applications/complex dependencies
       • Fast synchronization (full/empty bits etc.)

  14. Optimized CPU: System Power Consumption
     • CPU 11%, Memory 18% (very inefficient!), Network 66%

  15. Memory Power Consumption Prediction
     • DRAM architecture (today ~2 nJ / 64 bit)
     • Current RAS/CAS-based design:
       • All pages active, many refresh cycles
       • Only a small part of the read data is used
       • Small number of pins
     • Desired address-based design:
       • Few pages active, read (refresh) only the needed data
       • All read data is used
       • Large number of pins
     • Cache is 80% throw-away → scratchpad memory!
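
A back-of-envelope sketch of the “throw-away” point, using the ~2 nJ per 64-bit DRAM access figure from the slide; the 64-byte line size and the single used double are illustrative assumptions:

    #include <stdio.h>

    int main(void) {
        double nj_per_word = 2.0;     /* ~2 nJ per 64-bit DRAM access (slide) */
        double line_bytes  = 64.0;    /* assumed cache-line size              */
        double used_bytes  = 8.0;     /* assumed: only one double is used     */
        double line_energy = nj_per_word * line_bytes / 8.0;   /* 16 nJ       */
        double wasted      = 1.0 - used_bytes / line_bytes;    /* 87.5%       */
        printf("line fetch: %.1f nJ, wasted energy: %.1f%%\n",
               line_energy, 100.0 * wasted);
        return 0;
    }

With a scratchpad (or the “address-based” DRAM above), software would fetch only the words it actually needs.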

  16. Optimized DRAM: System Power Consumption
     • CPU 13%, Memory 2%, Network 79%

  17. “The Network is the Computer”
     • We must obey the network
     • Everything is a (hierarchical) network!
     • Example building blocks: P7 chip (8 cores) → SMP node (32 cores) → drawer (256 cores) → SuperNode (32 nodes / 4 CEC, 1024 cores, L-Link cables) → full system

  18. Network Power Consumption
     [Chart: energy per 64 bit transferred (pJ, 0-700) vs. interconnect distance (0.1-1000 cm), stepping up from on-die to chip-to-chip, board-to-board, and between-cabinet links. Source: S. Borkar, Hot Interconnects 2011]
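
A small sketch of how such a curve translates into per-message energy; the per-class pJ/64-bit values are rough readings of the chart (assumptions), not exact numbers from the talk:

    #include <stdio.h>

    /* Approximate energy classes, in pJ per 64 bits moved. */
    enum { ON_DIE, CHIP_TO_CHIP, BOARD_TO_BOARD, BETWEEN_CABINETS };
    static const double pj_per_64bit[] = { 10.0, 100.0, 300.0, 600.0 };

    /* Energy (in nJ) to move `bytes` across one link of the given class. */
    static double msg_energy_nj(int link_class, double bytes) {
        return pj_per_64bit[link_class] * (bytes / 8.0) / 1000.0;
    }

    int main(void) {
        printf("1 KiB on-die:           %5.1f nJ\n", msg_energy_nj(ON_DIE, 1024));
        printf("1 KiB between cabinets: %5.1f nJ\n", msg_energy_nj(BETWEEN_CABINETS, 1024));
        return 0;
    }

Under these readings, the same kilobyte costs roughly 60x more energy to move between cabinets than across a die, which is why locality matters so much in the following slides.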

  19. A Quick Glance at Exascale Power
     • Scale: Exaflop → 20 MW (data center), Petaflop → 20 kW (rack/cabinet), Teraflop → 20 W (chip)
     • 20 MW → 20 pJ/Flop
     • 20% leakage → 16 pJ/Flop
     • 7 nm prediction: 10 pJ/Flop for the FP operation
     • Leaves 6 pJ/Flop for data movement
     • Data movement is expected to cost 10x-100x more!
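
The budget arithmetic behind these bullets, spelled out; all inputs are the slide’s own figures:

    #include <stdio.h>

    int main(void) {
        double watts         = 20e6;   /* 20 MW data-center budget          */
        double flops         = 1e18;   /* 1 Exaflop/s                       */
        double pj_per_flop   = watts / flops * 1e12;   /* J -> pJ: 20 pJ    */
        double after_leakage = pj_per_flop * 0.8;      /* 20% leakage: 16   */
        double fp_op_7nm     = 10.0;                   /* predicted FP op   */
        printf("budget %.0f pJ/Flop, after leakage %.0f pJ/Flop, "
               "left for data movement %.0f pJ/Flop\n",
               pj_per_flop, after_leakage, after_leakage - fp_op_7nm);
        return 0;
    }

Six pJ per Flop has to cover all operand movement, while the previous slide shows that a single 64-bit word moved between cabinets already costs hundreds of pJ; hence the 10x-100x gap.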

  20. Programming a “Network Computer”
     • Surprise: locality is important!
     • Energy consumption grows with distance (see the chart on slide 18)
     • “Hidden” distribution: OpenMP (problem: locality is not exposed)
     • “Explicit” distribution: PGAS, MPI (the user handles locality)
     • MPI supports process mapping
     • Probably MPI+X in the future; but what is X?
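
A minimal MPI sketch of explicit locality handling; this is illustrative, not code from the talk. A Cartesian communicator created with reorder=1 lets the MPI library remap ranks onto the physical topology, and a halo exchange then only talks to nearest neighbors:

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int nprocs, dims[2] = {0, 0}, periods[2] = {1, 1};
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Dims_create(nprocs, 2, dims);           /* factor nprocs into a 2D grid */

        MPI_Comm cart;                              /* reorder=1: the library may   */
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims,    /* renumber ranks to match the  */
                        periods, 1, &cart);         /* physical network topology    */

        int rank, left, right;
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_shift(cart, 0, 1, &left, &right);  /* nearest neighbors in dim 0   */

        double send = (double)rank, recv;           /* toy halo exchange: short hops */
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, right, 0,
                     &recv, 1, MPI_DOUBLE, left,  0, cart, MPI_STATUS_IGNORE);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }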

  21. So, is it really about Flops? Of course not!
     • But: Flops is the default algorithm measure, often set equal to algorithmic (time) complexity
     • Numerous papers aim to reduce the number of Flops
     • Merriam-Webster: “flop: to fail completely”
     • HPC is power-limited! Flops are cheap, data movement is expensive, right?
     • Just as we still use the DRAM architecture of the ’80s, we still use algorithmic techniques of the ’70s!
     • Need to consider I/O complexity instead of FP complexity
     • A good place to start reading: Hong & Kung, the Red-Blue Pebble Game
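
For a concrete instance of such an I/O bound (the classic result from that line of work, restated here): with a fast memory of M words, the standard dense n×n matrix multiplication must move

    Q(n, M) = \Omega\!\left(\frac{n^{3}}{\sqrt{M}}\right)

words between slow and fast memory, a bound that blocked (tiled) algorithms approach but that the textbook triple loop misses by a factor on the order of \sqrt{M}.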

  22. How much Data Movement is Needed? MatMul?
     • Matrix multiplication: A = BC
     • N×N matrices: ≥ 2N² reads, ≥ N² writes
     • The textbook algorithm has no reuse
     • Example memory hierarchy model (Source: Dally, 2011):

       Functionality   | Energy  | Performance | Capacity (FP)
       Core/FP unit    | 50 pJ   | 125 ps      | -
       Register bank   | 10 pJ   | 250 ps      | 100
       Cache/SRAM      | 100 pJ  | 2 ns        | 100,000
       Memory/DRAM     | 1000 pJ | 100 ns      | 100,000,000
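
A sketch of how reuse changes the picture, using only the DRAM row of the table above; the problem size, the block size, and the traffic formulas are illustrative assumptions:

    #include <stdio.h>

    /* DRAM traffic for an N x N matmul, counted in 64-bit words:
     *   no reuse:  roughly one operand load per multiply-add -> ~N^3 words
     *   b x b blocking with blocks held on chip              -> ~2N^3/b + N^2 words */
    int main(void) {
        double N = 4096.0, b = 64.0;     /* assumed problem and block size  */
        double pj_per_word = 1000.0;     /* DRAM access energy, from table  */

        double naive_words = N * N * N;
        double tiled_words = 2.0 * N * N * N / b + N * N;

        printf("no reuse: %.1f J of DRAM energy\n", naive_words * pj_per_word * 1e-12);
        printf("blocked:  %.2f J of DRAM energy\n", tiled_words * pj_per_word * 1e-12);
        return 0;
    }

Under these assumptions, blocking cuts the DRAM energy by a factor of about b/2, which is the same effect the I/O-complexity view on the previous slide predicts.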
