Lecture 1.1 Course Introduction and Overview - PowerPoint PPT Presentation

  1. GPU Teaching Kit – Accelerated Computing – Lecture 1.1 – Course Introduction: Course Introduction and Overview

  2. Course Goals
     – Learn how to program heterogeneous parallel computing systems and achieve:
       – High performance and energy efficiency
       – Functionality and maintainability
       – Scalability across future generations
       – Portability across vendor devices
     – Technical subjects:
       – Parallel programming APIs, tools, and techniques
       – Principles and patterns of parallel algorithms
       – Processor architecture features and constraints

  3. People
     – Wen-mei Hwu (University of Illinois)
     – David Kirk (NVIDIA)
     – Joe Bungo (NVIDIA)
     – Mark Ebersole (NVIDIA)
     – Abdul Dakkak (University of Illinois)
     – Izzat El Hajj (University of Illinois)
     – Andy Schuh (University of Illinois)
     – John Stratton (Colgate University)
     – Isaac Gelado (NVIDIA)
     – John Stone (University of Illinois)
     – Javier Cabezas (NVIDIA)
     – Michael Garland (NVIDIA)

  4. Course Content
     – Module 1: Course Introduction
       • Course Introduction and Overview
       • Introduction to Heterogeneous Parallel Computing
       • Portability and Scalability in Heterogeneous Parallel Computing
     – Module 2: Introduction to CUDA C
       • CUDA C vs. CUDA Libs vs. OpenACC
       • Memory Allocation and Data Movement API Functions
       • Data Parallelism and Threads
       • Introduction to CUDA Toolkit
     – Module 3: CUDA Parallelism Model
       • Kernel-Based SPMD Parallel Programming
       • Multidimensional Kernel Configuration
       • Color-to-Greyscale Image Processing Example
       • Blur Image Processing Example
     – Module 4: Memory Model and Locality
       • CUDA Memories
       • Tiled Matrix Multiplication
       • Tiled Matrix Multiplication Kernel
       • Handling Boundary Conditions in Tiling
       • Tiled Kernel for Arbitrary Matrix Dimensions
     – Module 5: Kernel-based Parallel Programming
       • Histogram (Sort) Example
       • Basic Matrix-Matrix Multiplication Example
       • Thread Scheduling
       • Control Divergence
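Module 4 builds up a tiled CUDA matrix multiplication kernel. As a point of reference (a sketch, not code from the Teaching Kit), the computation that kernel parallelizes can be written as a plain sequential loop:

```c
#include <assert.h>
#include <stddef.h>

/* Sequential reference for the matrix multiplication that Modules 4-5
   accelerate on the GPU: C = A * B, with A (m x k), B (k x n),
   C (m x n), all stored row-major.  Function name and signature are
   illustrative, not from the Teaching Kit. */
static void matmul_ref(const float *A, const float *B, float *C,
                       size_t m, size_t k, size_t n) {
    for (size_t i = 0; i < m; ++i) {
        for (size_t j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (size_t p = 0; p < k; ++p)
                sum += A[i * k + p] * B[p * n + j];  /* dot product of row i and column j */
            C[i * n + j] = sum;
        }
    }
}
```

The CUDA version covered in the course assigns one thread per element of C and stages square tiles of A and B in shared memory, so that each global-memory value is loaded once per tile rather than once per output element.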

  5. Course Content
     – Module 6: Performance Considerations: Memory
       • DRAM Bandwidth
       • Memory Coalescing in CUDA
     – Module 7: Atomic Operations
       • Atomic Operations
     – Module 8: Parallel Computation Patterns (Part 1)
       • Convolution
       • Tiled Convolution
       • 2D Tiled Convolution Kernel
     – Module 9: Parallel Computation Patterns (Part 2)
       • Tiled Convolution Analysis
       • Data Reuse in Tiled Convolution
     – Module 10: Performance Considerations: Parallel Computation Patterns
       • Reduction
       • Basic Reduction Kernel
       • Improved Reduction Kernel
     – Module 11: Parallel Computation Patterns (Part 3)
       • Scan (Parallel Prefix Sum)
       • Work-Inefficient Parallel Scan Kernel
       • Work-Efficient Parallel Scan Kernel
       • More on Parallel Scan
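Modules 10 and 11 treat reduction and scan (parallel prefix sum). A minimal sketch of the sequential definitions the parallel kernels must reproduce (function names are illustrative, not from the Teaching Kit):

```c
#include <assert.h>
#include <stddef.h>

/* Sum reduction: the CUDA kernels in Module 10 compute the same value
   with a tree of pairwise additions across threads. */
static float reduce_sum(const float *x, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += x[i];
    return acc;
}

/* Inclusive scan: out[i] = in[0] + ... + in[i].  Module 11 contrasts a
   work-inefficient parallel version (O(n log n) additions) with a
   work-efficient one (O(n) additions in up-sweep/down-sweep phases). */
static void inclusive_scan(const float *in, float *out, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        acc += in[i];
        out[i] = acc;
    }
}
```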

  6. Course Content
     – Module 12: Performance Considerations: Scan Applications
       • Scan Applications: Per-thread Output Variable Allocation
       • Scan Applications: Radix Sort
       • Performance Considerations (Histogram (Atomics) Example)
       • Performance Considerations (Histogram (Scan) Example)
     – Module 13: Advanced CUDA Memory Model
       • Advanced CUDA Memory Model
       • Constant Memory
       • Texture Memory
     – Module 14: Floating Point Considerations
       • Floating Point Precision Considerations
       • Numerical Stability
     – Module 15: GPU as Part of the PC Architecture
       • GPU as Part of the PC Architecture
     – Module 16: Efficient Host-Device Data Transfer
       • Data Movement API vs. Unified Memory
       • Pinned Host Memory
       • Task Parallelism/CUDA Streams
       • Overlapping Transfer with Computation
     – Module 17: Application Case Study: Advanced MRI Reconstruction
       • Advanced MRI Reconstruction
     – Module 18: Application Case Study: Electrostatic Potential Calculation
       • Electrostatic Potential Calculation (Part 1)
       • Electrostatic Potential Calculation (Part 2)
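Module 14's floating point material rests on one core fact: single-precision addition is not associative, because every operation rounds the result to 24 significand bits. At magnitude 1e8f the gap between adjacent floats is 8.0f, so adding 1.0f there is lost entirely. A small illustrative function (not Teaching Kit code):

```c
#include <assert.h>
#include <stdbool.h>

/* Demonstrates that (a + b) + c and a + (b + c) can differ in float.
   This is why the order in which a parallel reduction combines its
   inputs can change the computed sum. */
static bool float_add_is_associative(void) {
    float left  = (1.0f + 1e8f) - 1e8f;  /* 1.0f absorbed by rounding: 0.0f */
    float right = 1.0f + (1e8f - 1e8f); /* 1.0f survives: 1.0f */
    return left == right;
}
```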

  7. Course Content
     – Module 19: Computational Thinking for Parallel Programming
       • Computational Thinking for Parallel Programming
     – Module 20: Related Programming Models: MPI
       • Joint MPI-CUDA Programming
       • Joint MPI-CUDA Programming (Vector Addition - Main Function)
       • Joint MPI-CUDA Programming (Message Passing and Barrier)
       • Joint MPI-CUDA Programming (Data Server and Compute Processes)
       • Joint MPI-CUDA Programming (Adding CUDA)
       • Joint MPI-CUDA Programming (Halo Data Exchange)
     – Module 21: CUDA Python Using Numba
       • CUDA Python Using Numba
     – Module 22: Related Programming Models: OpenCL
       • OpenCL Data Parallelism Model
       • OpenCL Device Architecture
       • OpenCL Host Code (Part 1)
       • OpenCL Host Code (Part 2)
     – Module 23: Related Programming Models: OpenACC
       • Introduction to OpenACC
       • OpenACC Subtleties
     – Module 24: Related Programming Models: OpenGL
       • OpenGL and CUDA Interoperability

  8. Course Content
     – Module 25: Dynamic Parallelism
       • Effective Use of Dynamic Parallelism
       • Advanced Architectural Features: Hyper-Q
     – Module 26: Multi-GPU
       • Multi-GPU
     – Module 27: Using CUDA Libraries
       • Example Applications Using Libraries: CUBLAS
       • Example Applications Using Libraries: CUFFT
       • Example Applications Using Libraries: CUSOLVER
     – Module 28: Advanced Thrust
       • Advanced Thrust
     – Module 29: Other GPU Development Platforms: QwickLABS
       • Other GPU Development Platforms: QwickLABS
       • Where to Find Support

  9. GPU Teaching Kit – Accelerated Computing. The GPU Teaching Kit is licensed by NVIDIA and the University of Illinois under the Creative Commons Attribution-NonCommercial 4.0 International License.
