
Multiprocessors and Multithreading. Jason Mars. Sunday, March 3, 13. PowerPoint presentation.



  1. Multiprocessors and Multithreading. Jason Mars.

  2. Parallel Architectures for Executing Multiple Threads

  3. Parallel Architectures for Executing Multiple Threads • Multiprocessor – multiple CPUs tightly coupled enough to cooperate on a single problem.

  4. Parallel Architectures for Executing Multiple Threads • Multiprocessor – multiple CPUs tightly coupled enough to cooperate on a single problem. • Multithreaded processors (e.g., simultaneous multithreading) – single CPU core that can execute multiple threads simultaneously.

  5. Parallel Architectures for Executing Multiple Threads • Multiprocessor – multiple CPUs tightly coupled enough to cooperate on a single problem. • Multithreaded processors (e.g., simultaneous multithreading) – single CPU core that can execute multiple threads simultaneously. • Multicore processors – multiprocessor where the CPU cores coexist on a single processor chip.

  6. Multiprocessors • Not that long ago, multiprocessors were expensive, exotic machines – special-purpose engines to solve hard problems. • Now they are pervasive. [Diagram: three processors, each with its own cache, on a single bus to memory and I/O]

  7. Classifying Multiprocessors • Flynn Taxonomy • Interconnection Network • Memory Topology • Programming Model

  8. Flynn Taxonomy • SISD (Single Instruction Single Data) • Uniprocessors • SIMD (Single Instruction Multiple Data) • Examples: Illiac-IV, CM-2, Nvidia GPUs, etc. • Simple programming model • Low overhead • MIMD (Multiple Instruction Multiple Data) • Examples: many, nearly all modern multiprocessors or multicores • Flexible • Use off-the-shelf microprocessors or microprocessor cores • MISD (Multiple Instruction Single Data) • ???
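A toy sketch of the taxonomy in plain Python may help (illustrative only; real SIMD is a hardware feature such as GPU lanes or vector units, and all the values and functions below are made up):

```python
# Toy illustration of Flynn's taxonomy in plain Python.

data = [1, 2, 3, 4]

# SISD: one instruction stream, one data item at a time.
sisd = []
for x in data:                      # a single scalar loop
    sisd.append(x * 2)

# SIMD: conceptually ONE instruction ("multiply by 2") applied to ALL
# data elements in lockstep -- modeled here with map().
simd = list(map(lambda x: x * 2, data))

# MIMD: independent instruction streams on independent data -- here each
# "processor" runs a different function on its own operand.
programs = [lambda x: x * 2, lambda x: x + 10, lambda x: x ** 2]
operands = [3, 5, 4]
mimd = [f(x) for f, x in zip(programs, operands)]

print(sisd)  # [2, 4, 6, 8]
print(simd)  # [2, 4, 6, 8]
print(mimd)  # [6, 15, 16]
```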

  9. Interconnection Networks • Bus • Network • pros/cons? [Diagram: three processors, each with its own cache, on a single bus to memory and I/O]

  10. Memory Topology • UMA (Uniform Memory Access) • NUMA (Non-uniform Memory Access) • pros/cons? [Diagrams: UMA – processors with private caches sharing a single bus to one memory; NUMA – each CPU paired with its own local memory, all connected by a network]
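One way to see the NUMA trade-off is a back-of-the-envelope latency model: average access time depends on what fraction of accesses hit the node's own memory. The latency numbers below are hypothetical, not measurements:

```python
# Hypothetical latency numbers (assumptions, not measured values).
UMA_LATENCY = 100      # ns: every access costs the same
LOCAL_LATENCY = 60     # ns: NUMA access to this node's own memory
REMOTE_LATENCY = 200   # ns: NUMA access across the interconnect

def numa_avg_latency(local_fraction):
    """Average access time on a NUMA machine for a given locality."""
    return (local_fraction * LOCAL_LATENCY
            + (1 - local_fraction) * REMOTE_LATENCY)

# With good data placement most accesses are local and NUMA beats UMA;
# with poor placement the remote penalty dominates.
print(round(numa_avg_latency(0.9), 1))  # 74.0 -> better than UMA
print(round(numa_avg_latency(0.3), 1))  # 158.0 -> worse than UMA
```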

  11. Programming Model • Shared Memory -- every processor can name every address location • Message Passing -- each processor can name only its local memory. Communication is through explicit messages. • pros/cons? [Diagram: processors with private caches and local memories connected by a network]

  12. Programming Model • Shared Memory -- every processor can name every address location • Message Passing -- each processor can name only its local memory. Communication is through explicit messages. • pros/cons? find the max of 100,000 integers on 10 processors. [Diagram: processors with private caches and local memories connected by a network]
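The slide's exercise can be sketched in the shared-memory style: split the array into ten chunks, let each worker compute a local max, then reduce. A minimal Python sketch, with a thread pool standing in for ten processors (in a message-passing version each worker would instead send its partial max as an explicit message):

```python
import random
from multiprocessing.pool import ThreadPool  # threads share memory, like the slide's model

NUM_WORKERS = 10
data = [random.randrange(10**9) for _ in range(100_000)]

def chunk_max(chunk):
    """Each worker scans only its own slice of the shared array."""
    return max(chunk)

# Split the shared array into 10 equal chunks, one per worker.
size = len(data) // NUM_WORKERS
chunks = [data[i * size:(i + 1) * size] for i in range(NUM_WORKERS)]

with ThreadPool(NUM_WORKERS) as pool:
    partial = pool.map(chunk_max, chunks)  # 10 local maxima, in parallel

result = max(partial)                      # final sequential reduction
print(result == max(data))                 # True
```

Note the two-phase shape: a parallel scan (100,000/10 = 10,000 elements per worker) followed by a tiny reduction over 10 partial results.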

  13. Parallel Programming i = 47 Processor A Processor B index = i++; index = i++; • Shared-memory programming requires synchronization to provide mutual exclusion and prevent race conditions • locks (semaphores) • barriers

  14. Parallel Programming i = 47 Processor A Processor B index = i++; index = i++; load i; load i; inc i; inc i; store i; store i; • Shared-memory programming requires synchronization to provide mutual exclusion and prevent race conditions • locks (semaphores) • barriers

  15. Parallel Programming i = 47 Processor A Processor B index = i++; index = i++; load i; inc i; store i; load i; inc i; store i; • Shared-memory programming requires synchronization to provide mutual exclusion and prevent race conditions • locks (semaphores) • barriers
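The lost-update hazard in the interleavings above can be replayed deterministically by modeling i++ as three steps (load, inc, store) with a private register per processor. This is a simulation sketch, not real hardware:

```python
# Deterministic replay of the two interleavings of i++ on two processors.

def run(schedule):
    """schedule: sequence of (processor, step) pairs; returns final i."""
    i = 47
    regs = {'A': None, 'B': None}   # one private register per processor
    for proc, step in schedule:
        if step == 'load':
            regs[proc] = i          # read shared i into the register
        elif step == 'inc':
            regs[proc] += 1         # increment the private copy
        elif step == 'store':
            i = regs[proc]          # write the private copy back
    return i

# Interleaved: both processors load 47 before either stores -> lost update.
bad = [('A', 'load'), ('B', 'load'), ('A', 'inc'), ('B', 'inc'),
       ('A', 'store'), ('B', 'store')]

# Serialized: A finishes before B starts -- exactly what a lock enforces.
good = [('A', 'load'), ('A', 'inc'), ('A', 'store'),
        ('B', 'load'), ('B', 'inc'), ('B', 'store')]

print(run(bad))    # 48 -- one increment is lost
print(run(good))   # 49 -- both increments take effect
```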


  18. But... • That ignores the existence of caches • How do caches complicate the problem of keeping data consistent between processors?

  19. Multiprocessor Caches (Shared Memory) • the problem -- cache coherency • the solution? [Diagram: the variable i resident in two processors' caches on a single bus to memory and I/O]

  20. Multiprocessor Caches (Shared Memory) • the problem -- cache coherency • the solution? inc i; [Diagram: the variable i resident in two processors' caches on a single bus to memory and I/O]

  21. Multiprocessor Caches (Shared Memory) • the problem -- cache coherency • the solution? inc i; load i; [Diagram: the variable i resident in two processors' caches on a single bus to memory and I/O]


  23. What Does Coherence Mean? • Informally: • Any read must return the most recent write • Too strict and very difficult to implement • Better: • A processor sees its own writes to a location in the correct order. • Any write must eventually be seen by a read • All writes are seen in order (“serialization”). Writes to the same location are seen in the same order by all processors. • Without these guarantees, synchronization doesn’t work

  24. Solutions

  25. Solutions • Snooping Solution (Snoopy Bus): • Send all requests for unknown data to all processors • Processors snoop to see if they have a copy and respond accordingly • Requires “broadcast”, since caching information is at processors • Works well with bus (natural broadcast medium) • Dominates for small scale machines (most of the market)

  26. Solutions • Snooping Solution (Snoopy Bus): • Send all requests for unknown data to all processors • Processors snoop to see if they have a copy and respond accordingly • Requires “broadcast”, since caching information is at processors • Works well with bus (natural broadcast medium) • Dominates for small scale machines (most of the market) • Directory-Based Schemes • Keep track of what is being shared in one centralized place (for each address) => the directory • Distributed memory => distributed directory (avoids bottlenecks) • Send point-to-point requests to processors (to invalidate, etc.) • Scales better than Snooping for large multiprocessors

  27. Implementing Coherence Protocols • How do you find the most up-to-date copy of the desired data? • Snooping protocols • Directory protocols [Diagram: each processor's cache adds a snoop tag port alongside its tag-and-data arrays; all caches sit on a single bus with memory and I/O]

  28. Implementing Coherence Protocols • How do you find the most up-to-date copy of the desired data? • Snooping protocols • Directory protocols Write-Update vs Write-Invalidate [Diagram: each processor's cache adds a snoop tag port alongside its tag-and-data arrays; all caches sit on a single bus with memory and I/O]
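A write-invalidate snooping protocol can be sketched in a few dozen lines. This is a deliberately simplified MSI-style model with made-up class names; a real protocol must also handle write-backs on eviction, transient states, and bus races:

```python
# Simplified write-invalidate snooping sketch (MSI-style states: M/S/I).

class Cache:
    def __init__(self, bus):
        self.state = {}          # addr -> 'M' | 'S' | 'I'
        self.data = {}
        self.bus = bus
        bus.attach(self)

    def read(self, addr):
        if self.state.get(addr, 'I') == 'I':       # miss: fetch via bus
            self.data[addr] = self.bus.fetch(addr)
            self.state[addr] = 'S'
        return self.data[addr]

    def write(self, addr, value):
        self.bus.invalidate(addr, requester=self)  # others drop their copies
        self.state[addr] = 'M'
        self.data[addr] = value

    # -- snooping side: reactions to traffic seen on the bus --
    def snoop_invalidate(self, addr):
        self.state[addr] = 'I'

    def snoop_fetch(self, addr):
        # Supply the line if we hold the only up-to-date (Modified) copy.
        if self.state.get(addr) == 'M':
            self.state[addr] = 'S'                 # downgrade on sharing
            return self.data[addr]
        return None

class Bus:
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def invalidate(self, addr, requester):
        for c in self.caches:                      # broadcast to all snoopers
            if c is not requester:
                c.snoop_invalidate(addr)

    def fetch(self, addr):
        for c in self.caches:
            v = c.snoop_fetch(addr)
            if v is not None:
                self.memory[addr] = v              # write back on the way
        return self.memory[addr]

memory = {'i': 47}
bus = Bus(memory)
p0, p1 = Cache(bus), Cache(bus)

p0.read('i')          # p0 caches i = 47 (Shared)
p1.read('i')          # p1 caches i = 47 too (Shared)
p0.write('i', 48)     # bus invalidates p1's copy; p0 goes Modified
print(p1.read('i'))   # 48 -- the miss pulls the fresh value from p0
```

A write-update protocol would instead broadcast the new value to the other caches on every write, keeping their copies live at the cost of more bus traffic.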

  29. Parallel Architectures for Executing Multiple Threads • Multiprocessor – multiple CPUs tightly coupled enough to cooperate on a single problem. • Multithreaded processors (e.g., simultaneous multithreading) – single CPU core that can execute multiple threads simultaneously. • Multicore processors – multiprocessor where the CPU cores coexist on a single processor chip.

  30. Simultaneous Multithreading (A Few of Dean Tullsen’s 1996 Thesis Slides)

  31. Hardware Multithreading [Diagram: a single instruction stream feeding a conventional processor with one PC and one register file]

  32. Hardware Multithreading [Diagram: a multithreaded instruction stream arriving at a conventional processor that still has only one PC and one register file]

  33. Hardware Multithreading [Diagram: the multithreaded instruction stream served by two PCs and two register files, one per thread, sharing one CPU]

  34. Hardware Multithreading [Diagram: the multithreaded instruction stream served by three PCs and three register files sharing one CPU]

  35. Hardware Multithreading [Diagram: the multithreaded instruction stream served by four PCs and four register files sharing one CPU]
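The replicated per-thread PC and register file can be sketched as a toy fine-grained multithreaded core that round-robins fetch between threads each cycle (an SMT core would go further and mix instructions from several threads within the same cycle). The class names and two-instruction programs below are made up for illustration:

```python
# Toy fine-grained hardware multithreading: one shared execution core,
# but a private PC and register file for EACH thread context.

class ThreadContext:
    def __init__(self, program):
        self.pc = 0                 # private program counter
        self.regs = {'r0': 0}       # private register file
        self.program = program      # list of (op, operand) pairs

def run(threads, cycles):
    """Round-robin thread select: a different thread fetches each cycle."""
    for cycle in range(cycles):
        t = threads[cycle % len(threads)]
        if t.pc < len(t.program):
            op, n = t.program[t.pc]
            if op == 'add':
                t.regs['r0'] += n   # execute on the shared core
            t.pc += 1               # advance only this thread's PC

t0 = ThreadContext([('add', 1), ('add', 1)])
t1 = ThreadContext([('add', 10), ('add', 10)])
run([t0, t1], cycles=4)
print(t0.regs['r0'], t1.regs['r0'])   # 2 20
```

Because each context owns its PC and registers, switching threads costs nothing: no state is saved or restored, which is the point of duplicating that hardware.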

  36. Superscalar (vs Superpipelined) (multiple instructions in the same pipeline stage, same clock rate as scalar) (more total stages, faster clock rate)

  37. Superscalar Execution [Diagram: a grid of issue slots (horizontal) versus time in processor cycles (vertical)]

  38. Superscalar Execution [Diagram: the same issue-slot grid, with entirely empty cycles marked as vertical waste]
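Vertical waste (whole cycles in which nothing issues) can be counted from an issue trace; its companion metric, horizontal waste, counts the empty slots in cycles that did issue something. The 4-wide machine and the trace below are hypothetical:

```python
# Counting wasted issue slots on an assumed 4-wide superscalar.
# Each entry is the number of instructions issued in one cycle.

ISSUE_WIDTH = 4
issued_per_cycle = [3, 0, 2, 4, 0, 1]   # hypothetical 6-cycle trace

# Vertical waste: all slots of a cycle that issued nothing.
vertical = sum(ISSUE_WIDTH for n in issued_per_cycle if n == 0)
# Horizontal waste: leftover slots in cycles that issued something.
horizontal = sum(ISSUE_WIDTH - n for n in issued_per_cycle if n > 0)

used = sum(issued_per_cycle)
total = ISSUE_WIDTH * len(issued_per_cycle)

print(used, total)           # 10 24 -> ~42% of issue slots used
print(vertical, horizontal)  # 8 6
assert used + vertical + horizontal == total  # every slot accounted for
```

Simultaneous multithreading attacks both kinds of waste at once, since instructions from other threads can fill empty slots within a cycle and entire otherwise-idle cycles.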
