Multiprocessors (Chapter 9)



SI232 Set #22: Multiprocessors & El Grande Finale (Chapter 9)

Multiprocessors (Chapter 9)

  • Idea: create powerful computers by connecting many smaller ones

– good news: works for timesharing (better than a supercomputer)
– bad news: it's really hard to write good concurrent programs; many commercial failures

(Diagrams: two organizations. Left: processors, each with a cache, sharing a single bus, one memory, and I/O. Right: processors, each with a cache and its own memory, connected by a network.)


Who? When? Why?

  • “For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit cooperative solution…. Demonstration is made of the continued validity of the single processor approach…”

  • “…it appears that the long-term direction will be to use increased silicon to build multiple processors on a single chip.”


Flynn’s Taxonomy (1966)

  • 1. Single instruction stream, single data stream (SISD)
  • 2. Single instruction stream, multiple data streams (SIMD)
  • 3. Multiple instruction streams, single data stream (MISD)
  • 4. Multiple instruction streams, multiple data streams (MIMD)

Question #1: How do parallel processors share data?

  • 1. Shared variables in memory
  • 2. Send explicit messages between processors

(Diagrams: a single-bus shared-memory multiprocessor with one memory and I/O, and a message-passing organization in which each processor has its own cache and private memory and processors are connected by a network.)

Question #2: How do parallel processors coordinate?

  • synchronization (e.g., locks and barriers)
  • built into send / receive primitives
  • operating system protocols


Some History

  • Some SIMD designs: (machine list not preserved in this copy)
  • “For better or worse, computer architects are not easily discouraged”

Lots of interesting designs and ideas, lots of failures, few successes


Clusters

  • Constructed from whole computers
  • Independent, scalable networks
  • Strengths:

– Many applications amenable to loosely coupled machines
– Exploit local area networks
– Cost effective / easy to expand

  • Weaknesses:

– Administration costs not necessarily lower
– Connected using the I/O bus

  • Highly available due to separation of memories


Google

  • Serve an average of 1000 queries per second
  • Google uses 6,000 processors and 12,000 disks
  • Two sites in Silicon Valley, two in Virginia
  • Each site connected to the Internet via OC-48 (2488 Mbit/sec)
  • Reliability:

– On an average day, 20 machines need to be rebooted (software errors)
– 2-3% of the machines are replaced each year


A Whirlwind Tour of Chip Multiprocessors and Multithreading

Slides from Joel Emer’s talk at Microprocessor Forum


Instruction Issue

(Diagram: issue slots over time for a single-issue pipeline; one instruction per cycle.)


Superscalar Issue

(Diagram: issue slots over time; multiple instructions from one thread issue each cycle, with some slots left empty.)


Chip Multiprocessor

(Diagram: issue slots over time; two cores on one chip, each issuing from its own thread.)


Fine Grained Multithreading

(Diagram: issue slots over time; the pipeline switches between threads cycle by cycle.)


Simultaneous Multithreading

(Diagram: issue slots over time; instructions from multiple threads issue in the same cycle, filling slots the other schemes leave empty.)


Concluding Remarks

  • Evolution vs. Revolution

  • “More often the expense of innovation comes from being too disruptive to computer users”
  • “Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers.”