

  1. Chapter 1: Introduction
Ajay Kshemkalyani and Mukesh Singhal
Distributed Computing: Principles, Algorithms, and Systems
Cambridge University Press

  2. Definition
Autonomous processors communicating over a communication network.
Some characteristics:
◮ No common physical clock
◮ No shared memory
◮ Geographical separation
◮ Autonomy and heterogeneity

  3. Distributed System Model
Figure 1.1: A distributed system connects processors (P) and memory banks (M) by a communication network (WAN/LAN).

  4. Relation between Software Components
Figure 1.2: Interaction of the software components at each process. The distributed application sits atop the distributed software (middleware libraries), which defines the extent of the distributed protocols; below these lie the operating system and the network protocol stack (application, transport, network, and data link layers).

  5. Motivation for Distributed Systems
Inherently distributed computation
Resource sharing
Access to remote resources
Increased performance/cost ratio
Reliability
◮ availability, integrity, fault-tolerance
Scalability
Modularity and incremental expandability

  6. Parallel Systems
Multiprocessor systems (direct access to shared memory, UMA model)
◮ Interconnection network: bus, multi-stage switch
◮ E.g., Omega, Butterfly, Clos, Shuffle-exchange networks
◮ Interconnection generation function, routing function
Multicomputer parallel systems (no direct access to shared memory, NUMA model)
◮ bus, ring, mesh (with or without wraparound), hypercube topologies
◮ E.g., NYU Ultracomputer, CM* Connection Machine, IBM Blue Gene
Array processors (colocated, tightly coupled, common system clock)
◮ Niche market, e.g., DSP applications

  7. UMA vs. NUMA Models
Figure 1.3: Two standard architectures for parallel systems. (a) Uniform memory access (UMA) multiprocessor system. (b) Non-uniform memory access (NUMA) multiprocessor. In both architectures, the processors may locally cache data from memory.

  8. Omega, Butterfly Interconnects
Figure 1.4: Interconnection networks for shared memory multiprocessor systems, each connecting processors P0–P7 to memory banks M0–M7. (a) 3-stage Omega network (n = 8, M = 4). (b) 3-stage Butterfly network (n = 8, M = 4).

  9. Omega Network
n processors, n memory banks.
log n stages, with n/2 switches of size 2×2 in each stage.
Interconnection function: output i of a stage is connected to input j of the next stage, where
$$ j = \begin{cases} 2i & \text{for } 0 \le i \le n/2 - 1 \\ 2i + 1 - n & \text{for } n/2 \le i \le n - 1 \end{cases} $$
Routing function: in any stage s, at any switch, to route to destination j: if the (s+1)th MSB of j is 0, route on the upper wire; else [the (s+1)th MSB of j is 1] route on the lower wire.
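A minimal Python sketch of these two functions (the function names and the destination-tag loop are my own illustration, not from the book), assuming n is a power of 2:

```python
# Omega network: perfect-shuffle interconnection and destination-tag routing.
def omega_interconnect(i: int, n: int) -> int:
    """Output i of one stage connects to input j of the next stage
    (a left rotation of i's log2(n)-bit representation)."""
    return 2 * i if i <= n // 2 - 1 else 2 * i + 1 - n

def omega_route(dest: int, s: int, n: int) -> str:
    """At a switch in stage s (0-indexed), inspect the (s+1)th MSB of the
    destination: 0 means take the upper wire, 1 the lower wire."""
    bits = n.bit_length() - 1          # log2(n) stages = log2(n) address bits
    msb = (dest >> (bits - 1 - s)) & 1
    return "upper" if msb == 0 else "lower"

# Example: routing to memory bank 5 (binary 101) in an 8-way Omega network.
for s in range(3):
    print(f"stage {s}: {omega_route(5, s, 8)}")   # lower, upper, lower
```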

  10. Interconnection Topologies for Multiprocessors
Figure 1.5: (a) 2-D mesh with wraparound (a.k.a. torus); each node is a processor + memory unit. (b) 4-D hypercube; the 16 nodes carry 4-bit labels (0000 through 1111), and two nodes are adjacent iff their labels differ in exactly one bit.
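Hypercube adjacency follows directly from the labeling; a small sketch (my own illustration, not from the book):

```python
# Neighbors of a node in a d-dimensional hypercube: flip one bit at a time.
def hypercube_neighbors(node: int, d: int) -> list[int]:
    return [node ^ (1 << bit) for bit in range(d)]

# Example: the neighbors of node 0000 in the hypercube of Figure 1.5(b).
print([format(n, "04b") for n in hypercube_neighbors(0b0000, 4)])
# -> ['0001', '0010', '0100', '1000']
```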

  11. Flynn's Taxonomy
Figure 1.6: SIMD, MISD, and MIMD modes (C: control unit, P: processing unit, I: instruction stream, D: data stream).
SISD: Single Instruction stream, Single Data stream (traditional)
SIMD: Single Instruction stream, Multiple Data streams
◮ scientific applications, applications on large arrays
◮ vector processors, systolic arrays, Pentium/SSE, DSP chips
MISD: Multiple Instruction streams, Single Data stream
◮ E.g., visualization
MIMD: Multiple Instruction streams, Multiple Data streams
◮ distributed systems, vast majority of parallel systems

  12. Terminology
Coupling
◮ Interdependency/binding among modules, whether hardware or software (e.g., OS, middleware)
Parallelism: T(1)/T(n)
◮ Function of the program and the system
Concurrency of a program
◮ Measures productive CPU time vs. time spent waiting on synchronization operations
Granularity of a program
◮ Amount of computation vs. amount of communication
◮ A fine-grained program is suited to a tightly coupled system
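A quick worked illustration of the two ratios (the numbers are hypothetical, not from the book):

```python
# Hypothetical timings and operation counts, purely to illustrate the ratios.
t1, t4 = 100.0, 30.0                  # T(1) and T(4), in seconds
print(f"parallelism T(1)/T(4) = {t1 / t4:.2f}")   # ~3.33

compute_ops, comm_ops = 5_000_000, 10_000
print(f"granularity = {compute_ops / comm_ops:.0f} ops/message (coarse if large)")
```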

  13. Message-passing vs. Shared Memory
Emulating message passing (MP) over shared memory (SM):
◮ Partition the shared address space
◮ Send/Receive are emulated by writing to/reading from a special mailbox allocated per pair of processes
Emulating SM over MP:
◮ Model each shared object as a process (its owner)
◮ A write to a shared object is emulated by sending a message to the owner process of the object
◮ A read from a shared object is emulated by sending a query to the owner of the shared object
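A minimal Python sketch of the SM-over-MP direction, with a thread and a message queue standing in for the owner process (names and message format are mine, for illustration only):

```python
# Emulating one shared object over message passing: the owner serializes
# all reads and writes, which arrive only as messages.
import threading, queue

requests = queue.Queue()

def owner():
    value = 0                            # the shared object's state
    while True:
        op, arg, reply = requests.get()  # receive the next message
        if op == "write":
            value = arg
            reply.put("ack")
        elif op == "read":
            reply.put(value)
        else:                            # "stop"
            break

def sm_write(v):
    reply = queue.Queue()
    requests.put(("write", v, reply))    # message to the owner process
    return reply.get()                   # wait for the acknowledgement

def sm_read():
    reply = queue.Queue()
    requests.put(("read", None, reply))  # query message to the owner
    return reply.get()                   # response carries the value

threading.Thread(target=owner, daemon=True).start()
sm_write(42)
print(sm_read())                         # -> 42
requests.put(("stop", None, None))
```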

  14. Classification of Primitives (1)
Synchronous (send/receive)
◮ Handshake between sender and receiver
◮ The Send completes only when the corresponding Receive completes
◮ The Receive completes when the data has been copied into the receiver's buffer
Asynchronous (send)
◮ Control returns to the process as soon as the data has been copied out of the user-specified buffer

  15. Classification of Primitives (2)
Blocking (send/receive)
◮ Control returns to the invoking process only after the processing of the primitive (whether synchronous or asynchronous) completes
Nonblocking (send/receive)
◮ Control returns to the process immediately after invocation
◮ Send: possibly even before the data has been copied out of the user buffer
◮ Receive: possibly even before the data has arrived from the sender

  16. Non-blocking Primitive
Send(X, destination, handle_k)   // handle_k is a return parameter
...
Wait(handle_1, handle_2, ..., handle_k, ..., handle_m)   // Wait always blocks
Figure 1.7: A nonblocking send primitive. When the Wait call returns, at least one of its parameters is posted.
The return parameter is a system-generated handle:
◮ Use it later to check the status of completion of the call
◮ Keep checking (in a loop or periodically) whether the handle has been posted
◮ Or issue a Wait(handle_1, handle_2, ...) call with a list of handles
◮ The Wait call blocks until one of the stipulated handles is posted
A sketch of this handle-and-Wait pattern follows below.
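A hedged Python sketch of the mechanism (this mirrors the style of MPI's nonblocking operations, but every name here is my own illustration, not the book's code):

```python
# Nonblocking Send returns a handle immediately; Wait blocks until at
# least one of the stipulated handles has been posted.
import threading, time

def nonblocking_send(data, destination: list):
    handle = threading.Event()           # the system-generated handle
    def transfer():
        time.sleep(0.1)                  # stand-in for the actual data transfer
        destination.append(data)         # data copied out of the user buffer
        handle.set()                     # post the handle
    threading.Thread(target=transfer, daemon=True).start()
    return handle                        # control returns immediately

def wait(*handles, poll=0.01):
    """Block until at least one handle is posted; return the posted ones."""
    while not any(h.is_set() for h in handles):
        time.sleep(poll)
    return [h for h in handles if h.is_set()]

dest = []
h1 = nonblocking_send("X", dest)         # returns before the copy completes
h2 = nonblocking_send("Y", dest)
posted = wait(h1, h2)                    # Wait always blocks
print(len(posted) >= 1, dest)
```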

  17. Blocking/Nonblocking, Synchronous/Asynchronous Send/Receive Primitives
Figure 1.8: Illustration of the four send and two receive primitives. (a) Blocking synchronous Send, blocking Receive. (b) Nonblocking synchronous Send, nonblocking Receive. (c) Blocking asynchronous Send. (d) Nonblocking asynchronous Send. In the timing diagrams, S and R mark the issuing of a Send or Receive primitive, S_C and R_C mark the completion of its processing, P marks the completion of a previously initiated nonblocking operation, and W marks a Wait the process may issue to check completion of a nonblocking operation; the shaded durations show the time to copy data from or to the user buffer and the time during which the issuing process is blocked.
