Communication, Chapter 2: IPC - PowerPoint PPT Presentation


SLIDE 1

Communication

Chapter 2

SLIDE 2

IPC

  • Inter-Process Communication is the heart of all DSs.
  • Processes on different machines.
  • Always based on low-level message passing.
  • In this chapter:
    – RPC
    – RMI
    – MOM (Message-Oriented Middleware)
    – Streams (due to the advent of multimedia DSs)

SLIDE 3

Layered Protocols (1)

  • Layers, interfaces, and protocols in the OSI model.

2-1

SLIDE 4

Layered Protocols

  • Protocol
    – Connection-oriented
    – Connectionless
  • Protocol stack
  • Description of the layers; unit of exchange.
SLIDE 5

Layered Protocols (2)

  • A typical message as it appears on the network.

2-2

SLIDE 6

Data Link Layer

  • Discussion between a receiver and a sender in the data link layer.

2-3

SLIDE 7

Transport Protocols

  • Makes the underlying layers usable by the application layer.
  • Provides a reliable or unreliable connection for the upper layer.
  • UDP vs. TCP
  • RTP for real-time systems.
SLIDE 8

Client-Server TCP

a) Normal operation of TCP.
b) Transactional TCP.

2-4

SLIDE 9

Middleware Protocols

  • An adapted reference model for networked communication.

2-5

SLIDE 10

RPC

  • PC?
  • R…………….PC?
  • Simple idea
  • Complexity lies in providing it
SLIDE 11

Conventional Procedure Call

a) Parameter passing in a local procedure call: the stack before the call to read

count = read(fd, buf, nbytes);

b) The stack while the called procedure is active

SLIDE 12

Issues

  • Calling Method?

    – Call by value
    – Call by reference
    – Call by copy/restore
    – Call by name
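RPC systems cannot easily support true call by reference across address spaces, so copy/restore is often used to approximate it. A minimal Python sketch contrasting the two semantics (the helper names here are ours, not from the slides):

```python
import copy

def increment_all(nums):
    """Callee that mutates its parameter."""
    for i in range(len(nums)):
        nums[i] += 1

def call_by_value(proc, arg):
    # Callee works on a private copy; the caller's data is untouched.
    proc(copy.deepcopy(arg))
    return arg

def call_by_copy_restore(proc, arg):
    # Callee works on a copy, which is written back on return; this is
    # how RPC systems typically approximate call by reference.
    local = copy.deepcopy(arg)
    proc(local)
    arg[:] = local
    return arg

data = [1, 2, 3]
print(call_by_value(increment_all, data))         # [1, 2, 3]
data = [1, 2, 3]
print(call_by_copy_restore(increment_all, data))  # [2, 3, 4]
```

Copy/restore is not always equivalent to call by reference: if the same variable is passed twice, aliasing effects differ.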

SLIDE 13

Client and Server Stubs

  • Principle of RPC between a client and server program.
  • The read stub is called on behalf of the real read procedure!
SLIDE 14

Steps of a Remote Procedure Call

1. Client procedure calls client stub in normal way
2. Client stub builds message, calls local OS
3. Client's OS sends message to remote OS
4. Remote OS gives message to server stub
5. Server stub unpacks parameters, calls server
6. Server does work, returns result to the stub
7. Server stub packs it in message, calls local OS
8. Server's OS sends message to client's OS
9. Client's OS gives message to client stub
10. Client stub unpacks result, returns to client
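The ten steps above can be sketched in Python. In this illustrative sketch (all names are ours), two in-process queues stand in for the operating systems' message transport, and `pickle` does the (un)marshalling:

```python
import pickle
import queue
import threading

# Two queues stand in for the client's and server's operating systems
# (steps 3 and 8: the actual message transport).
to_server = queue.Queue()
to_client = queue.Queue()

def add(a, b):
    """The real server procedure (step 6: the server does the work)."""
    return a + b

def server_stub():
    msg = to_server.get()                 # step 4: OS hands message to stub
    name, args = pickle.loads(msg)        # step 5: unpack parameters
    result = {"add": add}[name](*args)    # steps 5-6: call the server
    to_client.put(pickle.dumps(result))   # steps 7-8: pack and send result

def client_stub_add(a, b):
    # steps 1-3: the client calls this stub like a normal local procedure;
    # the stub builds a message and hands it to the "OS"
    to_server.put(pickle.dumps(("add", (a, b))))
    # steps 9-10: wait for the reply, unpack it, return to the caller
    return pickle.loads(to_client.get())

threading.Thread(target=server_stub, daemon=True).start()
print(client_stub_add(3, 4))  # 7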
SLIDE 15

Passing Value Parameters (1)

  • Steps involved in doing remote computation through RPC
  • It works fine as long as the scenario is simple and straightforward; but ….

2-8

SLIDE 16

Passing Value Parameters (2)

  • Different character set standards (ASCII vs EBCDIC)
  • Little-Endian vs Big-Endian Architecture.

a) Original message on the Pentium (little-endian)
b) The message after receipt on the SPARC (big-endian)
c) The message after being inverted. The small numbers in the boxes indicate the address of each byte.
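The byte-order problem can be reproduced with Python's `struct` module; a small sketch (the value 5 is an arbitrary example, e.g. a string length in the message):

```python
import struct

n = 5  # an example integer field in the message

little = struct.pack("<i", n)  # Pentium-style little-endian encoding
big    = struct.pack(">i", n)  # SPARC-style big-endian encoding

print(little.hex())  # 05000000
print(big.hex())     # 00000005

# Decoding little-endian bytes as big-endian yields the wrong value:
print(struct.unpack(">i", little)[0])  # 83886080
```

This is exactly why stubs must agree on a wire format (or tag the byte order) before exchanging binary data.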

SLIDE 17

Call by Reference Parameter Passing

???

SLIDE 18

Parameter Specification and Stub Generation

  • Both sides should agree on the content of the passed data structures.
  • Example in the next slide.
  • The way a message including the parameters is interpreted is the main issue!!
  • Client and server should agree on the representation of simple data structures.
  • Agreement on the actual exchange of the messages (connection-oriented or connectionless).

SLIDE 19

Parameter Specification and Stub Generation

a) A procedure
b) The corresponding message.
c) Interface Definition Language  compiling into client stub and server stub

SLIDE 20

Extended RPC Models

  • RPC is becoming the de facto standard for communication in DSs.
  • Popularity due to simplicity.
  • Two extensions:
    – Doors
    – Asynchronous RPC

SLIDE 21

Doors

  • Equivalent to RPC for processes located on the same machine.
  • A door is a name for a procedure in the address space of a server process, called by processes collocated with the server.
  • The idea originally came from the Spring OS (1994).
  • Same as lightweight RPC.
  • The server process must register a door before use (by calling door_create).

SLIDE 22

Doors

  • The principle of using doors as IPC mechanism.
SLIDE 23

Asynchronous RPC (1)

a) The interconnection between client and server in a traditional RPC b) The interaction using asynchronous RPC

2-12

SLIDE 24

Asynchronous RPC (2)

  • A client and server interacting through two asynchronous RPCs

2-13
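The asynchronous interaction above can be sketched with a future: the call returns immediately, the client continues working, and the result is collected later (in the two-RPC variant, the server's reply arrives as a second, independent message). A hypothetical sketch in which a thread pool stands in for the network and remote server:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def remote_square(x):
    """Stand-in for a remote procedure, including network latency."""
    time.sleep(0.1)
    return x * x

executor = ThreadPoolExecutor()

# Asynchronous RPC: submit() returns immediately with a future.
future = executor.submit(remote_square, 12)

# ... the client does other useful work here instead of blocking ...

# Later, the client collects the result (blocking only if not yet done).
print(future.result())  # 144
```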

SLIDE 25

Writing a Client and a Server

  • The steps in writing a client and a server in DCE RPC.

2-14

SLIDE 26

Binding a Client to a Server

  • Client-to-server binding in DCE.

2-15

SLIDE 27

Performing an RPC

  • The whole scenario!
  • Semantics

– At-most-once operation

  • Idempotency
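At-most-once semantics for a non-idempotent operation can be approximated on the server by caching the result per request id, so a retransmitted request is answered from the cache rather than re-executed. An illustrative Python sketch (all names are ours):

```python
# Server-side state: results cached per request id, plus a counter that a
# non-idempotent operation mutates.
executed = {}
counter = {"n": 0}

def transfer(amount):
    """NOT idempotent: executing it twice changes the state twice."""
    counter["n"] += amount
    return counter["n"]

def at_most_once(request_id, amount):
    # A duplicate (retransmitted) request id is replayed from the cache,
    # so the operation itself runs at most once.
    if request_id not in executed:
        executed[request_id] = transfer(amount)
    return executed[request_id]

print(at_most_once("req-1", 100))  # 100
print(at_most_once("req-1", 100))  # 100  (duplicate: replayed, not re-run)
print(counter["n"])                # 100
```

An idempotent operation (e.g., reading a block) would not need this machinery: re-executing it is harmless.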
SLIDE 28

Remote Object Invocation

  • OO technology in centralized systems.
  • Promoting the idea of RPC to OO technology.
  • Proxy as the client delegate == client stub.
  • Skeleton == server stub.
  • The object state is normally not distributed  remote object instead of distributed object.
SLIDE 29

Distributed Objects

  • Common organization of a remote object with

client-side proxy.

2-16

SLIDE 30

Message-Oriented Communication

  • Sometimes both RPC and RMI are not appropriate
  • Synchronous nature of RPC and RMI!

 Messaging.

SLIDE 31

Berkeley Sockets (2)

  • Connection-oriented communication pattern using sockets.
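The connection-oriented pattern can be sketched with Python's `socket` module: the server runs `socket()`, `bind()`, `listen()`, `accept()`, while the client runs `socket()`, `connect()`, then both exchange data and close. A minimal localhost echo round (function names are illustrative):

```python
import socket
import threading

def serve_once(server):
    """Accept one connection, echo the data back upper-cased."""
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024).upper())
    conn.close()

def echo_round(message):
    # Server side: socket() -> bind() -> listen() -> accept()
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen(1)
    threading.Thread(target=serve_once, args=(server,), daemon=True).start()

    # Client side: socket() -> connect() -> send() -> recv() -> close()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())
    client.sendall(message)
    reply = client.recv(1024)
    client.close()
    server.close()
    return reply

print(echo_round(b"hello"))  # b'HELLO'
```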
SLIDE 32

The Message-Passing Interface (MPI)

  • Some of the most intuitive message-passing primitives of MPI.

Primitive      Meaning
MPI_bsend      Append outgoing message to a local send buffer
MPI_send       Send a message and wait until copied to local or remote buffer
MPI_ssend      Send a message and wait until receipt starts
MPI_sendrecv   Send a message and wait for reply
MPI_isend      Pass reference to outgoing message, and continue
MPI_issend     Pass reference to outgoing message, and wait until receipt starts
MPI_recv       Receive a message; block if there are none
MPI_irecv      Check if there is an incoming message, but do not block
SLIDE 33

Stream-Oriented Communication

  • Till now, the focus was on exchanging one or more independent and complete units of info.
  • However, consider an audio stream where CD quality is required  the original sound has been sampled at 44,100 Hz  a sample every 1/44100 sec is required to reproduce the original sound.
  • Time-dependent and continuous media are required :: temporal relationships between data items are crucial.
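The arithmetic behind these numbers, as a quick check (assuming the standard CD format of 2 channels at 16 bits, i.e. 2 bytes, per sample):

```python
SAMPLE_RATE = 44_100               # Hz, CD-quality sampling rate

# One sample must be delivered every 1/44100 s to reproduce the sound.
sample_period = 1 / SAMPLE_RATE
print(f"{sample_period * 1e6:.2f} us per sample")  # 22.68 us per sample

# Sustained data rate: 2 channels x 2 bytes per sample.
bytes_per_second = SAMPLE_RATE * 2 * 2
print(bytes_per_second)            # 176400 bytes/s
```

The point is not the throughput (modest by modern standards) but the timing: each unit has a deadline roughly 23 microseconds after the previous one.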

SLIDE 34

Data Stream (1)

  • Setting up a stream between two processes across a network.
  • Data stream is a sequence of data units.
SLIDE 35

Transmission Modes

  • Asynchronous transmission mode: sending regardless of time.
  • Synchronous transmission mode: there is a max end-to-end delay for each unit (e.g., sensor info).
  • Isochronous transmission mode: data units should be transferred on time :: a max and min end-to-end delay (bounded jitter).

SLIDE 36

Data Stream (2)

  • Setting up a stream directly between two devices.

2-35.2

SLIDE 37

Data Stream (3)

  • An example of multicasting a stream to several receivers.
SLIDE 38

QoS

  • Time-dependent requirements :: QoS
  • Next slide shows a sample QoS specification
  • Formulation based on the token bucket algorithm
  • Basic idea: tokens are generated at a constant rate.
  • A token stands for a fixed # of bytes an application is allowed to pass to the network.

SLIDE 39

Specifying QoS (2)

  • The principle of a token bucket algorithm.
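A minimal sketch of the principle (the rate and capacity values are arbitrary): tokens accumulate at a constant rate up to the bucket capacity, and the application may pass data to the network only while it holds enough tokens, which permits bursts up to the bucket size while bounding the long-term rate.

```python
class TokenBucket:
    """Tokens accrue at `rate` per second up to `capacity`; passing n
    bytes consumes n tokens, or is rejected if too few are available."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start with a full bucket
        self.last = 0.0

    def allow(self, n, now):
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if n <= self.tokens:
            self.tokens -= n
            return True
        return False

bucket = TokenBucket(rate=100, capacity=200)  # 100 tokens/s, burst of 200
print(bucket.allow(200, now=0.0))  # True  (burst drains the full bucket)
print(bucket.allow(50,  now=0.0))  # False (no tokens left)
print(bucket.allow(50,  now=1.0))  # True  (100 tokens refilled after 1 s)
```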
SLIDE 40

Setting Up a Stream

  • The basic organization of RSVP (Resource reSerVation Protocol) for resource reservation in a distributed system.

SLIDE 41

End of Chapter 2