
Distributed Systems (5DV020), Fall 2009

Distributed transactions

Outline

  • Flat and nested distributed transactions
  • Atomic commit

– Two-phase commit protocol

  • Concurrency control

– Locking
– Optimistic concurrency control

  • Distributed deadlock

– Edge chasing

  • Summary

Flat and nested distributed transactions

  • Distributed transaction:

– Transactions dealing with objects managed by different processes

  • Allows for even better performance

– At the price of increased complexity

  • Transaction coordinators and object servers

– Participants in the transaction


Atomic commit

  • If the client is told that the transaction is committed, it must be committed at all object servers

– ...at the same time
– ...in spite of (crash) failures and asynchronous systems

Two-phase commit protocol

  • Phase 1: Coordinator collects votes

– “Abort”

  • Any participant can abort its part of the transaction

– “Prepared to commit”

  • Save update to permanent storage to survive crashes
  • May not change vote to “abort”

  • Phase 2: Participants carry out the joint decision


Two-phase commit protocol (in detail)

  • Phase 1 (voting):

– Coordinator sends “canCommit?” to each participant
– Participants answer “yes” or “no”

  • “Yes”: update saved to permanent storage
  • “No”: abort immediately

Two-phase commit protocol (in detail)

  • Phase 2 (completion):

– Coordinator collects votes (including its own)

  • No failures and all votes are “yes”? Send “doCommit” to each participant; otherwise, send “doAbort”

– Participants are in the “uncertain” state until they receive “doCommit” or “doAbort”, and may act accordingly

  • Confirm commit via “haveCommitted”
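The two phases can be sketched as an in-memory simulation. This is a minimal sketch, not a real distributed implementation: all names (`Coordinator`, `Participant`, `can_commit`, ...) are illustrative, the log list stands in for permanent storage, and message passing is reduced to method calls.

```python
class Participant:
    def __init__(self, name):
        self.name = name
        self.log = []            # stands in for permanent storage
        self.state = "working"

    def can_commit(self):
        # Phase 1: save the update to "permanent storage" before voting yes;
        # after voting yes, the participant may not change its vote to abort.
        self.log.append("prepared")
        self.state = "uncertain"  # uncertain until doCommit/doAbort arrives
        return True

    def do_commit(self):
        self.state = "committed"

    def do_abort(self):
        self.state = "aborted"


class Coordinator:
    def __init__(self, participants):
        self.participants = participants

    def run(self):
        # Phase 1 (voting): send "canCommit?" and collect every vote.
        votes = [p.can_commit() for p in self.participants]
        # Phase 2 (completion): all yes -> doCommit, otherwise doAbort.
        if all(votes):
            for p in self.participants:
                p.do_commit()
            return "committed"
        for p in self.participants:
            p.do_abort()
        return "aborted"


parts = [Participant("A"), Participant("B")]
print(Coordinator(parts).run())      # committed
print([p.state for p in parts])      # ['committed', 'committed']
```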

Two-phase commit protocol

  • If the coordinator fails

– Participants are “uncertain”

  • If some have received an answer (or they can figure it out themselves), they can coordinate themselves

– Participants can request status
– If a participant has not received “canCommit?” and waits too long, it may abort

Two-phase commit protocol

  • If a participant fails

– No reply to “canCommit?” in time?

  • Coordinator can abort

– Crash after “canCommit?”

  • Use permanent storage to get up to speed

Two-phase commit protocol for nested transactions

  • Subtransactions make a “provisional commit”

– Nothing written to permanent storage

  • Ancestor could still abort!

– If they crash, the replacement cannot commit

  • Status information is passed upward in the tree

– The list of provisionally committed subtransactions eventually reaches the top level

Two-phase commit protocol for nested transactions

  • Top-level transaction initiates the voting phase with provisionally committed transactions

– If they have crashed since the provisional commit, they must abort
– Before voting “yes”, they must prepare to commit data

  • At this point we use permanent storage

– Hierarchic or flat voting

Hierarchic voting

  • Responsibility to vote is passed one level/generation at a time, down through the tree

Flat voting

  • Contact coordinators directly, using parameters

– Transaction ID
– List of transactions that are reported as aborted

  • Coordinators may manage more than one subtransaction, and due to crashes, this information may be required

Concurrency control revisited

  • Locks

– Release locks when the transaction can finish

  • After phase 1 if the transaction should abort
  • After phase 2 if the transaction should commit

– Distributed deadlock, oh my!

  • Optimistic concurrency control

– Validate access to local objects
– Commitment deadlock if serial
– Different transaction order if parallel
– Interesting problem! Read the book!

Distributed deadlock

  • Local and distributed deadlocks

– Phantom deadlocks

  • Simplest solution

– A manager collects local wait-for information and constructs a global wait-for graph

  • Single point of failure, bad performance, does not scale, what about availability, etc.
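The centralized approach above can be sketched as follows: local wait-for edges are merged into one global graph, and the manager searches it for a cycle, which is a deadlock. The graph representation and names are illustrative assumptions, not from any particular system.

```python
def find_cycle(wait_for):
    """wait_for: dict mapping a transaction to the set of transactions
    it waits for (the merged global wait-for graph)."""
    visiting, done = set(), []

    def dfs(t, path):
        visiting.add(t)
        path.append(t)
        for u in wait_for.get(t, ()):
            if u in visiting:                  # back edge -> cycle found
                return path[path.index(u):] + [u]
            if u not in done:
                cycle = dfs(u, path)
                if cycle:
                    return cycle
        visiting.discard(t)
        done.append(t)
        path.pop()
        return None

    for t in list(wait_for):
        if t not in done:
            cycle = dfs(t, [])
            if cycle:
                return cycle
    return None


# Local edges from three servers, merged: T -> U -> V -> T is a deadlock.
graph = {"T": {"U"}, "U": {"V"}, "V": {"T"}}
print(find_cycle(graph))   # ['T', 'U', 'V', 'T']
```

Note that the graph is built from possibly stale local snapshots, which is exactly how phantom deadlocks arise: an edge may already be gone by the time the manager sees it.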

  • Distributed solution

Edge chasing

  • Initiation: a server notices that T waits for U for object A, so it sends <T → U> to the server handling A (where U may be blocked)

Edge chasing

  • Detection: servers handle incoming probes by inspecting whether the relevant transaction (U) is also waiting for another transaction (V); if so, they update the probe (<T → U → V>) and send it along

– Loops (e.g. <T → U → V → T>) indicate deadlock
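The probe forwarding can be sketched in a few lines. For simplicity this sketch runs the whole chase in one loop on one machine and assumes each transaction waits for at most one other; in the real algorithm each step happens at a different server.

```python
def chase(probe, waits_for):
    """probe: list of transactions, e.g. ["T", "U"] for <T -> U>.
    waits_for: maps a transaction to the transaction it waits for."""
    while True:
        last = probe[-1]
        nxt = waits_for.get(last)
        if nxt is None:            # last transaction is not blocked: discard probe
            return None
        if nxt in probe:           # loop, e.g. <T -> U -> V -> T>: deadlock
            return probe + [nxt]
        probe = probe + [nxt]      # update the probe and send it along


# Initiation sends <T -> U>; U waits for V; V waits for T.
waits_for = {"T": "U", "U": "V", "V": "T"}
print(chase(["T", "U"], waits_for))   # ['T', 'U', 'V', 'T']
```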


Edge chasing

  • Resolution: abort a transaction in the cycle
  • Servers communicate with the coordinators for each transaction to find out what they wait for

Edge chasing

  • Any problem with the algorithm?

– What if all coordinators initiate it, and then (when they detect the loop) start aborting left and right?

  • Totally ordered transaction priorities

– Abort the lowest priority!
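Because the priorities are totally ordered, every server that detects the cycle picks the same victim, so only one transaction is aborted. A small sketch (the numeric priorities are an illustrative assumption; here a higher number means higher priority):

```python
priority = {"T": 3, "U": 2, "V": 1}
cycle = ["T", "U", "V"]
# Every detector computes the same minimum, so they all agree on the victim.
victim = min(cycle, key=lambda t: priority[t])
print(victim)   # V: the lowest-priority transaction in the cycle
```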

Edge chasing

  • Optimization: only initiate a probe if a transaction with higher priority waits for a lower one

– Also only forward probes to transactions of lower priority
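In other words, probes only ever travel "downward" in priority, which roughly halves the number of probes. A sketch of the initiation test, under the same illustrative assumption that a higher number means higher priority:

```python
def should_probe(waiter, blocker, priority):
    # Initiate (or forward) only when a higher-priority transaction
    # waits for a lower-priority one.
    return priority[waiter] > priority[blocker]


priority = {"T": 3, "U": 2, "V": 1}
print(should_probe("T", "U", priority))   # True: T (3) waits for U (2)
print(should_probe("V", "T", priority))   # False: V (1) waits for T (3)
```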


Edge chasing

  • Any problem with the optimized algorithm?

– If a higher-priority transaction waits for a lower-priority one (but the lower one is not blocked when the request comes), and the lower one then becomes blocked, it will not initiate probing

Edge chasing

  • Add probe queues!

– All probes related to a transaction are saved, and are sent (by the coordinator) to the server of the object along with the request for access
– Works, but increases complexity
– Probe queues must be maintained

Summary

  • Distributed transactions
  • Atomic commit protocol

– Two-phase commit protocol

  • Vote, then carry out the order
  • Flat transactions
  • Nested transactions

– Voting schemes

  • Concurrency control

– Problems!
– Distributed deadlock

  • Edge chasing

Next lecture

  • Daniel takes over!
  • Beyond client-server

– Peer to peer (P2P)
– BitTorrent
– ...and more!