Flat and nested distributed transactions

Distributed Systems, Fall 2009 (5DV020) — Distributed transactions


Outline

• Flat and nested distributed transactions
• Atomic commit
  – Two-phase commit protocol
• Concurrency control
  – Locking
  – Optimistic concurrency control
• Distributed deadlock
  – Edge chasing
• Summary

Flat and nested distributed transactions

• Distributed transaction: transactions dealing with objects managed by different processes
• Allows for even better performance
  – At the price of increased complexity
• Transaction coordinators and object servers
  – Participants in the transaction

Atomic commit

• If the client is told that the transaction is committed, it must be committed at all object servers
  – ...at the same time
  – ...in spite of (crash) failures and asynchronous systems

Two-phase commit protocol

• Phase 1: coordinator collects votes
  – "Abort"
    • Any participant can abort its part of the transaction
  – "Prepared to commit"
    • Save the update to permanent storage to survive crashes
    • May not change the vote to "abort"
• Phase 2: participants carry out the joint decision
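The commit rule above (commit only if every participant voted "prepared to commit") can be sketched as follows. This is a minimal illustration, not code from the lecture; the `Participant` class and message strings are assumptions chosen to mirror the slide's vocabulary.

```python
class Participant:
    """Toy participant: votes "yes" iff its local part of the
    transaction succeeded (stands in for a real object server)."""
    def __init__(self, ok):
        self.ok = ok

    def can_commit(self):
        return "yes" if self.ok else "no"


def collect_votes(participants):
    """Phase 1: ask every participant whether it can commit.
    Stops early on the first "no", since any participant can abort."""
    votes = []
    for p in participants:
        vote = p.can_commit()
        votes.append(vote)
        if vote == "no":
            break
    return votes


def decide(votes, n_participants):
    """Phase 2 decision: commit only if *all* participants voted yes."""
    if len(votes) == n_participants and all(v == "yes" for v in votes):
        return "doCommit"
    return "doAbort"
```

A single "no" (or a missing vote) forces `doAbort`, which is exactly why the protocol guarantees the all-or-nothing property at every object server.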

Two-phase commit protocol (in detail)

• Phase 1 (voting):
  – Coordinator sends "canCommit?" to each participant
  – Participants answer "yes" or "no"
    • "Yes": the update is saved to permanent storage
    • "No": abort immediately
• Phase 2 (completion):
  – Coordinator collects the votes (including its own)
    • No failures and all votes are "yes"? Send "doCommit" to each participant; otherwise, send "doAbort"
  – Participants are in the "uncertain" state until they receive "doCommit" or "doAbort", and may then act accordingly
    • Confirm the commit via "haveCommitted"

Two-phase commit protocol: failures

• If the coordinator fails
  – Participants are "uncertain"
    • If some have received an answer (or they can figure it out themselves), they can coordinate among themselves
  – Participants can request status
  – Crash after "canCommit?"
    • Use permanent storage to get up to speed
• If a participant fails
  – No reply to "canCommit?" in time?
    • The coordinator can abort
  – If a participant has not received "canCommit?" and waits too long, it may abort

Two-phase commit protocol for nested transactions

• Subtransactions do a "provisional commit"
  – Nothing is written to permanent storage
  – An ancestor could still abort!
  – If they crash, the replacement cannot commit
• Status information is passed upward in the tree
  – The list of provisionally committed subtransactions eventually reaches the top level
• The top-level transaction initiates the voting phase with the provisionally committed subtransactions
  – If they have crashed since the provisional commit, they must abort
  – Before voting "yes", they must prepare to commit the data
    • At this point we use permanent storage
  – Hierarchic or flat voting
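The participant side of the detailed protocol above can be sketched as a small state machine. This is an illustrative sketch only: the state names follow the slides ("uncertain"), while the class, the `log` list standing in for permanent storage, and the method names are assumptions.

```python
class TwoPCParticipant:
    """Sketch of a participant's states during two-phase commit."""

    def __init__(self):
        self.state = "working"
        self.log = []                    # stands in for permanent storage

    def on_can_commit(self, local_ok):
        """Phase 1: answer the coordinator's "canCommit?"."""
        if local_ok:
            self.log.append("prepared")  # save update so the vote survives a crash
            self.state = "uncertain"     # must now wait for doCommit/doAbort
            return "yes"
        self.state = "aborted"           # a "no" voter may abort immediately
        return "no"

    def on_decision(self, msg):
        """Phase 2: leave the uncertain state on the joint decision."""
        assert self.state == "uncertain"
        if msg == "doCommit":
            self.state = "committed"
            return "haveCommitted"       # confirm the commit to the coordinator
        self.state = "aborted"
        return None
```

The key point the sketch makes concrete: after voting "yes" the participant is stuck in `"uncertain"` and cannot unilaterally decide, which is why coordinator crashes are the painful case for this protocol.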

Hierarchic voting

• The responsibility to vote is passed one level/generation at a time, through the tree

Flat voting

• Contact the coordinators directly, using as parameters
  – The transaction ID
  – A list of transactions that are reported as aborted
• Coordinators may manage more than one subtransaction, and due to crashes, this information may be required

Concurrency control revisited

• Locks
  – Release locks when the transaction can finish
    • After phase 1 if the transaction should abort
    • After phase 2 if the transaction should commit
  – Distributed deadlock, oh my!
• Optimistic concurrency control
  – Validate access to local objects
  – Commitment deadlock if serial
  – Different transaction order if parallel

Distributed deadlock

• Local and distributed deadlocks
  – Phantom deadlocks
• Simplest solution
  – A manager collects local wait-for information and constructs a global wait-for graph
    • Single point of failure, bad performance, does not scale, what about availability, etc.
• Distributed solution
  – Interesting problem! Read the book!

Edge chasing

• Initiation: a server notices that T waits for U for object A, so it sends the probe <T → U> to the server handling A (where U may be blocked)
• Detection: servers handle incoming probes by inspecting whether the relevant transaction (U) is also waiting for another transaction (V)
  – If so, they update the probe (<T → U → V>) and send it along
  – Loops (e.g. <T → U → V → T>) indicate deadlock
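The probe-forwarding step of edge chasing above can be sketched as follows. This is a simplified, single-process illustration (not the slides' distributed algorithm): `wait_for` is an assumed global map from each blocked transaction to the one it waits for, whereas in the real algorithm each server only knows its local edges.

```python
def forward_probe(probe, wait_for):
    """Extend a probe <T -> ... -> U> while its tail is itself blocked.
    Returns (final_probe, deadlocked)."""
    while True:
        tail = probe[-1]
        nxt = wait_for.get(tail)
        if nxt is None:                # tail is not blocked: the probe dies out
            return probe, False
        probe = probe + [nxt]          # update the probe, e.g. <T -> U -> V>
        if nxt in probe[:-1]:          # a loop, e.g. <T -> U -> V -> T>: deadlock
            return probe, True
```

For example, with `wait_for = {"T": "U", "U": "V", "V": "T"}`, the initial probe `["T", "U"]` grows to `<T → U → V → T>` and the repeated transaction signals the cycle.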

Edge chasing (continued)

• Resolution: abort a transaction in the cycle
  – Servers communicate with the coordinators of each transaction to find out what they are waiting for
• Any problem with the algorithm?
  – What if all coordinators initiate it, and then (when they detect the loop) start aborting left and right?
  – Solution: totally ordered transaction priorities
    • Abort the lowest priority!
• Optimization: only initiate a probe if a transaction with higher priority waits for one with lower priority
  – Also, only forward probes to transactions of lower priority
• Any problem with the optimized algorithm?
  – If a higher-priority transaction waits for a lower-priority one that is not blocked when the request arrives, and the lower one then becomes blocked, no probe will be initiated
  – Solution: add probe queues!
    • All probes related to a transaction are saved, and are sent (by the coordinator) to the server of the object along with the request for access
    • Works, but increases complexity: probe queues must be maintained
    • Problems!

Summary

• Distributed transactions
• Atomic commit protocol
  – Two-phase commit protocol
    • Vote, then carry out the order
  – Flat transactions
  – Nested transactions
    • Voting schemes
• Concurrency control
  – Distributed deadlock
    • Edge chasing
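The edge-chasing priority rules and probe queues discussed above can be sketched as follows. This is an illustrative sketch, not the slides' algorithm verbatim: the convention that a smaller number means higher priority, the module-level `probe_queues` map, and the function names are all assumptions.

```python
def should_initiate(waiter_prio, holder_prio):
    """Optimized rule: only start a probe when a higher-priority
    transaction waits for a lower-priority one (smaller = higher)."""
    return waiter_prio < holder_prio


probe_queues = {}                       # transaction -> probes that reached it


def enqueue_probe(txn, probe):
    """Save a probe addressed to txn instead of discarding it."""
    probe_queues.setdefault(txn, []).append(probe)


def on_block(txn, waits_for):
    """When txn later becomes blocked on waits_for, its queued probes
    are extended and forwarded, so no probe is lost."""
    return [probe + [waits_for] for probe in probe_queues.get(txn, [])]
```

The queue is what fixes the gap noted above: a probe that arrives while its target is still running is parked, and forwarded only once the target actually blocks.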

Next lecture

• Daniel takes over!
• Beyond client-server
  – Peer-to-peer (P2P)
  – BitTorrent
  – ...and more!
