
Distributed Transaction Management: Advanced Topics in Database Management (INFSCI 2711)



  1. Distributed Transaction Management, Advanced Topics in Database Management (INFSCI 2711). Some materials are from Database Management Systems (Ramakrishnan and Gehrke), Database System Concepts (Silberschatz, Korth and Sudarshan), and Data Management in the Cloud (Agrawal, Das, and El Abbadi). Vladimir Zadorozhny, DINS, University of Pittsburgh. Distributed Database System: a distributed database system consists of loosely coupled sites that share no physical component. Database systems that run on each site are independent of each other. Transactions may access data at one or more sites.

  2. Distributed Data Storage. Assume the relational data model. Replication: the system maintains multiple copies of data, stored at different sites, for faster retrieval and fault tolerance. Fragmentation: a relation is partitioned into several fragments stored at distinct sites. Replication and fragmentation can be combined: a relation is partitioned into several fragments, and the system maintains several identical replicas of each such fragment. Transactions: a user's program may carry out many operations on the data retrieved from the database, but the DBMS is only concerned with what data is read from and written to the database. A transaction is the DBMS's abstract view of a user program: a sequence of reads and writes. T1: R(A); A=A+100; W(A); R(B); B=B-100; W(B); Commit
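
To make the read/write abstraction concrete, here is a minimal Python sketch of T1 run against a toy in-memory store; the db dictionary and the helper names read_item/write_item are assumptions for illustration only, not part of the slides.

    # Toy illustration of T1: R(A); A=A+100; W(A); R(B); B=B-100; W(B); Commit
    # The "database" is just a dictionary; "commit" here only means the updates stay applied.
    db = {"A": 500, "B": 500}

    def read_item(name):            # R(name)
        return db[name]

    def write_item(name, value):    # W(name)
        db[name] = value

    def t1():
        a = read_item("A")          # R(A)
        write_item("A", a + 100)    # W(A)
        b = read_item("B")          # R(B)
        write_item("B", b - 100)    # W(B)
        # Commit: nothing further is needed in this toy model

    t1()
    print(db)                       # {'A': 600, 'B': 400}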

  3. The ACID properties. Atomicity: all actions in the Xact happen, or none happen. Consistency: if each Xact is consistent, and the DB starts consistent, it ends up consistent. Isolation: execution of one Xact is isolated from that of other Xacts. Durability: if a Xact commits, its effects persist. Concurrency in a DBMS. Users submit transactions, and can think of each transaction as executing by itself. Concurrency is achieved by the DBMS, which interleaves actions (reads/writes of DB objects) of various transactions. Each transaction must leave the database in a consistent state if the DB is consistent when the transaction begins.

  4. Example. Consider a possible interleaving (schedule): T1: A=A+100, B=B-100 and T2: A=1.06*A, B=1.06*B, where T2's update of each object comes right after T1's update of the same object. This is OK. But what about an interleaving in which T2 updates both A and B in between T1's update of A and T1's update of B? The DBMS's view of that second schedule: T1: R(A), W(A), R(B), W(B); T2: R(A), W(A), R(B), W(B), with all four of T2's actions interleaved between T1's W(A) and R(B). Scheduling Transactions. Serial schedule: a schedule that does not interleave the actions of different transactions. Equivalent schedules: for any database state, the effect (on the set of objects in the database) of executing the first schedule is identical to the effect of executing the second schedule. Serializable schedule: a schedule that is equivalent to some serial execution of the transactions. (Note: if each transaction preserves consistency, every serializable schedule preserves consistency.)
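
A common mechanical test for serializability (in its conflict form) is to build a precedence graph and check it for cycles. This test is not spelled out on the slides; the Python sketch below is an illustrative assumption, applied to the problematic interleaving above written as (transaction, action, object) triples.

    # Conflict-serializability check: add an edge Ti -> Tj whenever an action of Ti
    # conflicts with a later action of Tj on the same object (R/W, W/R, or W/W by
    # different Xacts), then look for a cycle in the resulting precedence graph.
    def precedence_graph(schedule):
        edges = set()
        for i, (ti, ai, oi) in enumerate(schedule):
            for tj, aj, oj in schedule[i + 1:]:
                if ti != tj and oi == oj and (ai == "W" or aj == "W"):
                    edges.add((ti, tj))
        return edges

    def has_cycle(edges):
        graph = {}
        for u, v in edges:
            graph.setdefault(u, []).append(v)
        def visit(node, path):
            if node in path:
                return True
            return any(visit(nxt, path | {node}) for nxt in graph.get(node, []))
        return any(visit(n, set()) for n in graph)

    # The problematic interleaving: T2 updates A and B between T1's two updates.
    bad = [("T1", "R", "A"), ("T1", "W", "A"),
           ("T2", "R", "A"), ("T2", "W", "A"),
           ("T2", "R", "B"), ("T2", "W", "B"),
           ("T1", "R", "B"), ("T1", "W", "B")]
    print(precedence_graph(bad))             # contains both T1 -> T2 and T2 -> T1
    print(has_cycle(precedence_graph(bad)))  # True: not equivalent to any serial schedule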

  5. Lock-Based Concurrency Control. Each Xact must obtain an S (shared) lock on an object before reading, and an X (exclusive) lock on an object before writing. If an Xact holds an X lock on an object, no other Xact can get a lock (S or X) on that object. T1: S(A), R(A), unlock(A); T2: X(A), R(A), W(A), unlock(A). Two-Phase Locking (2PL). Each Xact must obtain an S (shared) lock on an object before reading, and an X (exclusive) lock on an object before writing. A transaction cannot request additional locks once it releases any locks. If an Xact holds an X lock on an object, no other Xact can get a lock (S or X) on that object.
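
The compatibility rule (an X lock excludes every other lock, S locks share) and the 2PL "no new locks after a release" rule can be captured in a small lock table. The following Python sketch is an assumption made for this writeup, not the slides' design: it simply refuses a conflicting request instead of blocking the transaction.

    # Minimal lock manager: obj -> [mode, set of holders]; S is shared, X is exclusive.
    class LockManager:
        def __init__(self):
            self.locks = {}          # obj -> [mode, {xacts}]
            self.shrinking = set()   # xacts that released a lock (2PL: no further requests)

        def acquire(self, xact, obj, mode):
            if xact in self.shrinking:
                return False                     # would violate two-phase locking
            entry = self.locks.get(obj)
            if entry is None:
                self.locks[obj] = [mode, {xact}]
                return True
            held_mode, holders = entry
            if mode == "S" and held_mode == "S":
                holders.add(xact)                # S locks are compatible with S locks
                return True
            return xact in holders and held_mode == mode   # X conflicts with everything else

        def release(self, xact, obj):
            self.shrinking.add(xact)             # start of the shrinking phase
            _, holders = self.locks.get(obj, (None, set()))
            holders.discard(xact)
            if not holders:
                self.locks.pop(obj, None)

    lm = LockManager()
    print(lm.acquire("T1", "A", "S"))   # True
    print(lm.acquire("T2", "A", "X"))   # False: T1 holds S(A)
    lm.release("T1", "A")
    print(lm.acquire("T2", "A", "X"))   # True
    print(lm.acquire("T1", "B", "S"))   # False: T1 already released a lock (2PL)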

  6. Strict 2PL. Each Xact must obtain an S (shared) lock on an object before reading, and an X (exclusive) lock on an object before writing. All locks held by a transaction are released when the transaction completes. If an Xact holds an X lock on an object, no other Xact can get a lock (S or X) on that object. Strict 2PL allows only serializable schedules. Deadlocks and Deadlock Detection. Deadlock: a cycle of transactions waiting for locks to be released by each other. Create a waits-for graph: nodes are transactions; there is an edge from Ti to Tj if Ti is waiting for Tj to release a lock. Periodically check for cycles in the waits-for graph.

  7. Deadlock Detection (Continued). Example: T1: S(A), R(A), S(B); T2: X(B), W(B), X(C); T3: S(C), R(C), X(A); T4: X(B). (Waits-for graph: T1 -> T2, T2 -> T3, T3 -> T1, T4 -> T2; the cycle T1 -> T2 -> T3 -> T1 is a deadlock.) Distributed Transactions. A transaction may access data at several sites. Each site has a local transaction manager responsible for: maintaining a log for recovery purposes; participating in coordinating the concurrent execution of the transactions executing at that site. Each site has a transaction coordinator, which is responsible for: starting the execution of transactions that originate at the site; distributing subtransactions to appropriate sites for execution; coordinating the termination of each transaction that originates at the site, which may result in the transaction being committed at all sites or aborted at all sites.
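
The periodic cycle check can be sketched directly on this example. The Python fragment below is an illustrative assumption, not from the slides: it encodes the waits-for edges listed in the parenthetical above and reports the cycle.

    # Waits-for graph for the example: an edge Ti -> Tj means Ti waits for Tj.
    waits_for = {
        "T1": ["T2"],   # T1 requested S(B); T2 holds X(B)
        "T2": ["T3"],   # T2 requested X(C); T3 holds S(C)
        "T3": ["T1"],   # T3 requested X(A); T1 holds S(A)
        "T4": ["T2"],   # T4 requested X(B); T2 holds X(B)
    }

    def find_cycle(graph):
        """Return one cycle as a list of transactions, or None if there is none."""
        def dfs(node, path):
            if node in path:
                return path[path.index(node):] + [node]
            for nxt in graph.get(node, []):
                cycle = dfs(nxt, path + [node])
                if cycle:
                    return cycle
            return None
        for start in graph:
            cycle = dfs(start, [])
            if cycle:
                return cycle
        return None

    print(find_cycle(waits_for))   # ['T1', 'T2', 'T3', 'T1'] -> deadlock detected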

  8. Transaction System Architecture (architecture figure omitted). System Failure Modes. Failures unique to distributed systems: Failure of a site. Loss of messages, handled by network transmission control protocols such as TCP/IP. Failure of a communication link, handled by network protocols, by routing messages via alternative links. Network partition: a network is said to be partitioned when it has been split into two or more subsystems that lack any connection between them (note: a subsystem may consist of a single node). Network partitioning and site failures are generally indistinguishable.

  9. Commit Protocols. Commit protocols are used to ensure atomicity across sites: a transaction which executes at multiple sites must either be committed at all the sites or aborted at all the sites; it is not acceptable to have a transaction committed at one site and aborted at another. The two-phase commit (2PC) protocol is widely used. The three-phase commit (3PC) protocol is more complicated and more expensive, but avoids some drawbacks of the two-phase commit protocol; this protocol is not used in practice. Two-Phase Commit Protocol (2PC). Assumes the fail-stop model: failed sites simply stop working and do not cause any other harm, such as sending incorrect messages to other sites. Execution of the protocol is initiated by the coordinator after the last step of the transaction has been reached. The protocol involves all the local sites at which the transaction executed. Let T be a transaction initiated at site Si, and let the transaction coordinator at Si be Ci.

  10. Phase 1: Obtaining a Decision. The coordinator asks all participants to prepare to commit transaction T: Ci adds the record <prepare T> to the log and forces the log to stable storage, then sends prepare T messages to all sites at which T executed. Upon receiving the message, the transaction manager at a site determines if it can commit the transaction: if not, it adds a record <no T> to the log and sends an abort T message to Ci; if the transaction can be committed, it adds the record <ready T> to the log, forces all records for T to stable storage, and sends a ready T message to Ci. Phase 2: Recording the Decision. T can be committed if Ci received a ready T message from all the participating sites; otherwise T must be aborted. The coordinator adds a decision record, <commit T> or <abort T>, to the log and forces the record onto stable storage. Once the record reaches stable storage it is irrevocable (even if failures occur). The coordinator sends a message to each participant informing it of the decision (commit or abort), and participants take the appropriate action locally.
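
Putting the two phases together, a highly simplified, in-process sketch of the message flow might look as follows. The class and method names are assumptions made for this illustration; a real implementation forces log records to stable storage and exchanges network messages, which are only simulated here as list appends and direct calls.

    # Simplified 2PC: the coordinator Ci asks every participant to prepare (phase 1)
    # and then broadcasts commit or abort depending on the votes (phase 2).
    class Participant:
        def __init__(self, name, can_commit=True):
            self.name, self.can_commit, self.log = name, can_commit, []

        def prepare(self, t):                    # phase 1 at a participant site
            if self.can_commit:
                self.log.append(f"<ready {t}>")  # force records for T, then vote yes
                return "ready"
            self.log.append(f"<no {t}>")
            return "abort"

        def decide(self, t, decision):           # phase 2 at a participant site
            self.log.append(f"<{decision} {t}>")

    class Coordinator:
        def __init__(self, participants):
            self.participants, self.log = participants, []

        def run(self, t):
            self.log.append(f"<prepare {t}>")                   # forced to stable storage
            votes = [p.prepare(t) for p in self.participants]   # "prepare T" messages
            decision = "commit" if all(v == "ready" for v in votes) else "abort"
            self.log.append(f"<{decision} {t}>")                # irrevocable once logged
            for p in self.participants:
                p.decide(t, decision)                           # inform every participant
            return decision

    sites = [Participant("S1"), Participant("S2"), Participant("S3", can_commit=False)]
    print(Coordinator(sites).run("T"))   # 'abort', because S3 voted no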

  11. Handling of Failures: Site Failure. When site Sk recovers, it examines its log to determine the fate of transactions active at the time of the failure. If the log contains a <commit T> record: the site executes redo(T). If the log contains an <abort T> record: the site executes undo(T). If the log contains a <ready T> record: the site must consult Ci to determine the fate of T; if T committed, redo(T), and if T aborted, undo(T). If the log contains no control records concerning T, this implies that Sk failed before responding to the prepare T message from Ci; since the failure of Sk precludes the sending of such a response, Ci must abort T, and Sk must execute undo(T). Handling of Failures: Coordinator Failure. If the coordinator fails while the commit protocol for T is executing, then the participating sites must decide on T's fate: 1. If an active site contains a <commit T> record in its log, then T must be committed. 2. If an active site contains an <abort T> record in its log, then T must be aborted. 3. If some active participating site does not contain a <ready T> record in its log, then the failed coordinator Ci cannot have decided to commit T, and T can therefore be aborted. 4. If none of the above cases holds, then all active sites must have a <ready T> record in their logs but no additional control records (such as <abort T> or <commit T>); in this case the active sites must wait for Ci to recover to learn the decision. Blocking problem: active sites may have to wait for the failed coordinator to recover.
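
The site-failure rules reduce to a simple inspection of the recovering site's log. Here is a hedged sketch in the same toy Python model; the ask_coordinator callback is an assumption standing in for "consult Ci".

    # Decide what a recovering site Sk should do with transaction T, given its log.
    def recover(log, t, ask_coordinator):
        if f"<commit {t}>" in log:
            return "redo"                      # decision was commit: redo(T)
        if f"<abort {t}>" in log:
            return "undo"                      # decision was abort: undo(T)
        if f"<ready {t}>" in log:
            # Sk voted yes but never learned the outcome: it must consult Ci.
            return "redo" if ask_coordinator(t) == "commit" else "undo"
        # No control records: Sk failed before replying to prepare T, so T was aborted.
        return "undo"

    print(recover(["<ready T>"], "T", lambda t: "commit"))   # 'redo'
    print(recover([], "T", lambda t: "commit"))              # 'undo'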
