Chapter 18.2: Distributed Coordination
Silberschatz, Galvin and Gagne ©2005, Operating System Concepts

Contents:
- Chapter Objectives
- Concurrency Control
- Concurrency Control - Background
- Concurrency Control with Locking Protocols
- Non-Replicated Scheme
- Replicated Schemes: Single-Coordinator Approach
- Replicated Schemes: The Majority Protocol
- Replicated Schemes: The Biased Protocol
- Replicated Locks: The Primary Copy
- Concurrency Control with Time-Stamping
- Generation of Unique Timestamps
- Deadlock Prevention
- Time-Stamped Deadlock-Prevention Scheme
- Wait-Die Scheme
- Wound-Wait Scheme
- Appraisal of the Wait-Die and Wound-Wait Schemes

Chapter 18: Distributed Coordination
- Chapter 18.1: Event Ordering; Mutual Exclusion; Atomicity
- Chapter 18.2: Concurrency Control; Deadlock Handling
- Chapter 18.3: Deadlock Prevention; Election Algorithms; Reaching Agreement

Chapter Objectives
- To show how some concurrency-control schemes can be modified for use in a distributed environment
- To present schemes for handling deadlock prevention, deadlock avoidance, and deadlock detection in a distributed system

Concurrency Control
- Locking protocols using replicated and non-replicated schemes
- Time-stamping

Concurrency Control - Background
- While the concurrency issues from earlier chapters still apply, distributed control significantly complicates these 'testy' issues.
- To begin with, we must recognize that transactions may be either local or global. Clearly, local considerations are simpler to deal with.
- But with global transactions, we must modify the centralized concurrency schemes to accommodate the distribution of transactions.
- To put transactions into perspective:
  - A local transaction executes at only one site.
  - A global transaction executes at several sites.
- As before, transaction managers are responsible for coordinating the execution of transactions that access data at the local sites.

Concurrency Control with Locking Protocols
- The two-phase locking protocol we have discussed before can be used in a distributed environment by changing how the lock manager is implemented.
- There are two primary implementation schemes: non-replicated schemes and replicated schemes.
- We will first discuss the non-replicated scheme.

Non-Replicated Scheme
- This is the relatively easy case.
- Each site maintains a local lock manager that administers lock and unlock requests for the data items stored at that site.
- A simple implementation involves two message transfers for handling lock requests and a single message transfer for unlock requests.
- When a transaction wishes to lock a data item Q at some site S, it sends the lock manager at S a lock request. If site S can comply, the lock manager replies that the lock has been granted; otherwise, the request is delayed until the lock can be granted.
- This scheme is simple both conceptually and in implementation.
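The per-site lock manager just described can be sketched as follows. This is a minimal illustration, not the book's implementation; the class and method names (`LocalLockManager`, `request_lock`, `release_lock`) are mine, and real messages between sites are reduced to return values so the two-message lock / one-message unlock pattern is visible.

```python
# Hypothetical sketch of the non-replicated scheme: each site runs its own
# lock manager for the data items stored at that site. All names are
# illustrative; network messages are modeled as method calls and returns.

class LocalLockManager:
    """Administers lock/unlock requests for the data items at one site."""

    def __init__(self, site_id):
        self.site_id = site_id
        self.holders = {}   # data item -> transaction currently holding it
        self.waiting = {}   # data item -> queue of delayed lock requests

    def request_lock(self, txn, item):
        """Two messages in the simple case: the request and the reply."""
        if item not in self.holders:
            self.holders[item] = txn
            return "granted"            # reply message to the requester
        self.waiting.setdefault(item, []).append(txn)
        return "delayed"                # request waits until the lock frees

    def release_lock(self, txn, item):
        """A single message: no reply is needed for an unlock."""
        if self.holders.get(item) == txn:
            del self.holders[item]
            queue = self.waiting.get(item)
            if queue:
                nxt = queue.pop(0)
                self.holders[item] = nxt
                return ("granted", nxt)  # grant message to the next waiter
        return None
```

For example, if T1 locks Q, a later request from T2 is delayed, and T1's single unlock message triggers the delayed grant to T2.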
- As stated, only two-plus-one message transfers are needed.
- Deadlock handling, however, can be much more complex, since lock and unlock requests are not made at a single site. Much more ahead on this one!
- This is the only non-replicated scheme we will discuss here.

Replicated Schemes: Single-Coordinator Approach
- In this approach, a single lock manager resides at a single chosen site. All lock and unlock requests are made at that site.
- So, when some transaction needs to lock a data item, it sends a request to this single site, where the lock manager determines whether the lock can be granted.
  - If so, it sends a positive reply back to the site requesting the lock.
  - If not, the request is delayed; once the lock can be granted, a positive message is then transmitted to the requesting site.
- Since we have replication, the requesting site can read the data item from any site.
- With a write operation, however, all sites where the replicas are found must be involved in the writing.
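The read-anywhere / write-everywhere behavior under a single coordinator can be sketched roughly as below. This is a hypothetical illustration under simplifying assumptions (one lock per item, no real messaging); the names `Coordinator`, `read`, and `write` are mine, not from the text.

```python
import random

# Hypothetical sketch of the single-coordinator approach: one chosen site
# holds the only lock manager. Reads may use any one replica; writes must
# update every replica. Names and structures are illustrative.

class Coordinator:
    def __init__(self):
        self.locks = {}  # data item -> transaction holding the lock

    def lock(self, txn, item):
        if item not in self.locks:
            self.locks[item] = txn
            return True      # positive reply to the requesting site
        return False         # request delayed until the lock is released

    def unlock(self, txn, item):
        if self.locks.get(item) == txn:
            del self.locks[item]

def read(coord, txn, item, replicas):
    """With replication, a read can be served by any one replica site."""
    if coord.lock(txn, item):
        site = random.choice(list(replicas))   # pick any replica site
        return replicas[site][item]
    return None

def write(coord, txn, item, value, replicas):
    """A write must involve every site holding a replica of the item."""
    if coord.lock(txn, item):
        for site in replicas:
            replicas[site][item] = value
        return True
    return False
```

Note how every request, read or write, first passes through the one coordinator; that is the source of both the bottleneck and the single point of failure discussed next.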
Replicated Schemes: Single-Coordinator Approach (continued)
- Now, here's the deal:
- On the plus side: we clearly have simple implementation and simple deadlock handling.
- On the negative side:
  - There is a real possibility of a bottleneck, because all requests must go through this one central site.
  - Also, a big one: a single central site is vulnerable; we lose the concurrency controller entirely if this single site fails.
- Compromise: a multiple-coordinator approach distributes the lock-manager function over several sites. This helps the bottleneck problem but significantly worsens the deadlock problem, because lock and unlock requests are no longer made at a single site!
- Much more ahead in a major section of this chapter addressing deadlock.

Replicated Schemes: The Majority Protocol
- Similar to the non-replicated scheme, but here we have a lock manager at each site controlling locks for all data (or data replicas) stored at that site.
- So, when some transaction needs to lock a data item replicated at several different sites, lock requests must be sent to more than one-half of the sites where the data item is stored.
- Each lock manager then determines whether the lock can be granted; the transaction cannot continue until a majority of the locks are granted.
- Clearly, if a lock cannot be granted at some site S, that site's positive response is delayed until the request can be granted.
- The majority protocol avoids the drawbacks of central control by dealing with replicated data in a decentralized way.
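The majority rule above can be sketched in a few lines. This is a simplified, hypothetical illustration (names `SiteLockManager` and `majority_lock` are mine); it requests the lock at every replica site and succeeds only when a strict majority of the per-site lock managers grant it.

```python
# Hypothetical sketch of the majority protocol: a data item replicated at
# n sites is considered locked only after more than n/2 of the per-site
# lock managers grant the request. Names are illustrative.

class SiteLockManager:
    def __init__(self):
        self.locks = {}  # data item -> transaction holding the lock here

    def try_lock(self, txn, item):
        if self.locks.get(item) in (None, txn):
            self.locks[item] = txn
            return True
        return False     # this site's response is delayed, not granted

def majority_lock(txn, item, sites):
    """Send lock requests to every replica site; succeed on a strict majority."""
    granted = [s for s in sites if s.try_lock(txn, item)]
    return len(granted) > len(sites) // 2
```

With three replica sites, a transaction needs grants from at least two of them; a competing transaction that cannot assemble its own majority must wait, which is what keeps conflicting writers from proceeding concurrently.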