CMU CS 15740 - lect17

Synchronization
Todd C. Mowry
CS 740
November 1, 2000

Topics:
• Locks
• Barriers
• Hardware primitives

Types of Synchronization
Mutual exclusion:
• Locks
Event synchronization:
• Global or group-based (barriers)
• Point-to-point

Busy Waiting vs. Blocking
Busy-waiting is preferable when:
• scheduling overhead is larger than the expected wait time
• processor resources are not needed for other tasks
• schedule-based blocking is inappropriate (e.g., in the OS kernel)

A Simple Lock
    lock:   ld  register, location
            cmp register, #0
            bnz lock
            st  location, #1
            ret
    unlock: st  location, #0
            ret

Need Atomic Primitive!
• Test&Set
• Swap
• Fetch&Op
  – Fetch&Incr, Fetch&Decr
• Compare&Swap

Test&Set Based Lock
    lock:   t&s register, location
            bnz lock
            ret
    unlock: st  location, #0
            ret

T&S Lock Performance
Code: lock; delay(c); unlock;
Same total number of lock calls as p increases; measure time per transfer.
[Figure: time per transfer (µs) vs. number of processors, for Test&set (c = 0), Test&set with exponential backoff (c = 3.64), Test&set with exponential backoff (c = 0), and the ideal case.]

Test and Test and Set
    A: while (lock != free)
           ;
       if (test&set(lock) == free) {
           critical section;
       }
       else goto A;
(+) spinning happens in the cache
(-) can still generate a lot of traffic when many processors go to do the test&set

Test and Set with Backoff
Upon failure, delay for a while before retrying:
• either constant delay or exponential backoff
Tradeoffs:
(+) much less network traffic
(-) exponential backoff can cause starvation for high-contention locks
  – new requestors back off for shorter times
But exponential backoff has been found to work best in practice.

Test and Set with Update
Test&Set sends updates to the processors that cache the lock.
Tradeoffs:
(+) good for bus-based machines
(-) still lots of traffic on distributed networks
The main problem with test&set-based schemes is that a lock release causes all waiters to try to get the lock, each using a test&set to do so.

Ticket Lock (fetch&incr based)
Two counters:
• next_ticket (number of requestors)
• now_serving (number of releases that have happened)
Algorithm:
• First do a fetch&incr on next_ticket (not a test&set).
• When a release happens, poll the value of now_serving.
  – if it equals my_ticket, then I win
Use a delay while polling; but how much?

Ticket Lock Tradeoffs
(+) guaranteed FIFO order; no starvation possible
(+) latency can be low if fetch&incr is cacheable
(+) traffic can be quite low
(-) but traffic is not guaranteed to be O(1) per lock acquire

Array-Based Queueing Locks
Every process spins on a unique location, rather than on a single now_serving counter.
fetch&incr gives a process the address on which to spin.
Tradeoffs:
(+) guarantees FIFO order (like the ticket lock)
(+) O(1) traffic with coherent caches (unlike the ticket lock)
(-) requires space per lock proportional to P

List-Based Queueing Locks (MCS)
All the other good things + O(1) traffic even without coherent caches (spin locally).
Uses compare&swap to build linked lists in software.
Locally-allocated flag per list node to spin on.
Can work with fetch&store instead, but then loses the FIFO guarantee.
Tradeoffs:
(+) less storage than array-based locks
(+) O(1) traffic even without coherent caches
(-) compare&swap is not easy to implement

Implementing Fetch&Op
Load Linked / Store Conditional:
    lock:   ll   reg1, location   /* LL location to reg1 */
            bnz  reg1, lock       /* check if location locked */
            sc   location, reg2   /* SC reg2 into location */
            beqz reg2, lock       /* if SC failed, start again */
            ret
    unlock: st   location, #0     /* write 0 to location */
            ret
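
The "A Simple Lock" above is not atomic: two processors can both load 0 and both store 1. The test&set, test-and-test-and-set, and backoff slides fix this. As a minimal C sketch of those ideas (not code from the lecture), the following uses C11 <stdatomic.h>, with atomic_exchange standing in for test&set and a crude delay loop standing in for backoff; the names tts_lock_t, tts_acquire, and tts_release are assumptions of this sketch.

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        atomic_bool held;                      /* false = free, true = held */
    } tts_lock_t;

    static void tts_acquire(tts_lock_t *l)
    {
        unsigned delay = 1;                    /* backoff, in spin iterations */
        for (;;) {
            /* "test": spin on an ordinary load, which hits in the cache */
            while (atomic_load_explicit(&l->held, memory_order_relaxed))
                ;
            /* "test&set": one atomic exchange; only this generates coherence traffic */
            if (!atomic_exchange_explicit(&l->held, true, memory_order_acquire))
                return;                        /* lock acquired */
            /* failed: back off exponentially before retrying */
            for (volatile unsigned i = 0; i < delay; i++)
                ;
            if (delay < (1u << 16))
                delay <<= 1;
        }
    }

    static void tts_release(tts_lock_t *l)
    {
        atomic_store_explicit(&l->held, false, memory_order_release);
    }

Even so, every release invalidates the line for all spinners and they race to the exchange, which is exactly the weakness the ticket and queueing locks address.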
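
The "Ticket Lock" slides map directly onto a fetch&incr primitive such as C11 atomic_fetch_add. A sketch under that assumption (the ticket_lock_t name and the choice to poll without a delay are ours):

    #include <stdatomic.h>

    typedef struct {
        atomic_uint next_ticket;   /* incremented by each requestor (fetch&incr) */
        atomic_uint now_serving;   /* incremented on each release */
    } ticket_lock_t;               /* initialize both counters to 0 */

    static void ticket_acquire(ticket_lock_t *l)
    {
        /* take a ticket with fetch&incr -- no test&set involved */
        unsigned my_ticket = atomic_fetch_add_explicit(&l->next_ticket, 1,
                                                       memory_order_relaxed);
        /* poll now_serving until my ticket comes up; a real implementation
           might delay in proportion to (my_ticket - now_serving) */
        while (atomic_load_explicit(&l->now_serving, memory_order_acquire) != my_ticket)
            ;
    }

    static void ticket_release(ticket_lock_t *l)
    {
        /* only the holder writes now_serving, so a plain increment is safe */
        unsigned next = atomic_load_explicit(&l->now_serving, memory_order_relaxed) + 1;
        atomic_store_explicit(&l->now_serving, next, memory_order_release);
    }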
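
For the "Array-Based Queueing Locks" slide, a sketch in the same C11 style; the MAX_PROCS bound, the padding to one flag per cache line, and the API that returns the slot index are all assumptions made here to keep the example self-contained.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define MAX_PROCS 64   /* assumed bound on P; space per lock is O(P) */

    typedef struct {
        struct {
            atomic_bool has_lock;
            char pad[64 - sizeof(atomic_bool)];   /* one flag per cache line */
        } slot[MAX_PROCS];
        atomic_uint next_slot;
    } array_lock_t;

    static void array_lock_init(array_lock_t *l)
    {
        for (int i = 0; i < MAX_PROCS; i++)
            atomic_store(&l->slot[i].has_lock, false);
        atomic_store(&l->slot[0].has_lock, true);   /* first arrival proceeds */
        atomic_store(&l->next_slot, 0);
    }

    /* returns my slot index; pass it back to array_release */
    static unsigned array_acquire(array_lock_t *l)
    {
        /* fetch&incr hands each process its own location to spin on */
        unsigned me = atomic_fetch_add(&l->next_slot, 1) % MAX_PROCS;
        while (!atomic_load_explicit(&l->slot[me].has_lock, memory_order_acquire))
            ;                                       /* spin on my location only */
        return me;
    }

    static void array_release(array_lock_t *l, unsigned me)
    {
        atomic_store_explicit(&l->slot[me].has_lock, false, memory_order_relaxed);
        /* hand the lock to the next slot in FIFO order */
        atomic_store_explicit(&l->slot[(me + 1) % MAX_PROCS].has_lock, true,
                              memory_order_release);
    }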
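
The "List-Based Queueing Locks (MCS)" slide can likewise be sketched with C11 atomics, using atomic_exchange as fetch&store for the enqueue and compare&swap on release. The caller supplying a locally-allocated mcs_node_t per acquire is an assumption of this sketch, not something the slide prescribes.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;            /* locally-allocated flag this waiter spins on */
    } mcs_node_t;

    typedef struct {
        _Atomic(mcs_node_t *) tail;    /* NULL when the lock is free */
    } mcs_lock_t;

    static void mcs_acquire(mcs_lock_t *l, mcs_node_t *me)
    {
        atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&me->locked, true, memory_order_relaxed);
        /* fetch&store: atomically append myself to the tail of the queue */
        mcs_node_t *pred = atomic_exchange_explicit(&l->tail, me, memory_order_acq_rel);
        if (pred != NULL) {
            /* someone is ahead of me; link in and spin on my local flag */
            atomic_store_explicit(&pred->next, me, memory_order_release);
            while (atomic_load_explicit(&me->locked, memory_order_acquire))
                ;
        }
    }

    static void mcs_release(mcs_lock_t *l, mcs_node_t *me)
    {
        mcs_node_t *succ = atomic_load_explicit(&me->next, memory_order_acquire);
        if (succ == NULL) {
            /* no visible successor: compare&swap tail from me back to NULL */
            mcs_node_t *expected = me;
            if (atomic_compare_exchange_strong_explicit(&l->tail, &expected, NULL,
                                                        memory_order_acq_rel,
                                                        memory_order_acquire))
                return;                /* queue really was empty */
            /* a successor is mid-enqueue; wait for it to link itself in */
            while ((succ = atomic_load_explicit(&me->next, memory_order_acquire)) == NULL)
                ;
        }
        /* pass the lock by clearing the successor's local flag */
        atomic_store_explicit(&succ->locked, false, memory_order_release);
    }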
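
The "Implementing Fetch&Op" slide builds a lock from LL/SC. As a complementary sketch (our addition, not from the slides), fetch&incr can also be synthesized from compare&swap, the other universal primitive the lecture lists:

    #include <stdatomic.h>

    /* fetch&incr built from compare&swap: read the old value, try to install
     * old + 1, and retry if another processor changed the location meanwhile. */
    static unsigned fetch_and_incr(atomic_uint *location)
    {
        unsigned old = atomic_load_explicit(location, memory_order_relaxed);
        while (!atomic_compare_exchange_weak_explicit(location, &old, old + 1,
                                                      memory_order_acq_rel,
                                                      memory_order_relaxed))
            ;   /* on failure, 'old' is refreshed with the current value */
        return old;
    }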

Barriers
We will discuss five barriers:
• centralized
• software combining tree
• dissemination barrier
• tournament barrier
• MCS tree-based barrier

Centralized Barrier
Basic idea:
• notify a single shared counter when you arrive
• poll that shared location until all have arrived
A simple implementation requires polling/spinning twice:
• first to ensure that all procs have left the previous barrier
• second to ensure that all procs have arrived at the current barrier
Solution to get one spin: sense reversal (a C sketch follows these slides).

Software Combining Tree Barrier
Writes go into one tree for barrier arrival.
Reads from another tree allow procs to continue.
Sense reversal distinguishes consecutive barriers.
• Flat counter: contention
• Tree-structured: little contention

Dissemination Barrier
log P rounds of synchronization.
In round k, proc i synchronizes with proc (i + 2^k) mod P (a C sketch follows these slides).
Advantage:
• Can statically allocate flags to avoid remote spinning.

Tournament Barrier
Binary combining tree.
Representative processor at a node is statically chosen:
• no fetch&op needed
In round k, proc i = 2^k sets a flag for proc j = i - 2^k:
• i then drops out of the tournament and j proceeds in the next round
• i waits for the global flag signalling completion of the barrier to be set
  – could use a combining wakeup tree

MCS Software Barrier
Modifies the tournament barrier to allow static allocation in the wakeup tree, and to use sense reversal.
Every processor is a node in two P-node trees:
• has pointers to its parent, building a fan-in-4 arrival tree
• has pointers to its children, building a fan-out-2 wakeup tree

Barrier Recommendations
Criteria:
• length of critical path
• number of network transactions
• space requirements
• atomic operation requirements

Space Requirements
Centralized:
• constant
MCS, combining tree:
• O(P)
Dissemination, tournament:
• O(P log P)

Network Transactions
Centralized, combining tree:
• O(P) if broadcast and coherent caches
• unbounded otherwise
Dissemination:
• O(P log P)
Tournament, MCS:
• O(P)

Critical Path Length
If independent parallel network paths are available:
• all are O(log P) except centralized, which is O(P)
Otherwise (e.g., shared bus):
• linear factors dominate

Primitives Needed
Centralized and combining tree:
• atomic increment
• atomic decrement
Others:
• atomic read
• atomic write

Barrier Recommendations
Without broadcast on distributed memory:
• Dissemination
  – MCS is good; only critical path …
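
A minimal C sketch of the sense-reversing centralized barrier from the "Centralized Barrier" slide. The struct layout, the per-thread local_sense pointer, and the initialization convention (count = 0, sense = false, each thread's local_sense = true) are assumptions of this sketch.

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        atomic_int  count;     /* number of processors that have arrived */
        atomic_bool sense;     /* flips each time the barrier completes */
        int         nprocs;    /* total number of participants */
    } central_barrier_t;       /* init: count = 0, sense = false */

    /* Each processor keeps a private local_sense (initially true) and passes
     * a pointer to it on every call. */
    static void central_barrier(central_barrier_t *b, bool *local_sense)
    {
        bool my_sense = *local_sense;
        if (atomic_fetch_add_explicit(&b->count, 1, memory_order_acq_rel)
                == b->nprocs - 1) {
            /* last arriver: reset the count, then release everyone by flipping sense */
            atomic_store_explicit(&b->count, 0, memory_order_relaxed);
            atomic_store_explicit(&b->sense, my_sense, memory_order_release);
        } else {
            /* everyone else spins once, on the shared sense flag */
            while (atomic_load_explicit(&b->sense, memory_order_acquire) != my_sense)
                ;
        }
        *local_sense = !my_sense;   /* sense reversal for the next barrier episode */
    }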
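
A sketch of the communication pattern from the "Dissemination Barrier" slide: in round k, processor i signals processor (i + 2^k) mod P and waits for a signal from (i - 2^k) mod P. The fixed P, the flag layout, and the use of counters instead of parity/sense flags are simplifications assumed here; production implementations (e.g., MCS-style code) reuse boolean flags with sense reversal instead.

    #include <stdatomic.h>

    #define P     8                     /* number of processors; assumed fixed */
    #define LOGP  3                     /* ceil(log2(P)) rounds */

    /* arrive[i][k] counts how many times proc i has been signalled in round k.
     * Counters let the flags be reused across barrier episodes even if a fast
     * processor runs ahead into the next episode. */
    static atomic_uint arrive[P][LOGP];
    static unsigned    seen[P][LOGP];   /* touched only by proc i: signals consumed */

    static void dissemination_barrier(int i)
    {
        for (int k = 0; k < LOGP; k++) {
            /* in round k, proc i signals proc (i + 2^k) mod P ... */
            int partner = (i + (1 << k)) % P;
            atomic_fetch_add_explicit(&arrive[partner][k], 1, memory_order_release);
            /* ... and waits for the signal from proc (i - 2^k) mod P */
            while (atomic_load_explicit(&arrive[i][k], memory_order_acquire)
                       <= seen[i][k])
                ;
            seen[i][k]++;
        }
    }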

