CORNELL CS 4410 - Race Conditions Critical Sections Dekker’s Algorithm



Contents: Announcements · Review: CPU Scheduling · Goals for Today · Background · Producer-Consumer · Race Condition · What just happened? · Two threads, one counter · Shared counters · Race conditions · Scheduler assumptions · Scheduler Assumptions · Critical Section Goals · Solving the problem · Solving the problem: Take 2 · Solving the problem: Take 3 · A solution that works · Napkin analysis of Dekker's algorithm · Why does it work? · Summary

Race Conditions, Critical Sections, Dekker's Algorithm

Announcements
• CS 4411 Project due the following Wednesday, September 17th
  – initial design documents due first, on Monday, September 8th

Review: CPU Scheduling
• Scheduling problem
  – Given a set of processes that are ready to run
  – Which one to select next
• Scheduling criteria
  – CPU utilization, throughput, turnaround, waiting, response
  – Predictability: variance in any of these measures
• Scheduling algorithms
  – FCFS, SJF, SRTF, RR
  – Multilevel (Feedback-)Queue Scheduling

Goals for Today
• Introduction to synchronization
  – ...or: the trickiest bit of this course
• Background
• Race conditions
• The critical-section problem
• Dekker's solution

Background
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
• Suppose that we wanted to provide a solution to the producer-consumer problem that fills all the buffers.
  – Assume an integer count keeps track of the number of full buffers.
  – Initially, count is set to 0.
  – It is incremented by the producer after it produces a new buffer.
  – It is decremented by the consumer after it consumes a buffer.

Producer-Consumer
• Producer

    while (true) {
        /* produce an item and put it in nextProduced */
        while (count == BUFFER_SIZE)
            ;                         /* do nothing: buffer full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

• Consumer

    while (true) {
        while (count == 0)
            ;                         /* do nothing: buffer empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextConsumed */
    }

Race Condition
• count++ is not an atomic operation. It could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

• count-- is not an atomic operation. It could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

• Consider this execution interleaving, with count = 5 initially:

    S0: producer executes register1 = count          {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = count          {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes count = register1          {count = 6}
    S5: consumer executes count = register2          {count = 4}

What just happened?
• Threads share global memory
• When a process contains multiple threads, they have
  – Private registers and stack memory (the context-switching mechanism needs to save and restore registers when switching from thread to thread)
  – Shared access to the remainder of the process "state"
• This can result in race conditions

Two threads, one counter
• Popular web server
• Uses multiple threads to speed things up
• Simple shared-state error:
  – each thread increments a shared counter to track the number of hits
• What happens when two threads concurrently execute

    hits = hits + 1;

Shared counters
• Possible result: lost update!
• One other possible result: everything works. Difficult to debug!
• Called a "race condition"
• A possible interleaving (time flows downward; hits starts at 0):

    T1: read hits (0)
    T2: read hits (0)
    T1: hits = 0 + 1
    T2: hits = 0 + 1

  Both threads write 1, so one increment is lost: hits ends up 1, not 2.

Race conditions
• Def: a timing-dependent error involving shared state
  – Whether it happens depends on how the threads are scheduled
  – In effect, once thread A starts doing something, it needs to "race" to finish it, because if thread B looks at the shared memory region before A is done, it may see something inconsistent
• Hard to detect:
  – All possible schedules have to be safe
    • The number of possible schedule permutations is huge
    • Some bad schedules? Some that will work sometimes?
  – They are intermittent
    • Timing-dependent = small changes can hide the bug
  – Celebrate if the bug is deterministic and repeatable!

Scheduler assumptions
If i is shared and initialized to 0:

    Process a:              Process b:
    while (i < 10)          while (i > -10)
        i = i + 1;              i = i - 1;
    print "A won!";         print "B won!";

• Who wins?
• Is it guaranteed that someone wins?
• What if both threads run on identical-speed CPUs, executing in parallel?

Scheduler Assumptions
• Normally we assume that
  – A scheduler always gives every executable thread opportunities to run
    • In effect, each thread makes finite progress
  – But schedulers aren't always fair
    • Some threads may get more chances than others
  – To reason about worst-case behavior we sometimes think of the scheduler as an adversary trying to "mess up" the algorithm

Critical Section Goals
• Threads do some stuff but eventually might try to access shared data; each one brackets that access:

    CSEnter();
    /* critical section */
    CSExit();

• Perhaps they loop back and enter again (perhaps not!)
• We would like
  – Safety (aka mutual exclusion)
    • No more than one thread can be in a critical section at any time
  – Liveness (aka progress)
    • A thread that is seeking to enter the critical section will eventually succeed
  – Bounded waiting
    • A bound must exist on the number of times that other
threads are allowed to enter their critical sections after a thread has made a request to enter its critical section and before that request is granted
• Assume that each process executes at a nonzero speed
• No assumption concerning the relative speed of the N processes
• Ideally we would like fairness as well
  – If two threads are both trying to enter a critical section, they have equal chances of success
  – ... in practice, fairness is rarely guaranteed

Solving the problem
• A first idea:
  – Have a boolean flag, inside. Initially false.

    CSEnter() {
        while (inside)
            continue;
        inside = true;
    }

    CSExit() {
        inside = false;
    }

• Now ask: is this safe? Live? Bounded waiting?
• The code is unsafe: thread 0 could finish the while test when inside is false, but then thread 1 might call CSEnter() before thread 0 can set inside to true!

Solving the problem: Take 2
• A different idea (assumes just two threads):
  – Have a boolean flag per thread, inside[i]

    CSEnter(int i) {
        inside[i] = true;
        while (inside[i^1])
            continue;
    }

