Berkeley COMPSCI 162 - Lecture 7: Implementing Mutual Exclusion
CS 162 Operating Systems and Systems Programming
Professor: Anthony D. Joseph
Spring 2004

Lecture 7: Implementing Mutual Exclusion

7.0 Main Points
• Hardware support for synchronization.
• Building higher-level synchronization programming abstractions on top of the hardware support.

7.1 The Big Picture
The abstraction of threads is good, but concurrent threads sharing state is still too complicated (think of the "too much milk" example). Implementing a concurrent program directly with loads and stores would be tricky and error-prone. So we'd like to provide a synchronization abstraction that hides/manages most of the complexity and puts the burden of coordinating multiple activities on the OS instead of the programmer: give the programmer higher-level operations, such as locks.

In this lecture, we'll explore how one might implement higher-level operations on top of the atomic operations provided by hardware. In the next lecture, we'll explore which higher-level primitives make it easiest to write correct concurrent programs.

[Figure: Relationship among synchronization abstractions. Low-level atomic operations (hardware): load/store, interrupt disable, test&set. High-level atomic operations (API), built on the low-level ones: locks, semaphores, monitors, send&receive. Concurrent programs are built on the high-level operations.]

7.2 Ways of implementing locks
All require some level of hardware support.

7.2.1 Atomic memory load and store
See the "too much milk" lecture!

7.2.2 Directly implement locks and context switches in hardware
Makes hardware slow! One has to be careful not to slow down the common case in order to speed up a special case.

7.2.3 Disable interrupts (uniprocessor only)
Two ways for the dispatcher to get control:
• internal events – the thread does something to relinquish the CPU
• external events – interrupts cause the dispatcher to take the CPU away

On a uniprocessor, an operation will be atomic as long as a context switch does not occur in the middle of the operation.
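To see concretely why unsynchronized loads and stores are error-prone, consider two threads incrementing a shared counter with plain `++`. Each increment is really a load, an add, and a store, and the threads can interleave between those steps and lose updates. The sketch below (a C++ illustration of ours, not code from the lecture) demonstrates this; formally the unsynchronized access is a data race, shown here only to exhibit the problem.

```cpp
#include <thread>

// Shared counter updated with plain loads and stores, no synchronization.
long counter = 0;

void worker() {
    for (int i = 0; i < 1000000; ++i)
        counter++;  // not atomic: load, add, store can interleave
}

// Run two workers concurrently and return the final count.
long race_demo() {
    counter = 0;
    std::thread a(worker), b(worker);
    a.join();
    b.join();
    return counter;  // usually less than 2000000 because of lost updates
}
```

On real hardware the result is typically well short of 2,000,000, and it varies from run to run, which is exactly the kind of nondeterministic bug the rest of this lecture is about preventing.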
We need to prevent both internal and external events. Preventing internal events is easy (although virtual memory makes it a bit tricky). We prevent external events by disabling interrupts: in effect, telling the hardware to delay handling of external events until after we're done with the atomic operation.

7.2.3.1 A flawed, but very simple solution
Why not do the following?

    Lock::Acquire() { disable interrupts; }
    Lock::Release() { enable interrupts; }

1. We need to support synchronization operations in user-level code. The kernel can't allow user code to get control with interrupts disabled (it might never give the CPU back!).
2. Real-time systems need to guarantee how long it takes to respond to interrupts, but critical sections can be arbitrarily long. Thus, one should leave interrupts off for the shortest time possible.
3. This simple solution might work for locks, but it wouldn't work for more complex primitives, such as semaphores or condition variables.

7.2.3.2 Implementing locks by disabling interrupts
Key idea: maintain a lock variable and impose mutual exclusion only on the operations of testing and setting that variable.

    class Lock {
        int value = FREE;
    }

    Lock::Acquire() {
        Disable interrupts;
        if (value == BUSY) {
            Put thread on queue of threads waiting for lock
            Go to sleep  // Enable interrupts? See comments below
        } else {
            value = BUSY;
        }
        Enable interrupts;
    }

    Lock::Release() {
        Disable interrupts;
        if (anyone on wait queue) {
            Take a waiting thread off the wait queue
            Put it at the front of the ready queue
        } else {
            value = FREE;
        }
        Enable interrupts;
    }

Why do we need to disable interrupts at all? Otherwise, one thread could be trying to acquire the lock and get interrupted between checking and setting the lock value, so two threads could each think they hold the lock. By disabling interrupts, the check and set operations occur without any other thread having the chance to execute in the middle.
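The queue-based lock described above can be sketched in user space. In the sketch below (an illustration of ours, not the Nachos implementation, though the class and method names follow the lecture), a `std::mutex` stands in for interrupt disabling by making the check-and-set of `value` atomic, and a condition variable stands in for the wait queue plus going to sleep.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

class Lock {
    enum State { FREE, BUSY };
    State value = FREE;
    std::mutex guard;                 // plays the role of interrupt disable
    std::condition_variable waiters;  // plays the role of the wait queue
public:
    void Acquire() {
        std::unique_lock<std::mutex> g(guard);
        while (value == BUSY)   // re-check the lock after every wakeup
            waiters.wait(g);    // atomically releases guard and sleeps
        value = BUSY;
    }
    void Release() {
        {
            std::lock_guard<std::mutex> g(guard);
            value = FREE;
        }
        waiters.notify_one();   // wake one sleeping Acquire, if any
    }
};

// Tiny demo: two threads increment a shared counter under the lock.
long locked_count(int iters) {
    Lock l;
    long n = 0;
    auto body = [&] {
        for (int i = 0; i < iters; ++i) {
            l.Acquire();
            ++n;
            l.Release();
        }
    };
    std::thread t1(body), t2(body);
    t1.join();
    t2.join();
    return n;
}
```

Note that `condition_variable::wait` releases the guard and goes to sleep as one atomic step, which closes the window in which a wakeup could arrive between queueing the thread and putting it to sleep.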
When does Acquire re-enable interrupts in going to sleep?
• Before putting the thread on the wait queue? Then Release can check the queue and not wake the thread up.
• After putting the thread on the wait queue, but before going to sleep? Then Release puts the thread on the ready queue, but the thread still thinks it needs to go to sleep! It will go to sleep, missing the wakeup from Release.

To fix this, in Nachos, interrupts are disabled when you call Thread::Sleep; it is the responsibility of the next thread to run to re-enable interrupts. When the sleeping thread wakes up, it returns from Thread::Sleep back to Acquire. Interrupts are still disabled at that point, so Acquire turns them back on.

[Figure: Interrupt disable and enable pattern across context switches. Thread A disables interrupts and sleeps; the switch runs Thread B, which returns from sleep and re-enables interrupts. Later B disables interrupts and sleeps, and the switch back to A returns from sleep and re-enables. Each disable in one thread is paired with an enable in the next thread to run.]

An important point about structuring code: if you look at the Nachos code, you will see lots of comments about the assumptions made concerning when interrupts are disabled. This is an example of where modifications to and assumptions about program state can't be localized within a small body of code. When that's the case, you have a very good chance that your program will eventually "acquire" bugs: as people modify the code, they may forget or ignore the assumptions being made and end up invalidating them. Can you think of other examples where this is a concern? What about acquiring and releasing locks in the presence of C++ exceptions that exit a procedure?

7.2.4 Atomic read-modify-write instructions
On a multiprocessor, interrupt disable doesn't provide atomicity. It stops context switches from occurring on that CPU, but it doesn't stop other CPUs from entering the critical section. One could provide support to disable interrupts on all CPUs, but that would be expensive: it stops everyone else, regardless of what each CPU is doing.
Instead, every modern processor architecture provides some kind of atomic read-modify-write instruction. These instructions atomically read a value from memory into a register and write a new value. The hardware is responsible for implementing this correctly on both uniprocessors (not too hard) and multiprocessors (requires special hooks in the multiprocessor cache coherence strategy). Unlike disabling interrupts, this approach can be used on both uniprocessors and multiprocessors.
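A lock built on such an instruction can be sketched with C++'s `std::atomic_flag`, whose `test_and_set` atomically reads the old value and writes true, which is the behavior of the test&set instruction described here. This is an illustrative sketch of ours, not code from the lecture.

```cpp
#include <atomic>
#include <thread>

// Spinlock built on an atomic read-modify-write (test&set) operation.
class SpinLock {
    std::atomic_flag held = ATOMIC_FLAG_INIT;
public:
    void Acquire() {
        // Keep trying until test_and_set returns false, meaning the
        // flag was free and we are the thread that set it to busy.
        while (held.test_and_set(std::memory_order_acquire))
            ;  // busy-wait: the lock holder may be on another CPU
    }
    void Release() {
        held.clear(std::memory_order_release);
    }
};

// Demo: two threads increment a shared counter under the spinlock.
long spin_count(int iters) {
    SpinLock l;
    long n = 0;
    auto body = [&] {
        for (int i = 0; i < iters; ++i) {
            l.Acquire();
            ++n;
            l.Release();
        }
    };
    std::thread t1(body), t2(body);
    t1.join();
    t2.join();
    return n;
}
```

Unlike the interrupt-disable lock, this works on a multiprocessor, but waiters burn CPU while spinning, which is why real systems combine test&set with a wait queue.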

