Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.826 — Principles of Computer Systems
Handout 14    March 5, 1997

Practical Concurrency

We begin our study of concurrency by describing how to use it in practice; later, in handout 16, we shall study it more formally. First we explain where the concurrency in a system comes from, and discuss the main ways to express concurrency. Then we describe the difference between 'hard' and 'easy' concurrency¹: the latter is done by locking shared data before you touch it, the former in subtle ways that are so error-prone that simple prudence requires correctness proofs. We give the rules for easy concurrency using locks, and discuss various issues that complicate the easy life: scheduling, locking granularity, and deadlocks.

Sources of concurrency

Before studying concurrency in detail, it seems useful to consider how you might get concurrency in your system. Obviously if you have a multiprocessor or a distributed system you will have concurrency, since in these systems there is more than one CPU executing instructions. Similarly, most hardware has separate parts that can change state simultaneously and independently. But suppose your system consists of a single CPU running a program. Then you can certainly arrange for concurrency by multiplexing that CPU among several tasks, but why would you want to do this? Since the CPU can only execute one instruction at a time, it isn't entirely obvious that there is any advantage to concurrency. Why not get one task done before moving on to the next one? There are only two possible reasons:

• A task might have to wait for something else to complete before it can proceed, for instance for a disk read. But this means that there is some concurrent task that is going to complete, in the example an I/O device.
So we have concurrency in any system that has I/O, even when there is only one CPU.

• Something else might have to wait for the result of one task but not for the rest of the computation, for example a human user. But this means that there is some concurrent task that is waiting, in the example the user. Again we have concurrency in any system that has I/O.

¹ I am indebted to Greg Nelson for this taxonomy, and for the object and set example of deadlock avoidance.

In the first case one task must wait for I/O, and we can get more work done by running another task on the CPU, rather than letting it idle during the wait. Thus the concurrency of the I/O system leads to concurrency on the CPU. If the I/O wait is explicit in the program, the programmer can know when other tasks might run; this is often called a 'non-preemptive' system, because it has sequential semantics except when the program explicitly allows concurrent activity by waiting. But if the I/O is done at some low level of abstraction, higher levels may be quite unaware of it. The most insidious example of this is I/O caused by the virtual memory system: every instruction can cause a disk read. Such a system is called 'preemptive'; for practical purposes a task can lose the CPU at any point, since it's too hard to predict which memory references might cause page faults.

In the second case we have a motivation for true preemption: we want some tasks to have higher priority for the CPU than others. An important special case is interrupts, discussed below.

Ways to package concurrency

In the last section we used the word 'task' informally to describe a more-or-less independent, more-or-less sequential part of a computation. Now we shall be less coy about how concurrency shows up in a system.

The most general way to describe a concurrent system is in terms of a set of atomic actions with the property that usually more than one of them can occur (is enabled); we will use this viewpoint in our later study of formal concurrency.
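This action-based view can be sketched concretely. The model below is a hypothetical illustration (it is not the Spec notation the course uses): a system is a set of (guard, effect) pairs over shared state, an action is enabled when its guard holds, and a scheduler repeatedly makes a nondeterministic choice among the enabled actions.

```python
import random

# Hypothetical action-based model of a concurrent system: each atomic
# action is a (guard, effect) pair over shared state. An action is
# "enabled" when its guard holds; usually more than one is enabled.
state = {"x": 0, "y": 0}

actions = [
    (lambda s: s["x"] < 3, lambda s: s.update(x=s["x"] + 1)),  # increment x
    (lambda s: s["y"] < 3, lambda s: s.update(y=s["y"] + 1)),  # increment y
]

random.seed(1)
while True:
    enabled = [effect for guard, effect in actions if guard(state)]
    if not enabled:                   # nothing enabled: the system quiesces
        break
    random.choice(enabled)(state)    # nondeterministic scheduling choice
```

Whatever order the scheduler happens to pick, this particular system ends in the same quiescent state; interesting interference appears only once actions touch shared variables in conflicting ways.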
In practice, however, we usually think in terms of several 'threads' of concurrent execution. Within a single thread only one action is enabled at a time; in general one action may be enabled from each thread, though often some of the threads are waiting or 'blocked', that is, have no enabled actions.

The most convenient way to do concurrent programming is in a system that allows each thread to be described as an execution path in an ordinary-looking program with modules, routines, commands, etc., such as Spec, C, or Java. In this scheme more than one thread can execute the code of the same procedure; threads have local state which is the local variables of the procedures they are executing. All the languages mentioned and many others allow you to program in this way.

In fault-tolerant systems there is a conceptual drawback to this thread model. If a failure can occur after each atomic command, it is hard to understand the program by following the sequential flow of control in a thread, because there are so many other paths that result from failure and recovery. In these systems it is often best to reason strictly in terms of independent atomic actions. We will see detailed examples of this when we study reliable messages, consensus, and replication. Applications programmed in a transaction system are another example of this approach: each application runs in response to some input and is a single atomic action.

The biggest drawback of this kind of 'official' thread, however, is the costs of representing the local state and call stack of each thread and of a general mechanism for scheduling the threads. There are several alternatives that reduce these costs: interrupts, control blocks, and SIMD computers.
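A minimal sketch of the thread model described above, using Python's threading module as a stand-in for the languages named earlier: several threads execute the code of the same procedure, each with its own local variables, and follow the 'easy concurrency' discipline from the introduction by locking the shared datum before touching it.

```python
import threading

counter = 0                  # shared state, protected by the lock below
lock = threading.Lock()

def worker(n):
    # Every thread runs this same procedure; n and i are per-thread
    # local state. The shared counter is locked before being touched.
    global counter
    for i in range(n):
        with lock:           # lock shared data before you touch it
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # wait until every thread has finished
print(counter)               # always 4000: the lock makes += atomic
```

Without the lock, the read-modify-write in `counter += 1` can interleave between threads and lose updates, which is exactly the kind of subtle error that 'hard' concurrency invites.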
These alternatives are all based on restricting the freedom of a thread to block, that is, to yield the processor until some external condition is satisfied, for example, until there is space in a buffer or until a lock is free.

Interrupts

An interrupt routine is not the same as a thread, because it always starts at the same point and cannot wait for another thread. The reason for these restrictions is that the execution context for an interrupt routine is allocated on someone else's stack, which means that the routine must complete before the thread that it interrupted can continue to run. On the other hand, the hardware that schedules an interrupt routine is efficient and takes account of priority within certain limits. In addition, the interrupt routine doesn't pay the cost of its own stack like an ordinary thread.
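As a loose analogy (a Unix signal handler, not an example from the handout), the sketch below shows the same restrictions: the handler always starts at its single entry point, borrows the stack of the code it interrupts, and must return before that code can continue; it cannot block waiting for another thread.

```python
import signal

events = []

def on_alarm(signum, frame):
    # Like an interrupt routine: a fixed entry point, running on the
    # interrupted code's stack; it must return promptly and cannot
    # wait for another thread.
    events.append(signum)

signal.signal(signal.SIGALRM, on_alarm)     # install the handler
signal.raise_signal(signal.SIGALRM)         # deliver the "interrupt" to ourselves
```

Note the platform assumptions: `signal.raise_signal` requires Python 3.8+, and SIGALRM exists only on POSIX systems.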

