CSE 303: Concepts and Tools for Software Development
Hal Perkins, Spring 2008
Lecture 22 — Shared-Memory Concurrency

Concurrency

Computation where "multiple things happen at the same time" is inherently more complicated than sequential computation.

• Entirely new kinds of bugs and obligations

Two forms of concurrency:

• time-slicing: only one computation at a time, but pre-empt to provide responsiveness or mask I/O latency.
• true parallelism: more than one CPU (e.g., the lab machines have two, the attu machines have 4, ...)

No problem unless the different computations need to communicate or use the same resources.

Example: Processes

The O/S runs multiple processes "at once".

Why? (Convenience, efficient use of resources, performance)

No problem: keep their address spaces separate.

But they do communicate/share via files (and pipes). Things can go wrong, e.g., a race condition:

    echo "hi" > someFile
    foo=`cat someFile`
    # assume foo holds the string hi??

The O/S provides synchronization mechanisms to avoid this.

• See CSE451; we will focus on intraprocess concurrency.

The Old Story

We said a running Java or C program had code, a heap, global variables, a stack, and "what is executing right now" (in assembly, a program counter).

C and Java support parallelism similarly (other languages can be different):

• One pile of code, global variables, and heap.
• Multiple "stack + program counter"s — called threads.
• Threads can be pre-empted at any time by a scheduler.
• Threads can communicate (or mess each other up) via shared memory.
• Various synchronization mechanisms control what thread interleavings are possible.
  – "Do not do your thing until I am done with my thing."

Basics

C: the POSIX Threads (pthreads) library

• #include <pthread.h>
• Link with -lpthread
• pthread_create takes a function pointer and an argument for it; it runs the function as a separate thread.
• Many types, functions, and macros for threads, locks, etc.

Java: built into the language

• Subclass java.lang.Thread, overriding run.
• Create a Thread object and call its start method.
• Any object can "be synchronized on" (later).

Why do this?

• Convenient structure of code
  – Example: 2 threads, each using information computed by the other.
  – Example: Failure isolation – if each "file request" runs in its own thread, a problem means just "kill that request".
  – Example: Fairness – one slow computation takes only some of the CPU time, without your own complicated timer code. Avoids starvation.
• Performance
  – Run other threads while one is reading/writing to disk (or doing some other slow thing that can happen in parallel).
  – Use more than one CPU at the same time.
    ∗ This is the way computers will get faster over the next 10 years.
    ∗ So no parallelism means no speedup.

Simple synchronization

If one thread did nothing of interest to any other thread, why is it running? So threads have to communicate and coordinate.

• Use each other's results; avoid messing up each other's computation.

The two simplest ways not to mess each other up (do not underestimate them!):

1. Do not access the same memory.
2. Do not mutate shared memory.

Next simplest: one thread does not run until/unless another thread is done.

• Called a join.

Using Parallel Threads

• A common pattern for expensive computations (see the sketch after this list):
  – Split the work.
  – Join on all the helper threads.
  – Called fork-join parallelism.
• To avoid bottlenecks, each thread should have about the same amount of work (load balancing).
  – Performance depends on the number of CPUs available and will typically be less than "perfect speedup".
• C vs. Java (specific to threads):
  – Java takes an OO approach (shared data via fields of Thread).
  – Java separates creating the Thread object from creating the running thread.
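A minimal fork-join sketch using the pthreads calls from the Basics slide; it is not from the lecture, and the names (sum_range, struct range) are made up for illustration. Two threads each sum half of an array, and main joins on both before combining the partial results, so no thread reads memory another thread is still writing. Build with something like: gcc -std=c99 sum.c -lpthread

    #include <pthread.h>
    #include <stdio.h>

    struct range { int *arr; int lo; int hi; long sum; };

    /* Thread body: sum arr[lo..hi) into this thread's own struct. */
    void *sum_range(void *p) {
        struct range *r = p;
        r->sum = 0;
        for (int i = r->lo; i < r->hi; i++)
            r->sum += r->arr[i];
        return NULL;
    }

    int main(void) {
        int arr[100];
        for (int i = 0; i < 100; i++)
            arr[i] = i;

        /* Fork: split the work roughly evenly (load balancing). */
        struct range left  = { arr,  0,  50, 0 };
        struct range right = { arr, 50, 100, 0 };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_range, &left);
        pthread_create(&t2, NULL, sum_range, &right);

        /* Join: neither partial sum is read until its thread is done. */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("total = %ld\n", left.sum + right.sum);
        return 0;
    }

Note the design point from the slide: the two helper threads write to disjoint structs, so they need no synchronization beyond the joins.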
Less structure

Often you have a bunch of threads running at once, and they might need the same mutable memory at the same time, but probably not. We want to be correct without sacrificing parallelism.

Example: a bunch of threads processing bank transactions:

• withdraw, deposit, transfer, currentBalance, ...
• The chance of two threads accessing the same account at the same time is very low, but not zero.
• We want mutual exclusion (a way to keep each other out of the way when there is contention).

Another example: parallel search through an arbitrary graph.

The Issue

    struct Acct { int balance; /* ... other fields ... */ };

    int withdraw(struct Acct * a, int amt) {
      if (a->balance < amt)
        return 1; // 1==failure
      a->balance -= amt;
      return 0; // 0==success
    }

This code is correct in a sequential program. It may have a race condition in a concurrent program, allowing a negative balance: two threads can both pass the balance check before either one subtracts.

Discovering this bug is very hard with testing, since the interleaving has to be "just wrong".

atomic

Programmers must indicate what must appear to happen all-at-once.

    int withdraw(struct Acct * a, int amt) {
      atomic {
        if (a->balance < amt)
          return 1; // 1==failure
        a->balance -= amt;
      }
      return 0; // 0==success
    }

Reasons not to do "too much" in an atomic block:

• Correctness: if another thread needs an intermediate result to compute something you need, you must "expose" it.
• Performance: parallel threads must access disjoint memory.
  – Actually, read/read conflicts can happen in parallel.

Getting it "just right"

This code is probably wrong because the critical sections are too small: another thread can run between the two atomic blocks, so the check no longer guarantees anything when the subtraction happens.

    atomic { if (a->balance < amt) return 1; }
    atomic { a->balance -= amt; }

This code (skeleton) is probably wrong because the critical section is too big: assuming the other guy does not compute until the data is set, he can never see the global while we sit inside the atomic block waiting for him.

    atomic {
      data_for_other_guy = 42; // set some global
      ans = wait_for_other_guy_to_compute();
      return ans;
    }

So far

Shared-memory concurrency, where multiple threads might access the same mutable data at the same time, is tricky.

• Must get the size of critical sections just right.

It's worse because:

• atomic does not yet exist in languages like C and Java.
• (A major thread of programming-language research at UW.)

Instead, programmers must use locks (a.k.a. mutexes) or other mechanisms, usually to get the behavior of critical sections (a sketch follows):

• But misuse of locks will violate the "all-at-once" property.
• Or lead to ...
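Since atomic is not available, here is a hedged sketch of withdraw using a pthreads mutex to get the critical-section behavior. The per-account lock field is an assumption, not from the lecture; it would need to be initialized (e.g., with PTHREAD_MUTEX_INITIALIZER) before first use. Note how the early-return path must unlock first; forgetting that is exactly the kind of lock misuse the last slide warns about.

    #include <pthread.h>

    struct Acct {
      int balance;
      pthread_mutex_t lock;  /* assumed: one lock per account */
      /* ... other fields ... */
    };

    int withdraw(struct Acct * a, int amt) {
      pthread_mutex_lock(&a->lock);       /* enter the critical section */
      if (a->balance < amt) {
        pthread_mutex_unlock(&a->lock);   /* must release before returning */
        return 1; // 1==failure
      }
      a->balance -= amt;
      pthread_mutex_unlock(&a->lock);     /* leave the critical section */
      return 0; // 0==success
    }

For this to work, every other function that touches a->balance (deposit, transfer, ...) must lock the same mutex; mutual exclusion only holds among threads that agree on the lock.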

