UW CSE 303 - Concurrency

CSE 303: Concepts and Tools for Software Development
Hal Perkins, Autumn 2007
Lecture 23 — Concurrency, Part 2

Concurrency (Review)

Computation where "multiple things happen at the same time" is inherently more complicated than sequential computation.
• Entirely new kinds of bugs and obligations

Two forms of concurrency:
• time-slicing: only one computation at a time, but pre-empt to provide responsiveness or mask I/O latency
• true parallelism: more than one CPU (e.g., the lab machines have two, the attu machines have 4, ...)

No problem unless the different computations need to communicate or use the same resources.
• Which, of course, they need to do if anything interesting is going to happen...

Example: Processes

The O/S runs multiple processes "at once".
Why? (Convenience, efficient use of resources, performance)
No problem: keep their address spaces separate.
But they do communicate/share via files (and pipes).
Things can go wrong, e.g., a race condition:

    echo "hi" > someFile
    foo=`cat someFile`
    # assume foo holds the string hi??

The O/S provides synchronization mechanisms to avoid this.
• See CSE 451; we will focus on intraprocess concurrency.

The Old Story

We said a running Java or C program had code, a heap, global variables, a stack, and "what is executing right now" (in assembly, a program counter).

C and Java support parallelism similarly (other languages can be different):
• One pile of code, global variables, and heap
• Multiple "stack + program counter"s — called threads
• Threads can be pre-empted whenever by a scheduler
• Threads can communicate (or mess each other up) via shared memory
• Various synchronization mechanisms control what thread interleavings are possible
  – "Do not do your thing until I am done with my thing"

Basics (Review)

C: The POSIX Threads (pthreads) library
• #include <pthread.h>
• Link with -lpthread
• pthread_create takes a function pointer and an argument for it; runs it as a separate thread
• Many types, functions, and macros for threads, locks, etc.

Java: Built into the language
• Subclass java.lang.Thread, overriding run
• Create a Thread object and call its start method
• Any object can "be synchronized on" (later)

Simple synchronization

Threads have to communicate and coordinate.
• Use each other's results; avoid messing up each other's computation.

The simplest two ways not to mess each other up (don't underestimate them!):
1. Do not access the same memory.
2. Do not mutate shared memory.

Next simplest: one thread does not run until/unless another thread is done.
• Called a join

Using Threads

• A common pattern for expensive computations:
  – Split the work
  – Join on all the helper threads
  – Called fork-join parallelism
• To avoid bottlenecks, each thread should have about the same amount of work (load-balancing)
  – Performance depends on the number of CPUs available and will typically be less than "perfect speedup"
  – Does the algorithm lend itself to multiple, independent tasks, or do the steps need to be done in a particular order?
• C vs. Java (specific to threads)
  – Java takes an OO approach (shared data via fields of Thread)
  – Java separates Thread-object creation from Thread start

The Issue

    struct Acct { int balance; /* ... other fields ... */ };

    int withdraw(struct Acct * a, int amt) {
      if(a->balance < amt) return 1; // 1==failure
      a->balance -= amt;
      return 0; // 0==success
    }

This code is correct in a sequential program.
It may have a race condition in a concurrent program, allowing a negative balance.
Discovering this bug is very hard with testing, since the interleaving has to be "just wrong".

atomic

Programmers must indicate what must appear to happen all-at-once.

    int withdraw(struct Acct * a, int amt) {
      atomic {
        if(a->balance < amt) return 1; // 1==failure
        a->balance -= amt;
      }
      return 0; // 0==success
    }

Reasons not to do "too much" in an atomic:
• Correctness: if another thread needs an intermediate result to compute something you need, you must "expose" it.
• Performance: parallel threads must access disjoint memory
  – Actually, read/read conflicts can happen in parallel

Getting it "just right"

This code is probably wrong because the critical sections are too small:

    atomic { if(a->balance < amt) return 1; }
    atomic { a->balance -= amt; }

This code (skeleton) is probably wrong because the critical section is too big:
• Assume the other guy does not compute until the data is set.

    atomic {
      data_for_other_guy = 42; // set some global
      ans = wait_for_other_guy_to_compute();
      return ans;
    }

So far

Shared-memory concurrency, where multiple threads might access the same mutable data at the same time, is tricky.
• Must get the size of critical sections just right

It's worse because:
• atomic does not yet exist in languages like C and Java (open research problem).

Instead, programmers must use locks (a.k.a. mutexes) or other mechanisms, usually to get the behavior of critical sections.
• But misuse of locks will violate the "all-at-once" property
• Or lead to other bugs we haven't seen yet

Lock basics

A lock is acquired and released by a thread.
• At most one thread "holds it" at any moment
• Acquiring it "blocks" until the holder releases it and the blocked thread acquires it
  – Many threads might be waiting; one will "win".
  – The lock implementor avoids race conditions on the lock-acquire
• So to keep two things from happening at the same time, surround them with the same lock-acquire/lock-release

Locks in C/Java

C: Need to initialize and destroy mutexes (a synonym for locks).
• The joys of C

An initialized (pointer to a) mutex can be locked or unlocked via library function calls.

Java: A synchronized statement is an acquire/release.
• Any object can serve as a lock.
• The lock is released on any control transfer out of the block (return, break, exception, ...)
• "Synchronized methods" just save keystrokes.

Choosing how to lock

Now we know what locks are (how to make them, what acquiring/releasing means), but programming with them correctly and efficiently is difficult...
• As before, if critical

