CS421 Lecture 22: Concurrency
Mark Hills
University of Illinois at Urbana-Champaign
August 3, 2006
Based on slides by Mattox Beckman, as updated by Vikram Adve, Gul Agha, and Elsa Gunter

Outline
◮ Concurrency Overview
◮ Language Support
◮ Shared Memory and Message Passing
◮ Synchronization

Concurrency: Terminology
◮ A program where two or more execution contexts may be active (running) at once is concurrent.
◮ A concurrent program where execution is actually occurring in multiple contexts at once is parallel.
◮ A concurrent program with execution contexts running on multiple nodes is distributed.
◮ An execution context in a program is a thread.
Unfortunately, this terminology (especially "thread") varies across languages and systems.

Options in Concurrency
In a language with concurrency support, we have many design options. These are the big three, but they lead to others...
◮ Library or Language: Is support for concurrency baked directly into the language, or is it provided by system libraries?
◮ Shared Memory or Message Passing: Do threads communicate by reading and writing shared areas of memory, or by passing messages back and forth?
◮ Synchronization: How do threads make sure that operations occur in the desired order?

Using Libraries
Libraries are still the most common way to support concurrency.
◮ For shared memory systems, pthreads is the most common option.
◮ Vendors often provide their own thread packages as well (Microsoft has similar support for Windows, for instance).
◮ For message passing, PVM and MPI are both very popular, with (in my opinion) MPI probably more so now.

Language Support
Languages with direct support for concurrency have provided many different constructs to represent concurrent operations:
◮ co-begin
◮ parallel loops
◮ launch-at-elaboration
◮ fork/join
◮ implicit receipt
◮ early reply

Co-begin
A co-begin construct allows a standard block to be executed either sequentially or in parallel, based on the chosen keywords. From Algol 68:

    # from Algol 68 #
    par begin
        p(a, b, c),
        begin
            d := q(e, f);
            r(d, g, h)
        end,
        s(i, j)
    end

The comma-separated statements execute in parallel, but the inner begin...end block is sequentialized.

Parallel Loops
Parallel loops are available in some languages to allow each iteration to be executed in parallel. Some optimizing compilers also transform loops automatically into forms that can be executed in parallel.

    # from the SR language
    co (i := 5 to 10) ->
        p(a, b, i)    # six instances of p
    oc

Launch-at-Elaboration
Here, the thread code is declared in syntax similar to that for subroutines (an Ada task nested inside a procedure). The thread is created to execute the contained code when the declaration is elaborated at runtime.

    procedure P is
        task T is
            ...
        end T;
    begin
        ...
    end P;

Task T needs information in procedure P, so P will wait for T to finish. Many instances of T can run at once (think recursion).

Fork/Join
Instead of threads being worked into the control flow, they can also be explicitly defined and created. In Ada:

    task type T is
        ...
    begin
        ...
    end T;
    ...
    pt : access T := new T;

Fork/Join in Java

    class CClass extends Thread {
        ...
        CClass(...) {
            // constructor
        }
        ...
        public void run() {
            // code that runs in the thread
        }
    }
    ...
    CClass myclass = new CClass(...);
    myclass.start();
    ...
    myclass.join();
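To make the skeleton concrete, here is a minimal runnable sketch of fork/join with Java threads; the class name Greeter, the printed messages, and the use of main as the forking thread are illustrative choices, not part of the original slide.

    public class Greeter extends Thread {
        private final String name;

        Greeter(String name) {
            // constructor: store the data the thread will use
            this.name = name;
        }

        @Override
        public void run() {
            // code that runs in the new thread
            System.out.println("Hello from " + name + " on " + Thread.currentThread().getName());
        }

        public static void main(String[] args) throws InterruptedException {
            Greeter worker = new Greeter("worker");
            worker.start();   // fork: run() begins executing in a new thread
            System.out.println("main keeps running while the worker runs");
            worker.join();    // join: wait for the worker thread to finish
            System.out.println("worker finished");
        }
    }

Passing a Runnable to a Thread (or, since Java 5, using an ExecutorService from java.util.concurrent) is often preferred over subclassing Thread, but the subclass form matches the slide's structure.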
Shared Memory
Shared memory systems allow threads to communicate by updating shared components of program memory.
◮ Advantage: easy, familiar programming model
◮ Disadvantage: leads to race conditions and a need for explicit synchronization
◮ Disadvantage: not scalable

Message Passing
Message passing systems allow threads to communicate by sending messages to one another. Messages generally come in two forms:
◮ Synchronous Messages: One thread sends a message to another and waits for the message to be received; a variant (RPC) actually waits for a reply as well.
◮ Asynchronous Messages: One thread sends a message to another and then continues computation.
Synchronous messaging is probably more common, and often (but not always) makes more sense for threads communicating within a process or on the same machine. Asynchronous messaging makes more sense for many (but not all) distributed computations.

Synchronous Message Passing
Many languages and mechanisms use a synchronous message passing model: Concurrent ML does, and most RPC and RMI mechanisms are synchronous by default.
◮ Advantage: allows direct synchronization on events between threads
◮ Advantage: provides a sense of "shared state"; the threads know something about where they each are in a computation
◮ Disadvantage: does not handle network problems or unreliable, congested threads well
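As an illustration of the synchronous case (not from the slides), Java's java.util.concurrent.SynchronousQueue has no internal capacity, so put() blocks until another thread calls take(); this mimics a synchronous send that waits for its message to be received. The class name, thread structure, and message text below are made up for the example.

    import java.util.concurrent.SynchronousQueue;

    public class SyncSendDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> channel = new SynchronousQueue<>();

            Thread receiver = new Thread(() -> {
                try {
                    // take() blocks until a sender hands over a message
                    String msg = channel.take();
                    System.out.println("received: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            receiver.start();

            // put() blocks until the receiver takes the message, so the
            // sender continues only after the message has been received
            channel.put("hello");
            System.out.println("send completed; the receiver has the message");

            receiver.join();
        }
    }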

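Going back to the Shared Memory slide, here is a minimal sketch (again not from the slides; the counter and iteration count are arbitrary) of the race condition it mentions and of explicit synchronization as the fix: two threads increment a shared counter, and because "read, add, write" is not atomic, unsynchronized updates can be lost. Declaring the update method synchronized makes each increment atomic.

    public class Counter {
        private int value = 0;

        // Without 'synchronized', the two threads can interleave the
        // read-modify-write sequence and lose increments (a race condition).
        public synchronized void increment() {
            value = value + 1;
        }

        public synchronized int get() {
            return value;
        }

        public static void main(String[] args) throws InterruptedException {
            final Counter counter = new Counter();

            Runnable work = () -> {
                for (int i = 0; i < 100000; i++) {
                    counter.increment();
                }
            };

            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // With synchronization this always prints 200000; removing
            // 'synchronized' makes smaller totals possible.
            System.out.println("final value: " + counter.get());
        }
    }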
