LSU CSC 4101 - Programming Languages

Slide 1: Programming Languages
Tevfik Koşar
Lecture XXVII
May 2nd, 2006

Slide 2: Roadmap
• Shared Memory
  – Cigarette Smokers Problem
  – Monitors
• Message Passing
  – Cooperative Operations
  – Synchronous and Asynchronous Sends
  – Blocking and Non-blocking Operations
  – Collective Communications

Slide 3: Cigarette Smokers Problem
• Assume a cigarette requires three ingredients to smoke:
  – Tobacco
  – Paper
  – Match
• Assume there are also three smokers around a table, each of whom has an infinite supply of one of the three ingredients.
• Assume there is also a non-smoking arbiter. The arbiter enables the smokers to make their cigarettes by
  – selecting two of the smokers at random,
  – taking one item out of each of their supplies,
  – and placing the items on the table.
  – He then notifies the third smoker that he has done this.
• The third smoker removes the two items from the table and uses them (along with his own supply) to make a cigarette, which he smokes for a while.
• Meanwhile, the arbiter, seeing the table empty, again chooses two smokers at random and places their items on the table. This process continues forever.

Slide 4: Solution
• Define an array of binary semaphores A, one for each smoker, and a binary semaphore for the table, T.
• Initialize the smokers' semaphores to 0 and the table's semaphore to 1.
• Then the arbiter's code is:

    while true {
        wait(T);
        choose smokers i and j randomly, making the third smoker k;
        signal(A[k]);
    }

Slide 5: Solution (cont.)
• And the code for smoker i is:

    while true {
        wait(A[i]);
        make a cigarette;
        signal(T);
        smoke the cigarette;
    }

Slide 6: Semaphores
• Although widely used, semaphores are considered too "low level" and are generally confined to low-level code, e.g. operating systems.
• Since their operations are simply subroutine calls, it is very easy to leave one out (e.g. forgetting to release a semaphore that has been taken).
• They are generally scattered throughout a program, which makes them hard to debug and maintain.

Slide 7: Monitors
• Suggested by Dijkstra ('72)
• Formalized by Hoare ('74)
• Monitors encapsulate all synchronization code into a single structure (like objects in OOP).
• Only one operation of a given monitor is allowed to be active at a given time (ensured by the compiler).
• A thread which calls a busy monitor is automatically delayed until the monitor is free.
• No need for P and V, or for ordering them correctly.

Slide 8: Monitors
A monitor consists of:
• a set of procedures that allow interaction with the shared resource
• a mutual exclusion lock
• the variables associated with the resource
• a monitor invariant that defines the assumptions needed to avoid race conditions
A monitor procedure takes the lock before doing anything else, and holds it until it either finishes or waits for a condition.
If every procedure guarantees that the invariant is true before it releases the lock, then no task can ever find the resource in a state that might lead to a race condition.

Slide 9: Monitors
As a simple example, consider a monitor for performing transactions on a bank account:

    monitor account {
        int balance := 0

        function withdraw(int amount) {
            if amount < 0 then error "Amount may not be negative"
            else if balance < amount then error "Insufficient funds"
            else balance := balance - amount
        }

        function deposit(int amount) {
            if amount < 0 then error "Amount may not be negative"
            else balance := balance + amount
        }
    }
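Java builds monitor support into the language: every object carries an implicit lock, and declaring a method synchronized makes the calling thread acquire that lock on entry and release it on return, so at most one monitor operation is active at a time. The Dining Philosophers code on the next two slides relies on exactly this feature. As a bridge, here is the account monitor above rendered in Java; this is a minimal sketch, and the class name and exception types are illustrative choices, not part of the slides:

    // The account monitor from the previous slide, sketched in Java.
    // "synchronized" makes each method take the object's built-in
    // monitor lock on entry and release it on return.
    class Account {
        private int balance = 0;   // the shared resource

        public synchronized void withdraw(int amount) {
            if (amount < 0)
                throw new IllegalArgumentException("Amount may not be negative");
            if (balance < amount)
                throw new IllegalStateException("Insufficient funds");
            balance = balance - amount;
        }

        public synchronized void deposit(int amount) {
            if (amount < 0)
                throw new IllegalArgumentException("Amount may not be negative");
            balance = balance + amount;
        }
    }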
Slide 10: Dining Philosophers using Monitors

    class Monitor {
        boolean[] forks;

        public Monitor(int size) {
            forks = new boolean[size];
            for (int i = 0; i < forks.length; i++)
                forks[i] = true;               // all forks start on the table
        }

        public synchronized void getForks(int id) throws InterruptedException {
            int index1;
            int index2;
            if (id == 0) {                     // Philosopher 0 wraps around the table
                index1 = 0;
                index2 = forks.length - 1;
            } else {
                index1 = id;
                index2 = id - 1;
            }
            while (!forks[index1] || !forks[index2])
                wait();                        // block until both forks are free
            forks[index1] = false;
            forks[index2] = false;
            System.out.println("Philosopher " + id + " got the forks: "
                               + index1 + " " + index2);
        }

Slide 11: Dining Philosophers using Monitors (cont.)

        public synchronized void release(int id) {
            int index1;
            int index2;
            if (id == 0) {                     // Philosopher 0 wraps around the table
                index1 = 0;
                index2 = forks.length - 1;
            } else {
                index1 = id;
                index2 = id - 1;
            }
            forks[index1] = true;
            forks[index2] = true;
            System.out.println("Philosopher " + id + " has finished a meal");
            notifyAll();                       // wake philosophers waiting for forks
        }
    }

Slide 12: Shared Memory
• Each process/task has access to the same memory.
[Figure: Tasks 1-3 all attached to a single shared memory.]

Slide 13: Distributed Memory
• Each process/task has access to its own memory.
• Processes/tasks can communicate via messages.
[Figure: Tasks 1-3, each with its own memory (Memory 1-3).]

Slide 14: Message Passing
• What is message passing?
  – A mechanism for communicating data between processes.
  – In point-to-point communication, one process sends data and another process receives data.
  – In collective communication, there may be more than one sending and/or receiving process.
• Message passing is important in many areas:
  – parallel computing
  – distributed computing
  – operating systems

Slide 15: What is message passing?
• Data transfer plus synchronization.
• Requires cooperation of sender and receiver.
[Figure: Process 0 asks Process 1 "May I send?"; after the "Yes", the data flows from Process 0 to Process 1 over time.]

Slide 16: Cooperative Operations for Communication
• The message-passing approach makes the exchange of data cooperative.
• Data is explicitly sent by one process and received by another.
• An advantage is that any change in the receiving process's memory is made with the receiver's explicit participation.
• Communication and synchronization are combined.
[Figure: Process 0 executes Send(data); Process 1 executes Receive(data).]

Slide 17: One-Sided Operations for Communication
• One-sided operations between processes include remote memory reads and writes.
• Only one process needs to explicitly participate.
• An advantage is that communication and synchronization are decoupled.
• One-sided operations are part of MPI-2.
• Not part of the basic message-passing approach!
[Figure: Process 0 executes Put(data) into, and Get(data) from, Process 1's memory.]

Slide 18: Message Passing
[diagram]

Slide 19: Messages
[diagram]

Slide 20: Message Passing Paradigms
• MPI (Message Passing Interface)
  – aims to develop a standard for writing message-passing programs
  – designed for homogeneous architectures
• PVM (Parallel Virtual Machine)
  – allows a heterogeneous network of computers (parallel, vector, or serial) to appear as a single concurrent computational resource: a virtual machine.
• Both provide libraries for C and Fortran programs.

Slide 21: Synchronous Sends
[diagram]

Slide 22: Blocking Operations
[diagram]

Slide 23: Buffered = Asynchronous Sends
[diagram]

Slide 24: Non-blocking Operations
[diagram]

Slide 25: Non-blocking Operations (cont.)
[diagram]

Slide 26: Collective Communications
[diagram]

Slide 27: Broadcast
    MPI_Bcast(…);
[diagram]

Slide 28: Reduction
[diagram]
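To make the send/receive semantics on these slides concrete without MPI itself, here is a minimal single-process Java sketch using java.util.concurrent; the channel abstraction and all names are illustrative, not from the slides. A SynchronousQueue has no buffer, so put() blocks until another thread calls take(): this models a synchronous, blocking send in which communication and synchronization are combined. Replacing it with an ArrayBlockingQueue would add a buffer and model a buffered (asynchronous) send instead.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.SynchronousQueue;

    // A sketch of cooperative message passing between two "processes"
    // (threads here). This only imitates the semantics; it is not MPI.
    public class MessagePassingSketch {
        public static void main(String[] args) {
            // No buffer: put() completes only when a take() meets it,
            // like a synchronous send. Use new ArrayBlockingQueue<>(n)
            // instead to get buffered (asynchronous) sends.
            BlockingQueue<String> channel = new SynchronousQueue<>();

            Thread process0 = new Thread(() -> {
                try {
                    channel.put("data");            // Send(data): blocks until received
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread process1 = new Thread(() -> {
                try {
                    String data = channel.take();   // Receive(data): blocks until data arrives
                    System.out.println("Process 1 received: " + data);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            process0.start();
            process1.start();
        }
    }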

