LSU CSC 4101 - Programming Languages
Louisiana State University

Programming Languages
Tevfik Koşar
Lecture XXVII, May 2nd, 2006

Roadmap

- Shared Memory: Cigarette Smokers Problem; Monitors
- Message Passing: Cooperative Operations; Synchronous and Asynchronous Sends; Blocking and Non-blocking Operations; Collective Communications

Cigarette Smokers Problem

Assume a cigarette requires three ingredients to smoke: tobacco, paper, and a match. Assume there are also three smokers around a table, each of whom has an infinite supply of one of the three ingredients. Assume there is also a non-smoking arbiter. The arbiter enables the smokers to make their cigarettes by selecting two of the smokers at random, taking one item out of each of their supplies, and placing the items on the table. He then notifies the third smoker that he has done this. The third smoker removes the two items from the table and uses them, along with his own supply, to make a cigarette, which he smokes for a while. Meanwhile the arbiter, seeing the table empty again, chooses two smokers at random and places their items on the table. This process continues forever.

Solution

Let us define an array of binary semaphores A, one for each smoker, and a binary semaphore for the table, T. Initialize the smokers' semaphores to zero and the table's semaphore to 1. Then the arbiter's code is:

    while true:
        wait(T)
        choose smokers i and j randomly, making the third smoker k
        signal(A[k])

And the code for smoker i is:

    while true:
        wait(A[i])
        make a cigarette
        signal(T)
        smoke the cigarette

Semaphores

Although widely used, semaphores are considered too low-level and are generally confined to low-level code, e.g. operating systems. Since their operations are simply subroutine calls, it is very easy to leave one out, e.g. forgetting to release a semaphore that has been taken. They are also generally scattered throughout a program, which makes them hard to debug and maintain.

Monitors

Suggested by Dijkstra ('72) and formalized by Hoare ('74). Monitors encapsulate all synchronization code into a single structure, like objects in OOP. Only one operation of a given monitor is allowed to be active at a given time, and this is ensured by the compiler: a thread which calls a busy monitor is automatically delayed until the monitor is free. There is no need for P and V operations, or for ordering them correctly.

A monitor consists of:

- a set of procedures that allow interaction with the shared resource
- a mutual exclusion lock
- the variables associated with the resource
- a monitor invariant that defines the assumptions needed to avoid race conditions

A monitor procedure takes the lock before doing anything else and holds it until it either finishes or waits for a condition. If every procedure guarantees that the invariant is true before it releases the lock, then no task can ever find the resource in a state that might lead to a race condition.

As a simple example, consider a monitor for performing transactions on a bank account:

    monitor account {
        int balance := 0

        function withdraw(int amount) {
            if amount < 0 then error("Amount may not be negative")
            else if balance < amount then error("Insufficient funds")
            else balance := balance - amount
        }

        function deposit(int amount) {
            if amount < 0 then error("Amount may not be negative")
            else balance := balance + amount
        }
    }
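Java realizes the monitor idea directly: every object carries a lock, and declaring methods synchronized ensures that at most one thread is active inside the object at a time. As a minimal sketch (this Java rendering is an illustration added here, not from the slides), the account monitor could be written as:

    // Illustrative Java version of the account monitor above (not from the slides).
    // The synchronized methods make the object's built-in lock guard balance.
    public class Account {
        private int balance = 0;

        public synchronized void withdraw(int amount) {
            if (amount < 0)
                throw new IllegalArgumentException("Amount may not be negative");
            if (balance < amount)
                throw new IllegalStateException("Insufficient funds");
            balance -= amount;
        }

        public synchronized void deposit(int amount) {
            if (amount < 0)
                throw new IllegalArgumentException("Amount may not be negative");
            balance += amount;
        }
    }

The dining philosophers monitor on the next slides is written in exactly this style, adding wait() and notifyAll() for condition synchronization.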
Dining Philosophers using Monitors

    class Monitor {
        boolean[] forks;

        public Monitor(int size) {
            forks = new boolean[size];
            for (int i = 0; i < forks.length; i++)
                forks[i] = true;                 // every fork starts out on the table
        }

        public synchronized void getForks(int id) throws Exception {
            int index1, index2;
            if (id == 0) {                       // Philosopher 0 wraps around the table
                index1 = 0;
                index2 = forks.length - 1;
            } else {
                index1 = id;
                index2 = id - 1;
            }
            while (!(forks[index1] && forks[index2]))
                wait();                          // wait until both forks are free
            forks[index1] = false;
            forks[index2] = false;
            System.out.println("Philosopher " + id + " got the forks "
                    + index1 + " and " + index2);
        }

        public synchronized void release(int id) throws Exception {
            int index1, index2;
            if (id == 0) {                       // Philosopher 0 wraps around the table
                index1 = 0;
                index2 = forks.length - 1;
            } else {
                index1 = id;
                index2 = id - 1;
            }
            forks[index1] = true;
            forks[index2] = true;
            System.out.println("Philosopher " + id + " has finished a meal");
            notifyAll();                         // wake any philosophers waiting for forks
        }
    }

Shared Memory

Each process (task) has access to the same memory. (The slide's diagram shows Tasks 1, 2, and 3 all attached to one Memory.)

Distributed Memory

Each process (task) has access to its own memory. Processes (tasks) can communicate via messages. (The slide's diagram shows Tasks 1, 2, and 3, each with its own Memory.)

Message Passing

What is message passing? It is a mechanism for communicating data between processes. In point-to-point communication, one process sends data and another process receives it. In collective communication, there may be more than one sending and/or receiving process. Message passing is important in many areas: parallel computing, distributed computing, and operating systems.

Message passing is data transfer plus synchronization, and it requires the cooperation of sender and receiver. (The slide's diagram shows Process 0 asking "May I send?", Process 1 answering "Yes", and only then the data flowing, over time.)

Cooperative Operations for Communication

The message-passing approach makes the exchange of data cooperative: data is explicitly sent by one process and received by another. An advantage is that any change in the receiving process's memory is made with the receiver's explicit participation. Communication and synchronization are combined.

One-Sided Operations for Communication

One-sided operations between processes include remote memory reads and writes ("get" and "put"); only one process needs to explicitly participate. An advantage is that communication and synchronization are decoupled. One-sided operations are part of MPI-2, but they are not part of the basic message-passing approach.

(Two diagram-only slides, "Message Passing" and "Messages", follow here.)

Message Passing Paradigms

- MPI (Message Passing Interface) aims to develop a standard for writing message-passing programs; it is designed for homogeneous architectures.
- PVM (Parallel Virtual Machine) allows a heterogeneous network of computers (parallel, vector, or serial) to appear as a single concurrent computational resource: a "virtual machine".

Both provide libraries for C and Fortran programs.

Synchronous Sends; Blocking Operations; Buffered Asynchronous Sends; Non-blocking Operations

(These four slides are diagram-only. In standard message-passing terminology: a synchronous send completes only once the matching receive has started; a buffered, asynchronous send completes as soon as the message has been copied into a buffer; a blocking operation returns only when the message buffer is safe to reuse; a non-blocking operation returns immediately, and completion is checked later.)

Collective Communications

- Broadcast (MPI_Bcast): one process sends the same data to all processes.
- Reduction operations (MPI_Reduce): values from all processes are combined, e.g. summed, into one result.
- Barriers (MPI_Barrier): each process waits until all processes have reached the barrier.
- Broadcast, Scatter, Gather (diagram slides): a scatter distributes distinct pieces of one process's data across all processes; a gather collects a piece from every process into one.
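The send variants above can be imitated in Java with blocking queues. The sketch below is an illustration added here (SendDemo and the two "channel" queues are invented names; this is not MPI itself): a SynchronousQueue behaves like a synchronous send, since put() blocks until a receiver arrives at take(), while a LinkedBlockingQueue behaves like a buffered asynchronous send, since put() returns as soon as the message is buffered.

    import java.util.concurrent.*;

    // Illustrative sketch, not MPI: blocking queues stand in for message channels.
    public class SendDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> syncChannel = new SynchronousQueue<>();        // synchronous send
            BlockingQueue<String> bufferedChannel = new LinkedBlockingQueue<>(); // buffered send

            Thread receiver = new Thread(() -> {
                try {
                    System.out.println("received: " + syncChannel.take());
                    System.out.println("received: " + bufferedChannel.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            receiver.start();

            bufferedChannel.put("hello, buffered");    // completes immediately
            syncChannel.put("hello, synchronous");     // blocks until the take() above
            // A non-blocking send would be bufferedChannel.offer(msg), which never waits.
            receiver.join();
        }
    }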
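The collective operations can be sketched the same way. In the following illustration (again with invented names, and only an analogy to MPI's collectives, not its API), a root thread broadcasts one value to four workers and then reduces their partial results with a sum:

    import java.util.List;
    import java.util.ArrayList;
    import java.util.concurrent.*;

    // Illustrative sketch, not MPI: broadcast and reduce imitated with threads and queues.
    public class CollectiveDemo {
        public static void main(String[] args) throws InterruptedException {
            int n = 4;
            // One inbox per worker, plus a queue carrying partial results back to the root.
            List<BlockingQueue<Integer>> inbox = new ArrayList<>();
            BlockingQueue<Integer> toRoot = new LinkedBlockingQueue<>();
            for (int i = 0; i < n; i++) inbox.add(new LinkedBlockingQueue<>());

            for (int i = 0; i < n; i++) {
                final int rank = i;
                new Thread(() -> {
                    try {
                        int x = inbox.get(rank).take();   // receive the broadcast value
                        toRoot.put(x * rank);             // send a partial result to the root
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }

            // "Broadcast": the root sends the same value to every worker.
            for (int i = 0; i < n; i++) inbox.get(i).put(10);

            // "Reduce": the root combines one partial result from each worker.
            int sum = 0;
            for (int i = 0; i < n; i++) sum += toRoot.take();
            System.out.println("reduced sum = " + sum);   // 10*(0+1+2+3) = 60
        }
    }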

