
Communication in MPI, Continued
CME342 / AA220 / CS238 - Parallel Methods in Numerical Analysis
Lecture 6-7, April 11-13, 2005

Announcements

• If you have not registered for the cs238-class list, please do so.
• On HW1, do not use an implied Gauss-Seidel iteration in the code excerpt. Simply use a Jacobi update by operating on a temporary array a(i,j), doing the Jacobi sweep, and then copying the result to the b(i,j) array. It is much easier this way.

Online MPI Documentation

Notice that on the course web page you have a series of links to web pages with useful information on MPI. In the past I have found

http://www-unix.mcs.anl.gov/mpi

particularly useful as a source of explanations of all the components of MPI and as a reference manual to use while developing code. Click on MPI Standard 1.1 to get to the information. Go to the bottom and click on Index for a full MPI reference.

On the same web page you can click on MPI Standard 2.0 if you want to find out more about advanced features of the extension to the original MPI standard (particularly parallel I/O and one-sided communication).

MPI RECEIVE Operations

Notice that it is not necessary to specify either a source or a tag for a message that is being received. In fact, it often may be advantageous to use wildcards for these arguments. These wildcards are:

• MPI_ANY_SOURCE
• MPI_ANY_TAG

and they can be used separately or in combination. Once you receive a message from an arbitrary source or with an arbitrary tag, you may want to know where it came from, what tag it carried, or other pieces of information about the message. This information is encoded in the status array/structure and can be accessed either directly or through auxiliary functions (MPI_GET_COUNT, etc.).

Buffered / Nonbuffered Comm.

• No buffering (phone call):
  - Proc 0 initiates the send request and rings Proc 1. It waits until Proc 1 is ready to receive. The transmission then starts.
  - Synchronous comm.: completed only when the message has been received by the receiving proc.
• Buffering (beeper):
  - The message to be sent (by Proc 0) is copied to a system-controlled block of memory (buffer).
  - Proc 0 can continue executing the rest of its program.
  - When Proc 1 is ready to receive the message, the system copies the buffered message to Proc 1.
  - Asynchronous comm.: may be completed even though the receiving proc has not received the message.

Buffered Comm. (cont.)

• Buffering requires system resources, e.g. memory, and can be slower if the receiving proc is ready at the time the send is requested.
• Application buffer: address space that holds the data.
• System buffer: system space for storing messages. In buffered comm., data in the application buffer is copied to/from the system buffer.
• MPI allows comm. in buffered mode: MPI_Bsend, MPI_Ibsend.
• The user allocates the buffer with MPI_Buffer_attach(buffer, buffer_size) (see the sketch below).
• Free the buffer with MPI_Buffer_detach.
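To make the last two slides concrete, here is a minimal C sketch (not part of the original notes): it assumes exactly two processes, and the message length N and tag 99 are arbitrary illustrative choices. Rank 0 attaches a user buffer and posts a buffered send with MPI_Bsend; rank 1 receives with the MPI_ANY_SOURCE and MPI_ANY_TAG wildcards and then inspects the status object with MPI_Get_count.

/* Sketch: buffered send on rank 0, wildcard receive on rank 1.
   Assumes exactly two processes; N and the tag 99 are arbitrary. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 100

int main(int argc, char **argv) {
    int rank, nrecv;
    double data[N];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Attach a user-supplied buffer large enough for the message
           plus the per-message overhead required by the standard. */
        int bufsize = (int)(N * sizeof(double)) + MPI_BSEND_OVERHEAD;
        void *buffer = malloc(bufsize);
        MPI_Buffer_attach(buffer, bufsize);

        for (int i = 0; i < N; i++) data[i] = (double)i;
        MPI_Bsend(data, N, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);

        /* Detach waits until the buffered message has been handed to
           the system, so the buffer can be freed safely afterwards. */
        MPI_Buffer_detach(&buffer, &bufsize);
        free(buffer);
    } else if (rank == 1) {
        /* Wildcard receive: accept any source and any tag, then query
           the status object to see what actually arrived. */
        MPI_Recv(data, N, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &nrecv);
        printf("received %d doubles from rank %d with tag %d\n",
               nrecv, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}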
Blocking / Nonblocking Comm.

• Blocking comm. (McDonald's):
  - The receiving proc has to wait if the message is not ready.
  - Different from synchronous comm.: Proc 0 may have already buffered the message to the system and Proc 1 is ready, but the interconnection network is busy.
• Nonblocking comm. (In & Out):
  - Proc 1 checks with the system whether the message has arrived yet. If not, it continues doing other work; otherwise, it gets the message from the system.
• Useful when computation and comm. can be performed at the same time.
• MPI allows both nonblocking send & receive: MPI_Isend, MPI_Irecv.
• In a nonblocking send, the program identifies an area in memory to serve as a send buffer. Processing continues immediately without waiting for the message to be copied out of the application buffer.
• The program should not modify the application buffer until the nonblocking send has completed.
• Nonblocking comm. can be combined with nonbuffered mode (MPI_Issend) or buffered mode (MPI_Ibsend).
• Use MPI_Wait or MPI_Test to determine whether the nonblocking send or receive has completed. Also available: MPI_WAITANY, MPI_WAITALL, MPI_TESTANY, MPI_TESTALL.

Non-Blocking Send Syntax

MPI_ISEND(buf, count, datatype, dest, tag, comm, request)
  [IN  buf]       initial address of send buffer (choice)
  [IN  count]     number of elements in send buffer (integer)
  [IN  datatype]  datatype of each send buffer element (handle)
  [IN  dest]      rank of destination (integer)
  [IN  tag]       message tag (integer)
  [IN  comm]      communicator (handle)
  [OUT request]   communication request (handle)

int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm, MPI_Request *request)

MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

Non-Blocking Receive Syntax

MPI_IRECV(buf, count, datatype, source, tag, comm, request)
  [OUT buf]       initial address of receive buffer (choice)
  [IN  count]     number of elements in receive buffer (integer)
  [IN  datatype]  datatype of each receive buffer element (handle)
  [IN  source]    rank of source (integer)
  [IN  tag]       message tag (integer)
  [IN  comm]      communicator (handle)
  [OUT request]   communication request (handle)

int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request)

MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR

Communication Completion

Wait until the communication operation associated with the specified request is completed. Note that for a send operation, this simply means that the message has been sent and the send buffer is ready for reuse. It does NOT mean that the corresponding receive operation has also completed.

MPI_WAIT(request, status)
  [INOUT request]  request (handle)
  [OUT   status]   status object (Status)

int MPI_Wait(MPI_Request *request, MPI_Status *status)

MPI_WAIT(REQUEST, STATUS, IERROR)
  INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
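To show how MPI_Irecv, MPI_Isend, and the completion calls fit together, here is a minimal C sketch (again not part of the original notes; it assumes exactly two processes, and the message length N and tag 0 are arbitrary): each rank posts its receive and its send, is free to compute on unrelated data while the messages are in flight, and finally completes both requests with MPI_Waitall.

/* Sketch: nonblocking exchange between ranks 0 and 1.
   Assumes exactly two processes; N and the tag are arbitrary. */
#include <mpi.h>
#include <stdio.h>

#define N 100

int main(int argc, char **argv) {
    int rank, other;
    double sendbuf[N], recvbuf[N];
    MPI_Request reqs[2];
    MPI_Status stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                  /* partner rank (0 <-> 1) */

    for (int i = 0; i < N; i++) sendbuf[i] = (double)rank;

    /* Post the receive first, then the send; neither call blocks. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation that touches neither sendbuf nor recvbuf ... */

    /* Complete both operations; afterwards sendbuf may be reused and
       recvbuf holds the incoming message. */
    MPI_Waitall(2, reqs, stats);

    printf("rank %d received data from rank %d\n", rank, other);
    MPI_Finalize();
    return 0;
}

Posting the receive before the matching send is a common idiom: it gives the library a place to deposit the incoming message directly, rather than holding it in an internal buffer until the receive is posted.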

