Berkeley COMPSCI C267 - Message Passing

CS267: MPI
Bill Saphir, Berkeley Lab / NERSC
Phone: 510-486-4373, wcsaphir@lbl.gov

What is Message Passing?

- Message passing is a model for programming distributed-memory parallel computers.
- Every processor executes an independent process.
- Disjoint address spaces: no shared data.
- All communication between processes is done cooperatively, through subroutine calls.
- SPMD (single program, multiple data): every process is the same (e.g., a.out), but may act on different data.
- MPMD (multiple program, multiple data): not all processes are the same (e.g., a.out, b.out, c.out).

(CS267 MPI, 2/2005, W. Saphir)

What is the Message Passing Interface?

- MPI is the de facto standard for scientific programming on distributed-memory parallel computers.
- MPI is a library of routines that enable message-passing applications.
- MPI is an interface specification, not a specific implementation.
- Almost all high-performance scientific applications run at NERSC and other supercomputer centers use MPI.

The message-passing model is:
- A painful experience for many application programmers.
- Old technology: the assembly language of parallel programming.

Message passing has succeeded because:
- It maps well to a wide range of hardware.
- Parallelism is explicit and communication is explicit, which forces the programmer to tackle parallelization from the beginning (parallelizing compilers are very hard).
- MPI makes programs portable.

MPI History

- Before MPI there was a different library for each type of computer: CMMD (Thinking Machines CM5), NX (Intel iPSC/860, Paragon), MPL (SP2), and many more.
- PVM tried to be a standard, but it was not high performance and not carefully specified.
- MPI was developed by the MPI Forum, a voluntary organization representing industry, government labs, and academia.
- 1994: MPI-1 codified existing practice.
- 1997: MPI-2, a research project.
- Both MPI-1 and MPI-2 were designed by committee. There is a core of good stuff, but just because it's in the standard doesn't mean you should use it.

What's in MPI?

MPI-1:
- Utilities: who am I? how many processes are there?
- Send/receive communication.
- Collective communication (e.g., broadcast, reduction, all-to-all).
- Many other things.

MPI-2:
- Parallel I/O.
- C++ and Fortran 90 bindings.
- One-sided communication (get/put).
- Many other things.

Not in MPI:
- Process startup, environment, standard input/output.
- Fault tolerance.

An MPI Application

(Diagram: four processes, numbered 0 through 3, with communication paths between them.)

- The elements of the application are four processes, numbered 0 through 3, and the communication paths between them.
- The set of processes plus the communication channels is called MPI_COMM_WORLD. More on the name later.

(CS267, 2/2000, Bill Saphir)

Hello World (C)

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int me, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &me);
        printf("Hi from node %d of %d\n", me, nprocs);
        MPI_Finalize();
        return 0;
    }

Compiling and Running

Different on every machine.

Compile:

    mpicc -o hello hello.c
    mpif77 -o hello hello.f

Start four processes somewhere:

    mpirun -np 4 hello

Hello World Output

Run with 4 processes:

    Hi from node 2 of 4
    Hi from node 1 of 4
    Hi from node 3 of 4
    Hi from node 0 of 4

Note: the order of output is not specified by MPI. The ability to use stdout is not even guaranteed by MPI.

Point-to-Point Communication in MPI

(Diagram: data in process 1's memory is passed to MPI_Send; process 2 receives it into its memory via MPI_Recv.)

Point-to-Point Example

Process 0 sends array A to process 1, which receives it as B.

Process 0:

    #define TAG 123
    double A[10];
    MPI_Send(A, 10, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);

Process 1:

    #define TAG 123
    double B[10];
    MPI_Recv(B, 10, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &status);

or:

    MPI_Recv(B, 10, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

Some Predefined Datatypes

- C: MPI_INT, MPI_FLOAT, MPI_DOUBLE, MPI_CHAR, MPI_LONG, MPI_UNSIGNED
- Fortran: MPI_INTEGER, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_CHARACTER, MPI_COMPLEX, MPI_LOGICAL
- Language-independent: MPI_BYTE

Source, Destination, Tag

- dest: rank of the process the message is being sent to (the destination). Must be a valid rank (0..N-1) in the communicator.
- src: rank of the process the message is being received from (the source). The wildcard MPI_ANY_SOURCE matches any source.
- tag: on the sending side, specifies a label for the message; on the receiving side, must match the incoming message. On the receiving side, MPI_ANY_TAG matches any tag.

Status Argument

In C, MPI_Status is a structure:
- status.MPI_TAG is the tag of the incoming message (useful if MPI_ANY_TAG was specified).
- status.MPI_SOURCE is the source of the incoming message (useful if MPI_ANY_SOURCE was specified).
- To find how many elements of a given datatype were received: MPI_Get_count(IN status, IN datatype, OUT count).

In Fortran, status is an array of integers:

    integer status(MPI_STATUS_SIZE)
    status(MPI_SOURCE)
    status(MPI_TAG)

In MPI-2 you will be able to specify MPI_STATUS_IGNORE.

Guidelines for Using Wildcards

Unless there is a good reason to do so, do not use wildcards. Good reasons to use wildcards:
- Receiving messages from several sources into the same buffer when you don't care about the order: use MPI_ANY_SOURCE.
- Receiving several messages from the same source into the same buffer when you don't care about the order: use MPI_ANY_TAG.

Exchanging Data

Example with two processes, 0 and 1 (general data exchange is very similar):

    process 0:           process 1:
    MPI_Send(A, ...)     MPI_Send(A, ...)
    MPI_Recv(B, ...)     MPI_Recv(B, ...)

This requires buffering to succeed.

Deadlock

The MPI specification is wishy-washy about deadlock:
- A "safe" program does not rely on system buffering.
- An "unsafe" program may rely on buffering, but is not as portable.

Ignore this. MPI is all about writing portable programs. Better:
- A correct program does not rely on buffering.
- A program that relies on buffering to avoid deadlock is incorrect.

In other words, it is your fault if your program deadlocks.

Non-Blocking Operations

Split communication operations into two parts. The first part initiates the operation; it does not block. The second part waits for the operation to complete.

    MPI_Request request;

    MPI_Recv(buf, count, type, src, tag, comm, status)
    becomes
    MPI_Irecv(buf, count, type, src, tag, comm, &request);
    MPI_Wait(&request, &status);

    MPI_Send(buf, count, type, dest, tag, comm)
    becomes
    MPI_Isend(buf, count, type, dest, tag, comm, &request);
    MPI_Wait(&request, &status);

Using Non-Blocking Operations

    #define MYTAG 123
    #define WORLD MPI_COMM_WORLD

    MPI_Request request;
    MPI_Status status;

    /* Process 0 */
    MPI_Irecv(B, 100, MPI_DOUBLE, 1, MYTAG, WORLD, &request);

