UMD CMSC 714 - Lecture 3 Message Passing with PVM and MPI


CMSC 714, Lecture 3
Message Passing with PVM and MPI
Alan Sussman (from J. Hollingsworth)

PVM
- Provides a simple, free, portable parallel environment
- Runs on everything
  - parallel hardware: SMPs, MPPs, vector machines
  - networks of workstations: ATM, Ethernet
    - UNIX machines and PCs running the Win32 API
  - works on a heterogeneous collection of machines
    - handles type conversion as needed
- Provides two things
  - a message passing library
    - point-to-point messages
    - synchronization: barriers, reductions
  - OS support
    - process creation (pvm_spawn)

PVM Environment (UNIX)
[Figure: application processes on Sun SPARC, IBM RS/6000, Cray Y-MP, and DECmpp 12000 hosts, each host running a PVMD, all connected by a bus network]
- One PVMD per machine
  - all processes communicate through the pvmd (by default)
- Any number of application processes per node

PVM Message Passing
- All messages have tags
  - an integer to identify the message
  - defined by the user
- Messages are constructed, then sent
  - pvm_pk{int,char,float}(*var, count, stride) to pack
  - pvm_unpk{int,char,float} to unpack
- All processes are named by task ids (tids)
  - local and remote processes are treated the same
- Primary message passing functions
  - pvm_send(tid, tag)
  - pvm_recv(tid, tag)

PVM Process Control
- Creating a process
  - pvm_spawn(task, argv, flag, where, ntask, tids)
  - flag and where control where tasks are started
  - ntask controls how many copies are started
  - the program must be installed on the target machine
- Ending a task
  - pvm_exit
  - does not exit the process, just the PVM virtual machine
- Info functions
  - pvm_mytid() - get the process task id
PVM Group Operations
- The group is the unit of communication
  - a collection of one or more processes
  - processes join a group with pvm_joingroup("<group name>")
  - each process in the group has a unique id
    - pvm_gettid("<group name>")
- Barrier
  - can involve a subset of the processes in the group
  - pvm_barrier("<group name>", count)
- Reduction operations
  - pvm_reduce(void (*func)(), void *data, int count, int datatype, int msgtag, char *group, int rootinst)
    - the result is returned to the rootinst node
    - does not block
  - pre-defined funcs: PvmMin, PvmMax, PvmSum, PvmProduct

PVM Performance Issues
- Messages have to go through the PVMD
  - the direct route option can be used to avoid this
- Packing messages
  - the semantics imply a copy
  - an extra function call to pack messages
- Heterogeneous support
  - information is sent in a machine-independent format
  - a short-circuit option exists for known homogeneous communication
    - data is then passed in native format
Sample PVM Program

    int main(int argc, char **argv) {
        int myGroupNum;
        int friendTid;
        int mytid;
        int tids[2];
        int message[MESSAGESIZE];
        int c, i, okSpawn;

        /* Initialize process and spawn if necessary */
        myGroupNum = pvm_joingroup("ping-pong");
        mytid = pvm_mytid();
        if (myGroupNum == 0) {   /* I am the first process */
            pvm_catchout(stdout);
            okSpawn = pvm_spawn(MYNAME, argv, 0, "", 1, &friendTid);
            if (okSpawn != 1) {
                printf("Can't spawn a copy of myself!\n");
                pvm_exit();
                exit(1);
            }
            tids[0] = mytid;
            tids[1] = friendTid;
        } else {                 /* I am the second process */
            friendTid = pvm_parent();
            tids[0] = friendTid;
            tids[1] = mytid;
        }
        pvm_barrier("ping-pong", 2);

        /* Initialize the message */
        if (myGroupNum == 0) {
            for (i = 0; i < MESSAGESIZE; i++) {
                message[i] = '1';
            }
        }

        /* Now start passing the message back and forth */
        for (i = 0; i < ITERATIONS; i++) {
            if (myGroupNum == 0) {
                pvm_initsend(PvmDataDefault);
                pvm_pkint(message, MESSAGESIZE, 1);
                pvm_send(friendTid, msgid);
                pvm_recv(friendTid, msgid);
                pvm_upkint(message, MESSAGESIZE, 1);
            } else {
                pvm_recv(friendTid, msgid);
                pvm_upkint(message, MESSAGESIZE, 1);
                pvm_initsend(PvmDataDefault);
                pvm_pkint(message, MESSAGESIZE, 1);
                pvm_send(friendTid, msgid);
            }
        }
        pvm_exit();
        exit(0);
    }

MPI
- Goals:
  - standardize previous message passing systems: PVM, P4, NX, MPL, …
  - support copy-free message passing
  - portable to many platforms
- Features:
  - point-to-point messaging
  - group/collective communications
  - profiling interface: every function has a name-shifted version
- Buffering (in standard mode)
  - no guarantee that there are buffers
  - it is possible that send will block until receive is called
- Delivery order
  - two sends from the same process to the same destination will arrive in order
  - no guarantee of fairness between processes on receive
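For comparison with the PVM sample, a minimal MPI version of the same ping-pong might look like the following sketch (assuming two ranks, e.g. launched with mpirun -np 2; MESSAGESIZE and ITERATIONS are assumed constants standing in for whatever the PVM sample defined). Note there is no pack/unpack step: the type and count are given directly to send and receive, in line with MPI's copy-free goal.

```c
#include <mpi.h>
#include <stdio.h>

#define MESSAGESIZE 1024   /* assumed constant, as in the PVM sample */
#define ITERATIONS  100    /* assumed constant */

int main(int argc, char **argv) {
    int rank, i;
    int message[MESSAGESIZE];
    const int tag = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* ranks replace PVM tids */

    if (rank == 0)                         /* initialize the message */
        for (i = 0; i < MESSAGESIZE; i++)
            message[i] = '1';

    /* Pass the message back and forth; no pack/unpack calls needed. */
    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(message, MESSAGESIZE, MPI_INT, 1, tag, MPI_COMM_WORLD);
            MPI_Recv(message, MESSAGESIZE, MPI_INT, 1, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(message, MESSAGESIZE, MPI_INT, 0, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(message, MESSAGESIZE, MPI_INT, 0, tag, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Unlike the PVM version, there is no spawn step: mpirun starts both processes, and MPI_COMM_WORLD already names them, so no joingroup or barrier is needed before the exchange.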
MPI Communicators
- Provide a named set of processes for communication
  - plus a context, a system-allocated unique tag
- All processes within a communicator can be named
  - numbered from 0…n-1
- Allow libraries to be constructed
  - the application creates communicators
  - the library uses them
  - prevents problems with posting wildcard receives
    - adds a communicator scope to each receive
- All programs start with MPI_COMM_WORLD
  - functions for creating communicators from other communicators (split, duplicate, etc.)
  - functions for finding out about the processes within a communicator (size, my_rank, …)

Non-Blocking Point-to-Point Functions
- Two parts
  - post the operation
  - wait for results
- Also include a poll/test option
  - checks whether the operation has finished
- Semantics
  - must not alter the buffer while the operation is pending (until wait returns or test returns true)
  - data is not valid for a receive until the operation completes

Collective Communication
- The communicator specifies the process group that participates
- Various operations, which may be optimized in an MPI implementation
  - barrier synchronization
  - broadcast
  - gather/scatter (with one destination, or all in the group)
  - reduction operations, predefined and user-defined
    - also with one destination or all in the group
  - scan: prefix reductions
- Collective operations may or may …
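The three ideas above can be tied together in one sketch, assuming a standard MPI installation (not code from the lecture): a communicator derived with MPI_Comm_split, a non-blocking receive posted and completed with MPI_Wait, and an MPI_Reduce collective delivering a sum to rank 0.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int world_rank, world_size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Communicators: split MPI_COMM_WORLD into even/odd groups.
       Each new communicator renumbers its members from 0..n-1. */
    MPI_Comm sub;
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub);

    int sub_rank;
    MPI_Comm_rank(sub, &sub_rank);

    /* Non-blocking point-to-point: post the receive, then wait.
       The buffer must not be read or written until MPI_Wait returns. */
    int inbox = 0;
    MPI_Request req;
    if (world_size > 1) {
        if (world_rank == 0) {
            MPI_Irecv(&inbox, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* ... unrelated work could overlap the communication here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* inbox now valid */
        } else if (world_rank == 1) {
            MPI_Send(&world_rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    /* Collective: element-wise sum over all ranks, result at rank 0. */
    int local = world_rank + 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (world_rank == 0)
        printf("sum over %d ranks = %d\n", world_size, total);

    MPI_Comm_free(&sub);
    MPI_Finalize();
    return 0;
}
```

MPI_Test could replace MPI_Wait where rank 0 wants to poll for completion instead of blocking, which is the poll/test option mentioned above.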

