RIT EECC 756 - Message Passing Interface

EECC756 - Shaaban, lec # 7, Spring 2002, 4-11-2002

Outline
• Message Passing Interface (MPI)
• Major Features of MPI 1.2
• Compiling and Running MPI Programs
• MPI Static Process Creation & Process Rank
• MPI Process Initialization & Clean-Up
• MPI Communicators, Handles
• MPI Datatypes
• MPI Indispensable Functions
• MPI_Init, MPI_Finalize
• Slide 10
• Slide 11
• Simple MPI C Program Hello.c
• Multiple Program Multiple Data (MPMD) in MPI
• Variables Within Individual MPI Processes
• Blocking Send/Receive Routines
• Blocking Send/Receive Routines
• Slide 17
• Slide 18
• MPI Tags
• Blocking Send/Receive Code Example
• Nonblocking Send/Receive Routines
• Nonblocking Send Code Example
• Slide 23
• Slide 24
• MPI Collective Communications Functions
• Partitioned Sum MPI Program Example: sum.c (part 1 of 3)
• Partitioned Sum MPI Program Example: sum.c (part 2 of 3)
• Partitioned Sum MPI Program Example: sum.c (part 3 of 3)
• MPI Program Example To Compute Pi
• Slide 30

Message Passing Interface (MPI)
• MPI, the Message Passing Interface, is a library and a software standard developed by the MPI Forum to make use of the most attractive features of existing message passing systems for parallel programming.
• The public release of version 1.0 of MPI was made in June 1994, followed by version 1.1 in June 1995 and, shortly after, by version 1.2, which mainly included corrections and specification clarifications.
• An MPI 1 process consists of a C or Fortran 77 program which communicates with other MPI processes by calling MPI routines. The MPI routines provide the programmer with a consistent interface across a wide variety of different platforms.
• Version 2.0, a major update of MPI, was released in July 1997, adding among other features support for dynamic process creation, one-sided communication, and bindings for Fortran 90 and C++. MPI 2.0 features are not covered here.
• Several commercial and free implementations of MPI 1.2 exist. The most widely used free implementations of MPI 1.2 are:
  – LAM-MPI: developed at the University of Notre Dame, http://www.lam-mpi.org/
  – MPI-CH: developed at Argonne National Laboratory, http://www-unix.mcs.anl.gov/mpi/mpich/
• (MPI-CH 1.2.3 is the MPI implementation installed on the CE cluster.)

Major Features of MPI 1.2
The standard includes 125 functions to provide:
  – Point-to-point message passing
  – Collective communication
  – Support for process groups
  – Support for communication contexts
  – Support for application topologies
  – Environmental inquiry routines
  – Profiling interface

Compiling and Running MPI Programs
• To compile MPI C programs use:
    mpicc [linking flags] program_name.c -o program_name
  Ex: mpicc hello.c -o hello
• To run a compiled MPI program use:
    mpirun -np <number of processes> [mpirun_options] -machinefile <machinefile> <program name and arguments>
  The machinefile contains a list of the machines on which you want your MPI programs to run.
  Ex: mpirun -np 4 -machinefile .rhosts hello
  starts four processes on the top four machines listed in the machinefile .rhosts, all running the program hello.

MPI Static Process Creation & Process Rank
• MPI 1.2 does not support dynamic process creation, i.e. one cannot spawn a process from within a process as can be done with PVM:
  – All processes must be started together at the beginning of the computation.
    Ex: mpirun -np 4 -machinefile .rhosts hello
  – There is no equivalent to the PVM pvm_spawn() call.
  – This restriction leads MPI to directly support the single program-multiple data (SPMD) model of computation, where each process has the same executable code.
• MPI process rank: a number between 0 and N-1 identifying the process, where N is the total number of MPI processes involved in the computation.
  – The MPI function MPI_Comm_rank reports the rank of the calling process.
  – The MPI function MPI_Comm_size reports the total number of MPI processes.
  (A minimal example program using these calls appears after these slides.)

MPI Process Initialization & Clean-Up
• The first MPI routine called in any MPI program must be the initialization routine MPI_INIT. Every MPI program must call this routine once, before any other MPI routines.
• An MPI program should call the MPI routine MPI_FINALIZE when all communications have completed. This routine cleans up all MPI data structures, etc.
• MPI_FINALIZE does NOT cancel outstanding communications, so it is the responsibility of the programmer to make sure all communications have completed.
  – Once this routine is called, no other calls can be made to MPI routines, not even MPI_INIT, so a process cannot later re-enroll in MPI.

MPI Communicators, Handles
• MPI_INIT defines a default communicator called MPI_COMM_WORLD for each process that calls it.
• All MPI communication calls require a communicator argument, and MPI processes can only communicate if they share a communicator.
• Every communicator contains a group, which is a list of processes. The processes are ordered and numbered consecutively from zero, the number of each process being its rank. The rank identifies each process within the communicator.
• The group of MPI_COMM_WORLD is the set of all MPI processes.
• MPI maintains internal data structures related to communications etc., and these are referenced by the user through handles. Handles are returned to the user from some MPI calls and can be used in other MPI calls.

MPI Datatypes
• The data in a message to be sent or received is described by a triple (address, count, datatype), where an MPI datatype is recursively defined as:
  – Predefined, corresponding to a data type from the language (e.g., MPI_INT, MPI_DOUBLE_PRECISION)
  – A contiguous array of MPI datatypes
  – A strided block of datatypes
  – An indexed array of blocks of datatypes
  – An arbitrary structure of datatypes
• There are MPI functions to construct custom datatypes, such as an array of (int, float) pairs, or a row of a matrix stored columnwise. (A sketch of a strided-block datatype appears after these slides.)
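Example: minimal SPMD MPI program
To tie together the initialization, rank/size, and clean-up routines described above, here is a minimal sketch of an MPI C program. It is my own illustration and not necessarily identical to the Hello.c example listed in the full slide set.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* must be the first MPI call          */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process: 0 .. size-1   */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of MPI processes       */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* clean up; no MPI calls after this   */
        return 0;
    }

It is compiled and run as on the "Compiling and Running MPI Programs" slide, e.g. mpicc hello.c -o hello followed by mpirun -np 4 -machinefile .rhosts hello. All four processes execute the same code (SPMD) and differ only in the rank reported by MPI_Comm_rank.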
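Example: a strided-block derived datatype
As a sketch of the "strided block of datatypes" case from the MPI Datatypes slide: in C, where matrices are stored row-major, one column of an N x N matrix can be described with MPI_Type_vector and sent as a single unit. The helper function send_column and the value of N below are my own assumptions for illustration, and the blocking MPI_Send routine it uses is covered later in the lecture.

    #include <mpi.h>

    #define N 4   /* matrix dimension; an assumed value for illustration */

    /* Hypothetical helper: send one column of an N x N row-major matrix of
       doubles as a single message, using a strided-block derived datatype. */
    void send_column(double a[N][N], int col, int dest, MPI_Comm comm)
    {
        MPI_Datatype column_t;

        /* N blocks of 1 double, successive blocks N doubles apart in memory */
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column_t);
        MPI_Type_commit(&column_t);

        /* the (address, count, datatype) triple: start of the column, 1, column_t */
        MPI_Send(&a[0][col], 1, column_t, dest, 0, comm);

        MPI_Type_free(&column_t);
    }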

