Introduction to MPI
Nischint [email protected]
November 2007

Outline
• Overview of MPI
• Background on MPI
• Reasons for using an MPI standard
• MPI Operation
• MPI Programming Model
• MPI Library
• Environment Management Routines
• MPI Sample Program: Environment Management Routines
• Point to Point Communication Routines
• MPI Sample Program: Send and Receive
• Collective Communication Routines
• Sources of Deadlocks
• MPICH – MPI Implementation
• MPI Program Compilation (Unix)
• MPI Program Execution
• MPICH on Windows
• THE END

What can you expect?
• Overview of MPI
• Basic MPI commands
• How to parallelize and execute a program using MPI & MPICH2
What is outside the scope?
• Technical details of MPI
• MPI implementations other than MPICH
• Hardware-specific optimization techniques

Overview of MPI
• MPI stands for Message Passing Interface.
• What is the Message Passing Interface?
–It is not a programming language or compiler specification.
–It is not a specific implementation or product.
–MPI is a specification for the developers and users of message passing libraries.
–By itself, it is NOT a library, but rather the specification of what such a library should be.
–The specification lets you build libraries that allow you to solve problems in parallel, using message passing to communicate between processes.
–It provides bindings for widely used programming languages such as Fortran and C/C++.

Background on MPI
• Early vendor systems (Intel’s NX, IBM’s EUI, TMC’s CMMD) were not portable (or very capable).
• Early portable systems (PVM, p4, TCGMSG, Chameleon) were mainly research efforts:
–Did not address the full spectrum of issues
–Lacked vendor support
–Were not implemented at the most efficient level
• The MPI Forum organized in 1992 with broad participation by:
–vendors: IBM, Intel, TMC, SGI, Convex, Meiko
–portability library writers: PVM, p4
–users: application scientists and library writers
• The Forum finished in 18 months; the library standard was defined by a committee of vendors, implementers, and parallel programmers.

Reasons for using an MPI standard
• Standardization - MPI is the only message passing library that can be considered a standard. It is supported on virtually all HPC platforms and has practically replaced all previous message passing libraries.
• Portability - There is no need to modify your source code when you port your application to a different platform that supports (and is compliant with) the MPI standard.
• Performance Opportunities - Vendor implementations should be able to exploit native hardware features to optimize performance.
• Functionality - Over 115 routines are defined in MPI-1 alone.
• Availability - A variety of implementations are available, both vendor and public domain.

MPI Operation
[Figure: four processes (ranks 0–3) in a communicator exchange buffer segments A1–A4 and B1–B4 through Send, Receive, and Processing steps.]

MPI Programming Model: the MPI Library
• Environment Management Routines — e.g. MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Finalize
• Point to Point Communication Routines
–Blocking routines — e.g. MPI_Send, MPI_Recv
–Non-blocking routines — e.g. MPI_Isend, MPI_Irecv
• Collective Communication Routines — e.g. MPI_Barrier, MPI_Bcast

Environment Management Routines
• MPI_Init
–Initializes the MPI execution environment. This function must be called in every MPI program, must be called before any other MPI function, and must be called only once in an MPI program. For C programs, MPI_Init may be used to pass the command line arguments to all processes, although this is not required by the standard and is implementation dependent.
  C: MPI_Init (&argc,&argv)
  Fortran: MPI_INIT (ierr)
• MPI_Comm_rank
–Determines the rank of the calling process within the communicator. Initially, each process is assigned a unique integer rank between 0 and (number of processes - 1) within the communicator MPI_COMM_WORLD. This rank is often referred to as a task ID. If a process becomes associated with other communicators, it will have a unique rank within each of these as well.
  C: MPI_Comm_rank (comm,&rank)
  Fortran: MPI_COMM_RANK (comm,rank,ierr)
• MPI_Comm_size
–Determines the number of processes in the group associated with a communicator. Generally used within the communicator MPI_COMM_WORLD to determine the number of processes being used by your application.
  C: MPI_Comm_size (comm,&size)
  Fortran: MPI_COMM_SIZE (comm,size,ierr)
• MPI_Finalize
–Terminates the MPI execution environment. This function should be the last MPI routine called in every MPI program; no other MPI routines may be called after it.
  C: MPI_Finalize ()
  Fortran: MPI_FINALIZE (ierr)

MPI Sample Program: Environment Management Routines

In C:

    /* the mpi include file */
    #include "mpi.h"
    #include <stdio.h>

    int main( int argc, char *argv[] )
    {
        int rank, size;
        /* Initialize MPI */
        MPI_Init( &argc, &argv );
        /* How many processes are there? */
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        /* What process am I (what is my rank)? */
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        printf( "I am %d of %d\n", rank, size );
        MPI_Finalize();
        return 0;
    }

In Fortran:

    program main
    ! the mpi include file
        include 'mpif.h'
        integer ierr, rank, size
    ! Initialize MPI
        call MPI_INIT( ierr )
    ! How many processes are there?
        call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )
    ! What process am I (what is my rank)?
        call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
        print *, 'I am ', rank, ' of ', size
        call MPI_FINALIZE( ierr )
    end

Point to Point Communication Routines
• MPI_Send
–Basic blocking send operation. The routine returns only after the application buffer in the sending task is free for reuse. Note that this routine may be implemented differently on different systems: the MPI standard permits the use of a system buffer but does not require it.
  C: MPI_Send (&buf,count,datatype,dest,tag,comm)
  Fortran: MPI_SEND (buf,count,datatype,dest,tag,comm,ierr)
• MPI_Recv
–Receives a message and blocks until the requested data is available in the application buffer in the receiving task.


GT AE 6382 - Introduction to MPI
