Berkeley COMPSCI C267 - Message Passing Programming (MPI)

CS 267: Applications of Parallel Computers
Lecture 7: Message Passing Programming (MPI)
Kathy Yelick
http://www-inst.eecs.berkeley.edu/~cs267
9/19/2001

Data Parallel Overview
• Single thread of control consisting of parallel operations.
• Imagine a processor per array element.
• Basic kinds of operations (with int s; double a[1000];):
  • Pointwise: a = a + a
  • Broadcast: a = s * a
  • Nearest neighbor communication: a(0..n-1) = a(1..n)
  • Reductions: s = sum(a)

Example: Fish in a Current in Matlab

  integer, parameter :: nfish = 10000
  complex fishp(nfish), fishv(nfish), force(nfish), accel(nfish)
  real fishm(nfish)
  ...
  do while (t < tfinal)
     t = t + dt
     fishp = fishp + dt*fishv           ! parallel assignment
     call compute_current(force, fishp)
     accel = force/fishm                ! point-wise parallel operators
     fishv = fishv + dt*accel
     ...
  enddo
  ...

Fish in a Current
• Reduce an array to a scalar under an associative binary operation:
  • sum, product, min, max, etc.
  • (((a1 + a2) + a3) + a4) = (a1 + a2) + (a3 + a4)
  • The opposite of broadcast.

  do while (t < tfinal)
     t = t + dt
     fishp = fishp + dt*fishv
     call compute_current(force, fishp)
     accel = force/fishm
     fishv = fishv + dt*accel
     fishspeed = abs(fishv)
     mnsqvel = sqrt(sum(fishspeed*fishspeed)/nfish)    ! reduction
     dt = .1*maxval(fishspeed) / maxval(abs(accel))    ! reductions
  enddo

• The array assignments execute in parallel; sum and maxval are reductions.
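Looking ahead to the message-passing version of this pattern: an associative reduction such as the mnsqvel sum maps onto MPI's collective operation MPI_Allreduce. The following is a minimal sketch in C, not from the slides; the function name mean_square_velocity and the variables local_speed2 (each process's squared speeds) and nlocal (their local count) are illustrative names.

  #include <math.h>
  #include "mpi.h"

  /* Sketch: global mean-square velocity from per-process partial data.
     local_speed2 and nlocal are assumed to be set up elsewhere. */
  double mean_square_velocity(double *local_speed2, int nlocal, int nfish)
  {
      double local_sum = 0.0, global_sum = 0.0;
      int i;

      for (i = 0; i < nlocal; i++)       /* local part of the sum */
          local_sum += local_speed2[i];

      /* combine the partial sums across all processes; every rank
         receives the same global result */
      MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                    MPI_SUM, MPI_COMM_WORLD);

      return sqrt(global_sum / nfish);
  }

Each process reduces its own chunk locally, and the collective combines the partial sums, exploiting exactly the associativity shown above.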
From Data Parallel to Message Passing
• Data parallelism is elegant, but:
  • Wrong level of granularity for current machines.
  • The mapping problem (1M-fold → 100-fold parallelism) is hard; the HPF compiler does this with some success.
  • Mentality of users: performance is everything.
• Message passing is:
  • More general than data parallelism: not tightly synchronized.
  • Potentially faster: the programmer does the mapping.
  • What people use in practice on any machine larger than 100 processors (including shared memory machines).

What is MPI?
• A message-passing library specification:
  • an extended message-passing model
  • not a language or compiler specification
  • not a specific implementation or product
• For parallel computers, clusters, and heterogeneous networks.
• Designed to provide access to advanced parallel hardware for:
  • end users
  • library writers
  • tool developers
• Not designed for fault tolerance.

History of MPI
• MPI Forum: government, industry, and academia.
  • Formal process began November 1992
  • Draft presented at Supercomputing 1993
  • Final standard (1.0) published May 1994
  • Clarifications (1.1) published June 1995
  • MPI-2 process began April 1995
  • MPI-1.2 finalized July 1997
  • MPI-2 finalized July 1997
• Current status (MPI-1):
  • Public domain versions available from ANL/MSU (MPICH) and OSC (LAM)
  • Proprietary versions available from all vendors
• Portability is the key reason why MPI is important.

Parallel Programming Overview
Basic parallel programming problems (for MPI):
1. Creating parallelism
   • SPMD model
2. Communication between processors
   • Basic
   • Collective
   • Non-blocking
3. Synchronization
   • Point-to-point synchronization is done by message passing
   • Global synchronization is done by collective communication

SPMD Model
• Single Program Multiple Data model of programming:
  • Each processor has a copy of the same program.
  • All run it at their own rate and may take different paths through the code.
  • Process-specific control through variables like:
    • my process number
    • total number of processors
  • Processors may synchronize, but no synchronization is implicit.
• Many people equate SPMD programming with message passing, but they shouldn't.

Hello World (Trivial)
• A simple, but not very interesting, SPMD program.
• To make it legal MPI, we need to add 2 lines (marked below):

  #include "mpi.h"
  #include <stdio.h>
  int main( int argc, char *argv[] )
  {
      MPI_Init( &argc, &argv );      /* added */
      printf( "Hello, world!\n" );
      MPI_Finalize();                /* added */
      return 0;
  }

Hello World (Independent Processes)
• We can use MPI calls to get basic values for controlling processes:

  #include "mpi.h"
  #include <stdio.h>
  int main( int argc, char *argv[] )
  {
      int rank, size;
      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      MPI_Comm_size( MPI_COMM_WORLD, &size );
      printf( "I am %d of %d\n", rank, size );
      MPI_Finalize();
      return 0;
  }

• The output lines may print in any order.

MPI Basic Send/Receive
• We need to fill in the details in:

    Process 0               Process 1
    Send(data)   ------->   Receive(data)

• Things that need specifying:
  • How will processes be identified?
  • How will "data" be described?
  • How will the receiver recognize/screen messages?
  • What will it mean for these operations to complete?

Identifying Processes: MPI Communicators
• In general, processes can be subdivided into groups:
  • a group for each component of a model (chemistry, mechanics, ...)
  • a group to work on a subdomain
• Supported using a "communicator": a message context and a group of processes.
• More on this later...
• In a simple MPI program all processes do the same thing:
  • The set of all processes makes up the "world": MPI_COMM_WORLD.
  • Processes are named by number (called "rank").

Point-to-Point Example
• Process 0 sends array "A" to process 1, which receives it as "B".

  Process 0:
    #define TAG 123
    double A[10];
    MPI_Send(A, 10, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);

  Process 1:
    #define TAG 123
    double B[10];
    MPI_Status status;
    MPI_Recv(B, 10, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &status);
  or
    MPI_Recv(B, 10, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);

Describing Data: MPI Datatypes
• The data in a message to be sent or received is described by a triple (address, count, datatype), where an MPI datatype is recursively defined as:
  • predefined, corresponding to a data type from the language (e.g., MPI_INT, MPI_DOUBLE_PRECISION)
  • a contiguous array of MPI datatypes
  • a strided block of datatypes
  • an indexed array of blocks of datatypes
  • an arbitrary structure of datatypes
• There are MPI functions to construct custom datatypes, such as an array of (int, float) pairs, or a row of a matrix stored columnwise.

MPI Predefined Datatypes
• C: MPI_INT, MPI_FLOAT, MPI_DOUBLE, MPI_CHAR, MPI_LONG, MPI_UNSIGNED
• Fortran: MPI_INTEGER, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_CHARACTER, MPI_COMPLEX, MPI_LOGICAL
• Language-independent: MPI_BYTE
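To make the last custom-datatype example concrete: a row of an N-by-M matrix of doubles stored columnwise consists of M elements spaced N doubles apart in memory, which MPI_Type_vector can describe directly. The sketch below is not from the slides; the function send_row and the parameters matrix, N, M, r, dest, and tag are illustrative names.

  #include "mpi.h"

  /* Sketch: send row r of an N x M matrix of doubles stored column-major.
     Consecutive row elements are N doubles apart in memory. */
  void send_row(double *matrix, int N, int M, int r, int dest, int tag)
  {
      MPI_Datatype row_type;

      /* M blocks of 1 double each, consecutive blocks N doubles apart */
      MPI_Type_vector(M, 1, N, MPI_DOUBLE, &row_type);
      MPI_Type_commit(&row_type);

      /* row r starts at element (r,0), i.e. offset r in column-major storage */
      MPI_Send(matrix + r, 1, row_type, dest, tag, MPI_COMM_WORLD);

      MPI_Type_free(&row_type);
  }

The receiver does not need the same derived type; it can receive into a contiguous buffer of M doubles, since only the type signature (M doubles) has to match.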
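Putting the preceding pieces together (SPMD structure, ranks, and the point-to-point example), the following is a complete sketch, assuming at least two processes, in which process 0 sends array A to process 1; it is assembled from the snippets above rather than taken from a single slide.

  #include <stdio.h>
  #include "mpi.h"

  #define TAG 123

  int main( int argc, char *argv[] )
  {
      int rank, size, i;
      double A[10], B[10];
      MPI_Status status;

      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      MPI_Comm_size( MPI_COMM_WORLD, &size );

      if (size < 2) {
          if (rank == 0) printf( "Run with at least 2 processes.\n" );
          MPI_Finalize();
          return 0;
      }

      if (rank == 0) {
          /* process 0: fill A and send it to process 1 */
          for (i = 0; i < 10; i++) A[i] = i;
          MPI_Send( A, 10, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD );
      } else if (rank == 1) {
          /* process 1: receive the array as B */
          MPI_Recv( B, 10, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &status );
          printf( "Process 1 received B[9] = %g\n", B[9] );
      }

      MPI_Finalize();
      return 0;
  }

With an implementation such as MPICH, this would typically be compiled with mpicc and launched with mpirun -np 2; the test on rank is what turns one program into different behavior on each process.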
