Berkeley COMPSCI C267 - Lecture Notes

Contents
• CS 267: Distributed Memory Machines and Programming
• Programming Distributed Memory Machines with Message Passing
• Message Passing Libraries (1)
• Message Passing Libraries (2)
• Novel Features of MPI
• MPI References
• Books on MPI
• Programming With MPI
• Finding Out About the Environment
• Hello (C)
• Notes on Hello World
• MPI Basic Send/Receive
• Some Basic Concepts
• MPI Datatypes
• MPI Tags
• MPI Basic (Blocking) Send
• MPI Basic (Blocking) Receive
• A Simple MPI Program
• Retrieving Further Information
• Tags and Contexts
• MPI is Simple
• Another Approach to Parallelism
• Collective Operations in MPI
• More on Message Passing
• Buffers
• Avoiding Buffering
• Blocking and Non-blocking Communication
• Sources of Deadlocks
• Some Solutions to the "unsafe" Problem
• More Solutions to the "unsafe" Problem
• MPI's Non-blocking Operations
• Communication Modes
• Other Point-to-Point Features
• MPI Collective Communication
• Synchronization
• Collective Data Movement
• Comments on Broadcast
• More Collective Data Movement
• Collective Computation
• MPI Collective Routines
• MPI Built-in Collective Computation Operations
• Not Covered
• Backup Slides
• Implementing Synchronous Message Passing
• Implementing Asynchronous Message Passing
• Safe Asynchronous Message Passing

CS 267: Distributed Memory Machines and Programming
02/13/2007, CS267 Lecture 7
Jonathan [email protected]/~skamil/cs267

Programming Distributed Memory Machines with Message Passing
Most slides from Kathy Yelick's 2007 lecture.

Message Passing Libraries (1)
• Many "message passing libraries" were once available:
  • Chameleon, from ANL
  • CMMD, from Thinking Machines
  • Express, commercial
  • MPL, native library on the IBM SP-2
  • NX, native library on the Intel Paragon
  • Zipcode, from LLL
  • PVM, Parallel Virtual Machine, public, from ORNL/UTK
  • Others...
• MPI, the Message Passing Interface, is now the industry standard.
• Standards are needed to write portable code.

Message Passing Libraries (2)
• All communication and synchronization require subroutine calls.
• There are no shared variables.
• The program runs on a single processor just like any uniprocessor program, except for the calls to the message passing library.
• Subroutines for:
  • Communication
    • Pairwise, or point-to-point: Send and Receive
    • Collectives, in which all processors get together to:
      • Move data: Broadcast, Scatter/Gather
      • Compute and move: sum, product, max, ... of data on many processors (see the sketch after this list)
  • Synchronization
    • Barrier
    • No locks, because there are no shared variables to protect
  • Enquiries
    • How many processes? Which one am I? Any messages waiting?
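As a concrete illustration of the "compute and move" collectives just listed, here is a minimal sketch that sums one value per process using the standard MPI_Reduce routine; the variable names and values are illustrative, not from the lecture.

    #include <mpi.h>
    #include <stdio.h>

    /* Each process contributes one partial value; MPI_Reduce combines the
       values with MPI_SUM and leaves the result on the root (rank 0). */
    int main( int argc, char *argv[] )
    {
        int rank;
        double partial, total;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        partial = rank + 1.0;   /* illustrative local contribution */
        MPI_Reduce( &partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                    MPI_COMM_WORLD );

        if (rank == 0)
            printf( "sum = %g\n", total );  /* only the root has the result */

        MPI_Finalize();
        return 0;
    }

Only the root receives the combined result here; MPI_Allreduce is the variant that leaves it on every process.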
Novel Features of MPI
• Communicators encapsulate communication spaces for library safety.
• Datatypes reduce copying costs and permit heterogeneity.
• Multiple communication modes allow precise buffer management.
• Extensive collective operations for scalable global communication.
• Process topologies permit efficient process placement and user views of process layout.
• A profiling interface encourages portable tools.
(Slide source: Bill Gropp, ANL)

MPI References
• The Standard itself: at http://www.mpi-forum.org, with all MPI official releases in both PostScript and HTML.
• Other information on the Web: at http://www.mcs.anl.gov/mpi, with pointers to lots of material, including other talks and tutorials, a FAQ, and other MPI pages.
(Slide source: Bill Gropp, ANL)

Books on MPI
• Using MPI: Portable Parallel Programming with the Message-Passing Interface (2nd edition), by Gropp, Lusk, and Skjellum, MIT Press, 1999.
• Using MPI-2: Portable Parallel Programming with the Message-Passing Interface, by Gropp, Lusk, and Thakur, MIT Press, 1999.
• MPI: The Complete Reference, Vol. 1: The MPI Core, by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press, 1998.
• MPI: The Complete Reference, Vol. 2: The MPI Extensions, by Gropp, Huss-Lederman, Lumsdaine, Lusk, Nitzberg, Saphir, and Snir, MIT Press, 1998.
• Designing and Building Parallel Programs, by Ian Foster, Addison-Wesley, 1995.
• Parallel Programming with MPI, by Peter Pacheco, Morgan Kaufmann, 1997.
(Slide source: Bill Gropp, ANL)

Programming With MPI
• MPI is a library: all operations are performed with routine calls.
• Basic definitions are in:
  • mpi.h for C
  • mpif.h for Fortran 77 and 90
  • the MPI module for Fortran 90 (optional)
• First program:
  • Write out the process number.
  • Write out some variables (to illustrate the separate name spaces).
(Slide source: Bill Gropp, ANL)

Finding Out About the Environment
• Two important questions arise early in a parallel program:
  • How many processes are participating in this computation?
  • Which one am I?
• MPI provides functions to answer these questions:
  • MPI_Comm_size reports the number of processes.
  • MPI_Comm_rank reports the rank, a number between 0 and size-1, identifying the calling process.
(Slide source: Bill Gropp, ANL)

Hello (C)

    #include "mpi.h"
    #include <stdio.h>

    int main( int argc, char *argv[] )
    {
        int rank, size;
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        printf( "I am %d of %d\n", rank, size );
        MPI_Finalize();
        return 0;
    }

(Slide source: Bill Gropp, ANL)

Notes on Hello World
• All MPI programs begin with MPI_Init and end with MPI_Finalize.
• MPI_COMM_WORLD is defined by mpi.h (in C) or mpif.h (in Fortran) and designates all processes in the MPI "job".
• Each statement executes independently in each process, including the printf/print statements.
• I/O is not part of MPI-1 but is in MPI-2:
  • print and write to standard output or error are not part of either MPI-1 or MPI-2;
  • the output order is undefined (it may be interleaved by character, by line, or by blocks of characters).
• The MPI-1 Standard does not specify how to run an MPI program, but many implementations provide mpirun -np 4 a.out.
(Slide source: Bill Gropp, ANL)

MPI Basic Send/Receive
• We need to fill in the details in:

      Process 0                  Process 1
      Send(data)  ----------->   Receive(data)

• Things that need specifying:
  • How will "data" be described?
  • How will processes be identified?
  • How will the receiver recognize/screen messages?
  • What will it mean for these operations to complete?
(Slide source: Bill Gropp, ANL)
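The slides that answer these four questions fall outside this preview, but as a hedged sketch, the standard blocking calls fill them in roughly as follows: data is described by a (buffer, count, datatype) triple, processes are identified by a rank within a communicator, messages are screened by a tag, and a receive completes only when a matching message has arrived. The payload and tag values below are illustrative.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal blocking send/receive sketch; assumes at least 2 processes. */
    int main( int argc, char *argv[] )
    {
        int rank, data = 0;
        MPI_Status status;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        if (rank == 0) {
            data = 42;   /* illustrative payload */
            /* (buffer, count, datatype), dest rank, tag, communicator */
            MPI_Send( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
        } else if (rank == 1) {
            /* matched against the sender's (source, tag, communicator) */
            MPI_Recv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
            printf( "received %d\n", data );
        }

        MPI_Finalize();
        return 0;
    }

Note that MPI_Send is free to complete as soon as its buffer can be reused, possibly before the matching receive has started; the later slides on buffering and sources of deadlock turn on exactly this point.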
Some Basic Concepts
• Processes can be collected into groups.
• Each message is sent in a context, and must be received in the same context.
• Provides necessary support for ...
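To make groups and contexts concrete, here is a minimal sketch, not from the lecture, that splits MPI_COMM_WORLD into two sub-communicators using the standard MPI_Comm_split routine; the even/odd split is illustrative. A message sent on one communicator cannot be received on another, which is the "same context" guarantee above.

    #include <mpi.h>
    #include <stdio.h>

    /* Split the processes into even-rank and odd-rank groups. Ranks are
       renumbered from 0 within each new communicator. */
    int main( int argc, char *argv[] )
    {
        int world_rank, sub_rank;
        MPI_Comm subcomm;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );

        /* color picks the group; key orders the ranks within it */
        MPI_Comm_split( MPI_COMM_WORLD, world_rank % 2, world_rank,
                        &subcomm );
        MPI_Comm_rank( subcomm, &sub_rank );

        printf( "world rank %d -> sub rank %d\n", world_rank, sub_rank );

        MPI_Comm_free( &subcomm );
        MPI_Finalize();
        return 0;
    }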

