UT CS 378 - MPI Groups, Communicators and Topologies


MPI Groups, Communicators and Topologies

Outline:
• Groups and communicators
• Facts about groups and communicators
• Typical usage
• Virtual topologies
• General splitting of communicators

Groups and communicators

• In our case studies, we saw examples where collective communication needed to be performed by subsets of the processes in the computation
  – e.g., the 2-D version of MVM
• MPI provides routines for
  – defining new process groups from subsets of existing process groups like MPI_COMM_WORLD
  – creating a new communicator for a new process group
  – performing collective communication within that process group
  – organizing process groups into virtual topologies
    • most common use: permits processes within a group to be given multidimensional numbers rather than the standard 0...n-1 numbering

Facts about groups and communicators

• Group:
  – an ordered set of processes
  – each process in a group has a unique integer id called its rank within that group
  – a process can belong to more than one group; rank is always relative to a group
  – groups are "opaque objects": use only MPI-provided routines for manipulating them
• Communicators:
  – all communication must specify a communicator
  – from the programming viewpoint, groups and communicators are equivalent
  – communicators are also "opaque objects"
• Groups and communicators are dynamic objects and can be created and destroyed during the execution of the program

Typical usage

1. Extract the handle of the global group from MPI_COMM_WORLD using MPI_Comm_group
2. Form a new group as a subset of the global group using MPI_Group_incl or MPI_Group_excl
3. Create a new communicator for the new group using MPI_Comm_create
4. Determine the new rank in the new communicator using MPI_Comm_rank
5. Conduct communications using any MPI message-passing routine
6. When finished, free up the new communicator and group (optional) using MPI_Comm_free and MPI_Group_free

Example: exclude rank 0 and let the remaining "slave" processes reduce among themselves. (Two fixes relative to the preview: recv_buf/recv_buf2 are spelled consistently, and commslave is freed only on processes where it exists, since MPI_Comm_create returns MPI_COMM_NULL on excluded processes.)

#include <mpi.h>

int main(int argc, char **argv) {
    int me, count, count2;
    void *send_buf, *recv_buf, *send_buf2, *recv_buf2;
    MPI_Group MPI_GROUP_WORLD, grprem;
    MPI_Comm commslave;
    static int ranks[] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    /* exclude rank 0 from the group, then build a communicator for the rest */
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &grprem);
    MPI_Comm_create(MPI_COMM_WORLD, grprem, &commslave);

    if (me != 0) {
        /* compute on slaves */
        MPI_Reduce(send_buf, recv_buf, count, MPI_INT, MPI_SUM, 1, commslave);
    }
    /* zero falls through immediately to this reduce, others do later... */
    MPI_Reduce(send_buf2, recv_buf2, count2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (me != 0)
        MPI_Comm_free(&commslave);
    MPI_Group_free(&MPI_GROUP_WORLD);
    MPI_Group_free(&grprem);
    MPI_Finalize();
    return 0;
}

Example: divide eight processes into two groups of four and do an all-reduce within each group.

#include <mpi.h>
#include <stdio.h>
#define NPROCS 8

int main(int argc, char *argv[]) {
    int rank, new_rank, sendbuf, recvbuf, numtasks;
    int ranks1[4] = {0, 1, 2, 3}, ranks2[4] = {4, 5, 6, 7};
    MPI_Group orig_group, new_group;
    MPI_Comm new_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    sendbuf = rank;

    /* Extract the original group handle */
    MPI_Comm_group(MPI_COMM_WORLD, &orig_group);

    /* Divide tasks into two distinct groups based upon rank */
    if (rank < NPROCS/2)
        MPI_Group_incl(orig_group, NPROCS/2, ranks1, &new_group);
    else
        MPI_Group_incl(orig_group, NPROCS/2, ranks2, &new_group);

    /* Create the new communicator and then perform collective communication */
    MPI_Comm_create(MPI_COMM_WORLD, new_group, &new_comm);
    MPI_Allreduce(&sendbuf, &recvbuf, 1, MPI_INT, MPI_SUM, new_comm);

    MPI_Group_rank(new_group, &new_rank);
    printf("rank= %d newrank= %d recvbuf= %d\n", rank, new_rank, recvbuf);
    MPI_Finalize();
    return 0;
}

Sample output (line order varies across runs; note recvbuf = 0+1+2+3 = 6 in one group and 4+5+6+7 = 22 in the other):

rank= 7 newrank= 3 recvbuf= 22
rank= 0 newrank= 0 recvbuf= 6
rank= 1 newrank= 1 recvbuf= 6
rank= 2 newrank= 2 recvbuf= 6
rank= 6 newrank= 2 recvbuf= 22
rank= 3 newrank= 3 recvbuf= 6
rank= 4 newrank= 0 recvbuf= 22
rank= 5 newrank= 1 recvbuf= 22

Virtual topologies

• Current view of a group/communicator: a one-dimensional numbering
• In many algorithms, it is convenient to view the processes as being arranged in a grid of some number of dimensions
• One approach: perform the translation from grid coordinates to 1D numbers yourself
  – hard to write the program
  – even harder to read the program
• MPI solution:
  – routines for letting you view processes as if they were organized in a grid of some dimension
  – MPI also permits you to organize processes logically into general graphs, but we will not worry about this
• Key routines:
  – MPI_Cart_create: create a grid from an existing 1D communicator
  – MPI_Cart_get: my grid coordinates
  – MPI_Cart_rank: grid coordinates → 1D rank
  – MPI_Cart_shift: offset in some grid dimension → 1D ranks of source and destination

Example: a 3x3 periodic Cartesian grid with a column shift of +1. (The preview elides the variable declarations; they are filled in here from their uses below.)

#include <mpi.h>
#include <stdio.h>
#define NP 3

int main(int argc, char **argv) {
    /* Grid setup for 3x3 grid */
    int npes, mype, myrow, mycol, isrca, idesa;
    MPI_Comm igcomm;

    /* MPI Cartesian grid information: NP x NP, periodic in both dimensions */
    int ivdim[2] = {NP, NP}, ivper[2] = {1, 1};
    int ivdimx[2], ivperx[2], mygrid[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &mype);

    /* Create Cartesian grid and extract information */
    MPI_Cart_create(MPI_COMM_WORLD, 2, ivdim, ivper, 0, &igcomm);
    MPI_Cart_get(igcomm, 2, ivdimx, ivperx, mygrid);
    MPI_Cart_shift(igcomm, 1, 1, &isrca, &idesa);  /* column offset of +1 */

    myrow = mygrid[0];
    mycol = mygrid[1];
    printf("A (%d) [%d,%d]: %2d ->%2d\n", mype, myrow, mycol, isrca, idesa);
    MPI_Finalize();
    return 0;
}

Output

mype  [myrow,mycol]  src  dst
 0    [0,0]           2    1
 1    [0,1]           0    2
 2    [0,2]           1    0
 3    [1,0]           5    4
 4    [1,1]           3    5
 5    [1,2]           4    3
 6    [2,0]           8    7
 7    [2,1]           6    8
 8    [2,2]           7    6

General splitting of communicators

• MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)
• This is a very powerful routine for splitting a communicator into some number of smaller communicators (one per distinct color value) and assigning ranks within the new communicators (ordered by key)
• Will be useful for 2D MVM:
  – split MPI_COMM_WORLD twice, once to define row communicators and once to define column communicators
  – use the row and column indices in the Cartesian topology as the colors for the calls to MPI_Comm_split

