UH ECE 6347 – Introduction to MPI IV – Derived Data Types


COSC 6374 Parallel Computation
Introduction to MPI IV – Derived Data Types
Edgar Gabriel
Fall 2010

Derived Datatypes
• Basic idea: an interface to describe the memory layout of user data structures, e.g. a structure in C:
  typedef struct {
      char   a;
      int    b;
      double c;
  } mystruct;
  [Figure: memory layout of mystruct]

Derived Datatype examples
• E.g. describing a column or a row of a matrix
• Memory layout in C
• Memory layout in Fortran

How to describe non-contiguous data structures
  typedef struct {
      char   a;
      int    b;
      double c;
  } mystruct;
• using a list-I/O style interface of <address, size> pairs:
  <baseaddr, sizeof(char)>
  <address1, sizeof(int)>
  <address2, sizeof(double)>
• or, with offsets relative to the base address:
  <baseaddr,      sizeof(char)>
  <baseaddr+gap1, sizeof(int)>
  <baseaddr+gap2, sizeof(double)>
  [Figure: baseaddr, address1, address2 and the gaps gap1, gap2 in the memory layout]

…or in MPI terminology…
• a list of <address, count, datatype> tuples:
  <baseaddr,      1, MPI_CHAR>
  <baseaddr+gap1, 1, MPI_INT>
  <baseaddr+gap2, 1, MPI_DOUBLE>
• …leading to the following interface:
  MPI_Type_struct        (int count, int blocklengths[], MPI_Aint displacements[],
                          MPI_Datatype datatypes[], MPI_Datatype *newtype);
  MPI_Type_create_struct (int count, int blocklengths[], MPI_Aint displacements[],
                          MPI_Datatype datatypes[], MPI_Datatype *newtype);

MPI_Type_struct / MPI_Type_create_struct
• MPI_Aint:
  – is an MPI address integer
  – an integer type able to store a memory address
• Displacements are considered to be relative offsets
  ⇒ displacements[0] = 0 in most cases!
  ⇒ displacements are not required to be positive, distinct, or in increasing order
• How to determine the address of an element:
  MPI_Address     (void *element, MPI_Aint *address);
  MPI_Get_address (void *element, MPI_Aint *address);

Addresses in MPI
• Why not use the & operator in C?
  – ANSI C does NOT require that the value of the pointer returned by & is the absolute address of the object!
  – might lead to problems in a segmented memory space
  – usually not a problem
• In Fortran: all data elements passed to a single MPI_Type_struct call have to be in the same common block

Type map vs. type signature
• The type signature is the sequence of basic datatypes used in a derived datatype, e.g.
  typesig(mystruct) = {char, int, double}
• The type map is the sequence of basic datatypes plus the sequence of displacements, e.g.
  typemap(mystruct) = {(char,0), (int,8), (double,16)}
• Type matching rule of MPI: the type signature of sender and receiver has to match
  – including the count argument of the send and receive operations (i.e. unroll the description)
  – the receiver must not define overlapping datatypes
  – the message need not fill the whole receive buffer

Committing and freeing a datatype
• If you want to use a datatype for communication or in an MPI-I/O operation, you have to commit it first:
  MPI_Type_commit (MPI_Datatype *datatype);
• A datatype need not be committed if it is only used to create more complex derived datatypes
  MPI_Type_free (MPI_Datatype *datatype);
• It is illegal to free any of the predefined datatypes

Our previous example then looks as follows:
  mystruct mydata;

  MPI_Address (&mydata,   &baseaddr);
  MPI_Address (&mydata.b, &addr1);
  MPI_Address (&mydata.c, &addr2);

  displ[0] = 0;
  displ[1] = addr1 - baseaddr;
  displ[2] = addr2 - baseaddr;

  dtype[0] = MPI_CHAR;   blength[0] = 1;
  dtype[1] = MPI_INT;    blength[1] = 1;
  dtype[2] = MPI_DOUBLE; blength[2] = 1;

  MPI_Type_struct (3, blength, displ, dtype, &newtype);
  MPI_Type_commit (&newtype);
  [Figure: baseaddr, address1, address2 in the memory layout of mydata]
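For reference, here is a complete, compilable sketch of the same example using the non-deprecated names MPI_Get_address and MPI_Type_create_struct; the send/receive pair, the rank numbers, and the printed output are illustrative additions, not part of the slides:

  /* Sketch: build a derived datatype for mystruct and exchange one
   * instance between ranks 0 and 1. */
  #include <mpi.h>
  #include <stdio.h>

  typedef struct {
      char   a;
      int    b;
      double c;
  } mystruct;

  int main(int argc, char **argv)
  {
      int rank;
      mystruct data;
      MPI_Datatype newtype;
      MPI_Aint baseaddr, addr1, addr2, displ[3];
      int blength[3] = {1, 1, 1};
      MPI_Datatype dtype[3] = {MPI_CHAR, MPI_INT, MPI_DOUBLE};

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* displacements relative to the start of the structure */
      MPI_Get_address(&data,   &baseaddr);
      MPI_Get_address(&data.b, &addr1);
      MPI_Get_address(&data.c, &addr2);
      displ[0] = 0;
      displ[1] = addr1 - baseaddr;
      displ[2] = addr2 - baseaddr;

      MPI_Type_create_struct(3, blength, displ, dtype, &newtype);
      MPI_Type_commit(&newtype);

      if (rank == 0) {
          data.a = 'x'; data.b = 42; data.c = 3.14;
          MPI_Send(&data, 1, newtype, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&data, 1, newtype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("received: %c %d %f\n", data.a, data.b, data.c);
      }

      MPI_Type_free(&newtype);
      MPI_Finalize();
      return 0;
  }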
Basically we are done…
• With MPI_Type_struct we can describe any pattern in memory
• Why are there other MPI datatype constructors?
  – because the description of some datatypes can become rather complex
  – for convenience

MPI_Type_contiguous
  MPI_Type_contiguous (int count, MPI_Datatype datatype, MPI_Datatype *newtype);
• count elements of the same datatype forming a contiguous chunk in memory
  int myvec[4];
  MPI_Type_contiguous (4, MPI_INT, &mybrandnewdatatype);
  MPI_Type_commit (&mybrandnewdatatype);
  MPI_Send (myvec, 1, mybrandnewdatatype, … );
• The input datatype can itself be a derived datatype
  – the end of one element of the derived datatype has to be exactly at the beginning of the next element of the derived datatype

MPI_Type_vector
  MPI_Type_vector (int count, int blocklength, int stride,
                   MPI_Datatype datatype, MPI_Datatype *newtype);
• count blocks of blocklength elements of the same datatype
• between the starts of consecutive blocks there are stride elements of the same datatype
  [Figure: example with count = 3, blocklength = 2, stride = 3]

Example using MPI_Type_vector
• Describe a column of a 2-D matrix in C (here an 8×8 matrix of doubles):
  dtype   = MPI_DOUBLE;
  stride  = 8;
  blength = 1;
  count   = 8;
  MPI_Type_vector (count, blength, stride, dtype, &newtype);
  MPI_Type_commit (&newtype);
• Which column you are really sending depends on the pointer that you pass to the corresponding MPI_Send routine!
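A small sketch (an illustrative addition, not from the slides) of how this column datatype can be used: rank 0 sends column 2 of an 8×8 matrix simply by passing the address of A[0][2], and rank 1 receives the eight doubles into a contiguous array, which the type-matching rule allows.

  /* Sketch: rank 0 sends column 2 of an 8x8 matrix, rank 1 receives it
   * into a contiguous array (type signatures: 8 doubles on both sides). */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, i, j;
      double A[8][8], col[8];
      MPI_Datatype coltype;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* one column of an 8x8 row-major matrix: 8 blocks of 1 double, stride 8 */
      MPI_Type_vector(8, 1, 8, MPI_DOUBLE, &coltype);
      MPI_Type_commit(&coltype);

      if (rank == 0) {
          for (i = 0; i < 8; i++)
              for (j = 0; j < 8; j++)
                  A[i][j] = 10.0 * i + j;
          /* which column is sent is selected by the start address */
          MPI_Send(&A[0][2], 1, coltype, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(col, 8, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          for (i = 0; i < 8; i++)
              printf("col[%d] = %.1f\n", i, col[i]);
      }

      MPI_Type_free(&coltype);
      MPI_Finalize();
      return 0;
  }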
MPI_Type_hvector
  MPI_Type_hvector        (int count, int blocklength, MPI_Aint stride,
                           MPI_Datatype datatype, MPI_Datatype *newtype);
  MPI_Type_create_hvector (int count, int blocklength, MPI_Aint stride,
                           MPI_Datatype datatype, MPI_Datatype *newtype);
• Identical to MPI_Type_vector, except that the stride is given in bytes rather than in number of elements

MPI_Type_indexed
  MPI_Type_indexed (int count, int blocklengths[], int displacements[],
                    MPI_Datatype datatype, MPI_Datatype *newtype);
• The number of elements per block does not have to be identical
• displacements gives the distance from the ‘base’ to the beginning of each block, in multiples of the used datatype
  Example: count = 3
           blocklengths[0] = 2   displacements[0] = 0
           blocklengths[1] = 1   displacements[1] = 3
           blocklengths[2] = 4   displacements[2] = 5

MPI_Type_hindexed
  MPI_Type_hindexed        (int count, int blocklengths[], MPI_Aint displacements[],
                            MPI_Datatype datatype, MPI_Datatype *newtype);
  MPI_Type_create_hindexed (int count, int blocklengths[], MPI_Aint displacements[],
                            MPI_Datatype datatype, MPI_Datatype *newtype);
• Identical to MPI_Type_indexed, except that the displacements are given in bytes and not in multiples of the datatype

Duplicating a …
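As a closing sketch (an illustrative addition, not from the slides), MPI_Type_indexed can describe the upper triangle of a small matrix, since each block may have a different length; the 4×4 matrix and the variable names below are assumptions.

  /* Sketch: send the upper triangle (including the diagonal) of a 4x4
   * matrix in a single call using MPI_Type_indexed. */
  #include <mpi.h>
  #include <stdio.h>

  #define N 4

  int main(int argc, char **argv)
  {
      int rank, i, j;
      double A[N][N];
      int blocklengths[N], displacements[N];
      MPI_Datatype uppertri;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* row i contributes N-i elements, starting at element i*N + i */
      for (i = 0; i < N; i++) {
          blocklengths[i]  = N - i;
          displacements[i] = i * N + i;
      }
      MPI_Type_indexed(N, blocklengths, displacements, MPI_DOUBLE, &uppertri);
      MPI_Type_commit(&uppertri);

      for (i = 0; i < N; i++)
          for (j = 0; j < N; j++)
              A[i][j] = (rank == 0) ? 10.0 * i + j : 0.0;

      if (rank == 0) {
          MPI_Send(A, 1, uppertri, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* receiving with the same datatype places the elements back into
             their original positions; the lower triangle stays untouched */
          MPI_Recv(A, 1, uppertri, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          for (i = 0; i < N; i++) {
              for (j = 0; j < N; j++)
                  printf("%6.1f ", A[i][j]);
              printf("\n");
          }
      }

      MPI_Type_free(&uppertri);
      MPI_Finalize();
      return 0;
  }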

