
AMPI: Adaptive MPI Tutorial
Gengbin Zheng
Parallel Programming Laboratory
University of Illinois at Urbana-Champaign
CS 420, 01/17/19

Motivation

Challenges:
- New-generation parallel applications are dynamically varying: load shifting, adaptive refinement.
- Typical MPI implementations are not naturally suitable for such dynamic applications.
- The set of available processors may not match the natural expression of the algorithm.

AMPI: Adaptive MPI
- MPI with virtualization: VPs ("Virtual Processors")

Outline
- MPI basics
- Charm++/AMPI introduction
- How to write AMPI programs
  - Running with virtualization
  - How to convert an MPI program
- Using AMPI extensions
  - Automatic load balancing
  - Non-blocking collectives
  - Checkpoint/restart mechanism
  - Interoperability with Charm++
  - ELF and global variables
- Future work

MPI Basics
- Standardized message-passing interface
  - Passing messages between processes
  - The standard contains the technical features proposed for the interface
- Minimally, 6 basic routines:

    int MPI_Init(int *argc, char ***argv)
    int MPI_Finalize(void)
    int MPI_Comm_size(MPI_Comm comm, int *size)
    int MPI_Comm_rank(MPI_Comm comm, int *rank)
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                 int source, int tag, MPI_Comm comm, MPI_Status *status)

MPI Basics
- MPI-1.1 contains 128 functions in 6 categories:
  - Point-to-Point Communication
  - Collective Communication
  - Groups, Contexts, and Communicators
  - Process Topologies
  - MPI Environmental Management
  - Profiling Interface
- Language bindings for Fortran and C
- 20+ implementations reported

MPI Basics
- The MPI-2 Standard contains:
  - Further corrections and clarifications for the MPI-1 document
  - Completely new types of functionality:
    - Dynamic processes
    - One-sided communication
    - Parallel I/O
  - Added bindings for Fortran 90 and C++
  - Many new functions: 188 for the C binding

AMPI Status
- Compliance with the MPI-1.1 Standard
  - Missing: error handling, profiling interface
- Partial MPI-2 support
  - One-sided communication (a minimal example follows below)
  - ROMIO integrated for parallel I/O
  - Missing: dynamic process management, language bindings
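The MPI-2 and AMPI Status slides above list one-sided communication without showing what it looks like in code. The following minimal sketch is not from the tutorial; it assumes a standard MPI-2 library and uses made-up buffer names. Rank 0 writes two doubles directly into a memory window exposed by rank 1 with MPI_Put, and MPI_Win_fence calls delimit the access epoch.

    /* Hedged sketch of MPI-2 one-sided communication (not from the tutorial). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        double buf[2] = {0.0, 0.0};   /* memory exposed through the window */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Every rank exposes buf; rank 1's window is the target below. */
        MPI_Win_create(buf, 2 * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);            /* open the access epoch */
        if (rank == 0 && size > 1) {
            double a[2] = {0.3, 0.5};
            /* Write two doubles into rank 1's window at displacement 0. */
            MPI_Put(a, 2, MPI_DOUBLE, 1, 0, 2, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);            /* close the epoch; data is now visible */

        if (rank == 1)
            printf("[%d] buf=%f,%f\n", rank, buf[0], buf[1]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

The point of the one-sided style is that rank 1 never posts a matching receive; only the origin rank describes the transfer.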
MPI Code Example: Hello World!

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int size, myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        printf("[%d] Hello, parallel world!\n", myrank);
        MPI_Finalize();
        return 0;
    }

[Demo: hello, in MPI…]

Another Example: Send/Recv

    ...
    double a[2], b[2];
    MPI_Status sts;
    if (myrank == 0) {
        a[0] = 0.3; a[1] = 0.5;
        MPI_Send(a, 2, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        MPI_Recv(b, 2, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &sts);
        printf("[%d] b=%f,%f\n", myrank, b[0], b[1]);
    }
    ...

[Demo: later…]

Charm++
- Basic idea of processor virtualization
  - The user specifies the interaction between objects (VPs)
  - The RTS maps VPs onto physical processors
  - Typically, # virtual processors > # physical processors
[Figure: user view vs. system implementation of virtualization]

Charm++
- Charm++ characteristics
  - Data-driven objects
  - Asynchronous method invocation
  - Mapping of multiple objects per processor
  - Load balancing, static and run time
  - Portability
- Charm++ features explored by AMPI
  - User-level threads, which do not block the CPU
  - Light-weight: context-switch time ~1 μs
  - Migratable threads

AMPI: MPI with Virtualization
- Each virtual process is implemented as a user-level thread embedded in a Charm++ object
- MPI "processes" are implemented as virtual processes (user-level migratable threads)
[Figure: MPI "processes" mapped onto real processors]

Comparison with Native MPI
- Performance
  - Slightly worse without optimization
  - Being improved, via Charm++
- Flexibility
  - Big runs on any number of processors
  - Fits the nature of the algorithms
Problem setup: 3D stencil calculation of size 240³ run on Lemieux. AMPI runs on any number of PEs (e.g. 19, 33, 105); native MPI needs P = K³.

Building Charm++ / AMPI
- Download website: http://charm.cs.uiuc.edu/download/
  - Please register for better support
- Build Charm++/AMPI:
    > ./build <target> <version> <options> [charmc-options]
- To build AMPI:
    > ./build AMPI net-linux -g (-O3)

How to write AMPI programs (1)
- Write your normal MPI program, and then...
- Link and run with Charm++
  - Build your Charm++ with the AMPI target
  - Compile and link with charmc (include charm/bin/ in your path):
    > charmc -o hello hello.c -language ampi
  - Run with charmrun:
    > charmrun hello

How to write AMPI programs (2)
- Now we can run most MPI programs with Charm++
  - mpirun -np K becomes charmrun prog +pK
  - MPI's machinefile becomes Charm's nodelist file
- Demo: Hello World! (via charmrun)

How to write AMPI programs (3)
- Avoid using global variables
- Global variables are dangerous in multithreaded programs: each AMPI rank is a user-level thread, so all ranks within a process share a single copy of every global. One common fix is to move former globals into per-rank data that is passed explicitly, as sketched below.
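As one illustration of the advice above, the sketch below shows a common way to privatize globals: collect them in a struct and pass a pointer through the call chain, so that each AMPI thread (rank) owns its own copy on its stack. This is not code from the tutorial, and SimState and do_work are made-up names; AMPI also offers other mechanisms (e.g. the ELF/global-variable support listed in the outline).

    /* Hedged sketch: privatizing former globals for AMPI (illustrative names). */
    #include <stdio.h>
    #include <mpi.h>

    /* Previously: int iterations; double tolerance;  (globals, unsafe under AMPI) */
    typedef struct {
        int    iterations;
        double tolerance;
    } SimState;

    static void do_work(SimState *s, int myrank)
    {
        /* Each AMPI thread now operates on its own SimState instance. */
        printf("[%d] iterations=%d tolerance=%g\n",
               myrank, s->iterations, s->tolerance);
    }

    int main(int argc, char *argv[])
    {
        int myrank;
        SimState state = { 100, 1e-6 };   /* stack data is private to each thread */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        do_work(&state, myrank);
        MPI_Finalize();
        return 0;
    }

Because the state lives on the stack (or in heap memory owned by the rank), it also migrates with the thread when AMPI moves a virtual processor for load balancing.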

