A pilot study to compare programming effort for two parallel programming models

Lorin Hochstein (a,*), Victor R. Basili (b), Uzi Vishkin (c), John Gilbert (d)

(a) University of Nebraska, Lincoln, Department of Computer Science & Engineering
(b) University of Maryland, Computer Science Department
(c) University of Maryland, Institute for Advanced Computer Studies
(d) University of California, Santa Barbara, Computer Science Department

Abstract

Context. Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective. Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, Setting, and Subjects. One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main Outcome Measures. Development time, program correctness.

Results. Mean XMTC development time was 4.8 hours less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).

Conclusions. XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies are necessary to examine different types of problems and different levels of programmer experience.

Key words: MPI, XMT, message-passing, PRAM, empirical study, parallel programming, effort

* Corresponding author. Email addresses: [email protected] (Lorin Hochstein), [email protected] (Victor R. Basili), [email protected] (Uzi Vishkin), [email protected] (John Gilbert).

Preprint submitted to Elsevier, 28 December 2007

1 Introduction

While desktop computers today are very powerful, there remain many computational tasks of interest that conventional computers cannot complete in a reasonable time. Such tasks are especially common in the domain of computational science, where physical phenomena (e.g., nuclear reactions, earthquakes, planetary weather and climate) are studied through computer simulation. For these problems, scientists must turn to high-performance computing (HPC) systems. These systems provide more processing power than conventional systems through parallelism: by connecting many processing units together, HPC systems can obtain much greater performance, at least in principle. In practice, it can be difficult to achieve performance gains on HPC systems because of the complexities involved in implementing efficient parallel programs. While the challenges of parallel programming have traditionally been a concern for the HPC community alone, the rise of multicore architectures is making the parallel programming challenge increasingly relevant to all programmers [38].

Programmers must specify parallelism explicitly in their source code to take advantage of HPC machines, and researchers have proposed many different parallel programming models for expressing that parallelism. It is through the programming model that the programmer specifies how the different processes in a parallel program coordinate to complete a task. Many models have been proposed, with corresponding implementations as libraries, as extensions of sequential languages (e.g. C, Fortran), and as new parallel languages.
These models include message-passing [16,37], threaded [15,31,28,34], partitioned global address space (PGAS) [9,32,43], data-parallel [5,11], dataflow [17], bulk synchronous parallel (BSP) [22], tuple space [27] and parallel random access memory (PRAM) [41,26].

The pilot study in this paper addresses the following research question: would a PRAM-like system offer measurable benefits over alternative parallel systems? We conducted a study in an academic setting to compare the time required to solve a particular programming problem using the XMTC [2] extensions to the C language (which support a PRAM-like model) versus using the MPI [16] library (which supports a message-passing model).

1.1 Message-passing with MPI

In the message-passing model, the parallel machine is modeled as a set of processing elements that each have their own bank of addressable local memory. The processing elements are connected to each other over a network. Figure 1 depicts this model: boxes labeled P are processing elements and boxes labeled M are memory banks. Processing elements coordinate to complete tasks by exchanging messages over the network.

Fig. 1. Message-passing model of a parallel computer (processing elements P, each with a local memory bank M, connected by a network backplane).

The MPI library is one implementation of the message-passing model, with bindings to languages such as Fortran, C and C++. When an MPI program runs, a fixed number of processes are launched on the parallel machine, where each process is typically assigned to a separate processor. Each process has a unique ID, which can be retrieved with a function call. Programmers use send and receive function calls to communicate among the different processes. There are six basic function calls in MPI:

• MPI_Init - initialize the MPI environment (called at the beginning of the program)
• MPI_Finalize - clean up the MPI environment (called at the end of the program)
• MPI_Comm_size - returns the total number of processes
• MPI_Comm_rank - returns the ID of the current process
• MPI_Send - send a message to another process
• MPI_Recv - receive a message from another process

While these six calls are sufficient to implement any message-passing program in MPI, many other functions are provided for convenience, and these may offer better performance than the basic send/receive calls. They include different types of send/receive calls (buffered vs. unbuffered, blocking vs. non-blocking), multipoint communications (e.g. broadcast, scatter, gather), and reductions; a brief sketch of a broadcast and a reduction follows Listing 1.

Listing 1. MPI code

#include <mpi.h>
#include <stdio.h>

#define N 3

int main(int argc, char *argv[]) {
    int my_id, num_procs;
    int data[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    printf("Hello from process %d of %d\n", my_id, num_procs);

    /* Send data from process 0 to process 1 */
    if (my_id == 0) {
        data[0] = 1; data[1] = 3; data[2] = 5;
        MPI_Send(data, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (my_id == 1) {
        /* The preview cuts the listing off mid-call; this is the standard
           matching receive for the send above. */
        MPI_Recv(data, N, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
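The paragraph before Listing 1 mentions MPI's collective operations (broadcast, scatter, gather, reductions) only in passing. The sketch below is not taken from the paper; it is a minimal illustration, using two collective calls from the MPI standard (MPI_Bcast and MPI_Reduce), of how a value is distributed from process 0 and how per-process partial results are combined back onto process 0.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int my_id, num_procs;
    int n = 0;
    int partial, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    /* Process 0 chooses a value; MPI_Bcast copies it to every process. */
    if (my_id == 0)
        n = 100;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process takes a share of the n units of work (a placeholder for
       real computation), then MPI_Reduce sums the shares onto process 0. */
    partial = n / num_procs + (my_id < n % num_procs ? 1 : 0);
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (my_id == 0)
        printf("total = %d (equals n = %d)\n", total, n);

    MPI_Finalize();
    return 0;
}

A single collective such as MPI_Reduce typically replaces a hand-written loop of MPI_Send/MPI_Recv calls, which is why the standard provides these operations beyond the six basic calls even though they add no expressive power.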
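The programming task given to both groups of subjects was sparse-matrix dense-vector multiplication. For reference, the operation itself is small when written serially; the sketch below uses the compressed sparse row (CSR) layout, which is an assumption made here for illustration, since the preview does not say which storage format the subjects used.

#include <stdio.h>

/* Serial y = A*x for a sparse matrix A in compressed sparse row (CSR) form:
   row_ptr has n_rows + 1 entries, and col_idx/val list the nonzeros row by row.
   (CSR is an assumed format for this illustration, not taken from the paper.) */
void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y) {
    for (int i = 0; i < n_rows; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main(void) {
    /* 3x3 example: rows {2,0,1}, {0,3,0}, {4,0,5} multiplied by x = (1,1,1). */
    int row_ptr[] = {0, 2, 3, 5};
    int col_idx[] = {0, 2, 1, 0, 2};
    double val[]  = {2, 1, 3, 4, 5};
    double x[] = {1, 1, 1}, y[3];

    spmv_csr(3, row_ptr, col_idx, val, x, y);
    printf("y = [%g, %g, %g]\n", y[0], y[1], y[2]);  /* expected [3, 3, 9] */
    return 0;
}

Parallelizing this loop under MPI means deciding how rows of the matrix and entries of the vector are distributed across the processes' separate memories and exchanging the vector entries each process needs, which is exactly the coordination the message-passing model makes explicit.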

