
Parallel Processing (CS 730)
Lecture 1: Introduction to Parallel Programming with Linda*
Jeremy R. Johnson
Sept. 25, 2002
*This lecture was derived from material in Carriero and Gelernter.

Outline: Introduction • Goal of Parallelism • Basic Idea • Coordination • Paradigms • Application of Paradigms to Programming • Programming Methods • An Example: N-Body Problem • Methodology • Program Transformations • Transformations for Efficiency • Linda • C-Linda • Linda Tuples • Tuple Operations • Example Tuple Operations • Distributed Data Structures • Data Structures • Structures with Identical Elements • Parallel Loop • Name Accessed Structures • Barrier Synchronization • Position Accessed Structures • Distributed Table • Ordered or Linked Data Structures • Streams • Implementing Streams in Linda • More Streams • Message Passing and Live Data Structures • Example: Stream of Processes

Introduction
•Objective: To introduce a methodology for designing and implementing parallel programs, and to illustrate the Linda coordination language for implementing and running parallel programs.
•Topics
–Basic paradigms of parallelism
•result parallelism
•specialist parallelism
•agenda parallelism
–Methods for implementing the paradigms
•live data structures
•message passing
•distributed data structures
–Linda coordination language
–An example

Goal of Parallelism
•To run large and difficult programs fast.

Basic Idea
•One way to solve a problem fast is to break the problem into pieces and arrange for all of the pieces to be solved simultaneously.
•The more pieces, the faster the job goes - up to a point where the pieces become too small to make the effort of breaking them up and distributing them worth the bother.
•A "parallel program" is a program that uses this breaking-up and handing-out approach to solve large or difficult problems.

Coordination
•We use the term coordination to refer to the process of building programs by gluing together active pieces.
•Each active piece is a process, task, thread, or any other locus of execution independent of the rest.
•To glue active pieces together means to gather them into an ensemble in such a way that we can regard the ensemble itself as the program; the glued pieces all work on the same problem.
•The glue must allow these independent activities to communicate and to synchronize with each other exactly as they need to. A coordination language provides this kind of glue.

Paradigms
•Result Parallelism
–focuses on the shape of the finished product
–Break the result into components, and assign processes to work on each part of the result.
•Specialist Parallelism
–focuses on the make-up of the work crew
–Collect a group of specialists and assign different parts of the problem to the appropriate specialist.
•Agenda Parallelism
–focuses on the list of tasks to be performed
–Break the problem into an agenda of tasks and assign workers to execute the tasks.

Application of Paradigms to Programming
•Result Parallelism
–Plan a parallel application around the data structure yielded as the ultimate result; we get parallelism by computing all elements of the result simultaneously.
•Specialist Parallelism
–We can plan an application around an ensemble of specialists connected in a logical network of some kind. Parallelism results from all nodes of the logical network (all the specialists) being active simultaneously.
•Agenda Parallelism
–We can plan an application around a particular agenda of tasks, and then assign many workers to execute the tasks.
–Master-slave programs are the classic example.

Programming Methods
•Live Data Structures
–Build a program in the shape of the data structure that will ultimately be yielded as the result. Each element of this data structure is implicitly a separate process.
–To communicate, these implicit processes don't exchange messages; they simply refer to each other as elements of some data structure.
•Message Passing
–Create many concurrent processes and enclose every data structure within some process; processes communicate by exchanging messages.
–In order to communicate, processes must send data objects from one local space to another (using explicit send and receive operations).
•Distributed Data Structures
–Many processes share direct access to many data objects or structures.
–Processes communicate and coordinate by leaving data in shared objects.
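To make the distributed data structures method concrete, here is a minimal sketch in C-Linda notation, the coordination language covered later in this lecture. It assumes a Linda runtime providing the operations out (add a tuple to tuple space), in (withdraw a matching tuple, blocking until one exists), rd (read without withdrawing), and eval (create a live tuple, i.e., spawn a process). The tuple name "hits", the worker count, and the helper worker() are illustrative choices, not taken from the slides.

    #include <stdio.h>

    #define NWORKERS 4

    int worker(int id)
    {
        int n;
        /* Shared counter: withdraw the tuple, put back the updated value.    */
        /* Because in() removes the tuple, concurrent updates cannot collide. */
        in("hits", ? n);
        out("hits", n + 1);
        return id;
    }

    int main(void)
    {
        int i, total, done;

        out("hits", 0);                    /* create the shared object in tuple space */
        for (i = 0; i < NWORKERS; i++)
            eval("worker", worker(i));     /* spawn NWORKERS worker processes         */

        for (i = 0; i < NWORKERS; i++)
            in("worker", ? done);          /* wait until each live tuple turns passive */
        rd("hits", ? total);               /* non-destructive read of the final count  */
        printf("hits = %d\n", total);      /* prints NWORKERS                          */
        return 0;
    }

The point of the sketch is the last bullet above: the processes never address one another directly; they coordinate purely by leaving data (the "hits" tuple) in shared tuple space.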
An Example: N-Body Problem
•Consider a naive n-body simulator: on each iteration of the simulation we calculate the prevailing forces between each body and all the rest, and update each body's position accordingly.
•Assume n bodies and q iterations. Let M[i,j] contain the position of the i-th body after the j-th iteration.
•Result Parallelism: Create a live data structure for M and a function position(i,j) that computes the position of body i after the j-th iteration. This function will need to refer to the elements of M corresponding to the (j-1)-st iteration. (C-Linda sketches of the result, agenda, and specialist versions are given after the Methodology slide below.)

An Example: N-Body Problem
•Agenda Parallelism: At each iteration, workers repeatedly pull a task out of a distributed bag and compute the corresponding body's new position, referring to a distributed table for information on the previous position of each body. After each computation, a worker might update the table (without erasing information on the previous positions, which may still be needed), or might send newly computed data to a master process, which updates the table in a single sweep at the end of each iteration.

An Example: N-Body Problem
•Specialist Parallelism: Create one process for each body. On each iteration, the process (specialist) associated with the i-th body updates its position. It must get previous position information from each other process via message passing. Similarly, it must send its previous position to each other process so that they can update their positions.

Methodology
•To write a parallel program: (1) choose the paradigm that is most natural for the problem, (2) write a program using the method most natural for that paradigm, and (3) if the resulting program is not acceptably efficient, transform it methodically into a more efficient version (the subject of the program transformation slides later in the lecture).
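Below are the three N-body sketches promised above, starting with the result-parallel (live data structure) version, in the same hedged C-Linda notation as the earlier example. Positions are reduced to a single double per body; N, Q, and the helpers initial_position(), pairwise_force(), and update_position() are placeholder stand-ins for the real physics, not from the slides.

    /* Result parallelism: the result matrix M is built as a live data structure. */
    #define N 8      /* bodies (illustrative)     */
    #define Q 100    /* iterations (illustrative) */

    static double initial_position(int i)                  { return (double) i; } /* placeholder */
    static double pairwise_force(int i, int k, double p)   { return 0.0; }        /* placeholder */
    static double update_position(double x, double f)      { return x + f; }      /* placeholder */

    double position(int i, int j)
    {
        int k;
        double prev, mine, force = 0.0;

        /* Refer to the (j-1)-st column of M; rd() blocks until each   */
        /* ("M", k, j-1, value) tuple exists, so data dependencies     */
        /* order the computation automatically.                        */
        for (k = 0; k < N; k++) {
            if (k == i) continue;
            rd("M", k, j - 1, ? prev);
            force += pairwise_force(i, k, prev);
        }
        rd("M", i, j - 1, ? mine);
        return update_position(mine, force);
    }

    int main(void)
    {
        int i, j;
        double p;

        for (i = 0; i < N; i++)
            out("M", i, 0, initial_position(i));   /* iteration 0 is ordinary data */

        for (i = 0; i < N; i++)
            for (j = 1; j <= Q; j++)
                eval("M", i, j, position(i, j));   /* one live element per (i, j)  */

        for (i = 0; i < N; i++)
            rd("M", i, Q, ? p);                    /* block until final positions exist */
        return 0;
    }

One process per matrix element is the natural but expensive formulation; programs like this are exactly what the later "Transformations for Efficiency" slides address.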
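Next, the agenda-parallel version, sketched as a master/worker (master-slave) program with a bag of tasks and a distributed position table. This follows the variant from the slide in which workers send results to a master that updates the table in one sweep per iteration; NWORKERS and the placeholder helpers are again illustrative.

    /* Agenda parallelism: bag of tasks plus a distributed table of positions. */
    #define N        8
    #define Q        100
    #define NWORKERS 4

    static double initial_position(int i)                  { return (double) i; } /* placeholder */
    static double pairwise_force(int i, int k, double p)   { return 0.0; }        /* placeholder */
    static double update_position(double x, double f)      { return x + f; }      /* placeholder */

    int worker(void)
    {
        int i, k, t;
        double prev, mine, force;

        while (1) {
            in("task", ? i, ? t);                    /* pull a task from the bag      */
            if (i < 0) return 0;                     /* poison pill: no more work     */
            force = 0.0;
            for (k = 0; k < N; k++) {                /* consult the distributed table */
                if (k == i) continue;
                rd("pos", k, t - 1, ? prev);
                force += pairwise_force(i, k, prev);
            }
            rd("pos", i, t - 1, ? mine);
            out("result", i, t, update_position(mine, force));
        }
    }

    int main(void)
    {
        int i, r, t, w;
        double p;

        for (i = 0; i < N; i++)
            out("pos", i, 0, initial_position(i));   /* build the initial table */
        for (w = 0; w < NWORKERS; w++)
            eval("worker", worker());

        for (t = 1; t <= Q; t++) {
            for (i = 0; i < N; i++)
                out("task", i, t);                   /* fill the bag for this iteration */
            for (r = 0; r < N; r++) {
                in("result", ? i, t, ? p);           /* collect a result...             */
                out("pos", i, t, p);                 /* ...and update the table         */
            }
        }
        for (w = 0; w < NWORKERS; w++)
            out("task", -1, 0);                      /* shut the workers down */
        return 0;
    }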
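Finally, the specialist-parallel version using message passing: one long-lived process per body, with messages modeled for the sake of the sketch as tuples addressed by sender, receiver, and iteration. Same C-Linda notation, illustrative constants, and placeholder helpers as above.

    /* Specialist parallelism: one process per body, communicating by messages. */
    #define N 8
    #define Q 100

    static double initial_position(int i)                  { return (double) i; } /* placeholder */
    static double pairwise_force(int i, int k, double p)   { return 0.0; }        /* placeholder */
    static double update_position(double x, double f)      { return x + f; }      /* placeholder */

    int body(int i)
    {
        int k, t;
        double x, other, force;

        x = initial_position(i);
        for (t = 1; t <= Q; t++) {
            for (k = 0; k < N; k++)                   /* send my current position    */
                if (k != i)                           /* to every other specialist   */
                    out("msg", i, k, t, x);
            force = 0.0;
            for (k = 0; k < N; k++) {                 /* receive everyone else's     */
                if (k == i) continue;
                in("msg", k, i, t, ? other);          /* message addressed to me     */
                force += pairwise_force(i, k, other);
            }
            x = update_position(x, force);
        }
        out("final", i, x);
        return 0;
    }

    int main(void)
    {
        int i;
        double x;

        for (i = 0; i < N; i++)
            eval("body", body(i));                    /* one specialist per body      */
        for (i = 0; i < N; i++)
            in("final", i, ? x);                      /* gather the final positions   */
        return 0;
    }

Because every specialist sends (out, which never blocks) before it receives, no pair of processes can deadlock waiting on each other; the (sender, receiver, iteration) fields of the tuple do the work that explicit send and receive operations would do in a true message-passing system.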

