CS 267: Introduction to Parallel Machines and Programming Models
Katherine Yelick
[email protected]
http://www.cs.berkeley.edu/~yelick/cs267

Outline
• Overview of parallel machines and programming models
• Shared memory
• Shared address space
• Message passing
• Data parallel
• Clusters of SMPs
• Trends in real machines

A Generic Parallel Architecture
[Figure: processors P connected through an interconnection network to memories M]
• Key question: where is the memory physically located?

Parallel Programming Models
• Control
- How is parallelism created?
- What orderings exist between operations?
- How do different threads of control synchronize?
• Data
- What data is private vs. shared?
- How is logically shared data accessed or communicated?
• Operations
- What are the atomic (indivisible) operations?
• Cost
- How do we account for the cost of each of the above?

Simple Example
Consider computing the sum of a function applied to an array:

    s = Σ_{i=0}^{n-1} f(A[i])

• Parallel decomposition:
- Each evaluation and each partial sum is a task.
- Assign n/p numbers to each of p processors.
- Each computes independent "private" results and a partial sum.
- One (or all) collects the p partial sums and computes the global sum.
• Two classes of data:
- Logically shared: the original n numbers, the global sum.
- Logically private: the individual function evaluations.
- What about the individual partial sums?

Programming Model 1: Shared Memory
• Program is a collection of threads of control.
- Can be created dynamically, mid-execution, in some languages.
• Each thread has a set of private variables, e.g., local stack variables.
• Also a set of shared variables, e.g., static variables, shared common blocks, or the global heap.
• Threads communicate implicitly by writing and reading shared variables.
• Threads coordinate by synchronizing on shared variables.
[Figure: threads P0 ... Pn read and write a shared variable s in shared memory; each thread also keeps a private copy of i in its own private memory]

Shared Memory Code for Computing a Sum

    static int s = 0;

    Thread 1                          Thread 2
    for i = 0, n/2-1                  for i = n/2, n-1
        s = s + f(A[i])                   s = s + f(A[i])

• Problem: there is a race condition on variable s in the program.
• A race condition or data race occurs when:
- two processors (or two threads) access the same variable, and at least one does a write;
- the accesses are concurrent (not synchronized), so they could happen simultaneously.

Shared Memory Code for Computing a Sum (instruction-level view)

    static int s = 0;

    Thread 1                              Thread 2
    compute f(A[i]) and put in reg0       compute f(A[i]) and put in reg0
    reg1 = s                              reg1 = s
    reg1 = reg1 + reg0                    reg1 = reg1 + reg0
    s = reg1                              s = reg1

• Assume s=27, f(A[i])=7 on Thread 1 and f(A[i])=9 on Thread 2.
• For this program to work, s should be 43 at the end,
• but it may be 43, 34, or 36.
[Diagram: both threads read s=27 into registers, compute 34 and 36; whichever writes last leaves s=34 or s=36]
• The atomic operations are reads and writes.
- You never see half of one number.
- All computations happen in (private) registers.

Improved Code for Computing a Sum

    static int s = 0;
    static lock lk;

    Thread 1                          Thread 2
    local_s1 = 0                      local_s2 = 0
    for i = 0, n/2-1                  for i = n/2, n-1
        local_s1 = local_s1 + f(A[i])     local_s2 = local_s2 + f(A[i])
    lock(lk);                         lock(lk);
    s = s + local_s1                  s = s + local_s2
    unlock(lk);                       unlock(lk);

• Since addition is associative, it's OK to rearrange the order.
• Most computation is on private variables.
- Sharing frequency is also reduced, which might improve speed.
- But there is still a race condition on the update of shared s.
- That race condition can be fixed by adding locks, as in the lock(lk)/unlock(lk) pairs above: only one thread can hold a lock at a time; others wait for it.
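The pseudocode above maps directly onto POSIX threads. The following is a minimal, hypothetical C/pthreads sketch of the improved sum, not code from the lecture: it assumes p = 2 threads, uses a pthread mutex in place of the slide's generic lock, and supplies a stand-in definition of f and the array A purely for illustration.

    /* Sketch of the lock-protected sum with private partial sums.
     * N, P, f(), and worker() are illustrative choices, not from the slides. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000
    #define P 2                      /* two threads, as on the slide */

    static double A[N];
    static double s = 0.0;           /* logically shared global sum */
    static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;

    static double f(double x) { return x * x; }  /* stand-in function */

    static void *worker(void *arg) {
        long t = (long)arg;
        double local_s = 0.0;        /* private partial sum: no races here */
        for (long i = t * (N / P); i < (t + 1) * (N / P); i++)
            local_s += f(A[i]);
        pthread_mutex_lock(&lk);     /* only the final update touches shared s */
        s += local_s;
        pthread_mutex_unlock(&lk);
        return NULL;
    }

    int main(void) {
        pthread_t tid[P];
        for (long i = 0; i < N; i++) A[i] = 1.0;
        for (long t = 0; t < P; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        for (long t = 0; t < P; t++)
            pthread_join(tid[t], NULL);
        printf("s = %g\n", s);       /* 1000.0 for this choice of f and A */
        return 0;
    }

Note how the mutex serializes only the p final updates to s, which is the slide's point: almost all work happens on private variables, so the reduced sharing frequency keeps lock contention negligible.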
Machine Model 1a: Shared Memory
[Figure: processors P1 ... Pn, each with a cache ($), connected by a bus/network to a single shared memory]
• Processors are all connected to a large shared memory.
- Typically called Symmetric Multiprocessors (SMPs).
- Sun, HP, Intel, IBM SMPs (nodes of Millennium, SP).
• "Local" memory is not (usually) part of the hardware abstraction.
• Difficulty scaling to large numbers of processors.
- <32 processors is typical.
• Advantage: uniform memory access (UMA).
• Cost: it is much cheaper to access data in cache than in main memory.

Problems Scaling Shared Memory
• Why not put more processors on (with a larger memory)?
- The memory bus becomes a bottleneck.
• An example from the Parallel Spectral Transform Shallow Water Model (PSTSWM) demonstrates the problem.
- Experimental results (and slide) from Pat Worley at ORNL.
- This is an important kernel in atmospheric models.
- 99% of the floating point operations are multiplies or adds, which generally run well on all processors.
- But it sweeps through memory with little reuse of operands, which exercises the memory system.
• These experiments show serial performance, with one "copy" of the code running independently on varying numbers of processors.
- This is the best case for shared memory: no sharing.
- But the data doesn't all fit in the registers/cache.

Example: Problem in Scaling Shared Memory (from Pat Worley, ORNL)
• Performance degradation is a "smooth" function of the number of processes.
• There is no shared data between them, so there should be perfect parallelism.
• (The code was run for 18 vertical levels with a range of horizontal sizes.)

Machine Model 1b: Distributed Shared Memory
[Figure: processors P1 ... Pn, each with a cache ($) and a local memory, connected by a network]
• Memory is logically shared, but physically distributed.
- Any processor can access any address in memory.
- Cache lines (or pages) are passed around the machine.
• SGI Origin is the canonical example (+ research machines).
- Scales to 100s of processors.
- The limitation is the cache coherence protocol – cached copies of the same address must be kept consistent.

Programming Model 2: Message Passing
• Program consists of a collection of named processes.
- Usually fixed at program startup time.
- Thread of control plus local address space -- NO shared data.
- Logically shared data is partitioned over the local processes.
• Processes communicate by explicit send/receive pairs.
- Coordination is implicit in every communication event.
- MPI is the most common example.
[Figure: processes P0 ... Pn, each with its own private memory holding its own copies of s and i, exchange data over a network via send P1,s and receive Pn,s]

Computing s = A[1]+A[2] on each processor
• First possible solution – what could go wrong?

    Processor 1                     Processor 2
    xlocal = A[1]                   xlocal = A[2]
    send xlocal, proc2              receive xremote, proc1
    receive xremote, proc2          send xlocal, proc1
    s = xlocal + xremote            s = xlocal + xremote
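A hypothetical rendering of this exchange in MPI (named above as the most common message-passing example), written as a sketch rather than the lecture's code: it assumes exactly two ranks standing in for Processor 1 and Processor 2, and uses literal values in place of A[1] and A[2].

    /* Two ranks exchange their local values; both end with s = A[1] + A[2].
     * Variable names (xlocal, xremote, s) follow the slide; the rest is
     * illustrative. Run with exactly two ranks, e.g. mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        double xlocal, xremote, s;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        xlocal = (rank == 0) ? 1.0 : 2.0;   /* stand-ins for A[1], A[2] */

        if (rank == 0) {              /* "Processor 1": send, then receive */
            MPI_Send(&xlocal, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&xremote, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {                      /* "Processor 2": receive, then send */
            MPI_Recv(&xremote, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&xlocal, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
        s = xlocal + xremote;
        printf("rank %d: s = %g\n", rank, s);
        MPI_Finalize();
        return 0;
    }

Note the asymmetric ordering: one rank sends first while the other receives first. One answer to "what could go wrong?" is that if both processes sent first and the sends blocked until a matching receive was posted, the program would deadlock; the ordering above avoids that.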