Berkeley CS 267 - Lecture 3 Notes: Introduction to Parallel Machines and Programming Models

Contents (48 slides):
CS 267: Introduction to Parallel Machines and Programming Models; Outline; A generic parallel architecture; Parallel Programming Models; Simple Example; Programming Model 1: Shared Memory; Shared Memory Code for Computing a Sum; Slide 8; Improved Code for Computing a Sum; Machine Model 1a: Shared Memory; Problems Scaling Shared Memory; PowerPoint Presentation; Machine Model 1b: Distributed Shared Memory; Programming Model 2: Message Passing; Computing s = A[1]+A[2] on each processor; MPI – the de facto standard; Machine Model 2a: Distributed Memory; Tflop/s Clusters; Machine Model 2b: Internet/Grid Computing; Programming Model 2b: Global Addr Space; Machine Model 2c: Global Address Space; Programming Model 3: Data Parallel; Machine Model 3a: SIMD System; Machine Model 3b: Vector Machines; Vector Processors; Slide 26; Cray X1: Parallel Vector Architecture; Earth Simulator Architecture; Machine Model 4: Clusters of SMPs; Slide 30; Slide 31; Slide 32; 22nd List: The TOP10 (2003); Continents Performance; Slide 35; Customer Types; Manufacturers; Manufacturers Performance; Processor Types; Architectures; NOW – Clusters; Analysis of TOP500 Data; Summary; Reading Assignment; Extra Slides; PC Clusters: Contributions of Beowulf; Open Source Software Model for HPC; Cluster of SMP Approach

Slide 1: CS 267: Introduction to Parallel Machines and Programming Models
James Demmel ([email protected])
www.cs.berkeley.edu/~demmel/cs267_Spr05
01/26/2005, CS267 Lecture 3

Slide 2: Outline
• Overview of parallel machines and programming models
  - Shared memory
  - Shared address space
  - Message passing
  - Data parallel
  - Clusters of SMPs
  - Grid
• Trends in real machines

Slide 3: A generic parallel architecture
[Diagram: processors P, P, P, P and memories M, M, M, M attached to an interconnection network, with a further Memory block on the network]
• Where is the memory physically located?

Slide 4: Parallel Programming Models
• Control
  - How is parallelism created?
  - What orderings exist between operations?
  - How do different threads of control synchronize?
• Data
  - What data is private vs. shared?
  - How is logically shared data accessed or communicated?
• Operations
  - What are the atomic (indivisible) operations?
• Cost
  - How do we account for the cost of each of the above?

Slide 5: Simple Example
Consider summing a function applied to the elements of an array:

    s = \sum_{i=0}^{n-1} f(A[i])

• Parallel decomposition:
  - Each evaluation and each partial sum is a task.
  - Assign n/p numbers to each of p procs.
  - Each computes independent "private" results and a partial sum.
  - One (or all) collects the p partial sums and computes the global sum.
• Two classes of data:
  - Logically shared: the original n numbers, the global sum.
  - Logically private: the individual function evaluations.
  - What about the individual partial sums?

Slide 6: Programming Model 1: Shared Memory
• A program is a collection of threads of control.
  - Threads can be created dynamically, mid-execution, in some languages.
• Each thread has a set of private variables, e.g., local stack variables.
• Each also has a set of shared variables, e.g., static variables, shared common blocks, or the global heap.
• Threads communicate implicitly by writing and reading shared variables.
• Threads coordinate by synchronizing on shared variables.
[Diagram: threads P0, P1, ..., Pn read and write s (s = ..., y = ..s ...) in shared memory, while each keeps its own i (i: 2, i: 5, i: 8) in private memory]

Slide 7: Shared Memory Code for Computing a Sum

    static int s = 0;

    Thread 1                      Thread 2
    for i = 0, n/2-1              for i = n/2, n-1
        s = s + f(A[i])               s = s + f(A[i])

• The problem is a race condition on the variable s in the program.
• A race condition or data race occurs when:
  - two processors (or two threads) access the same variable, and at least one does a write;
  - the accesses are concurrent (not synchronized), so they could happen simultaneously.

Slide 8: Shared Memory Code for Computing a Sum (continued)

    static int s = 0;

    Thread 1                                 Thread 2
    compute f(A[i]) and put in reg0          compute f(A[i]) and put in reg0
    reg1 = s                                 reg1 = s
    reg1 = reg1 + reg0                       reg1 = reg1 + reg0
    s = reg1                                 s = reg1

• Assume s = 27 initially, with f(A[i]) = 7 on Thread 1 and f(A[i]) = 9 on Thread 2.
• For this program to work, s should be 43 at the end,
• but depending on how the two instruction streams interleave, it may be 43, 34, or 36.
• The atomic operations are the reads and writes:
  - you never see half of one number;
  - all computations happen in (private) registers.
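As a runnable sketch of Slides 7-8 (not the lecture's own code): a minimal C/pthreads version of the racy sum. The array A, its size N, the stand-in function f, and the use of double are all hypothetical choices for illustration.

    /* Racy sum: both threads update the shared s with an
       unsynchronized read-modify-write, as on Slides 7-8. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double A[N];
    static double s = 0.0;             /* logically shared */

    static double f(double x) { return 2.0 * x; }   /* hypothetical f */

    static void *partial_sum(void *arg) {
        long *range = (long *)arg;     /* this thread's [lo, hi) half */
        for (long i = range[0]; i < range[1]; i++)
            s = s + f(A[i]);           /* race: the load and store of s can
                                          interleave with the other thread's */
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++) A[i] = 1.0;
        long r1[2] = {0, N / 2}, r2[2] = {N / 2, N};
        pthread_t t1, t2;
        pthread_create(&t1, NULL, partial_sum, r1);
        pthread_create(&t2, NULL, partial_sum, r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("s = %.0f (expected %d)\n", s, 2 * N);
        return 0;
    }

Built with cc -O0 racy_sum.c -lpthread, repeated runs will typically print different values below the expected 2000000: exactly the lost updates that the register-level interleaving on Slide 8 explains.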
Slide 9: Improved Code for Computing a Sum

    static int s = 0;
    static lock lk;

    Thread 1                               Thread 2
    local_s1 = 0                           local_s2 = 0
    for i = 0, n/2-1                       for i = n/2, n-1
        local_s1 = local_s1 + f(A[i])          local_s2 = local_s2 + f(A[i])
    lock(lk);                              lock(lk);
    s = s + local_s1                       s = s + local_s2
    unlock(lk);                            unlock(lk);

• Since addition is associative, it is OK to rearrange the order of the additions.
• Most computation is now on private variables.
  - The sharing frequency is also reduced, which might improve speed.
  - Without the lock/unlock lines there would still be a race condition on the update of the shared s.
  - That race is fixed by adding the locks: only one thread can hold a lock at a time, and the others wait for it.
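A matching pthreads sketch of Slide 9, under the same hypothetical A, N, and f, with the slide's lock lk realized as a pthread mutex:

    /* Improved sum: private partial sums, then one locked
       update of the shared s per thread, as on Slide 9. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double A[N];
    static double s = 0.0;
    static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;  /* the slide's lk */

    static double f(double x) { return 2.0 * x; }   /* hypothetical f */

    static void *partial_sum(void *arg) {
        long *range = (long *)arg;
        double local_s = 0.0;          /* private: no sharing in the hot loop */
        for (long i = range[0]; i < range[1]; i++)
            local_s += f(A[i]);
        pthread_mutex_lock(&lk);       /* lock(lk)   */
        s += local_s;                  /* the only access to the shared s */
        pthread_mutex_unlock(&lk);     /* unlock(lk) */
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++) A[i] = 1.0;
        long r1[2] = {0, N / 2}, r2[2] = {N / 2, N};
        pthread_t t1, t2;
        pthread_create(&t1, NULL, partial_sum, r1);
        pthread_create(&t2, NULL, partial_sum, r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("s = %.0f\n", s);       /* now deterministically 2000000 */
        return 0;
    }

Each thread now takes the lock exactly once, so the synchronization cost is negligible next to the loop; this is the slide's point that reducing the sharing frequency, not just fixing the race, is what preserves speed.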
Slide 10: Machine Model 1a: Shared Memory
[Diagram: processors P1, P2, ..., Pn, each with a cache ($), connected by a bus to memory]
• Processors are all connected to a large shared memory.
• Such machines are typically called Symmetric Multiprocessors (SMPs).
  - Examples: Sun, HP, Intel, and IBM SMPs (the nodes of Millennium and the SP).
• Difficulty scaling to large numbers of processors.
  - Fewer than 32 processors is typical.
• Advantage: uniform memory access (UMA).
• Cost: it is much cheaper to access data in cache than in main memory.

Slide 11: Problems Scaling Shared Memory
• Why not put more processors on (with a larger memory)?
  - The memory bus becomes a bottleneck.
• An example from the Parallel Spectral Transform Shallow Water Model (PSTSWM) demonstrates the problem.
  - Experimental results (and slide) from Pat Worley at ORNL.
• This is an important kernel in atmospheric models.
  - 99% of the floating point operations are multiplies or adds, which generally run well on all processors.
  - But the code sweeps through memory with little reuse of operands, which exercises the memory system.
• These experiments show serial performance, with one "copy" of the code running independently on varying numbers of procs.
  - This is the best case for shared memory: no sharing at all.
  - But the data doesn't all fit in the registers/cache.

Slide 12: Example: Problem in Scaling Shared Memory
(From Pat Worley, ORNL)
[Plot: per-process performance of PSTSWM vs. number of processes; the image is not preserved in this preview]
• Performance degradation is a "smooth" function of the number of processes.
• There is no shared data between the copies, so there should be perfect parallelism.
• (The code was run for 18 vertical levels with a range of horizontal sizes.)

Slide 13: Machine Model 1b: Distributed Shared Memory
• Memory is logically ... [the preview ends mid-sentence here; slides 14-48 are not included]

