CS 267: Introduction to Parallel Machines and Programming Models
Kathy Yelick, [email protected]
www.cs.berkeley.edu/~yelick/cs267_sp07
01/25/2007, Lecture 4

Contents
•Outline
•A generic parallel architecture
•Parallel Programming Models
•Simple Example
•Programming Model 1: Shared Memory
•Shared Memory "Code" for Computing a Sum
•Improved Code for Computing a Sum
•Machine Model 1a: Shared Memory
•Problems Scaling Shared Memory Hardware
•Machine Model 1b: Multithreaded Processor
•Eldorado Processor (logical view)
•Machine Model 1c: Distributed Shared Memory
•Programming Model 2: Message Passing
•Computing s = A[1]+A[2] on each processor
•MPI – the de facto standard
•Machine Model 2a: Distributed Memory
•Tflop/s Clusters
•Machine Model 2b: Internet/Grid Computing
•Programming Model 2b: Global Address Space
•Machine Model 2c: Global Address Space
•Programming Model 3: Data Parallel
•Machine Model 3a: SIMD System
•Machine Model 3b: Vector Machines
•Vector Processors
•Cray X1: Parallel Vector Architecture
•Earth Simulator Architecture
•Programming Model 4: Hybrids
•Machine Model 4: Clusters of SMPs
•Extra Slides
•22nd List: The TOP10 (2003)
•Continents Performance
•Customer Types
•Manufacturers
•Manufacturers Performance
•Processor Types
•Architectures
•NOW – Clusters
•Analysis of TOP500 Data
•Summary
•Reading Assignment
•PC Clusters: Contributions of Beowulf
•Open Source Software Model for HPC
•Cluster of SMP Approach

Outline
•Overview of parallel machines (~hardware) and programming models (~software)
  •Shared memory
  •Shared address space
  •Message passing
  •Data parallel
  •Clusters of SMPs
  •Grid
•Parallel machine may or may not be tightly coupled to programming model
  •Historically, tight coupling
  •Today, portability is important
•Trends in real machines

A generic parallel architecture
[Figure: several processors (Proc) and memories (Memory) connected by an interconnection network]
•Where is the memory physically located?
•Is it connected directly to processors?
•What is the connectivity of the network?

Parallel Programming Models
•A programming model is made up of the languages and libraries that create an abstract view of the machine.
•Control
  •How is parallelism created?
  •What orderings exist between operations?
  •How do different threads of control synchronize?
•Data
  •What data is private vs. shared?
  •How is logically shared data accessed or communicated?
•Synchronization
  •What operations can be used to coordinate parallelism?
  •What are the atomic (indivisible) operations?
•Cost
  •How do we account for the cost of each of the above?

Simple Example
•Consider applying a function f to the elements of an array A and then computing its sum: s = \sum_{i=0}^{n-1} f(A[i])
  [Figure: A → f → fA → sum → s, where A = array of all data, fA = f(A), s = sum(fA)]
•Questions:
  •Where does A live? All in a single memory? Partitioned?
  •What work will be done by each processor?
  •They need to coordinate to get a single result; how?

Programming Model 1: Shared Memory
•Program is a collection of threads of control.
  •Can be created dynamically, mid-execution, in some languages.
•Each thread has a set of private variables, e.g., local stack variables.
•Also a set of shared variables, e.g., static variables, shared common blocks, or global heap.
•Threads communicate implicitly by writing and reading shared variables.
•Threads coordinate by synchronizing on shared variables.
  [Figure: threads P0, P1, ..., Pn; shared memory holds s and y; each thread's private memory holds its own copy of i (e.g., i: 2, i: 5, i: 8)]

Simple Example (continued)
•Shared memory strategy:
  •a small number p << n = size(A) of processors
  •attached to a single memory
•Parallel decomposition:
  •Each evaluation and each partial sum is a task.
  •Assign n/p numbers to each of p procs.
  •Each computes independent "private" results and a partial sum.
  •Collect the p partial sums and compute a global sum.
•Two classes of data:
  •Logically shared: the original n numbers, the global sum.
  •Logically private: the individual function evaluations.
  •What about the individual partial sums?

Shared Memory "Code" for Computing a Sum

    static int s = 0;

    Thread 1                    Thread 2
        for i = 0, n/2-1            for i = n/2, n-1
            s = s + f(A[i])             s = s + f(A[i])

•Problem is a race condition on variable s in the program.
•A race condition or data race occurs when:
  •two processors (or two threads) access the same variable, and at least one does a write;
  •the accesses are concurrent (not synchronized), so they could happen simultaneously.

Shared Memory Code for Computing a Sum

    static int s = 0;

    Thread 1 (and, identically, Thread 2)
        ...
        compute f(A[i]) and put in reg0
        reg1 = s
        reg1 = reg1 + reg0
        s = reg1
        ...

•Assume A = [3,5], f is the square function, and s = 0 initially.
•For this program to work, s should be 34 at the end, but it may be 34, 9, or 25.
  [Figure: possible interleavings of the two threads' register operations with A = [3,5] and f = square, leaving s = 9, 25, or 34]
•The atomic operations are reads and writes.
  •You never see half of one number, but += is not atomic.
•All computations happen in (private) registers.
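To make the race concrete, here is a minimal runnable C sketch of the two-thread sum above, using POSIX threads. The array size, array contents, and helper names are illustrative assumptions, not from the slides; only the unsynchronized update of s follows the slides' pseudocode. Compile with: cc race_sum.c -pthread

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static int A[N];
    static long s = 0;                      /* shared sum, updated by both threads */

    static long f(int x) { return (long)x * x; }  /* f = square, as on the slides */

    struct range { int lo, hi; };

    static void *sum_half(void *arg) {
        struct range *r = (struct range *)arg;
        for (int i = r->lo; i < r->hi; i++)
            s = s + f(A[i]);                /* data race: read-modify-write of s is not atomic */
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) A[i] = 1;   /* with f = square, the expected sum is N */
        struct range r1 = { 0, N / 2 }, r2 = { N / 2, N };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_half, &r1);
        pthread_create(&t2, NULL, sum_half, &r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("s = %ld (expected %d)\n", s, N);  /* often prints less than N: updates are lost */
        return 0;
    }

Run it a few times: because the two threads' read-modify-write sequences interleave, some increments are overwritten and the printed total varies from run to run.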
Improved Code for Computing a Sum

    static int s = 0;
    static lock lk;

    Thread 1                              Thread 2
        local_s1 = 0                          local_s2 = 0
        for i = 0, n/2-1                      for i = n/2, n-1
            local_s1 = local_s1 + f(A[i])         local_s2 = local_s2 + f(A[i])
        lock(lk);                             lock(lk);
        s = s + local_s1                      s = s + local_s2
        unlock(lk);                           unlock(lk);

•Since addition is associative, it's OK to rearrange the order.
•Most computation is on private variables.
  •Sharing frequency is also reduced, which might improve speed.
  •But there is still a race condition on the update of shared s.
  •The race condition can be fixed by adding locks around the updates of s, as shown above (only one thread can hold a lock at a time; others wait for it); a runnable pthread translation appears below.

Machine Model 1a: Shared Memory
[Figure: processors P1, P2, ..., Pn, each with a cache ($), connected by a bus to a large shared memory; note: $ = cache, and in multicores the last-level $ may be shared]
•Processors all connected to a large shared memory.
•Typically called Symmetric Multiprocessors (SMPs).
  •SGI, Sun, HP, Intel, IBM SMPs (nodes of Millennium, SP).
  •Multicore chips, except that all caches are shared.
•Difficulty scaling to large numbers of processors.
  •<= 32 processors typical.
•Advantage: uniform memory access (UMA).
•Cost: much cheaper to access data in cache than main memory.

Problems Scaling Shared Memory Hardware
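For completeness, here is a runnable C translation of the "Improved Code for Computing a Sum" slide above. It is a sketch that assumes POSIX threads, with pthread_mutex_t standing in for the slides' generic lock type; the array size, contents, and helper names are illustrative. Compile with: cc locked_sum.c -pthread

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static int A[N];
    static long s = 0;
    static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;  /* plays the role of "lock lk" */

    static long f(int x) { return (long)x * x; }

    struct range { int lo, hi; };

    static void *sum_half(void *arg) {
        struct range *r = (struct range *)arg;
        long local_s = 0;                   /* private partial sum: no sharing inside the loop */
        for (int i = r->lo; i < r->hi; i++)
            local_s += f(A[i]);
        pthread_mutex_lock(&lk);            /* lock(lk): only one thread updates s at a time */
        s += local_s;
        pthread_mutex_unlock(&lk);          /* unlock(lk) */
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) A[i] = 1;
        struct range r1 = { 0, N / 2 }, r2 = { N / 2, N };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_half, &r1);
        pthread_create(&t2, NULL, sum_half, &r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("s = %ld (always %d)\n", s, N);  /* now deterministic */
        return 0;
    }

Note the design point the slide makes: each thread acquires the lock exactly once, so the shared variable is touched p times in total rather than n times, which is the reduction in sharing frequency that can improve speed.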