Berkeley COMPSCI C267 - High Performance Programming on a Single Processor: Memory Hierarchies

CS267 Lecture 2, 01/26/2004
High Performance Programming on a Single Processor: Memory Hierarchies
Katherine Yelick ([email protected])
http://www.cs.berkeley.edu/~yelick/cs267

Outline
• Goal of parallel computing
  • Solve a problem on a parallel machine that is impractical on a serial one
  • How long does/will the problem take on P processors?
• Quick look at parallel machines
• Understanding parallel performance
  • Speedup: the effectiveness of parallelism
  • Limits to parallel performance
• Understanding serial performance
  • Parallelism in modern processors
  • Memory hierarchies

Microprocessor Revolution
• Moore's law in microprocessor performance made desktop computing in 2000 what supercomputing was in 1990
• Massive parallelism has changed the high end:
  • from a small number of very fast (vector) processors to
  • a large number (hundreds or thousands) of desktop processors
• Use the fastest "commodity" workstations as building blocks
  • Sold in enough quantity to make them inexpensive
  • Start with the best performance available (at a reasonable price)
• Today, most parallel machines are clusters of SMPs:
  • An SMP is a tightly coupled shared-memory multiprocessor
  • A cluster is a group of these connected by a high-speed network

A Parallel Computer Today: NERSC-3 Vital Statistics
• 5 Teraflop/s peak performance; 3.05 Teraflop/s with Linpack
• 208 nodes, 16 CPUs per node at 1.5 Gflop/s per CPU
• 4.5 TB of main memory
  • 140 nodes with 16 GB each, 64 nodes with 32 GB, and 4 nodes with 64 GB
• 40 TB total disk space
  • 20 TB formatted shared, global, parallel file space; 15 TB local disk for system usage
• Unique 512-way Double/Single switch configuration

Performance Levels (for example, on NERSC-3)
• Peak advertised performance (PAP): 5 Tflop/s
• LINPACK: 3.05 Tflop/s
• Gordon Bell Prize application performance: 2.46 Tflop/s
  • Materials science application at SC01
• Average sustained application performance: ~0.4 Tflop/s
  • Less than 10% of peak!

Millennium and CITRIS
• Millennium Central Cluster
  • 99 Dell 2300/6350/6450 Xeon Dual/Quad: 332 processors total
  • Total: 211 GB memory, 3 TB disk
  • Myrinet 2000 + 1000 Mb fiber Ethernet
• CITRIS Cluster 1: 3/2002 deployment
  • 4 Dell Precision 730 Itanium duals: 8 processors
  • Total: 20 GB memory, 128 GB disk
  • Myrinet 2000 + 1000 Mb copper Ethernet
• CITRIS Cluster 2: 2002-2004 deployment
  • ~128 Dell McKinley-class duals: 256 processors
  • Total: ~512 GB memory, ~8 TB disk
  • Myrinet 2000 (subcluster) + 1000 Mb copper Ethernet
  • ~32 nodes available now

Speedup
• The speedup of a parallel application is
  Speedup(p) = Time(1)/Time(p)
  where Time(1) is the execution time on a single processor and Time(p) is the execution time using p parallel processors
• If Speedup(p) = p we have perfect speedup (also called linear scaling)
• As defined, speedup compares an application with itself on one and on p processors, but it is more useful to compare
  • the execution time of the best serial algorithm on 1 processor versus
  • the execution time of the best parallel algorithm on p processors

Efficiency
• The parallel efficiency of an application is defined as
  Efficiency(p) = Speedup(p)/p
• Efficiency(p) <= 1; for perfect speedup, Efficiency(p) = 1
• We will rarely have perfect speedup, because of:
  • lack of perfect parallelism in the application or algorithm
  • imperfect load balancing (some processors have more work)
  • cost of communication
  • cost of contention for resources, e.g., the memory bus or I/O
  • synchronization time
• Understanding why an application is not scaling linearly will help find ways to improve its performance on parallel computers.

Superlinear Speedup
• Question: can we see "superlinear" speedup, that is, Speedup(p) > p? Yes, typically for one of these reasons:
• Choosing a bad "baseline" for T(1)
  • Old serial code has not been updated with optimizations
  • Avoid this, and always specify what your baseline is
• Shrinking the problem size per processor
  • May allow it to fit in small, fast memory (cache)
• Application is not deterministic
  • Amount of work varies depending on execution order
  • Search algorithms have this characteristic

Amdahl's Law
• Suppose only part of an application runs in parallel
• Let s be the fraction of work done serially, so (1-s) is the fraction done in parallel
• What is the maximum speedup for p processors, assuming perfect speedup for the parallel part?
  T(p) = (1-s)*T(1)/p + s*T(1) = T(1)*((1-s) + p*s)/p
  Speedup(p) = T(1)/T(p) = p/(1 + (p-1)*s)
• Even if the parallel part speeds up perfectly, we may be limited by the sequential portion of the code.

Amdahl's Law (for 1024 processors)
[Figure: maximum speedup on 1024 processors versus serial fraction s; the y-axis runs from 0 to 1024 and the x-axis from s = 0 to s = 0.04.]
• Does this mean parallel computing is a hopeless enterprise?
• See: Gustafson, Montry, Benner, "Development of Parallel Methods for a 1024 Processor Hypercube", SIAM J. Sci. Stat. Comp. 9, No. 4, 1988, pp. 609.

Scaled Speedup
• Speedup improves as the problem size grows
  • Among other things, the Amdahl effect is smaller
• Consider
  • scaling the problem size with the number of processors (add a problem-size parameter, n)
  • a problem whose running time scales linearly with the problem size: T(1,n) = T(1,1)*n
  • letting n = p (the problem size on p processors increases by a factor of p)
  T(p,n) = (1-s)*n*T(1,1)/p + s*T(1,1) = (1-s)*T(1,1) + s*T(1,1) = T(1,1)
  ScaledSpeedup(p,n) = T(1,n)/T(p,n) = n = p
  (assumes the serial work does not grow with n)

Scaled Efficiency
• The previous definition of parallel efficiency was
  Efficiency(p) = Speedup(p)/p
• We often want to scale the problem size with the number of processors, but scaled speedup can be tricky
• The previous definition depended on a
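The speedup and efficiency definitions from the slides above can be sketched as two small helpers; the timings used here are made-up numbers purely for illustration:

```python
def speedup(time_1, time_p):
    """Speedup(p) = Time(1) / Time(p)."""
    return time_1 / time_p

def efficiency(time_1, time_p, p):
    """Efficiency(p) = Speedup(p) / p; 1.0 means perfect (linear) speedup."""
    return speedup(time_1, time_p) / p

# Hypothetical timings: 100 s on one processor, 6.25 s on 20 processors.
print(speedup(100.0, 6.25))         # 16.0
print(efficiency(100.0, 6.25, 20))  # 0.8, i.e., 80% efficient
```

As the slides note, Time(1) should ideally be the best serial algorithm's time, not the parallel code run on one processor.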
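Amdahl's bound above, Speedup(p) = p/(1 + (p-1)*s), is easy to evaluate numerically; this sketch reproduces the shape of the 1024-processor figure:

```python
def amdahl_speedup(p, s):
    """Maximum speedup on p processors when a fraction s of the work is
    serial and the remaining (1 - s) parallelizes perfectly."""
    return p / (1 + (p - 1) * s)

# On 1024 processors, even a 1% serial fraction caps speedup below 100.
for s in (0.0, 0.01, 0.02, 0.04):
    print(f"s = {s:.2f}: max speedup = {amdahl_speedup(1024, s):.1f}")
```

This is why the figure's curve falls so steeply: the serial fraction, not the processor count, quickly becomes the limit.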
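The scaled-speedup derivation above can likewise be checked directly. A minimal sketch, under the slide's assumptions (running time linear in problem size, n = p, and serial work that does not grow with n); the unit base time is an arbitrary choice:

```python
def scaled_speedup(p, s):
    """Scaled (Gustafson-style) speedup with problem size n = p.

    T(1, n) = n * T(1, 1)                       (linear-time problem)
    T(p, n) = (1 - s) * n * T(1, 1) / p + s * T(1, 1)
    With n = p the parallel time collapses to T(1, 1),
    so ScaledSpeedup(p, n) = n = p.
    """
    n = p
    t11 = 1.0                                   # base problem's serial time (arbitrary unit)
    t1n = n * t11                               # serial time for the scaled problem
    tpn = (1 - s) * n * t11 / p + s * t11       # parallel time for the scaled problem
    return t1n / tpn

for p in (16, 256, 1024):
    print(f"p = {p:4d}: scaled speedup = {scaled_speedup(p, 0.01):.1f}")
```

Unlike the fixed-size Amdahl bound, the result here is independent of s, which is exactly the point of scaling the problem with the machine.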

