Berkeley COMPSCI C267 - Uniprocessor Optimizations and Matrix Multiplication

CS267 Lecture 2, 8/29/2001
Uniprocessor Optimizations and Matrix Multiplication
Kathy Yelick
http://inst.eecs.berkeley.edu/~cs267

Outline
• Parallelism in Modern Processors
• Memory Hierarchies
• Matrix Multiply Cache Optimizations
• Bag of Tricks

Idealized Uniprocessor Model
• Processor names bytes, words, etc. in its address space
  • These represent integers, floats, pointers, arrays, etc.
  • Exist in the program stack, static region, or heap
• Operations include
  • Read and write (given an address/pointer)
  • Arithmetic and other logical operations
• Order specified by program
  • Read returns the most recently written data
  • Compiler and architecture translate high-level expressions into "obvious" lower-level instructions
  • Hardware executes instructions in the order specified by the compiler
• Cost
  • Each operation has roughly the same cost (read, write, add, multiply, etc.)

Uniprocessors in the Real World
• Real processors have
  • registers and caches
    • small amounts of fast memory
    • store values of recently used or nearby data
    • different memory ops can have very different costs
  • parallelism
    • multiple "functional units" that can run in parallel
    • different orders and instruction mixes have different costs
  • pipelining
    • a form of parallelism, like an assembly line in a factory
• Why is this your problem? In theory, compilers understand all of this and can optimize your program; in practice they don't.

What is Pipelining?
Dave Patterson's laundry example: 4 people doing laundry; wash (30 min) + dry (40 min) + fold (20 min)
[Figure: four loads A-D overlapped on a timeline from 6 PM to 9 PM, with stage durations 30, 40, 40, 40, 40, 20 minutes]
• In this example:
  • Sequential execution takes 4 * 90 min = 6 hours
  • Pipelined execution takes 30 + 4*40 + 20 = 210 min = 3.5 hours
• Pipelining helps throughput, but not latency
• Pipeline rate is limited by the slowest pipeline stage
• Potential speedup = number of pipe stages
• Time to "fill" the pipeline and time to "drain" it reduces the speedup

Example: 5 Steps of MIPS Datapath
(Figure 3.4, page 134, CA:AQA 2e by Patterson and Hennessy)
[Figure: pipelined datapath with stages Instruction Fetch, Instruction Decode / Register Fetch, Execute / Address Calc, Memory Access, and Write Back, separated by the IF/ID, ID/EX, EX/MEM, and MEM/WB pipeline registers]
• Pipelining is also used within arithmetic units
  – a floating-point multiply may have a latency of 10 cycles, but a throughput of 1 per cycle

Limits to Instruction Level Parallelism (ILP)
• Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle
  • Structural hazards: HW cannot support this combination of instructions (single person to fold and put clothes away)
  • Data hazards: instruction depends on the result of a prior instruction still in the pipeline (missing sock)
  • Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)
• The hardware and compiler will try to reduce these:
  • reordering instructions, multiple issue, dynamic branch prediction, speculative execution…
• You can also enable parallelism by careful coding

Dependences (Data Hazards) Limit Parallelism
• A dependence or data hazard between statements a and b is one of the following (a small example of each is sketched below):
  • true or flow dependence: a writes a location that b later reads (read-after-write or RAW hazard)
  • anti-dependence: a reads a location that b later writes (write-after-read or WAR hazard)
  • output dependence: a writes a location that b later writes (write-after-write or WAW hazard)
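To make the three cases concrete, here is a minimal C sketch; the statements and variable names (x, y, t) are illustrative and not taken from the slides. In each pair, the second statement cannot safely be moved ahead of the first:

/* Illustrative only: each pair of statements shows one kind of
 * dependence between statement a (first) and statement b (second). */
void dependences(double *x, double *y) {
    double t;

    /* True (flow) dependence, RAW: a writes t, b later reads t. */
    t = x[0] * 2.0;      /* a: write t    */
    y[0] = t + 1.0;      /* b: read t     */

    /* Anti-dependence, WAR: a reads x[1], b later writes x[1]. */
    y[1] = x[1] + 1.0;   /* a: read x[1]  */
    x[1] = 0.0;          /* b: write x[1] */

    /* Output dependence, WAW: a writes y[2], b later writes y[2]. */
    y[2] = x[2];         /* a: write y[2] */
    y[2] = x[3];         /* b: write y[2] */
}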
Outline
• Parallelism in Modern Processors
• Memory Hierarchies
• Matrix Multiply Cache Optimizations
• Bag of Tricks

Memory Hierarchy
• Most programs have a high degree of locality in their accesses
  • spatial locality: accessing things nearby previous accesses
  • temporal locality: reusing an item that was previously accessed
• The memory hierarchy tries to exploit locality
[Figure: hierarchy from the processor (control, datapath, registers, on-chip cache) through second-level cache (SRAM), main memory (DRAM), secondary storage (disk), and tertiary storage (disk/tape); speeds range from about 1 ns to 10 s and sizes from 100s of bytes to terabytes]

Processor-DRAM Gap (latency)
• Memory hierarchies are getting deeper
• Processors get faster more quickly than memory
[Figure: performance vs. time, 1980-2000; processor performance ("Moore's Law") improves about 60%/year while DRAM improves about 7%/year, so the processor-memory performance gap grows about 50%/year]

Cache Basics
• Cache hit: an in-cache memory access, which is cheap
• Cache miss: a non-cached memory access, which is expensive
• Consider a tiny cache (for illustration only) with lines addressed X000 through X111
• Cache line length: the number of bytes loaded together in one entry
• Associativity
  • direct-mapped: only one address (line) in a given range can be in the cache
  • n-way: 2 or more lines with different addresses can exist

Experimental Study of Memory
• Microbenchmark for memory system performance
• Time the following program for each size(A) and stride s (repeat to obtain confidence and mitigate timer resolution); a runnable sketch follows the pseudocode.

  for array A of size from 4 KB to 8 MB by 2x
    for stride s from 8 bytes (1 word) to size(A)/2 by 2x
      for i from 0 to size by s
        load A[i] from memory (8 bytes)
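A minimal C rendering of this microbenchmark is sketched below; it is not the code used in the lecture, and the timer choice (clock()), repetition count, and output format are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sketch of the memory microbenchmark: for each array size and stride,
 * repeatedly load every stride-th element and report time per load. */
int main(void) {
    const size_t min_bytes = 4 * 1024;          /* 4 KB */
    const size_t max_bytes = 8 * 1024 * 1024;   /* 8 MB */
    double *A = malloc(max_bytes);
    if (!A) return 1;
    for (size_t i = 0; i < max_bytes / sizeof(double); i++) A[i] = 1.0;

    for (size_t bytes = min_bytes; bytes <= max_bytes; bytes *= 2) {
        size_t n = bytes / sizeof(double);          /* elements in A   */
        for (size_t stride = 1; stride <= n / 2; stride *= 2) {
            volatile double sum = 0.0;              /* keep loads live */
            size_t loads = 0;
            clock_t start = clock();
            for (int rep = 0; rep < 10; rep++) {    /* repeat for confidence */
                for (size_t i = 0; i < n; i += stride) {
                    sum += A[i];                    /* one 8-byte load */
                    loads++;
                }
            }
            double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
            printf("size %8zu B  stride %8zu B  %8.2f ns/load\n",
                   bytes, stride * sizeof(double), 1e9 * secs / loads);
        }
    }
    free(A);
    return 0;
}

Plotting time per load against array size and stride exposes the cache and page sizes as plateaus and jumps, which is how the machine profiles on the next two slides were obtained.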
Memory Hierarchy on a Sun Ultra-IIi
Sun Ultra-IIi, 333 MHz; 8 KB pages
• L1: 16 KB, 6 ns (2 cycles), 16-byte line
• L2: 2 MB, 36 ns (12 cycles), 64-byte line
• Mem: 396 ns (132 cycles)
[Figure: measured access time vs. array size and stride]
See www.cs.berkeley.edu/~yelick/arvindk/t3d-isca95.ps for details.

Memory Hierarchy on a Pentium III
Katmai processor on Millennium, 550 MHz
• L1: 64 KB, 5 ns, 4-way?, 32-byte line?
• L2: 512 KB, 60 ns
[Figure: measured access time vs. array size and stride]

Lessons
• Actual performance of even a simple program can be a complicated function of the architecture
  • Slight changes in the architecture or the program can change the performance significantly
  • To write fast programs, you need to consider the architecture
• We would like simple models to help us design efficient algorithms
• Is this possible?
• We will illustrate with a common technique for improving cache performance.
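Given the outline item "Matrix Multiply Cache Optimizations", a natural illustration is blocked (tiled) matrix multiply. The sketch below is an assumption about the technique the lecture develops, not code from the slides; the block size BLOCK is an illustrative tuning parameter, not a value from the lecture.

#include <stddef.h>

/* Naive n x n matrix multiply, C = C + A*B (row-major).
 * For large n, elements of B are re-fetched from memory on every
 * pass because the working set does not fit in cache. */
void matmul_naive(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double cij = C[i*n + j];
            for (size_t k = 0; k < n; k++)
                cij += A[i*n + k] * B[k*n + j];
            C[i*n + j] = cij;
        }
}

#define BLOCK 32   /* illustrative block size; tune to the cache */

/* Blocked (tiled) version: operate on BLOCK x BLOCK sub-matrices so
 * that the three active blocks fit in cache and each loaded element
 * is reused many times before being evicted. */
void matmul_blocked(size_t n, const double *A, const double *B, double *C) {
    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t jj = 0; jj < n; jj += BLOCK)
            for (size_t kk = 0; kk < n; kk += BLOCK)
                /* Multiply the (ii,kk) block of A by the (kk,jj) block
                 * of B, accumulating into the (ii,jj) block of C. */
                for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                    for (size_t j = jj; j < jj + BLOCK && j < n; j++) {
                        double cij = C[i*n + j];
                        for (size_t k = kk; k < kk + BLOCK && k < n; k++)
                            cij += A[i*n + k] * B[k*n + j];
                        C[i*n + j] = cij;
                    }
}

With blocking, each element brought into cache is reused on the order of BLOCK times rather than once, which is the flavor of cache optimization the outline's "Matrix Multiply Cache Optimizations" item points at.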

