Carnegie Mellon
15-213/18-243: Introduction to Computer Systems, Spring 2009
12th Lecture, Feb. 19th
Instructors: Gregory Kesden and Markus Püschel

Last Time: Program Optimization
- Optimization blocker: memory aliasing
- One solution: scalar replacement of array accesses that are reused

    /* b[i] is read and written on every inner iteration */
    for (i = 0; i < n; i++) {
        b[i] = 0;
        for (j = 0; j < n; j++)
            b[i] += a[i*n + j];
    }

    /* scalar replacement: accumulate in a local variable, store once */
    for (i = 0; i < n; i++) {
        double val = 0;
        for (j = 0; j < n; j++)
            val += a[i*n + j];
        b[i] = val;
    }

Last Time: Instruction-Level Parallelism
- Latency versus throughput
- [Figure: functional units (Integer/Branch, General Integer, FP Add, FP Mult/Div, Load, Store) and a pipelined integer multiplier with Step 1 through Step 10 at 1 cycle each: latency 10 cycles, issue rate 1 per cycle]

Last Time: Consequence
- [Figure: two combining trees over 1, d0, ..., d7: a single serial dependency chain versus two parallel chains over the even elements (d0, d2, d4, d6) and the odd elements (d1, d3, d5, d7), combined at the end]
- Twice as fast

Today
- Memory hierarchy, caches, locality
- Cache organization
- Program optimization: cache optimizations

Problem: Processor-Memory Bottleneck
- [Figure: CPU with registers connected over a bus to main memory]
- Processor performance doubled about every 18 months
  - Core 2 Duo: can process at least 256 Bytes/cycle (1 SSE two-operand add and mult)
- Bus bandwidth evolved much slower
  - Core 2 Duo: bandwidth 2 Bytes/cycle, latency 100 cycles
- Solution: caches

Cache
- Definition: computer memory with short access time used for the storage of frequently or recently used instructions or data

General Cache Mechanics
- [Figure: a small cache holding blocks 8, 9, 14, 3 sitting above a memory partitioned into blocks 0-15]
- Smaller, faster, more expensive memory caches a subset of the blocks
- Larger, slower, cheaper memory is viewed as partitioned into blocks
- Data is copied between the two in block-sized transfer units

General Cache Concepts: Hit
- Request for block 14: the data in block b is needed and block b is in the cache. Hit!
- [Figure: request 14 finds block 14 already in the cache holding 8, 9, 14, 3]

General Cache Concepts: Miss
- Request for block 12: the data in block b is needed but block b is not in the cache. Miss!
- Block b is fetched from memory
- Block b is stored in the cache
  - Placement policy: determines where b goes
  - Replacement policy: determines which block gets evicted (the victim)
- [Figure: request 12 misses; block 12 is fetched from memory and replaces block 14, leaving the cache holding 8, 9, 12, 3]

Cache Performance Metrics
- Miss rate
  - Fraction of memory references not found in the cache: misses / accesses = 1 - hit rate
  - Typical numbers (in percentages): 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
- Hit time
  - Time to deliver a line in the cache to the processor (includes time to determine whether the line is in the cache)
  - Typical numbers: 1-2 clock cycles for L1; 5-20 clock cycles for L2
- Miss penalty
  - Additional time required because of a miss
  - Typically 50-200 cycles for main memory (trend: increasing)

Let's Think About Those Numbers
- Huge difference between a hit and a miss: could be 100x if just L1 and main memory
- Would you believe 99% hits is twice as good as 97%?
  - Consider: cache hit time of 1 cycle, miss penalty of 100 cycles
  - Average access time:
      97% hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
      99% hits: 1 cycle + 0.01 * 100 cycles = 2 cycles
  - (a code sketch of this calculation appears below, after Types of Cache Misses)
- This is why "miss rate" is used instead of "hit rate"

Types of Cache Misses
- Cold (compulsory) miss
  - Occurs on the first access to a block
- Conflict miss
  - Most hardware caches limit blocks to a small subset (sometimes a singleton) of the available cache slots, e.g., block i must be placed in slot (i mod 4)
  - Conflict misses occur when the cache is large enough but multiple data objects all map to the same slot
  - E.g., referencing blocks 0, 8, 0, 8, ... would miss every time (a toy simulation of this pattern appears below)
- Capacity miss
  - Occurs when the set of active cache blocks (the working set) is larger than the cache
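To make the average-access-time arithmetic above concrete, here is a minimal C sketch (not part of the lecture slides) that evaluates hit time + miss rate * miss penalty for the two hit rates on the slide. The function name avg_access_time is purely illustrative.

    #include <stdio.h>

    /* Average access time: every access pays the hit time, and a fraction
     * miss_rate of accesses additionally pays the miss penalty. */
    static double avg_access_time(double hit_time, double miss_rate,
                                  double miss_penalty)
    {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void)
    {
        /* Slide numbers: 1-cycle hit time, 100-cycle miss penalty. */
        printf("97%% hits: %.0f cycles\n", avg_access_time(1.0, 0.03, 100.0)); /* 4 */
        printf("99%% hits: %.0f cycles\n", avg_access_time(1.0, 0.01, 100.0)); /* 2 */
        return 0;
    }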
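The conflict-miss example ("block i must be placed in slot (i mod 4)"; referencing blocks 0, 8, 0, 8, ... misses every time) can be checked with a toy cache model. This is a sketch under the slide's placement rule only, not a model of any real cache; the slot array and reference stream are made up for illustration.

    #include <stdio.h>

    #define NUM_SLOTS 4   /* toy cache: block i must go in slot (i mod 4) */

    int main(void)
    {
        int slot[NUM_SLOTS];
        for (int s = 0; s < NUM_SLOTS; s++)
            slot[s] = -1;                      /* -1 marks an empty slot */

        int refs[] = {0, 8, 0, 8, 0, 8};       /* reference stream from the slide */
        int n = sizeof refs / sizeof refs[0];
        int misses = 0;

        for (int r = 0; r < n; r++) {
            int block = refs[r];
            int s = block % NUM_SLOTS;         /* placement policy */
            if (slot[s] == block) {
                printf("block %d: hit\n", block);
            } else {
                printf("block %d: %s miss\n", block,
                       slot[s] < 0 ? "cold" : "conflict");
                slot[s] = block;               /* replacement: evict the victim */
                misses++;
            }
        }
        /* Blocks 0 and 8 both map to slot 0, so all six references miss
         * even though three of the four slots stay empty. */
        printf("%d misses out of %d references\n", misses, n);
        return 0;
    }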
Why Caches Work
- Locality: programs tend to use data and instructions with addresses near or equal to those they have used recently
- Temporal locality: recently referenced items are likely to be referenced again in the near future
- Spatial locality: items with nearby addresses tend to be referenced close together in time

Example: Locality?

    sum = 0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;

- Data
  - Temporal: sum referenced in each iteration
  - Spatial: array a[] accessed in stride-1 pattern
- Instructions
  - Temporal: cycle through the loop repeatedly
  - Spatial: reference instructions in sequence
- Being able to assess the locality of code is a crucial skill for a programmer

Locality Example 1

    int sum_array_rows(int a[M][N])
    {
        int i, j, sum = 0;

        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

Locality Example 2

    int sum_array_cols(int a[M][N])
    {
        int i, j, sum = 0;

        for (j = 0; j < N; j++)
            for (i = 0; i < M; i++)
                sum += a[i][j];
        return sum;
    }

Locality Example 3

    int sum_array_3d(int a[M][N][N])
    {
        int i, j, k, sum = 0;

        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                for (k = 0; k < N; k++)
                    sum += a[k][i][j];
        return sum;
    }

- How can it be fixed? (a sketch of one possible fix appears after the table at the end of this section)

Memory Hierarchies
- Some fundamental and enduring properties of hardware and software systems:
  - Faster storage technologies almost always cost more per byte and have lower capacity
  - The gaps between memory technology speeds are widening (true of registers vs. DRAM, DRAM vs. disk, etc.)
  - Well-written programs tend to exhibit good locality
- These properties complement each other beautifully
- They suggest an approach for organizing memory and storage systems known as a memory hierarchy

An Example Memory Hierarchy
- [Figure: pyramid from L0 (smaller, faster, costlier per byte) down to L5 (larger, slower, cheaper per byte)]
- L0: CPU registers hold words retrieved from the L1 cache
- L1: on-chip L1 cache (SRAM) holds cache lines retrieved from the L2 cache
- L2: off-chip L2 cache (SRAM) holds cache lines retrieved from main memory
- L3: main memory (DRAM) holds disk blocks retrieved from local disks
- L4: local secondary storage (local disks) holds files retrieved from disks on remote network servers
- L5: remote secondary storage (tapes, distributed file systems, Web servers)

Examples of Caching in the Hierarchy

    Cache Type            What is Cached        Where is it Cached    Latency (cycles)  Managed By
    Registers             4-byte words          CPU core                           0    Compiler
    TLB                   Address translations  On-Chip TLB                        0    Hardware
    L1 cache              64-byte block         On-Chip L1                         1    Hardware
    L2 cache              64-byte block         Off-Chip L2                       10    Hardware
    Virtual Memory        4-KB page             Main memory                      100    Hardware + OS
    Buffer cache          Parts of files        Main memory                      100    OS
    Network buffer cache  Parts of files        Local disk                10,000,000    AFS/NFS client
    Browser cache         Web pages             Local disk                10,000,000    Web browser
    Web cache             Web pages             Remote server disks    1,000,000,000    Web proxy server
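The slide asks how sum_array_3d can be fixed. A minimal sketch of one standard fix (not from the slides): keep the same iterations, but permute the loops so that the rightmost subscript j varies fastest, turning the inner loop into a stride-1 traversal. The function name sum_array_3d_fixed and the values of M and N are placeholders; the slides leave the sizes unspecified.

    #define M 64   /* placeholder sizes for illustration only */
    #define N 64

    int sum_array_3d_fixed(int a[M][N][N])
    {
        int i, j, k, sum = 0;

        /* Same iteration space as the slide's version, but reordered so
         * that a[k][i][j] is accessed with stride 1 in the innermost loop. */
        for (k = 0; k < N; k++)
            for (i = 0; i < M; i++)
                for (j = 0; j < N; j++)
                    sum += a[k][i][j];
        return sum;
    }

As in the slide's version, k drives the first subscript while ranging over N; with the placeholder choice M == N this stays in bounds.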