U of U CS 6810 - Cache Innovations and DRAM

Lecture 14: Cache Innovations and DRAM
• Today: cache access basics and innovations, DRAM (Sections 5.1-5.3)

Reducing Miss Rate
• Large block size – reduces compulsory misses, reduces miss penalty in case of spatial locality – increases traffic between levels, space wastage, and conflict misses
• Large caches – reduce capacity/conflict misses – access time penalty
• High associativity – reduces conflict misses – rule of thumb: a 2-way cache of capacity N/2 has about the same miss rate as a 1-way cache of capacity N – access time penalty
• Way prediction – by predicting the way, can reduce power consumption

Cache Misses
• On a write miss, you may either choose to bring the block into the cache (write-allocate) or not (write-no-allocate)
• On a read miss, you always bring the block in (spatial and temporal locality) – but which block do you replace?
  – no choice for a direct-mapped cache
  – randomly pick one of the ways to replace
  – replace the way that was least-recently used (LRU)
  – FIFO replacement (round-robin)

Writes
• When you write into a block, do you also update the copy in L2?
  – write-through: every write to L1 is also a write to L2
  – write-back: mark the block as dirty; when the block is replaced from L1, write it to L2
• Write-back coalesces multiple writes to an L1 block into one L2 write
• Write-through simplifies coherence protocols in a multiprocessor system, as the L2 always has a current copy of the data
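The allocation, replacement, and write policies above can be tied together in a small simulation. The sketch below is illustrative only: the class name `SetAssociativeCache`, the 64 B block / 4-way / 256-set geometry, and the access trace are assumptions, not taken from the slides. It models a write-allocate, write-back cache with LRU replacement.

```python
# Toy set-associative cache: write-allocate, write-back, LRU replacement.
# Illustrative sketch only; the 64 B block / 4-way / 256-set geometry is assumed.
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=256, ways=4, block_size=64):
        self.num_sets, self.ways, self.block_size = num_sets, ways, block_size
        # One OrderedDict per set: tag -> dirty flag; insertion order tracks recency.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = self.misses = self.writebacks = 0

    def access(self, addr, is_write=False):
        block = addr // self.block_size        # strip the block-offset bits
        index = block % self.num_sets          # set-index bits pick the set
        tag = block // self.num_sets           # remaining bits form the tag
        lines = self.sets[index]
        if tag in lines:                       # hit: update recency and dirty flag
            self.hits += 1
            lines[tag] = lines[tag] or is_write
            lines.move_to_end(tag)
        else:                                  # miss: allocate on reads and writes
            self.misses += 1
            if len(lines) == self.ways:        # set full: evict the LRU way
                _, dirty = lines.popitem(last=False)
                if dirty:                      # write-back: dirty victim goes to L2
                    self.writebacks += 1
            lines[tag] = is_write

cache = SetAssociativeCache()
for addr in list(range(0, 8192, 64)) * 2:      # touch an 8 KB working set twice
    cache.access(addr, is_write=(addr % 128 == 0))
print(cache.hits, cache.misses, cache.writebacks)  # second pass hits; nothing evicted
```

With write-no-allocate, the write-miss path would instead forward the data to the next level without filling the set; with write-through, a write hit would also be sent to L2 rather than just setting the dirty flag.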
Reducing Cache Miss Penalty
• Multi-level caches
• Critical word first
• Priority for reads
• Victim caches

Multi-Level Caches
• The L2 and L3 have properties that are different from the L1
  – access time is not as critical for L2 as it is for L1 (every load/store/instruction accesses the L1)
  – the L2 is much larger and can consume more power per access
• Hence, they can adopt alternative design choices
  – serial tag and data access
  – high associativity

Read/Write Priority
• For write-back/write-through caches, writes to lower levels are placed in write buffers
• When we have a read miss, we must look up the write buffer before checking the lower level
• When we have a write miss, the write can merge with another entry in the write buffer or it creates a new entry
• Reads are more urgent than writes (the probability that an instruction is waiting for the result of a read is 100%, while the probability that an instruction is waiting for the result of a write is much smaller) – hence, reads get priority unless the write buffer is full

Victim Caches
• A direct-mapped cache suffers from misses because multiple pieces of data map to the same location
• The processor often tries to access data that it recently discarded – all discards are placed in a small victim cache (4 or 8 entries) – the victim cache is checked before going to L2
• Can be viewed as additional associativity for a few sets that tend to have the most conflicts

Tolerating Miss Penalty
• Out-of-order execution: can do other useful work while waiting for the miss – can have multiple cache misses – the cache controller has to keep track of multiple outstanding misses (non-blocking cache)
• Hardware and software prefetching into prefetch buffers – aggressive prefetching can increase contention for buses

DRAM Main Memory
• Main memory is stored in DRAM cells that have much higher storage density
• DRAM cells lose their state over time – they must be refreshed periodically, hence the name Dynamic
• DRAM access suffers from long access time and high energy overhead
• Since the pins on a processor chip are not expected to increase much, we will hit a memory bandwidth wall

DRAM Organization
[Figure: memory system organization – on-chip memory controller, memory bus or channel, DIMM, rank, DRAM chip or device, bank, array; each chip supplies 1/8th of the row buffer and one word of the data output]

DRAM Array Access
• A 1M-bit DRAM is a 1024 x 1024 array of bits
• The 10 row address bits arrive first (Row Access Strobe, RAS); 1024 bits are read out into the row buffer
• The 10 column address bits arrive next (Column Access Strobe, CAS); the column decoder selects the subset of bits returned to the CPU

Salient Points
• DIMM, rank, bank, and array form a hierarchy in the storage organization; banks can be simultaneously working on different requests
• A cache line is spread across several DRAM chips to increase data transfer bandwidth
• To maximize density, arrays are made large → rows are wide → row buffers are wide (an 8KB read for a 64B request)
• The memory controller schedules memory accesses to maximize row buffer hit rates and bank parallelism

Technology Trends
• Improvements in technology (smaller devices) → DRAM capacities double every two years
• DRAM will soon hit a density wall; it may have to be replaced by other technologies (phase change memory, STT-RAM)
• Interconnects may have to be photonic to overcome the bandwidth limitation imposed by pins on the chip
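The RAS/CAS address split and the row-buffer behaviour above can be sketched in a few lines. This is a sketch under assumptions: the 10 row and 10 column bits follow the 1024 x 1024 array example, while the 3 bank bits, the field ordering, and the open-page policy are illustrative choices, not taken from the slides.

```python
# Toy open-page DRAM model: split an address into row / bank / column fields and
# count row-buffer hits. Field widths are illustrative; the 10 column bits match
# the 1024 x 1024 array example, the 3 bank bits and field ordering are assumed.
COL_BITS, BANK_BITS = 10, 3

open_row = {}            # bank -> row currently held in that bank's row buffer
hits = misses = 0

def access(addr):
    """Decode an address and record whether it hits in the currently open row."""
    global hits, misses
    col = addr & ((1 << COL_BITS) - 1)
    bank = (addr >> COL_BITS) & ((1 << BANK_BITS) - 1)
    row = addr >> (COL_BITS + BANK_BITS)
    if open_row.get(bank) == row:    # row already open: only a CAS is needed
        hits += 1
    else:                            # row miss: a new RAS opens this row
        misses += 1
        open_row[bank] = row
    return row, bank, col

# A streaming access pattern stays within a few rows, so most accesses hit.
for addr in range(0, 4096, 16):
    access(addr)
print(hits, misses)                  # row-buffer hits dominate for this pattern
```

The wide-row point from the Salient Points slide shows up in the same numbers: opening a row reads far more bits into the row buffer than a single 64B cache-line request returns (an 8KB read for a 64B request across the rank), which is why the controller schedules requests to reuse the open row.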

