Contents: Cache Memories · Conceptual Operation of Cache · Write Protocols · Mapping Algorithms · Mapping Functions · Read Protocols · Write Miss · Main Memory · Direct Mapping · Placement of Block in Cache · Associative Mapping · Set Associative Mapping · Valid Bit · Cache Coherence · Replacement Algorithms · Caches in Commercial Processors · Pentium III Caches · Level 2 Cache of Pentium III · Pentium 4 Caches · L2 of Pentium 4

Cache Memories
•The effectiveness of a cache is based on a property of computer programs called locality of reference.
•Most of a program's time is spent in loops or in procedures that are called repeatedly; the remainder of the program is accessed infrequently.
•Temporal referencing – a recently executed instruction is likely to be executed again.
•Spatial referencing – instructions in close proximity to a recently executed instruction are likely to be executed soon.

Cache Memories
•Based on locality of reference:
–Temporal: recently executed instructions are likely to be executed again soon.
–Spatial: instructions in close proximity (with respect to address) to a recently executed instruction are also likely to be executed soon.
•Cache block – a set of contiguous address locations (cache block = cache line).

Conceptual Operation of Cache
•The memory control circuitry is designed to take advantage of locality of reference.
•Temporal – whenever an item of information (an instruction or data) is first needed, it should be brought into the cache, where it will hopefully remain until it is needed again.
•Spatial – instead of fetching just one item from main memory into the cache, it is useful to fetch several items that reside at adjacent addresses as well.
•A set of contiguous addresses is called a block (cache block or cache line).

Cache Memories
•Using an example cache size of 128 blocks of 16 words each
(a total of 2048 words, i.e., 2K words).
•Main memory is addressable by a 16-bit address bus (64K words, viewed as 4K blocks of 16 words each).
•Write-through protocol:
–The cache and main memory are updated simultaneously.
•Write-back protocol:
–Update only the cache and mark the block with an associated flag bit (the dirty or modified bit).
–Main memory is updated later, when the block containing the marked word is removed from the cache to make room for a new block.

Write Protocols
•Write-through is simpler, but results in unnecessary write operations to main memory when a cache word is updated several times during its cache residency.
•Write-back can also result in unnecessary write operations, because when a cache block is written back to memory, all words of the block are written back, even if only a single word was changed while the block was in the cache.

Mapping Algorithms
•The processor does not need to know explicitly that there is a cache.
•On read and write operations, the cache control circuitry determines whether the requested word currently exists in the cache (a hit).
•If the information is in the cache for a read, main memory is not involved.
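The two write protocols described above can be contrasted with a minimal sketch (illustrative Python, not from the slides; `CacheLine`, `write_word`, and `evict` are assumed names, while the 16-word block size matches the slide example):

```python
# Minimal sketch of write-through vs. write-back for one cache line.
# CacheLine, write_word, and evict are illustrative names, not from
# the slides; the 16-word block size matches the slide example.

BLOCK_WORDS = 16  # words per cache block

class CacheLine:
    def __init__(self):
        self.dirty = False              # used only by write-back
        self.data = [0] * BLOCK_WORDS

def write_word(line, word_idx, value, main_memory, block_addr, policy):
    """Write one word into a resident cache line under the given policy."""
    line.data[word_idx] = value
    if policy == "write-through":
        # Cache and main memory are updated simultaneously.
        main_memory[block_addr * BLOCK_WORDS + word_idx] = value
    else:  # write-back
        # Only the cache is updated; the dirty bit defers the memory
        # write until the block is evicted.
        line.dirty = True

def evict(line, main_memory, block_addr):
    """A write-back cache flushes the whole block on eviction if dirty."""
    if line.dirty:
        for i, word in enumerate(line.data):
            main_memory[block_addr * BLOCK_WORDS + i] = word
        line.dirty = False
```

Note that `evict` writes back all 16 words even when only one was modified, which is exactly the overhead the slides attribute to write-back.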
•For write operations, the system can use either the write-through or the write-back protocol.

Mapping Functions
•Specification of the correspondence between main memory blocks and those in the cache.
•Cases to handle: hit or miss
–Write-through protocol
–Write-back protocol (uses a dirty bit)
–Read miss
–Load-through (early restart) on a read miss
–Write miss

Read Protocols
•Read miss
–The addressed word is not in the cache.
–The block of words containing the requested word is copied from main memory into the cache.
–After the entire block is written to the cache, the particular word is forwarded to the processor.
–Alternatively, the word may be sent to the processor as soon as it is read from main memory (load-through or early restart); this reduces the processor's wait time but requires more complex circuitry.

Write Miss
•If the addressed word is not in the cache for a write operation, a write miss occurs.
•Write-through – the information is written directly into main memory.
•Write-back – the block containing the word is brought into the cache, then the desired word in the cache is overwritten with the new information.

Mapping Functions
[Figure: a cache of 128 blocks (tag + block 0 … block 127) of 16 words each, 2048 (2K) words in total; main memory of 64K words viewed as 4K blocks of 16 words each (block 0, 1, …, 128, 129, …, 255, 256, 257, …, 4095); the 16-bit main memory address is divided into a 5-bit tag, a 7-bit block field, and a 4-bit word field.]

Direct Mapping
•Main memory block J maps to cache block (J modulo 128).
–Main memory blocks 0, 128, 256, … map to cache block 0.
–Blocks 1, 129, 257, … map to cache block 1, and so on.
•Contention can arise for a position even if the cache is not full.
•Contention is resolved by allowing the new block to overwrite the currently resident block.

Placement of Block in Cache
•Direct mapping is easy to implement but not very flexible.
•The cache position is determined from the memory address:
–The low-order 4 bits select one of the 16 words in a block.
–When a new block enters the cache, the 7-bit block field determines its cache position.
–The 5 high-order bits are stored as the tag.
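The direct-mapped address decomposition just described can be sketched in a few lines (illustrative Python, not from the slides; the field widths are the 5/7/4 split of the 16-bit example address):

```python
# Split a 16-bit main memory address into the direct-mapped cache
# fields from the slide example: 5-bit tag, 7-bit block, 4-bit word.

WORD_BITS = 4    # 2**4 = 16 words per block
BLOCK_BITS = 7   # 2**7 = 128 cache blocks
TAG_BITS = 5     # 16 - 7 - 4; 2**5 = 32 memory blocks share each slot

def split_address(addr):
    word = addr & (2**WORD_BITS - 1)
    block = (addr >> WORD_BITS) & (2**BLOCK_BITS - 1)
    tag = (addr >> (WORD_BITS + BLOCK_BITS)) & (2**TAG_BITS - 1)
    return tag, block, word
```

For example, word 5 of main memory block 129 (address 129·16 + 5) lands in cache block 1 with tag 1, consistent with blocks 1, 129, 257, … all mapping to cache block 1.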
They identify which of the 32 main memory blocks that map to this position is currently resident.
[Address fields: 5-bit tag | 7-bit block | 4-bit word]

Associative Mapping
•Much more flexible, but higher cost: all 128 tag patterns must be searched to determine whether a given block is in the cache.
–All tags must be searched in parallel.
•A main memory block can be placed into any cache block position.
•Existing blocks need to be ejected only if the cache is full.
[Address fields: 12-bit tag | 4-bit word]

Set Associative Mapping
•The blocks of the cache are grouped into sets.
•A block of main memory can reside in any block of one specific set.
•Reduces the contention problem of direct mapping, and reduces the hardware needed for tag searching compared with associative mapping.
•With k blocks per set, the cache is called k-way set associative.
[Address fields: 6-bit tag | 6-bit set | 4-bit word]

Valid Bit
•Provided for each block.
•Indicates whether the block contains valid data.
•Not the same as the dirty bit (used with the write-back method), which indicates whether the block has been modified during its cache residency.
•Transfers from disk to main memory are normally handled with DMA transfers, bypassing the cache for both cost and performance reasons.
•The valid bit is set to 1 the first time the block is loaded into the cache from main memory. Whenever a main memory block is updated by a source that bypasses the cache, a check is made to determine whether the block being updated is in the cache. If it is, its valid bit is cleared to 0.

Cache Coherence
•Also, before a DMA transfer,
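The three mapping functions covered above can be contrasted by listing the cache positions a given main memory block may occupy (illustrative Python, not from the slides; the 2-way figure is an inference from the 6-bit set field, i.e. 64 sets of 2 blocks in the 128-block example cache):

```python
# Legal cache positions for a main memory block under each of the
# three mapping functions, using the 128-block cache from the slides.

NUM_BLOCKS = 128
NUM_SETS = 64                         # 6-bit set field => 128/64 = 2-way

def direct_mapped_slots(block):
    return [block % NUM_BLOCKS]       # exactly one legal position

def associative_slots(block):
    return list(range(NUM_BLOCKS))    # any position is legal

def set_associative_slots(block):
    ways = NUM_BLOCKS // NUM_SETS     # 2 blocks per set
    s = block % NUM_SETS              # the block's set number
    return [s * ways + w for w in range(ways)]
```

Direct mapping gives one choice (hence the contention noted earlier), associative mapping gives 128, and 2-way set associative gives a middle ground of 2.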


MSU ECE 3724 - Cache Memories
