SJSU CS 147 - Cache Memory

OUTLINE
• CACHE MEMORY
• LOCALITY
• CACHE LINES / BLOCKS
• TAG / INDEX
• VALID BIT / DIRTY BIT
• CACHE HITS / MISSES
• CACHE MEMORY: PLACEMENT POLICY
• Associative Mapping
• Direct Mapping
• Set-Associative Mapping
• DIFFERENCE BETWEEN LINES, SETS AND BLOCKS
• ASSOCIATIVITY
• REPLACEMENT ALGORITHM
• HIT RATIO and EFFECTIVE ACCESS TIMES
• LOAD-THROUGH / STORE-THROUGH
• WRITE METHODS
• CACHE CONFLICT
• CACHE COHERENCY
• REFERENCES

CACHE MEMORY
CS 147
October 2, 2008
Sampriya Chandra

LOCALITY
The PRINCIPLE OF LOCALITY is the tendency to reference data items that are near other recently referenced data items, or that were recently referenced themselves.
• TEMPORAL LOCALITY: a memory location that is referenced once is likely to be referenced multiple times in the near future.
• SPATIAL LOCALITY: once a memory location is referenced, the program is likely to reference a nearby memory location in the near future.
(A short C sketch illustrating both kinds of locality appears after the VALID BIT / DIRTY BIT slide below.)

CACHE MEMORY
The principle of locality helped to speed up main memory access by introducing small fast memories known as CACHE MEMORIES that hold blocks of the most recently referenced instructions and data items. A cache is a small fast storage device that holds the operands and instructions most likely to be used by the CPU.
The memory hierarchy of early computers had 3 levels:
• CPU registers
• DRAM main memory
• Disk storage
Due to the increasing gap between CPU and main memory speeds, a small SRAM memory called the L1 cache was inserted between them. L1 caches can be accessed almost as fast as the registers, typically in 1 or 2 clock cycles.
As the gap grew even wider, an additional cache, the L2 cache, was inserted between the L1 cache and main memory; it takes a few more clock cycles to access than L1, but far fewer than main memory.
• The L2 cache may be attached to the memory bus or to its own cache bus.
• Some high-performance systems also include an additional L3 cache, which sits between L2 and main memory. Its arrangement differs, but the principle is the same.
• The cache is placed both physically closer and logically closer to the CPU than the main memory.

CACHE LINES / BLOCKS
• Cache memory is subdivided into cache lines.
• Cache line / block: the smallest unit of memory that can be transferred between the main memory and the cache.

TAG / INDEX
• Every address field consists of two primary parts: a dynamic part (the tag), which contains the higher address bits, and a static part (the index), which contains the lower address bits.
• The first may be modified during run-time, while the second is fixed.

VALID BIT / DIRTY BIT
When a program is first loaded into main memory, the cache is cleared; so while a program is executing, a valid bit is needed to indicate whether or not the slot holds a line that belongs to the program being executed. There is also a dirty bit that keeps track of whether or not a line has been modified while it is in the cache. A slot that has been modified must be written back to the main memory before the slot is reused for another line.
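As a concrete sketch of this bookkeeping (my illustration, not from the slides; the field widths and the evict helper are illustrative assumptions), one cache line's state in C might look like:

#include <stdint.h>
#include <string.h>

#define LINE_SIZE 64  /* block size in bytes; an illustrative assumption */

/* One cache line: the cached block plus its bookkeeping bits. */
struct cache_line {
    uint8_t  valid;            /* set when the slot holds a line of the running program */
    uint8_t  dirty;            /* set when the line is modified while in the cache */
    uint32_t tag;              /* the higher address bits identifying the block */
    uint8_t  data[LINE_SIZE];  /* the block itself */
};

/* Hypothetical helper: before a slot is reused for another line, a
 * modified (dirty) line must be written back to main memory first. */
static void evict(struct cache_line *line, uint8_t *main_memory,
                  uint32_t block_addr) {
    if (line->valid && line->dirty)
        memcpy(main_memory + block_addr, line->data, LINE_SIZE);
    line->valid = 0;
    line->dirty = 0;
}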
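Returning to the LOCALITY slide above: the classic way to see spatial locality is to traverse a 2-D array in two orders. This sketch is mine, not the slides'; both functions compute the same sum, but the row-major loop walks memory sequentially and fully reuses every fetched cache line, while the column-major loop strides N*sizeof(int) bytes between accesses. The scalars sum, i and j also show temporal locality, since they are re-referenced on every iteration.

#include <stddef.h>

#define N 1024  /* array dimension; an illustrative assumption */

/* Row-major traversal: consecutive accesses land in the same cache
 * line, so each block fetched from main memory is fully used. */
long sum_row_major(int a[N][N]) {
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal: successive accesses are N*sizeof(int)
 * bytes apart, so each one may miss and evict lines whose other
 * elements were never used. */
long sum_col_major(int a[N][N]) {
    long sum = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}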
TAG / INDEX (EXAMPLE)
• Example: Memory segments and cache segments are exactly the same size, and every memory segment contains N equally sized memory lines. Memory lines and cache lines are also exactly the same size. Therefore, to obtain the address of a memory line, determine the number of its memory segment first and the number of the memory line inside that segment second, then merge both numbers. Substitute the segment number with the tag and the line number with the index, and you have the idea in general.
A cache line's tag size therefore depends on 3 factors:
• size of cache memory;
• associativity of cache memory;
• cacheable range of operating memory.
With
S_tag — size of cache tag, in bits;
S_memory — cacheable range of operating memory, in bytes;
S_cache — size of cache memory, in bytes;
A — associativity of cache memory, in ways;
the tag size is:
S_tag = log2(S_memory × A / S_cache)

CACHE HITS / MISSES
• Cache hit: a request to read from memory that can be satisfied from the cache without using the main memory.
• Cache miss: a request to read from memory that cannot be satisfied from the cache, for which the main memory has to be consulted.

CACHE MEMORY: PLACEMENT POLICY
There are three commonly used methods to translate main memory addresses to cache memory addresses:
• Associative mapped cache
• Direct-mapped cache
• Set-associative mapped cache
The choice of cache mapping scheme affects cost and performance, and there is no single best method that is appropriate for all situations.

Associative Mapping
• A block in main memory can be mapped to any available (not already occupied) block in cache memory.
• Advantage: flexibility. A main memory block can be mapped anywhere in cache memory.
• Disadvantage: slow or expensive. A search through all the cache memory blocks is needed to check whether the address matches any of the tags.

Direct Mapping
• To avoid the search through all CM blocks needed by associative mapping, this method only allows (# blocks in main memory) / (# blocks in cache memory) blocks to be mapped to each cache memory block.
• Advantage: direct mapping is faster than associative mapping, as it avoids searching through all the CM tags for a match.
• Disadvantage: it lacks mapping flexibility. For example, if two MM blocks mapped to the same CM block are needed repeatedly (e.g., in a loop), they will keep replacing each other, even though all other CM blocks may be available.

Set-Associative Mapping
• This is a trade-off between associative and direct mapping, where each address is mapped to a certain set of cache locations.
• The cache is broken into sets, where each set contains "N" cache lines, say 4. Each memory address is assigned a set and can be cached in any one of those 4 locations within the set it is assigned to. In other words, within each set the cache is associative, and thus the name.
(An address-decomposition sketch in C follows the ASSOCIATIVITY slide below.)

DIFFERENCE BETWEEN LINES, SETS AND BLOCKS
• In direct-mapped caches, sets and lines are equivalent. However, in associative caches, sets and lines are very different things, and the terms cannot be interchanged.
• BLOCK: a fixed-size packet of information that moves back and forth between a cache and main memory.
• LINE: a container in a cache that stores a block, as well as other information such as the valid bit and tag bits.
• SET: a collection of one or more lines. Sets in direct-mapped caches consist of a single line; sets in fully associative and set-associative caches consist of multiple lines.

ASSOCIATIVITY
• Associativity: N-way set-associative
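To tie the mapping schemes and the tag-size formula together, here is a minimal runnable sketch (mine, not from the slides). All geometry constants are illustrative assumptions: a 32 KB, 4-way cache with 64-byte lines and a 32-bit (4 GB) cacheable range.

/* How a set-associative cache splits an address into
 * tag / set index / byte offset. */
#include <stdio.h>

enum {
    CACHE_SIZE = 32 * 1024,  /* S_cache, in bytes (assumed) */
    LINE_SIZE  = 64,         /* block size, in bytes (assumed) */
    WAYS       = 4,          /* A, associativity in ways (assumed) */
    ADDR_BITS  = 32          /* log2(S_memory) (assumed) */
};

/* Integer log base 2 (x must be a power of two). */
static unsigned ilog2(unsigned x) {
    unsigned n = 0;
    while (x >>= 1)
        n++;
    return n;
}

int main(void) {
    unsigned sets        = CACHE_SIZE / (LINE_SIZE * WAYS); /* 128 sets */
    unsigned offset_bits = ilog2(LINE_SIZE);                /* 6 bits  */
    unsigned index_bits  = ilog2(sets);                     /* 7 bits  */
    unsigned tag_bits    = ADDR_BITS - index_bits - offset_bits;

    printf("sets=%u offset=%u index=%u tag=%u bits\n",
           sets, offset_bits, index_bits, tag_bits);

    /* Decompose one example address the way the cache hardware would. */
    unsigned addr   = 0x12345678u;
    unsigned offset = addr & (LINE_SIZE - 1);
    unsigned set    = (addr >> offset_bits) & (sets - 1);
    unsigned tag    = addr >> (offset_bits + index_bits);
    printf("addr=0x%08x -> tag=0x%x set=%u offset=%u\n",
           addr, tag, set, offset);
    return 0;
}

With these numbers the program reports a 19-bit tag, which agrees with S_tag = log2(S_memory × A / S_cache) = log2(2^32 × 4 / 2^15) = 19; note that the line size cancels out of the tag width.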

