Berkeley COMPSCI 252 - Lecture 12 - Caches

EECS 252 Graduate Computer Architecture, Lec 12 - Caches
David Culler
Electrical Engineering and Computer Sciences, University of California, Berkeley
http://www.eecs.berkeley.edu/~culler
http://www-inst.eecs.berkeley.edu/~cs252

Outline:
• Review: Who Cares About the Memory Hierarchy?
• Review: What is a cache?
• Review: Terminology
• Why it works
• Block Placement
• 1 KB Direct Mapped Cache, 32B blocks
• Review: Set Associative Cache
• Q2: How is a block found if it is in the upper level?
• Q3: Which block should be replaced on a miss?
• Q4: What happens on a write?
• Write Buffer for Write Through
• Review: Cache performance
• Impact on Performance
• Example: Harvard Architecture
• The Cache Design Space
• Review: Improving Cache Performance
• Reducing Misses
• 3Cs Absolute Miss Rate (SPEC92)
• 2:1 Cache Rule
• 3Cs Relative Miss Rate
• How Can We Reduce Misses?
• 1. Reduce Misses via Larger Block Size
• 2. Reduce Misses via Higher Associativity
• Example: Avg. Memory Access Time vs. Miss Rate
• 3. Reducing Misses via a "Victim Cache"
• 4. Reducing Misses via "Pseudo-Associativity"
• 5. Reducing Misses by Hardware Prefetching of Instructions & Data
• 6. Reducing Misses by Software Prefetching Data
• 7. Reducing Misses by Compiler Optimizations
• Merging Arrays Example
• Loop Interchange Example
• Loop Fusion Example
• Blocking Example
• Reducing Conflict Misses by Blocking
• Summary of Compiler Optimizations to Reduce Cache Misses (by hand)
• Impact of Memory Hierarchy on Algorithms
• Quicksort vs. Radix as number of keys varies: Instructions
• Quicksort vs. Radix as number of keys varies: Instrs & Time
• Quicksort vs. Radix as number of keys varies: Cache misses
• Review: What happens on Cache miss?
• Disadvantage of Set Associative Cache
• Review: Four Questions for Memory Hierarchy Designers
• Summary

Review: Who Cares About the Memory Hierarchy?
[Figure: CPU vs. DRAM performance, 1980-2000. Processor performance ("Moore's Law") improves ~60%/yr while DRAM improves ~7%/yr, so the processor-memory performance gap grows ~50% per year.]
• Processor only thus far in course:
  – CPU cost/performance, ISA, pipelined execution
• CPU-DRAM gap:
  – 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with a cache on chip)
• "Less' Law?"

Review: What is a cache?
• Small, fast storage used to improve average access time to slow memory.
• Exploits spatial and temporal locality.
• In computer architecture, almost everything is a cache!
  – Registers: a cache on variables
  – First-level cache: a cache on the second-level cache
  – Second-level cache: a cache on memory
  – Memory: a cache on disk (virtual memory)
  – TLB: a cache on the page table
  – Branch prediction: a cache on prediction information?
[Figure: the memory hierarchy - Proc/Regs, L1-Cache, L2-Cache, Memory, Disk/Tape; lower levels are bigger, upper levels are faster.]

Review: Terminology
• Hit: data appears in some block in the upper level (example: Block X)
  – Hit Rate: the fraction of memory accesses found in the upper level
  – Hit Time: time to access the upper level = RAM access time + time to determine hit/miss
• Miss: data must be retrieved from a block in the lower level (Block Y)
  – Miss Rate = 1 - (Hit Rate)
  – Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor
• Hit Time << Miss Penalty (500 instructions on the 21264!)
[Figure: upper-level and lower-level memory exchanging blocks X and Y with the processor.]

Why it works
• Exploit the statistical properties of programs
• Locality of reference
  – Temporal
  – Spatial
• Simple hardware structure that observes program behavior and reacts to improve future performance
• Is the cache visible in the ISA?
[Figure: P(access, t) plotted against address - the access distribution that locality produces.]

Average Memory Access Time:
  AMAT = Hit Time + Miss Rate × Miss Penalty
       = (Hit Time_Inst + Miss Rate_Inst × Miss Penalty_Inst)
       + (Hit Time_Data + Miss Rate_Data × Miss Penalty_Data)
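
To make the AMAT formula concrete, here is a minimal worked sketch in C. The hit time, miss penalty, miss rates, and the instruction/data access mix are illustrative values chosen for the example, not numbers from the lecture.

/* Minimal AMAT sketch for a split (Harvard) L1 cache.
 * All parameters below are assumed example values, not from the slides. */
#include <stdio.h>

int main(void) {
    double hit_time      = 1.0;   /* L1 hit time, in cycles                  */
    double miss_penalty  = 50.0;  /* cycles to fetch a block from below L1   */
    double i_miss_rate   = 0.02;  /* instruction-cache miss rate             */
    double d_miss_rate   = 0.05;  /* data-cache miss rate                    */
    double frac_data_ref = 0.30;  /* assumed share of accesses that are data */

    /* AMAT = Hit Time + Miss Rate x Miss Penalty, per reference stream. */
    double amat_inst = hit_time + i_miss_rate * miss_penalty;
    double amat_data = hit_time + d_miss_rate * miss_penalty;

    /* Overall AMAT over the mixed stream, weighting each stream by its
     * (assumed) share of accesses. */
    double amat = (1.0 - frac_data_ref) * amat_inst + frac_data_ref * amat_data;

    printf("AMAT(inst)    = %.2f cycles\n", amat_inst);
    printf("AMAT(data)    = %.2f cycles\n", amat_data);
    printf("AMAT(overall) = %.2f cycles\n", amat);
    return 0;
}

With these numbers, AMAT(inst) is 2.0 cycles, AMAT(data) is 3.5 cycles, and the overall AMAT is 2.45 cycles; the point is how strongly the miss-rate and miss-penalty terms dominate once Hit Time << Miss Penalty.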
Block Placement
• Q1: Where can a block be placed in the upper level?
  – Fully Associative
  – Set Associative
  – Direct Mapped

1 KB Direct Mapped Cache, 32B blocks
• For a 2^N byte cache:
  – The uppermost (32 - N) bits are always the Cache Tag
  – The lowest M bits are the Byte Select (Block Size = 2^M)
• (See the address-decomposition sketch at the end of this preview.)
[Figure: a 32-bit address split into Cache Tag (example: 0x50), Cache Index (ex: 0x01), and Byte Select (ex: 0x00); the data array holds Byte 0 ... Byte 1023 in 32-byte blocks, with a Valid bit and the Cache Tag stored as part of the cache "state".]

Review: Set Associative Cache
• N-way set associative: N entries for each Cache Index
  – N direct-mapped caches operate in parallel
  – How big is the tag?
• Example: two-way set associative cache
  – Cache Index selects a "set" from the cache
  – The two tags in the set are compared to the input in parallel
  – Data is selected based on the tag result
[Figure: two-way set-associative organization - two banks of (Valid, Cache Tag, Cache Data) entries indexed by Cache Index; the address tag is compared against both stored tags in parallel, the compare results are ORed to form Hit, and Sel1/Sel0 drive a mux that selects the matching Cache Block.]

Q2: How is a block found if it is in the upper level?
• Index identifies the set of possibilities
• Tag on each block
  – No need to check index or block offset
• Increasing associativity shrinks the index, expands the tag

Address layout: [ Block Address: Tag | Index ] [ Block Offset ]
  Cache size = Associativity × 2^index_size × 2^offset_size

Q3: Which block should be replaced on a miss?
• Easy for Direct Mapped
• Set Associative or Fully Associative:
  – Random
  – LRU (Least Recently Used)

Miss rates, LRU vs. Random:

  Associativity:   2-way          4-way          8-way
  Size             LRU    Random  LRU    Random  LRU    Random
  16 KB            5.2%   5.7%    4.7%   5.3%    4.4%   5.0%
  64 KB            1.9%   2.0%    1.5%   1.7%    1.4%   1.5%
  256 KB           1.15%  1.17%   1.13%  1.13%   1.12%  1.12%

Q4: What happens on a write?
• Write through: the information is written to both the block in the cache and to the block in the lower-level memory.
• Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced.
  – Is the block clean or dirty?
• Pros and cons of each?
  – WT: read misses cannot result in writes
  – WB: no repeated writes to the same location
• WT is always combined with write buffers so that the processor does not wait for the lower-level memory
• What about on a miss?
  – Write-no-allocate vs. write-allocate

Write Buffer for Write Through
• A Write Buffer is needed between the Cache and Memory
  – Processor: writes data into the cache and the write buffer
  – Memory controller: writes the contents of the buffer to memory
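
To connect the 1 KB direct-mapped example above to actual address bits, here is the minimal C sketch referenced earlier, decomposing an address into tag, index, and byte select. The field widths follow from the slide's configuration: 32-byte blocks give 5 byte-select bits, 1 KB / 32 B = 32 sets give 5 index bits, and the remaining 22 bits are the tag. The example address 0x00014020 is chosen only so the fields match the slide's sample values (tag 0x50, index 0x01, byte select 0x00); it is not from the lecture.

/* Address decomposition for a 1 KB direct-mapped cache with 32-byte blocks.
 * The cache geometry comes from the slide; the example address is assumed. */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_BITS 5u   /* 32-byte blocks        -> 5 byte-select bits */
#define INDEX_BITS 5u   /* 1 KB / 32 B = 32 sets -> 5 index bits       */

int main(void) {
    uint32_t addr = 0x00014020u;  /* assumed example address */

    uint32_t byte_select = addr & ((1u << BLOCK_BITS) - 1u);
    uint32_t index = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1u);
    uint32_t tag = addr >> (BLOCK_BITS + INDEX_BITS);  /* upper 22 bits */

    printf("addr=0x%08X  tag=0x%X  index=0x%02X  byte=0x%02X\n",
           (unsigned)addr, (unsigned)tag, (unsigned)index, (unsigned)byte_select);
    return 0;
}

On a lookup, the index selects one of the 32 cache lines, the stored tag is compared against the 22-bit address tag, and the byte select picks bytes within the 32-byte block. The same decomposition carries over to the set-associative case, where increasing associativity shrinks the index field and widens the tag, as noted on the Q2 slide.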