Berkeley COMPSCI 162 - Lecture 14 Caching and Demand Paging

CS162 Operating Systems and Systems Programming
Lecture 14: Caching and Demand Paging
October 20, 2008
Prof. John Kubiatowicz
http://inst.eecs.berkeley.edu/~cs162

Outline of slides: Review: Memory Hierarchy of a Modern Computer System; Review: A Summary on Sources of Cache Misses; Goals for Today; Review: Set Associative Cache; Review: Where does a Block Get Placed in a Cache?; Which block should be replaced on a miss?; What happens on a write?; Caching Applied to Address Translation; What Actually Happens on a TLB Miss?; What happens on a Context Switch?; Administrative; Using Compare&Swap (CAS) for queues from Exam; What TLB organization makes sense?; TLB organization: include protection; Example: R3000 pipeline includes TLB "stages"; Reducing translation time further; Overlapping TLB & Cache Access; Demand Paging; Illusion of Infinite Memory; Demand Paging is Caching; Review: What is in a PTE?; Demand Paging Mechanisms; Software-Loaded TLB; Transparent Exceptions; Consider weird things that can happen; Precise Exceptions; Page Replacement Policies; Replacement Policies (Con't); Summary

Review: Memory Hierarchy of a Modern Computer System
• Take advantage of the principle of locality to:
  – Present as much memory as in the cheapest technology
  – Provide access at the speed offered by the fastest technology
[Figure: the hierarchy runs from processor registers and on-chip cache, through second-level cache (SRAM) and main memory (DRAM), to secondary storage (disk) and tertiary storage (tape); access times range from about 1 ns (registers) and 10s-100s ns (caches) to 100s of ns (DRAM), 10s of ms (disk), and 10s of seconds (tape), while capacities range from 100s of bytes through KBs-MBs and MBs up to GBs and TBs.]

Review: A Summary on Sources of Cache Misses
• Compulsory (cold start): first reference to a block
  – "Cold" fact of life: not a whole lot you can do about it
  – Note: when running billions of instructions, compulsory misses are insignificant
• Capacity:
  – Cache cannot contain all blocks accessed by the program
  – Solution: increase cache size
• Conflict (collision):
  – Multiple memory locations mapped to the same cache location
  – Solutions: increase cache size, or increase associativity
• Two others:
  – Coherence (invalidation): another process (e.g., I/O) updates memory
  – Policy: due to a non-optimal replacement policy

Goals for Today
• Finish discussion of Caching/TLBs
• Concept of Paging to Disk
• Page Faults and TLB Faults
• Precise Interrupts
• Page Replacement Policies
Note: Some slides and/or pictures in the following are adapted from slides ©2005 Silberschatz, Galvin, and Gagne. Many slides generated from my lecture notes by Kubiatowicz.

Review: Set Associative Cache
• N-way set associative: N entries per cache index
  – N direct-mapped caches operate in parallel
• Example: two-way set associative cache
  – Cache index selects a "set" from the cache
  – The two tags in the set are compared to the input tag in parallel
  – Data is selected based on the tag result
[Figure: two-way set-associative lookup; the address splits into cache tag, cache index, and byte select; the index selects one block from each way, both tags are compared against the address tag in parallel, and the compare results (OR'd to form Hit) drive a mux that selects the matching cache block.]

Review: Where does a Block Get Placed in a Cache?
• Example: block 12 placed in an 8-block cache
  – Direct mapped: block 12 can go only into block 4 (12 mod 8)
  – Set associative (two-way, four sets): block 12 can go anywhere in set 0 (12 mod 4)
  – Fully associative: block 12 can go anywhere
[Figure: block 12 of a 32-block address space placed into an 8-block cache under each of the three organizations.]

Which block should be replaced on a miss?
• Easy for direct mapped: only one possibility
• Set associative or fully associative:
  – Random
  – LRU (Least Recently Used)
• Measured miss rates, LRU vs. Random:

  Size      2-way LRU   2-way Random   4-way LRU   4-way Random   8-way LRU   8-way Random
  16 KB     5.2%        5.7%           4.7%        5.3%           4.4%        5.0%
  64 KB     1.9%        2.0%           1.5%        1.7%           1.4%        1.5%
  256 KB    1.15%       1.17%          1.13%       1.13%          1.12%       1.12%

What happens on a write?
• Write through: the information is written both to the block in the cache and to the block in lower-level memory
• Write back: the information is written only to the block in the cache
  – A modified cache block is written to main memory only when it is replaced
  – Question: is the block clean or dirty?
• Pros and cons of each?
  – Write through:
    » PRO: read misses cannot result in writes
    » CON: processor held up on writes unless writes are buffered
  – Write back:
    » PRO: repeated writes are not sent to DRAM; processor not held up on writes
    » CON: more complex; a read miss may require writeback of dirty data
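To make the set-associative lookup, LRU replacement, and write-back behavior above concrete, here is a minimal C model. It is a sketch for illustration, not code from the lecture: the cache geometry (2 ways, 16 sets, 32-byte blocks) and all names such as cache_line_t and access_cache are invented for the example, and a real cache performs the tag compares in parallel in hardware rather than in a loop.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical parameters: 2 ways, 16 sets, 32-byte blocks (a 1 KiB cache). */
#define NUM_WAYS   2
#define NUM_SETS   16
#define BLOCK_SIZE 32

typedef struct {
    bool     valid;
    bool     dirty;   /* set on a write; consulted on eviction (write-back) */
    uint32_t tag;
    uint8_t  lru;     /* 0 = most recently used, 1 = least recently used */
} cache_line_t;

static cache_line_t cache[NUM_SETS][NUM_WAYS];

/* Split an address into its set index and tag (the byte offset is ignored here). */
static uint32_t addr_index(uint32_t addr) { return (addr / BLOCK_SIZE) % NUM_SETS; }
static uint32_t addr_tag(uint32_t addr)   { return addr / (BLOCK_SIZE * NUM_SETS); }

/* Mark `way` most recently used in its set (two-way LRU is just a swap). */
static void touch(uint32_t set, int way) {
    cache[set][way].lru     = 0;
    cache[set][1 - way].lru = 1;
}

/* Simulate one access; is_write exercises the write-back policy. */
static void access_cache(uint32_t addr, bool is_write) {
    uint32_t set = addr_index(addr), tag = addr_tag(addr);

    /* Both tags in the selected set are compared (in parallel in hardware). */
    for (int way = 0; way < NUM_WAYS; way++) {
        if (cache[set][way].valid && cache[set][way].tag == tag) {
            if (is_write)
                cache[set][way].dirty = true;   /* write-back: don't touch DRAM yet */
            touch(set, way);
            printf("%c 0x%05x: hit  (set %u, way %d)\n",
                   is_write ? 'W' : 'R', (unsigned)addr, (unsigned)set, way);
            return;
        }
    }

    /* Miss: evict the LRU way; a dirty victim must be written back first. */
    int victim = (cache[set][0].lru == 1) ? 0 : 1;
    if (cache[set][victim].valid && cache[set][victim].dirty)
        printf("           write back dirty block (tag 0x%x) to DRAM\n",
               (unsigned)cache[set][victim].tag);

    cache[set][victim] = (cache_line_t){ .valid = true, .dirty = is_write, .tag = tag };
    touch(set, victim);
    printf("%c 0x%05x: MISS (set %u, way %d filled)\n",
           is_write ? 'W' : 'R', (unsigned)addr, (unsigned)set, victim);
}

int main(void) {
    memset(cache, 0, sizeof cache);
    access_cache(0x02000, true);    /* cold miss; block installed dirty (write-back) */
    access_cache(0x01000, false);   /* same set, fills the other way */
    access_cache(0x01004, false);   /* same block as 0x01000: hit */
    access_cache(0x03000, false);   /* set full: evicts LRU block; dirty 0x02000 written back */
    access_cache(0x02000, false);   /* was evicted above, so it misses again */
    return 0;
}

Running this trace shows a write installing a dirty block, a later conflict miss forcing that dirty block back to DRAM, and the per-set LRU state deciding which way is evicted.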
Caching Applied to Address Translation
• Question is one of page locality: does it exist?
  – Instruction accesses spend a lot of time on the same page (since accesses are sequential)
  – Stack accesses have definite locality of reference
  – Data accesses have less page locality, but still some…
• Can we have a TLB hierarchy?
  – Sure: multiple levels at different sizes/speeds
[Figure: the CPU issues a virtual address; if the translation is cached in the TLB it is used directly, otherwise the MMU translates and the result is saved in the TLB; the resulting physical address then goes to physical memory for the (untranslated) data read or write.]

What Actually Happens on a TLB Miss?
• Hardware-traversed page tables:
  – On a TLB miss, hardware in the MMU looks at the current page table to fill the TLB (may walk multiple levels)
    » If the PTE is valid, the hardware fills the TLB and the processor never knows
    » If the PTE is marked invalid, it causes a Page Fault, after which the kernel decides what to do
• Software-traversed page tables (like MIPS):
  – On a TLB miss, the processor receives a TLB fault
  – The kernel traverses the page table to find the PTE
    » If the PTE is valid, it fills the TLB and returns from the fault
    » If the PTE is marked invalid, it internally calls the Page Fault handler
• Most chip sets provide hardware traversal
  – Modern operating systems tend to have more TLB faults since they use translation for many things
  – Examples:
    » shared segments
    » user-level portions of an operating system
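For the software-traversed (MIPS-style) path just described, the refill logic the kernel runs on a TLB fault can be sketched as below. This is an illustrative, self-contained model, not MIPS or lecture code: pte_t, tlb_write_random(), and do_page_fault() are hypothetical names, and the single-level 16-entry page table stands in for a real multi-level table.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12            /* 4 KiB pages, assumed for the example */
#define NUM_PAGES  16            /* tiny one-level page table for illustration */
#define PTE_VALID  0x1

/* Hypothetical PTE: physical frame number plus a valid bit. */
typedef struct {
    uint32_t pfn;
    uint32_t flags;
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Stand-in for the hardware "write a TLB entry" instruction (MIPS has tlbwr). */
static void tlb_write_random(uint32_t vpn, uint32_t pfn) {
    printf("  TLB refill: vpn %u -> pfn %u\n", (unsigned)vpn, (unsigned)pfn);
}

/* Stand-in for the kernel's page-fault path (allocate a frame, read from disk, ...). */
static void do_page_fault(uint32_t vpn) {
    printf("  page fault on vpn %u: kernel must bring the page in\n", (unsigned)vpn);
}

/*
 * Software TLB-miss handler: the processor traps to the kernel, the kernel
 * walks the page table, then either refills the TLB and returns from the
 * fault or falls through to the page-fault handler.
 */
static void tlb_miss_handler(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;

    if (vpn >= NUM_PAGES) {                 /* out of range: treat as a fault */
        do_page_fault(vpn);
        return;
    }

    pte_t pte = page_table[vpn];
    if (pte.flags & PTE_VALID)
        tlb_write_random(vpn, pte.pfn);     /* valid PTE: refill TLB, return from trap */
    else
        do_page_fault(vpn);                 /* invalid PTE: invoke the page-fault handler */
}

int main(void) {
    page_table[3] = (pte_t){ .pfn = 42, .flags = PTE_VALID };   /* one mapped page */

    printf("TLB miss at 0x3ab0:\n");
    tlb_miss_handler(0x3ab0);               /* vpn 3: valid, TLB gets refilled */

    printf("TLB miss at 0x5000:\n");
    tlb_miss_handler(0x5000);               /* vpn 5: invalid, page fault */
    return 0;
}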
What happens on a Context Switch?
• Need to do something, since TLBs map virtual addresses to physical addresses
  – Address space just changed, so TLB entries are no longer valid
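Two standard ways of handling this (added here for illustration, not quoted from the slides) are to invalidate the entire TLB on every context switch, or to tag each TLB entry with an address-space identifier (ASID / process ID) so that entries from other address spaces simply never match. The sketch below models both options; tlb_entry_t, current_asid, and the 8-entry TLB are invented names and sizes.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 8

/* Hypothetical TLB entry: translation plus an address-space ID (ASID) tag. */
typedef struct {
    bool     valid;
    uint8_t  asid;   /* which address space this entry belongs to */
    uint32_t vpn;
    uint32_t pfn;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint8_t current_asid;

/* Option 1: no ASIDs -- invalidate every entry on each context switch. */
static void context_switch_flush(void) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        tlb[i].valid = false;               /* all cached translations become stale */
}

/* Option 2: ASIDs -- just change the current tag; old entries stop matching. */
static void context_switch_asid(uint8_t new_asid) {
    current_asid = new_asid;                /* no flush needed */
}

/* Lookup hits only if the entry is valid, for this address space, and for this page. */
static bool tlb_lookup(uint32_t vpn, uint32_t *pfn) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].asid == current_asid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return true;
        }
    }
    return false;                           /* miss: walk the page table instead */
}

int main(void) {
    /* Process A (ASID 1) has vpn 7 cached in the TLB. */
    current_asid = 1;
    tlb[0] = (tlb_entry_t){ .valid = true, .asid = 1, .vpn = 7, .pfn = 99 };

    uint32_t pfn;
    printf("A, vpn 7: %s\n", tlb_lookup(7, &pfn) ? "hit" : "miss");

    context_switch_asid(2);                 /* switch to B: A's entry kept, but no longer matches */
    printf("B, vpn 7: %s\n", tlb_lookup(7, &pfn) ? "hit" : "miss");

    context_switch_asid(1);                 /* switch back to A: its entry still hits */
    printf("A, vpn 7 again: %s\n", tlb_lookup(7, &pfn) ? "hit" : "miss");

    context_switch_flush();                 /* without ASIDs we flush, so A re-misses */
    printf("A, vpn 7 after flush: %s\n", tlb_lookup(7, &pfn) ? "hit" : "miss");
    return 0;
}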

