Berkeley COMPSCI 162 - Lecture Notes
CS 162 Operating Systems and Systems Programming
Professor: Anthony D. Joseph
Spring 2004

Lecture 15: Caching: Demand Paged Virtual Memory

15.0 Main Points:
• Concept of paging to disk
• Replacement policies

For implementation details, read the Levy and Lipman paper!

15.1 Demand Paging

Up to now, all of a job's virtual address space had to be in physical memory. But programs don't use all of their memory all of the time. In fact, there is a 90-10 rule: programs spend 90% of their time in 10% of their code.

Instead, use main memory as a cache for disk: some pages in memory, some on disk.

[Figure: virtual address space, translated through the page table to physical memory, with some pages out on disk]

Benefits:
• Bigger virtual address space: illusion of infinite memory
• Allows more programs than will fit in memory to be running at the same time

15.2 Demand Paging Mechanism

1. Page table has a "present" (valid) bit
   • If present, the entry points to a page frame in memory
   • If not present, go to disk
2. Hardware traps to the OS on a reference to an invalid page
   • (In MIPS/Nachos, the hardware traps on a TLB miss and the OS checks the page table valid bit)
3. OS software:
   a. Choose an old page to replace
   b. If the old page has been modified, write its contents back to disk
   c. Change its page table entry and TLB entry
   d. Load the new page into memory from disk
   e. Update the page table entry
   f. Continue the thread

All of this is transparent: the OS just runs another job in the meantime.

15.2.1 Software-loaded TLB

Instead of having the hardware load the TLB when a translation doesn't match, the MIPS/Snake/Nachos TLB is software loaded. The idea: if the TLB hit rate is high, it is OK to trap to software to fill the TLB, even if it's a bit slower.

How do we implement this? How can a process run without access to a page table?

Basic mechanism (a generalization of the above):

1. TLB entry has a "present" (valid) bit
   • If present, the entry points to a page frame in memory
   • If not present, use the software page table
2. Hardware traps to the OS on a reference not in the TLB
3. OS software:
   a. Check if the page is in memory
   b. If yes, load the page table entry into the TLB
   c. If no, perform the page fault operations outlined above
   d. Continue the thread

Paging to disk, and even having software load the TLB, is transparent: the job doesn't know it happened.

15.2.2 Transparent page faults

We need to transparently restart the faulting instruction. The hardware must help out, by saving:

1. The faulting instruction (need to know which instruction caused the fault)
2. The processor state

What if an instruction has side effects (CISC processors)?

    mov (sp)+, 10

Two options:
• Unwind the side effects
• Finish off the side effects

Are RISCs easier? What about delayed loads? Delayed branches?

    ld (sp), r1

What if the next instruction causes a page fault while the load is still in progress? We have to save enough state to allow the CPU to restart.

Lots of hardware designers don't think about virtual memory. For example, consider a block transfer instruction. The source and destination regions can overlap (destination beginning before the source ends), so the instruction overwrites part of the source as it proceeds.

[Figure: overlapping source and destination regions of a block transfer]

No way to unwind the instruction!

15.3 Page replacement policies

Replacement policy is an issue with any caching system.

15.3.1 Random?

Typical solution for TLBs. Easy to implement in hardware.

15.3.2 FIFO?

Throw out the oldest page. Be fair: let every page live in memory for the same amount of time, then toss it. Bad, because it throws out heavily used pages instead of pages that are not frequently used.

15.3.3 MIN

Replace the page that won't be used for the longest time into the future.

15.3.4 LRU

Replace the page that hasn't been used for the longest time. If the past predicts the future (induction), LRU is a good approximation to MIN. In practice, people don't use even LRU; they approximate it.

15.3.5 Example

Suppose we have 3 page frames and 4 virtual pages, with the reference string (of virtual page references):

    A B C A B D A D B C B

What happens with FIFO?
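The replacement policies above can also be simulated directly as a cross-check on hand traces. This is a minimal Python sketch, not part of the original notes; the `simulate` helper and its interface are invented for illustration:

```python
# Page-replacement simulator: counts page faults for FIFO, LRU, and MIN
# on a reference string with a fixed number of page frames.

def simulate(policy, refs, nframes):
    """Return the page fault count for policy in {"FIFO", "LRU", "MIN"}."""
    frames = []   # resident pages; kept in eviction order for FIFO/LRU
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "LRU":
                # On a hit, move the page to the back (most recently used).
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        if policy in ("FIFO", "LRU"):
            # Front of the list is oldest-loaded (FIFO) or least
            # recently used (LRU), by construction.
            victim = frames[0]
        else:
            # MIN: evict the page used furthest in the future; pages
            # never used again get a rank past the end of the string.
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
        frames.remove(victim)
        frames.append(page)
    return faults

print(simulate("FIFO", "ABCABDADBCB", 3))  # 7 faults
print(simulate("MIN", "ABCABDADBCB", 3))   # 5 faults
print(simulate("LRU", "ABCABDADBCB", 3))   # 5 faults
```

Note that MIN needs the entire future reference string, so it can only be computed offline; it serves as a lower bound against which to compare FIFO and LRU.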
Ref:    A  B  C  A  B  D  A  D  B  C  B
Slot 1: A              D           C
Slot 2:    B              A
Slot 3:       C              B

Page faults in physical memory, with FIFO replacement (each entry shows the page loaded into that slot at that reference): 7 faults.

What about MIN?

Ref:    A  B  C  A  B  D  A  D  B  C  B
Slot 1: A                          C
Slot 2:    B
Slot 3:       C        D

Page faults in physical memory, with MIN replacement: 5 faults.

What about LRU? Same decisions as MIN here, but it won't always be this way!

When will LRU perform badly? When the next reference is always to the least recently used page. Reference string:

    A B C D A B C D A B C D

Ref:    A  B  C  D  A  B  C  D  A  B  C  D
Slot 1: A        D        C        B
Slot 2:    B        A        D        C
Slot 3:       C        B        A        D

Page faults in physical memory, with LRU replacement: every reference faults. Same behavior with FIFO!

What about MIN?

Ref:    A  B  C  D  A  B  C  D  A  B  C  D
Slot 1: A                          B
Slot 2:    B              C
Slot 3:       C  D

Page faults in physical memory, with MIN replacement: 6 faults.

15.3.6 Does adding memory always reduce the number of page faults?

Yes, for LRU and MIN. No, for FIFO (Belady's anomaly). Reference string:

    A B C D A B E A B C D E

With 3 page frames:

Ref:    A  B  C  D  A  B  E  A  B  C  D  E
Slot 1: A        D        E
Slot 2:    B        A              C
Slot 3:       C        B              D

9 page faults. With 4 page frames:

Ref:    A  B  C  D  A  B  E  A  B  C  D  E
Slot 1: A                 E           D
Slot 2:    B                 A           E
Slot 3:       C                 B
Slot 4:          D                 C

10 page faults: an example of Belady's anomaly with FIFO replacement.

With FIFO, the contents of memory can be completely different with a different number of page frames. By contrast, with LRU or MIN, the contents of memory with X page frames are a subset of the contents with X+1 page frames. So with LRU or MIN, having more pages never hurts.

15.4 Implementing LRU

15.4.1 Perfect LRU

• Timestamp the page on each reference
• Keep a list of pages ordered by time of reference

15.4.2 Clock algorithm

Approximate LRU (an approximation to an approximation to MIN). Replace an old page, not necessarily the oldest page.

Clock algorithm: arrange the physical pages in a circle, with a clock hand.

1. Hardware keeps one use bit per physical page frame
2. Hardware sets the use bit on each reference
   • If the use bit isn't set, the page hasn't been referenced in a long time
3. On a page fault:
   • Advance the clock hand (not in real time)
   • Check the use bit
     1 -> clear it, keep going
     0 -> replace this page

Will it always find a page, or loop infinitely?
Even if all use bits are set, the hand will eventually loop all the way around, clearing every use bit, and then replace a page: the algorithm degenerates to FIFO.

What if the hand is moving slowly? Not many page faults, and/or we find a page to replace quickly.

What if the hand is moving quickly? Lots of page faults, and/or it takes a long time to find a page to replace.
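The clock loop described above can be modeled in a few lines of Python. This is an illustrative sketch, not Nachos code; the `Clock` class and its interface are invented for this example:

```python
# Clock (second-chance) replacement: frames arranged in a circle,
# one use bit per frame, a hand that sweeps on page faults.

class Clock:
    def __init__(self, nframes):
        self.pages = [None] * nframes   # page resident in each frame
        self.use = [False] * nframes    # use bit per frame (hardware-set)
        self.hand = 0

    def reference(self, page):
        """Simulate one memory reference; return True on a page fault."""
        if page in self.pages:
            # Hardware sets the use bit on every reference.
            self.use[self.pages.index(page)] = True
            return False
        # Page fault: sweep the hand, clearing use bits (second chance),
        # until a frame with a clear use bit is found.
        while self.use[self.hand]:
            self.use[self.hand] = False
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page    # replace the victim page
        self.use[self.hand] = True
        self.hand = (self.hand + 1) % len(self.pages)
        return True

clk = Clock(3)
faults = sum(clk.reference(p) for p in "ABCABDADBCB")
print(faults)  # 7 faults on the example string with 3 frames
```

Even if every use bit is set, the while loop clears one bit per step and so terminates after at most one full sweep, which is exactly the degenerate-to-FIFO case noted above.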