UCLA COMSCI M151B - 10-vm

Outline: VM Page Replacement · All Paging Schemes Depend on Locality · Demand Paging · Paging · High Level · Page Replacement · Finding the Best Page · Optimal Algorithm · Evaluating Replacement Policies · FIFO · An Example of Optimal and FIFO in Action · Least Recently Used (LRU) · Using the Reference Bit · LRU Clock (Not Recently Used) · Fixed Space vs. Variable Space · Working Set Model · Working Set Size · WS · Page Fault Frequency · What do you do to pages?

VM Page Replacement
Hank Levy

All Paging Schemes Depend on Locality
• Processes tend to reference pages in localized patterns.
• Temporal locality
  » locations referenced recently are likely to be referenced again
• Spatial locality
  » locations near recently referenced locations are likely to be referenced soon
• The goal of a paging system is to
  – stay out of the way when there is plenty of available memory
  – not bring the system to its knees when there is not

Demand Paging
• Demand paging is a technique in which program pages are loaded from disk into memory as they are referenced.
• Each reference to a page not previously touched causes a page fault.
• The fault occurs because the reference found a page table entry with its valid bit off.
• As a result of the page fault, the OS allocates a new page frame and reads the faulted page from disk.
• When the I/O completes, the OS fills in the PTE, sets its valid bit, and restarts the faulting process.

Paging
• Demand paging
  » don't load a page until absolutely necessary
  » commonly used in most systems
  » doing things one at a time can be slower than batching them
• Prepaging
  » anticipate the fault before it happens
  » overlap the fetch with computation
  » hard to predict the future
  » some simple schemes (hints from the programmer or from program behavior) can work:
    – vm_advise
    – a larger "virtual" page size
    – sequential pre-paging from mapped files

High Level
• Imagine that when a program starts, it has:
  – no pages in memory
  – a page table with all valid bits off
• The first instruction to be executed faults, loading the first page.
• Instructions keep faulting until the program has enough pages to execute for a while; it then runs until the next page fault.
• Faults are expensive, so once the program is running they should not occur frequently, assuming the program is "well behaved" (has good locality).

Page Replacement
• When a fault occurs, the OS loads the faulted page from disk into a page frame of memory.
• At some point, the process has used all of the page frames it is allowed to use.
• When this happens, the OS must replace a page for each page faulted in: it must select a page to throw out of primary memory to make room.
• How it does this is determined by the page replacement algorithm.
• The goal of the replacement algorithm is to reduce the fault rate by selecting the best victim page to remove.
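
The fault-handling path in the Demand Paging and Page Replacement slides can be summarized in a few lines of Python. This is a minimal sketch under assumed names (PTE, choose_victim, a toy disk object with read/write, a resident-page set); it is not real kernel code, which lives in the OS fault handler and uses hardware-defined page-table formats.

from dataclasses import dataclass

@dataclass
class PTE:
    # Toy page-table entry with just the fields the slides mention; the
    # exact layout is an assumption for this sketch.
    frame: int = -1
    valid: bool = False
    dirty: bool = False

def handle_fault(vpn, page_table, free_frames, resident, policy, disk):
    """Service a fault on virtual page number vpn (illustrative only)."""
    if free_frames:
        frame = free_frames.pop()                 # still below the frame limit
    else:
        victim = policy.choose_victim(resident)   # page replacement algorithm
        frame = page_table[victim].frame
        if page_table[victim].dirty:
            disk.write(victim, frame)             # write back a modified victim
        page_table[victim].valid = False          # victim is no longer resident
        resident.remove(victim)
    disk.read(vpn, frame)                         # read the faulted page from disk
    page_table[vpn].frame = frame                 # fill in the PTE ...
    page_table[vpn].valid = True                  # ... and set its valid bit
    resident.add(vpn)
    # The OS then restarts the faulting instruction.

The replacement policy is deliberately a pluggable object here, since the rest of the lecture is about how choose_victim should behave.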

Finding the Best Page
• A good property of a replacement algorithm:
  » if you put more memory on the machine, then your page fault rate goes down
  » increasing the size of the resource pool helps everyone
• The best page to toss out is the one you'll never need again
  » that way, no faults
• "Never" is a long time, so picking the page closest to "never" is the next best thing.
  » Replacing the page that won't be used for the longest period of time absolutely minimizes the number of page faults (see the example below).

Optimal Algorithm
• The optimal algorithm, called Belady's algorithm, has the lowest fault rate for any reference string.
• Basic idea: replace the page that will not be used for the longest time in the future.
• Basic problem: phone calls to psychics are expensive.
• Basic use: it gives us an idea of how well any implementable algorithm is doing relative to the best possible one.
  » compare the fault rate of any proposed algorithm to Optimal
  » if Optimal does not do much better, then your proposed algorithm is pretty good
  » if your proposed algorithm doesn't do much better than Random, go home

Evaluating Replacement Policies
• Effective Access Time (EAT) = (1 - p) * Tm + p * Td
  where p is the probability that a reference faults, Tm is the time to access main memory, and Td is the time to service a fault.
• Execution time ≈ (# of memory references) * EAT
• (Graph: execution time vs. number of physical page frames for Random, LRU, and Opt. With very few frames, every policy does badly ("forget it"); in the middle range, the choice of policy has a real effect; with lots of frames, it doesn't matter much what you do.)

FIFO
• FIFO is an obvious algorithm and simple to implement.
• Basic idea: maintain a list or queue of pages in the order in which they were paged into memory.
• On replacement, remove the one brought in longest ago.
• Why might it work?
  » Maybe the page brought in longest ago is one we're not using now.
• Why might it not work?
  » Maybe it's not.
  » We have no real information to tell us whether it's being used or not.
• FIFO suffers from "Belady's anomaly"
  » the fault rate might actually increase when the algorithm is given more memory -- a bad property
• (Diagram: the FIFO queue, with the most recently faulted page at one end and the page faulted longest ago, the next victim, at the other.)

An Example of Optimal and FIFO in Action
• Reference stream: A B C A B D A D B C, with three page frames.
• OPTIMAL: 5 faults. A, B, and C fault on first use; on the fault for D, toss C (it is needed furthest in the future); on the final fault for C, toss A or D (neither is needed again).
• FIFO: 7 faults. A, B, and C fault on first use; on the fault for D, toss A (the oldest); A then faults again and evicts B, B faults again and evicts C, and the final fault for C evicts D.

Least Recently Used (LRU)
• Basic idea: we can't look into the future, but we can look at past experience to make a good guess.
• LRU: on replacement, remove the page that has not been used for the longest time in the past.
• Implementation: to implement this exactly, we would need to timestamp every reference, or maintain a stack that is updated on every reference. That would be too costly.
• So we can't implement LRU exactly, but we can approximate it.
  » Why is an approximate solution acceptable?
• Our "bet" is that pages you used recently are ones you will use again (the principle of locality) and, by implication, that those you didn't use, you won't.

Using the Reference Bit
• Various LRU approximations use the PTE reference bit:
  – keep a counter for each page
  – at regular intervals, for every page:
    » if the reference bit is 0, increment its counter
    » if the reference bit is 1, zero its counter
    » zero the reference bit
  – the counter thus contains the number of intervals since the last reference to the page
  – the page with the largest counter is the least recently used one
• If we don't have a reference bit, we can simulate it using the VALID bit, taking a few extra page faults.
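
As a concrete illustration of the counter scheme just described, here is a small Python sketch. The dictionary-based page state and the field names ('ref', 'age') are assumptions made for the example; in a real system the reference bit lives in the PTE and is set by hardware, while the OS keeps the counters in its own tables.

def sweep(pages):
    # pages maps a page number to {'ref': 0 or 1, 'age': intervals since last use}.
    for pte in pages.values():
        if pte['ref'] == 0:
            pte['age'] += 1      # not referenced this interval: one interval older
        else:
            pte['age'] = 0       # referenced this interval: counter resets
        pte['ref'] = 0           # clear the reference bit for the next interval

def approx_lru_victim(pages):
    # The page with the largest counter has gone unreferenced the longest,
    # so it is (approximately) the least recently used page.
    return max(pages, key=lambda p: pages[p]['age'])

# Example: page 7 was referenced during the last interval; pages 3 and 9 were not.
pages = {3: {'ref': 0, 'age': 2}, 7: {'ref': 1, 'age': 5}, 9: {'ref': 0, 'age': 1}}
sweep(pages)
print(approx_lru_victim(pages))  # prints 3: the largest counter after the sweep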

