
ILLINOIS CS 241 - Memory Implementation Issues


CS 241 Fall 2007, System Programming
Memory Implementation Issues
Lawrence Angrave

Contents
- Brief discussion of the second-chance replacement algorithm
- Basic paging process implementation
- Frame allocation for multiple processes
- Thrashing
- Working set
- Memory-mapped files

Second Chance Example
12 references, 9 faults.

Basic Paging Process Implementation (1)
Separate page-out from page-in:
- Keep a pool of free frames.
- When a page is to be replaced, use a free frame.
- Read the faulting page and restart the faulting process while the page-out is occurring.
Why? The alternative is: before a frame is needed to read in the faulted page from disk, just evict a page.
Disadvantage of the alternative: a page fault may require two disk accesses, one to write out the evicted page and one to read in the faulted page.

Basic Paging Process Implementation (2)
Paging out: write dirty pages to disk whenever the paging device is free, and reset the dirty bit.
Benefit? It removes paging out (disk writes) from the critical path, and it allows page replacement algorithms to replace clean pages.
What should we do with paged-out pages?
- Cache paged-out pages in primary memory (giving them a second chance).
- Return paged-out pages to a free pool, but remember which page frame holds each one. If the system needs to map a page in again, reuse its frame.

Frame Allocation for Multiple Processes
How are page frames allocated to the individual virtual memories of the various jobs running in a multiprogrammed environment?
Simple solution: allocate a minimum number of frames per process.
- One page for the currently executing instruction.
- Most instructions require two operands.
- Include an extra page for paging out and one for paging in.

Multiprogramming Frame Allocation
Solution 2: allocate an equal number of frames per job.
- But jobs use memory unequally.
- High-priority jobs have the same number of page frames as low-priority jobs.
- The degree of multiprogramming might vary.

Multiprogramming Frame Allocation
Solution 3: allocate a number of frames per job proportional to job size.
- How do you determine job size: by run-command parameters, or dynamically?
Why is multiprogramming frame allocation important? If not solved appropriately, it results in a severe problem: thrashing.

Thrashing: Exposing the Lie of VM
Thrashing: as page frames per VM space decrease, the page fault rate increases.
- Each time one page is brought in, another page, whose contents will soon be referenced, is thrown out.
- Processes spend all of their time blocked, waiting for pages to be fetched from disk.
- I/O devices run at 100% utilization, but the system is not getting much useful work done.
- Memory and CPU are mostly idle.
(Figure: real memory divided among processes P1, P2, and P3.)

Page Fault Rate vs. Size Curve
(Figure: page fault rate as a function of the number of allocated frames.)

Why Thrashing?
Computations have locality. As page frames decrease, the available frames are not large enough to contain the locality of the process. The processes start faulting heavily: pages that are read in are used and immediately paged out.

Results of Thrashing
Why? As the page fault rate goes up, processes get suspended on page-out queues for the disk. The system may try to optimize performance by starting new jobs, but starting new jobs reduces the number of page frames available to each process, increasing page fault requests. System throughput plunges.

Solution: Working Set
Main idea: figure out how much memory a process needs to keep most of its recent computation in memory with very few page faults.
How? The working set model assumes locality:
- The principle of locality states that a program clusters its accesses to data and text temporally.
- A recently accessed page is more likely to be accessed again.
Thus, as the number of page frames increases above some threshold, the page fault rate drops dramatically.

Working Set (Denning, 1968)
What we want to know: the collection of pages a process must have in order to avoid thrashing. This requires knowing the future.
And our trick is? The working set:
- Pages referenced by the process in the last Τ seconds of execution are considered to comprise its working set.
- Τ is the working set parameter.
Uses of working set sizes:
- Cache partitioning: give each application enough space for its WS.
- Page replacement: preferentially discard non-WS pages.
- Scheduling: a process is not executed unless its WS is in memory.

Working Set
(Figure: allocate at least this many frames for this process.)

Calculating Working Set
12 references, 8 faults. Window size is 1.

Working Set in Action to Prevent Thrashing
Algorithm:
- If the number of free page frames > the working set of some suspended process_i, then activate process_i and map in all of its working set.
- If the working set size of some process_k increases and no page frame is free, suspend process_k and release all of its pages.

Working Sets of Real Programs
Typical programs have phases. (Figure: working set size over time, showing transitions and stable phases, and the sum of both.)

Working Set Implementation Issues
- A moving window over the reference string is used for determination.
- Keeping track of the working set.

Working Set Implementation
Approximate the working set model using a timer and the reference bit:
- Set the timer to interrupt after approximately x references, τ.
- Remove pages that have not been referenced, and reset the reference bit.

Page Fault Frequency Working Set
Another approximation of the pure working set. Assume that if the working set is correct, there will not be many page faults.
- If the page fault rate increases beyond the assumed knee of the curve, increase the number of page frames available to the process.
- If the page fault rate decreases below the foot of the knee of the curve, decrease the number of page frames available to the process.

Page Size Considerations
- Small pages require large page tables.
- Large pages imply that a significant amount of each page may not be referenced.
- Locality of reference tends to be small (256), implying small pages.
- I/O transfers have high seek time, implying larger pages (more data per seek).
- Internal fragmentation is minimized with a small page size.
Real systems (can be reconfigured): Windows: default 8KB; Linux: default 4 KB.

Memory Mapped Files
(Figure: an mmap request maps blocks of data from a disk file into the user's VM.)

Memory Mapped Files
- Dynamic loading. By mapping executable files and shared libraries into its address space, a program can load and unload executable code sections dynamically.
- Fast file I/O. When you call file I/O functions, such as read() and write(), the data is copied to a kernel intermediary buffer before it is transferred to the physical file or the process. This intermediary buffering is slow and expensive. Memory mapping eliminates it, thereby improving performance significantly.
- Streamlining file access. Once you map a file to a memory region, you access it
