Memory Management
CS 241 Discussion Section Week 11
11/5/07 – 11/9/07

Outline
● Memory Management
  – Fragmentation
  – Storage placement algorithms
  – malloc revisited
  – Paging
● Virtual Memory
  – Why virtual memory?
  – Virtual memory addressing
  – TLB (Translation Lookaside Buffer)

Fragmentation
● External fragmentation
  – Free space becomes divided into many small pieces
  – Caused over time by allocating and freeing blocks of different sizes
● Internal fragmentation
  – Space is reserved but part of it is never used
  – Caused by allocating storage in fixed-size units

Storage Placement Algorithms
● Best Fit
● First Fit
● Next Fit
● Worst Fit

malloc Revisited
● Free storage is kept as a list of free blocks
● Each block contains a size, a pointer to the next block, and the space itself
● When a request for space is made, the free list is scanned until a big-enough block is found
  – Which storage placement algorithm is this?
● If such a block is found, it is returned and the free list is adjusted.
● Otherwise, another large chunk is obtained from the OS and linked into the free list

malloc Revisited (continued)

    typedef long Align;         /* for alignment to long boundary */

    union header {              /* block header */
        struct {
            union header *ptr;  /* next block if on free list */
            unsigned size;      /* size of this block */
        } s;
        Align x;                /* force alignment of blocks */
    };

    typedef union header Header;

● s.size holds the block size; s.ptr points to the next free block

Paging
● Divide memory into pages, all of equal size
● We no longer need to assign contiguous chunks of memory to a process
● Internal fragmentation can occur only on the last page assigned to a process
● External fragmentation cannot occur at all

Why Virtual Memory?
● Use main memory as a cache for the disk
  – The address space of one process can exceed physical memory size
  – The sum of the address spaces of multiple processes can exceed physical memory
● Simplify memory management
  – Multiple processes resident in main memory, each with its own address space
  – Only "active" code and data are actually in memory
● Provide protection
  – One process can't interfere with another, because they operate in different address spaces
  – A user process cannot access privileged information: different sections of the address space have different permissions

Principle of Locality
● Program and data references within a process tend to cluster
● Only a few pieces of a process will be needed over a short period of time (active code or data)
● It is possible to make intelligent guesses about which pieces will be needed in the future
● This suggests that virtual memory can work efficiently

VM Address Translation
● Parameters
  – P = 2^p = page size (bytes)
  – N = 2^n = virtual address limit
  – M = 2^m = physical address limit
● A virtual address splits into a virtual page number (bits n–1 … p) and a page offset (bits p–1 … 0); translation replaces the virtual page number with a physical page number (bits m–1 … p)
● Page offset bits don't change as a result of translation

Page Table
● Keeps track of which pages are in memory
● Provides a mapping from virtual addresses to physical addresses

Handling a Page Fault
● On a page fault:
  – Look for an empty page frame in RAM
  – May need to write a page out to disk to free a frame
  – Load the faulted page into that empty frame
  – Modify the page table

Addressing
● 64 MB of RAM (2^26 bytes)
● 2^31 bytes (2 GB) of total virtual memory
● 4 KB page size (2^12 bytes)
● So we need 12 bits for the page offset, and we can use the remaining 31 – 12 = 19 bits for the page number: 2^19 = 524,288 pages
● Virtual address (31 bits) = virtual page number (19 bits) + page offset (12 bits)

Translation Lookaside Buffer (TLB)
● Each virtual memory reference can cause two physical memory accesses
  – One to fetch the page table entry
  – One to fetch the data
● To overcome this problem, a high-speed cache is set up for page table entries
● It contains the most recently used page table entries (a cache for the page table)

Address Conversion
● That 19-bit page number can be handled more efficiently in a variety of ways
● Translation lookaside buffer:
  – m = memory cycle time, α = TLB hit ratio, ε = TLB lookup time
  – Effective access time: EAT = (m + ε)α + (2m + ε)(1 – α) = 2m + ε – αm

Multilevel Page Table
● Similar to indirect pointers in i-nodes
● Split the 19 bits into multiple sections

Inverted Page Table
● Much smaller, but slower and more difficult to look up

Multilevel Page Tables
● Given:
  – 4 KB (2^12) page size
  – 32-bit address space
  – 4-byte PTE
● Problem: a flat table would need 2^20 × 4 bytes = 4 MB per page table!
● Common solution: multilevel page tables, e.g., a 2-level table (P6)
  – Level 1 table: 1024 entries, each of which points to a Level 2 page table
  – Level 2 table: 1024 entries, each of which points to a page

Problem 1
Consider a swapping system in which memory consists of the following hole sizes in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB,
12KB, and 15KB. Which hole is taken for successive segment requests of (a) 12KB, (b) 10KB, (c) 9KB under
● First Fit?
● Best Fit?
● Worst Fit?
● Next Fit?

Problem 2
Fill in the rest of the table. The first row is done for you as an example.

Virtual Address | Memory | Page Table    | Virtual Page # | Page Offset | Addressable
(bits)          | Page   | Entry (bits)  | (bits)         | (bits)      | physical memory
16              | 256 B  | 2             | 8              | 8           | 1 KB
32              | 1 MB   | 4             |                |             |
32              | 1 KB   | 8             |                |             |
64              | 16 KB  | 20            |                |             |
64              | 8 MB   | 16            |                |             |

Notes:
1) All addresses are byte-based.
2) The PTE size does not include status bits.