15-213: "The course that gives CMU its Zip!" (class17.ppt, 15-213 F'03)

Virtual Memory
Oct. 21, 2003

Topics
- Motivations for VM
- Address translation
- Accelerating translation with TLBs

Classic Motivations for Virtual Memory

Use physical DRAM as a cache for the disk
- The address space of a process can exceed physical memory size
- The sum of the address spaces of multiple processes can exceed physical memory

Simplify memory management
- Multiple processes are resident in main memory
- Each process has its own address space
- Only "active" code and data are actually in memory
- More memory is allocated to a process as needed

Provide protection
- One process can't interfere with another, because they operate in different address spaces
- A user process cannot access privileged information
- Different sections of address spaces have different permissions

Modern Motivations for VM

Memory sharing and control
- Copy-on-write: share physical memory among multiple processes until a process tries to write to it; at that point, make a copy.
  For example, this eliminates the need for vfork().
- Shared libraries
- Protection (e.g., for debugging) via segment drivers (Solaris)

Sparse address space support (64-bit systems)

Memory as a fast communication device
- Part of memory is shared by multiple processes

Multiprocessing (beyond the scope of 15-213)

Why does VM Work?

It is not used! (Most of a process's virtual address space is never actually touched.)

Motivation #1: DRAM as a "Cache" for Disk

The full address space is quite large:
- 32-bit addresses: ~4,000,000,000 (4 billion) bytes
- 64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes

Disk storage is ~500X cheaper than DRAM storage:
- 80 GB of DRAM: ~$25,000
- 80 GB of disk: ~$50

To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk.
(Pyramid diagram: SRAM, 4 MB, ~$500; DRAM, 1 GB, ~$300; Disk, 160 GB, ~$100)

Levels in Memory Hierarchy

              Register    Cache         Memory       Disk Memory
Size:         32 B        32 KB-4 MB    1024 MB      100 GB
Latency:      < 1 ns      ~2 ns         > 50 ns      > 8 ms
$/MByte:                  $125/MB       $0.20/MB     $0.001/MB
Line size:    8(16) B     32(64) B      4(64+) KB

Larger, slower, cheaper as you go down. Transfer units: 8 B between registers and cache, 32 B between cache and memory (the "cache" boundary), 4 KB between memory and disk (the "virtual memory" boundary).

DRAM vs. SRAM as a "Cache"

DRAM vs. disk is more extreme than SRAM vs. DRAM.

Access latencies:
- DRAM is ~10X slower than SRAM
- Disk is ~160,000X slower than DRAM

Importance of exploiting spatial locality:
- The first byte is ~160,000X slower than successive bytes on disk, vs. a ~4X improvement for page-mode vs. regular accesses to DRAM

Bottom line: design decisions for DRAM caches are driven by the enormous cost of misses.

Impact of Properties on Design

If DRAM were organized like an SRAM cache, how would we set the following design parameters?
- Line size? Large, since disk is better at transferring large blocks
- Associativity? High, to minimize the miss rate
- Write-through or write-back? Write-back, since we can't afford to perform small writes to disk

What would the impact of these choices be on:
- Miss rate: extremely low (<< 1%)
- Hit time: must match cache/DRAM performance
- Miss latency: very high (~20 ms)
- Tag storage overhead: low, relative to block size

Locating an Object in a "Cache"

SRAM cache: the tag is stored with the cache line
- Maps from cache block to memory blocks, i.e., from cached to uncached form
- Saves a few bits by storing only the tag; there is no tag for a block not in the cache
- Hardware retrieves the information and can quickly match against multiple tags
(Diagram: object name X matched via "= X?" against tags D, X, J with data 243, 17, 105)

Locating an Object in a "Cache" (cont.)

DRAM "cache": each allocated page of virtual memory has an entry in a page table
- Mapping from virtual pages to physical pages, i.e., from uncached form to cached form
- A page-table entry exists even if the page is not in memory; it then specifies the disk address, the only way to indicate where to find the page
- The OS retrieves the information
(Diagram: page table mapping object names D, J, X to locations in the DRAM "cache" or on disk)

A System with Physical Memory Only

Examples: most Cray machines, early PCs, nearly all embedded systems, etc.
- Addresses generated by the CPU correspond directly to bytes in physical memory

A System with Virtual Memory

Examples: workstations, servers, modern PCs, etc.
- Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table)

Page Faults (like "Cache Misses")

What if an object is on disk rather than in memory?
- The page-table entry indicates that the virtual address is not in memory
- An OS exception handler is invoked to move the data