Virtual Memory
October 30, 2001

Topics:
• Motivations for VM
• Address translation
• Accelerating translation with TLBs

class19.ppt
15-213: "The course that gives CMU its Zip!"
CS 213 F'01

Motivations for Virtual Memory

• Use physical DRAM as a cache for the disk
  – Address space of a process can exceed physical memory size
  – Sum of address spaces of multiple processes can exceed physical memory
• Simplify memory management
  – Multiple processes resident in main memory, each with its own address space
  – Only "active" code and data is actually in memory; allocate more memory to a process as needed
• Provide protection
  – One process can't interfere with another, because they operate in different address spaces
  – A user process cannot access privileged information; different sections of address spaces have different permissions

Motivation #1: DRAM as a "Cache" for Disk

The full address space is quite large:
• 32-bit addresses: ~4,000,000,000 (4 billion) bytes
• 64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes

Disk storage is ~156X cheaper than DRAM storage:
• 8 GB of DRAM: ~$10,000
• 8 GB of disk: ~$64

To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk.
[Figure: storage pyramid – SRAM (4 MB: ~$400), DRAM (256 MB: ~$320), disk (8 GB: ~$64)]

Levels in Memory Hierarchy

[Figure: CPU regs → cache → memory → disk; each level is larger, slower, and cheaper than the one above. Transfer units: 8 B between registers and cache, 32 B between cache and memory, 4 KB between memory and disk. The cache/memory boundary is managed by the hardware cache; the memory/disk boundary by virtual memory.]

              Register   Cache        Memory     Disk Memory
  size:       32 B       32 KB-4 MB   128 MB     30 GB
  speed:      3 ns       6 ns         60 ns      8 ms
  $/Mbyte:               $100/MB      $1.25/MB   $0.008/MB
  line size:  8 B        32 B         4 KB

DRAM vs. SRAM as a "Cache"

DRAM vs. disk is more extreme than SRAM vs. DRAM:
• Access latencies:
  – DRAM ~10X slower than SRAM
  – Disk ~100,000X slower than DRAM
• Importance of exploiting spatial locality:
  – First byte is ~100,000X slower than successive bytes on disk, vs. only a ~4X improvement for page-mode vs. regular accesses to DRAM
• Bottom line:
  – Design decisions made for DRAM caches are driven by the enormous cost of misses

Impact of These Properties on Design

If DRAM were to be organized like an SRAM cache, how would we set the following design parameters?
• Line size?
  – Large, since disk is better at transferring large blocks
• Associativity?
  – High, to minimize the miss rate
• Write-through or write-back?
  – Write-back, since we can't afford to perform small writes to disk

What would the impact of these choices be on:
• miss rate – extremely low (<< 1%)
• hit time – must match cache/DRAM performance
• miss latency – very high (~20 ms)
• tag storage overhead – low, relative to block size

Locating an Object in a "Cache"

SRAM cache:
• Tag stored with the cache line
• Maps from cache blocks to memory blocks
  – from cached to uncached form
• No tag for a block not in the cache
• Hardware retrieves the information
  – can quickly match against multiple tags
[Figure: looking up object name X in the "cache" by comparing it (= X?) against the tags stored with lines 0 to N-1: D (243), X (17), J (105)]

Locating an Object in a "Cache" (cont.)

DRAM cache:
• Each allocated page of virtual memory has an entry in the page table
• Mapping from virtual pages to physical pages
  – from uncached form to cached form
• A page table entry exists even if the page is not in memory
  – it specifies the disk address
• The OS retrieves the information
[Figure: page table maps object names to locations – D: 0, J: on disk, X: 1 – in the DRAM "cache" holding data 243, 17, 105 in entries 0 to N-1]

A System with Physical Memory Only

Examples: most Cray machines, early PCs, nearly all embedded systems, etc.
Addresses generated by the CPU point directly to bytes in physical memory.
[Figure: CPU issues physical addresses directly into memory locations 0 to N-1]

A System with Virtual Memory

Examples: workstations, servers, modern PCs, etc.
Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table).
[Figure: CPU issues virtual addresses 0 to N-1; the page table maps each one to a physical address 0 to P-1 in memory, or to a location on disk]
Page Faults (Similar to "Cache Misses")

What if an object is on disk rather than in memory?
• The page table entry indicates that the virtual address is not in memory
• An OS exception handler is invoked to move data from disk into memory
  – the current process suspends; others can resume
  – the OS has full control over placement, etc.
[Figures: before the fault, the page table entry for the virtual address points to disk; after the fault, it points to the page's new location in physical memory]

Servicing a Page Fault

(1) Processor signals the I/O controller
  – read the block of length P starting at disk address X and store it starting at memory address Y
(2) Read occurs
  – via Direct Memory Access (DMA)
  – under control of the I/O controller
(3) I/O controller signals completion
  – interrupts the processor
  – the OS resumes the suspended process
[Figure: processor, cache, memory, and I/O controller with disks on the memory-I/O bus; (1) initiate block read, (2) DMA transfer, (3) read done]

Motivation #2: Memory Management

Multiple processes can reside in physical memory. How do we resolve address conflicts?
• what if two processes access something at the same address?
[Figure: Linux/x86 process memory image – kernel virtual memory (invisible to user code) at the top; then the stack (%esp), the memory-mapped region for shared libraries, the runtime heap (grown via malloc up to the "brk" ptr), uninitialized data (.bss), initialized data (.data), program text (.text), and a forbidden region at address 0]
Solution: Separate Virtual Address Spaces

• Virtual and physical address spaces are divided into equal-sized blocks
  – blocks are called "pages" (both virtual and physical)
• Each process has its own virtual address space
  – the operating system controls how virtual pages are assigned to physical memory
[Figure: address translation maps VP 1 and VP 2 of process 1's virtual address space (0 to N-1) and of process 2's to distinct physical pages (e.g., PP 2, PP 7, PP 10) in the physical address space (0 to M-1); a read-only library page can map into both processes]

Contrast: Macintosh Memory Model

MAC OS 1–9 does not use traditional virtual memory.
All program objects are accessed through "handles":
• Indirect reference through a pointer table
• Objects stored in a shared global address space
[Figure: processes P1 and P2 each hold "handles" into their own pointer tables (P1 Pointer Table, P2 Pointer Table); the table entries point at objects A–E in the shared address space]

Macintosh Memory Management

Allocation