Contents

Virtual Memory
View of Memory Hierarchies
Memory Hierarchy: Some Facts
Virtual Memory: Motivation
Advantages of Virtual Memory
Virtual to Physical Address Translation
Mapping Virtual Memory to Physical Memory
Paging Organization (e.g., 1KB Page)
Virtual Memory Mapping
Issues in VM Design
Virtual Memory Problem #1
Memory Organization with TLB
Typical TLB Format
What if not in TLB?
TLB Miss
TLB Miss: If data is in Memory
What if data is on disk?
What if the memory is full?
Virtual Memory Problem #2
Two Level Page Tables
Summary

Virtual Memory
Adapted from lecture notes of Dr. Patterson and Dr. Kubiatowicz of UC Berkeley

View of Memory Hierarchies
[Diagram: the hierarchy runs Registers - Cache - L2 Cache - Memory - Disk - Tape, from the upper level (faster, smaller) to the lower level (slower, larger); the units of transfer between adjacent levels are instruction operands, blocks, pages, and files. Thus far we have covered caches (blocks); next: virtual memory (pages).]

Memory Hierarchy: Some Facts

  Level         Capacity     Access Time   Cost
  Registers     100s bytes   < 10s ns
  Cache         K bytes      10-100 ns     $.01-.001/bit
  Main Memory   M bytes      100 ns-1 us   $.01-.001/bit
  Disk          G bytes      ms            10^-3 - 10^-4 cents/bit
  Tape          infinite     sec-min       10^-6 cents/bit

  Staging / transfer unit between levels:
  • Registers <-> Cache: 1-8 bytes (managed by the program/compiler)
  • Cache <-> Memory: 8-128 byte blocks (managed by the cache controller)
  • Memory <-> Disk: 512-4K byte pages (managed by the OS)
  • Disk <-> Tape: Mbyte files (managed by the user/operator)

Virtual Memory: Motivation
• If the Principle of Locality allows caches to offer (usually) the speed of cache memory with the size of DRAM memory, then recursively, why not use it at the next level to give the speed of DRAM memory with the size of disk memory?
• Treat memory as a "cache" for disk!
• Share memory between multiple processes but still provide protection – don't let one program read/write the memory of another
• Address space – give each program the illusion that it has its own private memory
  – Suppose code starts at address 0x40000000. But different processes have different code, both at the same address!
  – So each program has a different view of memory

Advantages of Virtual Memory
• Translation:
  – A program can be given a consistent view of memory, even though physical memory is scrambled
  – Makes multithreading reasonable (now used a lot!)
  – Only the most important part of the program (the "working set") must be in physical memory
  – Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later
• Protection:
  – Different threads (or processes) are protected from each other
  – Different pages can be given special behavior (read only, invisible to user programs, etc.)
  – Kernel data is protected from user programs
  – Very important for protection from malicious programs => far more "viruses" under Microsoft Windows
• Sharing:
  – Can map the same physical page to multiple users ("shared memory")

Virtual to Physical Address Translation
• Each program operates in its own virtual address space, as if it were the only program running
• Each is protected from the others
• The OS can decide where each goes in memory
• Hardware (HW) provides the virtual -> physical mapping
[Diagram: the program issues virtual addresses (instruction fetches, loads, stores); a HW mapping translates them into physical addresses in physical memory (including caches).]

Mapping Virtual Memory to Physical Memory
• Divide memory into equal-sized chunks, called pages (about 4KB each)
• Any chunk of virtual memory can be assigned to any chunk of physical memory
[Diagram: a virtual address space holding Code, Static, Heap, and Stack regions is mapped, page by page, into a 64 MB physical memory starting at address 0.]

Paging Organization (e.g., 1KB Page)
• The page is the unit of mapping: an address-translation MAP takes each virtual page to a physical page
• The page is also the unit of transfer from disk to physical memory
[Diagram: virtual memory holds pages 0-31 (1K each, at virtual addresses 0, 1024, 2048, ..., 31744); physical memory holds pages 0-7 (1K each, at physical addresses 0, 1024, ..., 7168).]

Virtual Memory Mapping
• A virtual address splits into a page number and an offset
• The Page Table Base Register points to the page table, which is itself located in physical memory
• The page number indexes into the page table; the entry found there is combined with the offset (actually, a concatenation) to form the physical memory address
• Each page table entry holds a Valid bit (V), Access Rights (A.R.), and a Physical Page Address (P.P.A.)
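The page-table walk on the "Virtual Memory Mapping" slide can be sketched in a few lines of Python. This is an illustrative toy, not a real MMU: the 1KB page size comes from the paging-organization slide, while the table contents and function names are made up for the example.

```python
PAGE_SIZE = 1024          # 1KB pages, as in the paging-organization slide
OFFSET_BITS = 10          # log2(1024) bits of offset within a page

# A page table entry holds: Valid bit, Access Rights, Physical Page Address.
# In hardware the table lives in physical memory, located via the Page Table
# Base Register; here it is just a Python dict (contents are hypothetical).
page_table = {
    0: {"valid": True,  "rights": "rw", "ppn": 1},
    1: {"valid": True,  "rights": "r",  "ppn": 7},
    2: {"valid": False, "rights": None, "ppn": None},  # not resident: page fault
}

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS          # virtual page number indexes the table
    offset = vaddr & (PAGE_SIZE - 1)    # offset passes through unchanged
    pte = page_table.get(vpn)
    if pte is None or not pte["valid"]:
        raise LookupError(f"page fault at virtual address {vaddr:#x}")
    # Physical address = concatenation of physical page number and offset
    return (pte["ppn"] << OFFSET_BITS) | offset

print(hex(translate(0x0004)))   # VPN 0, offset 4 -> 0x404
print(hex(translate(0x0404)))   # VPN 1, offset 4 -> 0x1c04
```

Note how the offset bits are never translated: only the page number goes through the table, which is what makes the final step a concatenation rather than an addition.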
Issues in VM Design
• What is the size of the information blocks transferred from secondary storage to main storage (M)? => page size (contrast with the physical block size on disk, i.e., the sector size)
• Which region of M is to hold the new block? => placement policy
• How do we find a page when we look for it? => block identification
• If a block of information is brought into M and M is full, some region of M must be released to make room for the new block => replacement policy
• What do we do on a write? => write policy
• A missing item is fetched from secondary memory only on the occurrence of a fault => demand load policy
(Terminology: the register/cache/memory levels transfer blocks; the memory/disk level transfers pages, held in physical page frames.)

Virtual Memory Problem #1
• Mapping every address means 1 extra memory access for every memory access
• Observation: since there is locality in the pages of data, there must be locality in the virtual addresses of those pages
• Why not use a cache of virtual-to-physical address translations to make translation fast? (small is fast)
• For historical reasons, this cache is called a Translation Lookaside Buffer, or TLB

Memory Organization with TLB
• TLBs are usually small, typically 128-256 entries
• Like any other cache, the TLB can be fully associative, set associative, or direct mapped
[Diagram: the processor presents a virtual address (VA) to the TLB; on a hit, the resulting physical address (PA) goes to the cache; on a TLB miss, the translation goes through the page table; on a cache miss, data comes from main memory.]

Typical TLB Format

  Virtual Address | Physical Address | Dirty | Ref | Valid | Access Rights

• The TLB is just a cache on the page table mappings
• TLB access time is comparable to cache access time (much less than main memory access time)
• Ref: used to help calculate LRU on replacement
• Dirty: since we use write back, we need to know whether or not to write the page to disk when it is replaced

What if not in TLB?
• Option 1: hardware checks the page table and loads the new page table entry into the TLB
• Option 2: hardware traps to the OS; it is up to the OS to decide what to do
• MIPS follows Option 2: the hardware knows nothing about the page table format

TLB Miss
• If the address is not in the TLB, MIPS traps to the operating system
• The operating system knows which program caused the TLB fault or page fault, and knows which virtual address was requested

  valid  virtual  physical
    1       2        9
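The TLB slides above describe a small cache of translations consulted before the page table. The sketch below is a toy software model (entry counts, names, and the eviction choice are illustrative assumptions, and a real MIPS TLB refill would run in an OS trap handler, not inline like this):

```python
# Toy TLB sitting in front of a page table; not a real MIPS design.
PAGE_SIZE = 1024
OFFSET_BITS = 10
TLB_ENTRIES = 4                      # real TLBs hold roughly 128-256 entries

page_table = {2: 9, 7: 32, 5: 3}     # virtual page -> physical page (all valid)
tlb = {2: 9}                         # the small cached subset of the page table
stats = {"hit": 0, "miss": 0}

def translate(vaddr):
    vpn, offset = vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)
    if vpn in tlb:                   # TLB hit: no page-table access needed
        stats["hit"] += 1
    else:                            # TLB miss: walk the page table
        stats["miss"] += 1           # (on MIPS, this is where the OS traps in)
        if vpn not in page_table:
            raise LookupError("page fault")
        if len(tlb) >= TLB_ENTRIES:  # full: evict some old entry to make room
            tlb.pop(next(iter(tlb)))
        tlb[vpn] = page_table[vpn]   # load the new entry into the TLB
    return (tlb[vpn] << OFFSET_BITS) | offset

for a in [2 * 1024, 7 * 1024 + 4, 2 * 1024 + 8]:
    translate(a)
print(stats)                         # locality makes most accesses TLB hits
```

Because page accesses exhibit locality, repeated touches to the same pages hit in the TLB and pay no extra memory access, which is exactly the fix for "Virtual Memory Problem #1".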
TLB Miss: If data is in Memory
• We simply add the entry to the TLB, evicting an old entry from the TLB

  valid  virtual  physical
    1       7       32
    1       2        9

What if data is on disk?
• We load the page off the disk into a free block of memory, using a DMA transfer
  – Meanwhile we switch to some other process waiting to be run
• When the DMA is complete, we get an interrupt and update the process's page table
  – So when we switch back to the task, the desired data will be in memory

What if the memory is full?
• We load the page off the disk into the least recently used block of memory, using a DMA transfer (if that block is dirty, it must first be written back to disk)
  – Meanwhile we switch to some other process waiting to be run
• When the DMA is complete, we get an interrupt and update the process's page table, as before
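The two "What if" slides together describe demand paging with LRU replacement and write-back of dirty victims. A minimal sketch of that policy follows; the frame count, the dict-based "disk", and the function names are hypothetical, and a real OS would perform the copy by DMA while running another process rather than inline:

```python
from collections import OrderedDict

NUM_FRAMES = 2                         # deliberately tiny physical memory
disk = {0: "page0", 1: "page1", 2: "page2"}   # backing store: vpn -> contents
frames = OrderedDict()                 # vpn -> {"data", "dirty"}; order = LRU

def touch(vpn, write=False):
    """Access a virtual page, faulting it in from 'disk' if needed."""
    if vpn not in frames:              # page fault
        if len(frames) >= NUM_FRAMES:  # memory full: evict least recently used
            victim, state = frames.popitem(last=False)
            if state["dirty"]:         # write back: dirty victims go to disk
                disk[victim] = state["data"]
        frames[vpn] = {"data": disk[vpn], "dirty": False}  # the "DMA" copy
    frames.move_to_end(vpn)            # mark this page most recently used
    if write:
        frames[vpn]["data"] += "*"     # modify only the in-memory copy
        frames[vpn]["dirty"] = True

touch(0); touch(1, write=True); touch(2)   # faulting in page 2 evicts page 0
print(list(frames))                        # -> [1, 2]
touch(1)                                   # hit: page 1 is still resident
touch(0); touch(2)                         # page 1 (dirty) eventually evicted
print(disk[1])                             # -> page1*  (changes written back)
```

The `OrderedDict` ordering stands in for the Ref bits the TLB-format slide mentions: moving a page to the end on every access is what lets the front of the dict approximate "least recently used".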


TAMU CSCE 350 - slide15
