CS61C: Machine Structures
Lecture #24: VM II
2005-08-02, Andy Carle
inst.eecs.berkeley.edu/~cs61c/su05

Address Mapping: Page Table
•A virtual address is split into a Virtual Page Number (VPN) and a page offset.
•The VPN indexes into the Page Table, which is itself located in physical memory. The selected entry holds a Valid bit (V), Access Rights (A.R.), and a Physical Page Number (P.P.N.).
•The physical memory address is the PPN concatenated with the (unchanged) page offset.
[Figure: virtual address {VPN, offset} indexes the page table; the PTE's PPN joins the offset to form the physical memory address]
Page Table
•A page table is a mapping function.
•There are several different ways, all up to the operating system, to keep this data around.
•Each process running in the operating system has its own page table.
 - Historically, the OS changes page tables by changing the contents of the Page Table Base Register.

Requirements Revisited
•Remember the motivation for VM:
•Sharing memory with protection
 - Different physical pages can be allocated to different processes (sharing).
 - A process can only touch pages in its own page table (protection).
•Separate address spaces
 - Since programs work only with virtual addresses, different programs can have different data/code at the same address!

Page Table Entry (PTE) Format
•Each PTE contains either a Physical Page Number or an indication that the page is not in main memory.
•The OS maps the page to disk if Not Valid (V = 0).
•If valid, also check whether we have permission to use the page: Access Rights (A.R.) may be Read Only, Read/Write, Executable, ...
[Figure: page table of PTEs, each holding a Valid bit, Access Rights, and a Physical Page Number]

Paging/Virtual Memory: Multiple Processes
[Figure: User A's and User B's virtual memories (code, static, heap, stack, each starting at address 0) map through their own page tables into a shared 64 MB physical memory]

Comparing the 2 Levels of Hierarchy
                Cache version                      Virtual Memory version
                Block or Line                      Page
                Miss                               Page Fault
  Size:         Block size: 32-64 B                Page size: 4-8 KB
  Placement:    Direct Mapped,                     Fully Associative
                N-way Set Associative
  Replacement:  LRU or Random                      LRU
  Write policy: Write Through or Write Back        Write Back

Notes on Page Table
•The OS must reserve "Swap Space" on disk for each process.
•To grow a process, ask the Operating System:
 - If there are unused pages, the OS uses them first.
 - If not, the OS swaps some old pages to disk (using Least Recently Used to pick which pages to swap).
•We will add details, but the Page Table is the essence of Virtual Memory.

VM Problems and Solutions
•TLBs
•Paged Page Tables

Virtual Memory Problem #1
•Map every address → 1 indirection via the Page Table in memory per virtual address → 1 virtual memory access = 2 physical memory accesses → SLOW!
•Observation: since there is locality in pages of data, there must be locality in the virtual address translations of those pages.
•Since small is fast, why not use a small cache of virtual-to-physical address translations to make translation fast?
•For historical reasons, this cache is called a Translation Lookaside Buffer, or TLB.

Translation Look-Aside Buffers (TLBs)
•TLBs are usually small: typically 32-256 entries.
•Like any other cache, the TLB can be direct mapped, set associative, or fully associative.
•On a TLB miss, get the page table entry from main memory.
[Figure: Processor sends VA to TLB lookup; on a hit, the translated PA goes to the cache; on a miss, the translation comes from the page table in main memory]

Typical TLB Format
  Virtual Address | Physical Address | Dirty | Ref | Valid | Access Rights
•The TLB is just a cache on the page table mappings.
•TLB access time is comparable to cache access time (much less than main memory access time).
•Dirty: since we use write back, we need to know whether or not to write the page to disk when it is replaced.
•Ref: used to help calculate LRU on replacement.
 - Cleared by the OS periodically, then checked to see if the page was referenced.
What If Not in the TLB?
•Option 1: Hardware checks the page table and loads the new Page Table Entry into the TLB.
•Option 2: Hardware traps to the OS; it is up to the OS to decide what to do.
•MIPS follows Option 2: the hardware knows nothing about the page table.

What If the Data Is on Disk?
•We load the page off the disk into a free block of memory, using a DMA (Direct Memory Access - very fast!) transfer.
•Meanwhile we switch to some other process waiting to be run.
•When the DMA is complete, we get an interrupt and update the process's page table.
•So when we switch back to the task, the desired data will be in memory.

What If We Don't Have Enough Memory?
•We choose some other page belonging to a program and transfer it onto the disk if it is dirty.
 - If it is clean (the disk copy is up to date), just overwrite that data in memory.
 - We choose the page to evict based on a replacement policy (e.g., LRU).
•And we update that program's page table to reflect the fact that its memory moved somewhere else.
•Continuously swapping pages between disk and memory is called Thrashing.

Question
•Why is the TLB so small yet so effective?
 - Because each entry corresponds to a page size's worth of addresses.
•Why does the TLB typically have high associativity? What is the "associativity" of VA→PA mappings?
 - Because the miss penalty dominates the AMAT for VM: high associativity → lower miss rates.
 - VPN→PPN mappings are fully associative.