CSE 120: Principles of Operating Systems
Winter 2007
Lecture 10: Paging
Keith Marzullo and Geoffrey M. Voelker
© 2007 Keith Marzullo and Geoffrey M. Voelker (February 15, 2007, CSE 120 – Lecture 10 – Paging)

Lecture Overview
Today we'll cover more paging mechanisms:
- Optimizations
  » Managing page tables (space)
  » Efficient translations (TLBs) (time)
  » Demand paged virtual memory (space)
- Recap address translation
- Advanced functionality
  » Sharing memory
  » Copy on write
  » Mapped files

Managing Page Tables
- Last lecture we computed the size of the page table for a 32-bit address space w/ 4K pages to be 4MB
  » This is far, far too much overhead for each process
- How can we reduce this overhead?
  » Observation: we only need to map the portion of the address space actually being used (a tiny fraction of the entire address space)
- How do we map only what is being used?
  » Can dynamically extend the page table… but this does not work if the address space is sparse (internal fragmentation)
  » Use another level of indirection: two-level page tables

Two-Level Page Tables
- Virtual addresses (VAs) have three parts:
  » Master page number, secondary page number, and offset
- Master page table maps VAs to secondary page tables
- Secondary page table maps page numbers to physical pages
- Offset indicates where in the physical page the address is located
- Example: 4K pages, 4 bytes/PTE
  » How many bits in offset?
4K = 12 bits
  » Want the master page table to fit in one page: 4K / 4 bytes = 1K entries
  » Hence, 1K secondary page tables. How many bits? Master (1K) = 10, offset = 12, secondary = 32 – 10 – 12 = 10 bits

[Figure: two-level translation. The master page number indexes the master page table to locate a secondary page table; the secondary page number indexes that table to find a page frame; the page frame plus the offset forms the physical address.]

Addressing Page Tables
Where do we store page tables (in which address space)?
- Physical memory
  » Easy to address, no translation required
  » But allocated page tables consume memory for the lifetime of the VAS
- Virtual memory (the OS virtual address space)
  » Cold (unused) page table pages can be paged out to disk
  » But addressing page tables requires translation… how do we stop the recursion?
  » Do not page the outer page table (called wiring)
- If we're going to page the page tables, we might as well page the entire OS address space, too
  » Need to wire special code and data (fault and interrupt handlers)

Efficient Translations
- Our original page table scheme already doubled the cost of doing memory lookups
  » One lookup into the page table, another to fetch the data
- Now two-level page tables triple the cost!
  » Two lookups into the page tables, a third to fetch the data
  » And this assumes the page table is in memory
- How can we use paging but also have lookups cost about the same as fetching from memory?
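To make that cost concrete, here is a minimal sketch (not from the slides) of the two-level walk using the 10/10/12 bit split from the example above; the table layout and names (`master_table`, `translate`) are illustrative, not a real MMU implementation:

```python
# Toy two-level translation: 32-bit VA, 4K pages.
# Bit split from the example: 10-bit master, 10-bit secondary, 12-bit offset.
OFFSET_BITS = 12        # 4K page -> 12 offset bits
SECONDARY_BITS = 10
MASTER_BITS = 10
PAGE_SIZE = 1 << OFFSET_BITS

def translate(va, master_table):
    offset = va & (PAGE_SIZE - 1)
    secondary_idx = (va >> OFFSET_BITS) & ((1 << SECONDARY_BITS) - 1)
    master_idx = (va >> (OFFSET_BITS + SECONDARY_BITS)) & ((1 << MASTER_BITS) - 1)

    secondary_table = master_table[master_idx]     # memory lookup #1
    if secondary_table is None:
        raise MemoryError("fault: no secondary page table mapped")
    frame = secondary_table[secondary_idx]         # memory lookup #2
    if frame is None:
        raise MemoryError("fault: page not mapped")
    return (frame << OFFSET_BITS) | offset         # lookup #3 would fetch the data

# Map virtual page 1 (master 0, secondary 1) to physical frame 5.
master_table = [None] * (1 << MASTER_BITS)
master_table[0] = [None] * (1 << SECONDARY_BITS)
master_table[0][1] = 5
print(hex(translate(0x1ABC, master_table)))        # prints 0x5abc
```

The two indexed loads before the data fetch are exactly the two extra memory references the slide is counting.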
  » Cache translations in hardware
  » Translation Lookaside Buffer (TLB)
  » TLB managed by the Memory Management Unit (MMU)

TLBs
- Translation Lookaside Buffers
  » Translate virtual page #s into PTEs (not physical addrs)
  » Can be done in a single machine cycle
- TLBs are implemented in hardware
  » Fully associative cache (all entries looked up in parallel)
  » Cache tags are virtual page numbers
  » Cache values are PTEs (entries from page tables)
  » With PTE + offset, can directly calculate the physical address
- TLBs exploit locality
  » Processes only use a handful of pages at a time
    » 16-48 entries/pages (64-192K)
    » Only need those pages to be "mapped"
  » Hit rates are therefore very important

Managing TLBs
- Address translations for most instructions are handled using the TLB
  » >99% of translations, but there are misses (TLB miss)…
- Who places translations into the TLB (loads the TLB)?
  » Hardware (Memory Management Unit)
    » Knows where page tables are in main memory
    » OS maintains the tables, HW accesses them directly
    » Tables have to be in a HW-defined format (inflexible)
  » Software-loaded TLB (OS)
    » TLB faults to the OS; the OS finds the appropriate PTE and loads it into the TLB
    » Must be fast (but still 20-200 cycles)
    » CPU ISA has instructions for manipulating the TLB
    » Tables can be in any format convenient for the OS (flexible)

Managing TLBs (2)
- OS ensures that the TLB and page tables are consistent
  » When it changes the protection bits of a PTE, it needs to invalidate the PTE if it is in the TLB
- Reload the TLB on a process context switch
  » Invalidate all entries
  » Why? What is one way to fix it?
- When the TLB misses and a new PTE has to be loaded, a cached PTE must be evicted
  » Choosing the PTE to evict is called the TLB replacement policy
  » Implemented in hardware, often simple (e.g., Last-Not-Used)

Paged Virtual Memory
- We've mentioned before that pages can be moved between memory and disk
  » This process is called demand paging
- OS uses main memory as a page cache of all the data allocated by processes in the system
  » Initially, pages are allocated from memory
  » When memory fills up, allocating a page in memory requires some other page to be evicted
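The page-cache behavior above can be sketched with a toy simulation. FIFO eviction is assumed here purely for brevity (replacement policies are a later topic), and all names (`PageCache`, `access`) are illustrative:

```python
# Toy model of main memory as a fixed-size page cache with eviction.
from collections import OrderedDict

class PageCache:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # page -> contents, kept in load order

    def access(self, page):
        """Return True on a hit; on a miss, load the page, evicting if full."""
        if page in self.frames:
            return True
        if len(self.frames) == self.num_frames:
            self.frames.popitem(last=False)   # evict the oldest page (FIFO)
            # a real OS would also write the evicted page to disk if dirty
        self.frames[page] = "loaded from disk"
        return False

cache = PageCache(num_frames=3)
for p in [1, 2, 3, 1, 4]:   # 1,2,3 miss; 1 hits; 4 misses and evicts page 1
    cache.access(p)
print(list(cache.frames))    # prints [2, 3, 4]
```

The point of the sketch is only the last step: once all frames are in use, every new allocation forces an eviction decision, which is where replacement policy enters.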