Duke CPS 110 - I/O Caching and Page Replacement

I/O Caching and Page Replacement

Memory/Storage Hierarchy 101
- CPU and registers: very fast (1 ns clock), multiple instructions per cycle.
- SRAM cache: fast, small, expensive; volatile.
- DRAM (called "physical" or "main" memory): slow, big, cheaper ($1000-2000 per GB or so); volatile. The CPU-DRAM gap is a memory system architecture problem (CPS 104).
- Magnetic (rotational) disk: really slow (seeks), really big, really cheap ($25-40 per GB); nonvolatile. The I/O bottleneck motivates VM and file caching (CPS 110).
Goal: a cost-effective memory system (price/performance).

I/O Caching 101
Data items from secondary storage are cached in memory for faster access time. The cache is organized as a hash bucket array with hash chains of objects (hash on the object's tag), plus a free/inactive list with a head and a tail.
Methods:
- get(tag): locate the object if it is in the cache; else find a free slot and bring the object into the cache.
- release(object): release a cached object so its slot may be reused for some other object.
An I/O cache is a hash table with an integrated free/inactive list, i.e., an ordered list of eviction candidates.

Rationale for I/O Cache Structure
Goal: maintain K slots in memory as a cache over a collection of m items on secondary storage (K << m).
1. What happens on the first access to each item? Fetch it into some slot of the cache, use it, and leave it there to speed up access if it is needed again later.
2. How do we determine whether an item is resident in the cache? Maintain a directory of items in the cache: a hash table. Hash on a unique identifier (tag) for the item (fully associative).
3. How do we find a slot for an item fetched into the cache? Choose an unused slot, or select an item to replace according to some policy and evict it from the cache, freeing its slot.

Mechanism for Cache Eviction/Replacement
Typical approach: maintain an ordered free/inactive list of slots that are candidates for reuse.
- Busy items in active use are not on the list, e.g., some in-memory data structure holds a pointer to the item, or an I/O operation is in progress on the item.
- The best candidates are slots that do not contain valid items. Initially all slots are free, and they may become free again as items are destroyed (e.g., as files are removed).
- Other slots are listed in order of the value of the items they contain. These slots hold items that are valid but inactive: they are kept in memory only in the hope that they will be accessed again later.

Replacement Policy
The effectiveness of a cache is determined largely by the policy for ordering slots/items on the free/inactive list; this ordering defines the replacement policy.
A typical cache replacement policy is Least Recently Used (LRU):
- Assume hot items (used recently) are likely to be used again.
- Move the item to the tail of the free list on every release.
- The item at the front of the list is the coldest inactive item.
Other alternatives: FIFO (replace the oldest item) and MRU/LIFO (replace the most recently used item).

Example: File Block Buffer Cache
Hash on (vnode, logical block). Buffers with valid data are retained in memory in a buffer cache or file cache. Each item in the cache is a buffer header pointing at a buffer. Blocks from different files may be intermingled in the hash chains.
Most systems use a pool of buffers in kernel memory as a staging area for memory-disk transfers. System data structures hold pointers to buffers only when I/O is pending or imminent (a busy bit instead of a refcount); most buffers are free.

Why Are File Caches Effective?
1. Locality of reference: storage accesses come in clumps.
   - Spatial locality: if a process accesses data in block B, it is likely to reference other nearby data soon (e.g., the remainder of block B). Example: reading or writing a file one byte at a time.
   - Temporal locality: recently accessed data is likely to be used again.
2. Read-ahead: if we can predict which blocks will be needed soon, we can prefetch them into the cache. Most files are accessed sequentially.

Handling Updates in the File Cache
1. Blocks may be modified in memory once they have been brought into the cache. Modified blocks are dirty and must eventually be written back.
2. Once a block is modified in memory, the write-back to disk may not be immediate (synchronous).
   - Delayed writes absorb many small updates with one disk write. (How long should the system hold dirty data in memory?)
   - Asynchronous writes allow overlapping of computation and disk update activity (write-behind): issue the write call for block n+1 while the transfer of block n is in progress.
Thus file caches can also improve performance for writes.

The Page Caching Problem
Each thread/process/job utters a stream of page references, its reference string, e.g., abcabcdabce. The OS tries to minimize the number of faults incurred.
The set of pages actively used by each job (its working set) changes relatively slowly. Try to arrange for the resident set of pages for each active job to closely approximate its working set.
Replacement policy is the key: on each page fault, select a victim page to evict from memory, then read the new page into the victim's frame. Most systems try to approximate an LRU policy.

VM Page Cache Internals
Hash on (memory object/segment, logical block).
1. Pages in active use are mapped through the page tables of one or more processes.
2. On a fault, the global object/offset hash table in the kernel finds pages brought into memory by other processes.
3. Several page queues wind through the set of active frames, keeping track of usage.
4. Pages selected for eviction are removed from all page tables first.

Managing the VM Page Cache
Managing a VM page cache is similar to managing a file block cache, but with some new twists:
1. Pages are typically referenced by page table (pmap) entries. Must pmap_page_protect to invalidate before reusing the frame.
2. Reads and writes are implicit; the TLB hides them from the OS. How can we tell if a page is dirty? How can we tell if a page is referenced?
3. The cache manager must run policies periodically, sampling page state.
   - Continuously push dirty pages to disk to "launder" them.
   - Continuously check references to judge how hot each page is.
   - Balance accuracy with sampling overhead.

The Paging Daemon
Most OSes have one or more system processes responsible for implementing the VM page cache replacement policy. A daemon is an autonomous system process that periodically performs some housekeeping task. The paging daemon prepares for page eviction before the need arises:
- Wake up when free memory becomes low.
- Clean dirty pages by pushing them to backing store (prewrite or pageout).
- Maintain ordered lists of eviction candidates.
- Decide how much memory to allocate to the file cache, VM, etc.

LRU Approximations for Paging
Pure LRU and LFU are prohibitively expensive to implement; most references are hidden by the TLB.
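The get/release interface with an integrated free/inactive list described above can be sketched as a toy in Python. This is a model, not kernel code: the class name IOCache and the fetch callback (standing in for a disk read) are invented for illustration, and busy items are simply kept off the eviction list until released.

```python
from collections import OrderedDict

class IOCache:
    """Toy I/O cache: a dict as the hash directory, plus an ordered
    free/inactive list of eviction candidates in LRU order."""

    def __init__(self, num_slots, fetch):
        self.num_slots = num_slots      # K slots over m >> K items
        self.fetch = fetch              # callable tag -> data (simulated disk read)
        self.directory = {}             # tag -> data: the hash directory
        self.busy = set()               # tags in active use (not eviction candidates)
        self.inactive = OrderedDict()   # head = coldest item, tail = hottest

    def get(self, tag):
        """Locate the object if cached; else find a slot and bring it in."""
        if tag not in self.directory:
            if len(self.directory) == self.num_slots:
                # Cache full: evict the coldest inactive item, freeing its slot.
                victim, _ = self.inactive.popitem(last=False)
                del self.directory[victim]
            self.directory[tag] = self.fetch(tag)
        self.inactive.pop(tag, None)    # item is now busy: off the eviction list
        self.busy.add(tag)
        return self.directory[tag]

    def release(self, tag):
        """Release a cached object; move it to the tail of the free list (LRU)."""
        self.busy.discard(tag)
        self.inactive[tag] = None       # tail position = most recently used
```

A released item stays in the directory, so a later get() on the same tag hits without touching "disk"; only a get() that finds the cache full evicts the item at the head of the inactive list.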
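The read-ahead idea for sequential access can be illustrated with a minimal sketch: after serving block n, prefetch block n+1 if the last access was to block n-1. The one-block-ahead policy, make_reader, and the fetch callback are simplifying assumptions, not the slides' mechanism.

```python
def make_reader(fetch, cache):
    """Simple one-block read-ahead over a block cache (a plain dict)."""
    last = [None]  # block number of the previous read, if any
    def read_block(n):
        if n not in cache:
            cache[n] = fetch(n)             # demand fetch (a miss)
        if last[0] is not None and n == last[0] + 1 and n + 1 not in cache:
            cache[n + 1] = fetch(n + 1)     # looks sequential: prefetch next block
        last[0] = n
        return cache[n]
    return read_block
```

Once a sequential run is detected, each read finds its block already prefetched, so the demand misses stop after the first two blocks.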
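The delayed-write point (many small updates absorbed into one disk write) can be made concrete with a toy write-back cache. WriteBackCache and the dict standing in for the disk are invented for illustration; a real kernel would also bound how long dirty data may sit in memory.

```python
class WriteBackCache:
    """Toy delayed-write cache: updates dirty blocks in memory and
    pushes them to 'disk' only on flush (one write per dirty block)."""

    def __init__(self, disk):
        self.disk = disk        # dict: block number -> data (simulated disk)
        self.cache = {}         # in-memory copies of blocks
        self.dirty = set()      # blocks modified since the last write-back
        self.disk_writes = 0

    def write(self, block, data):
        self.cache[block] = data
        self.dirty.add(block)   # delayed: no disk I/O yet

    def flush(self):
        for block in sorted(self.dirty):
            self.disk[block] = self.cache[block]
            self.disk_writes += 1
        self.dirty.clear()
```

Ten one-byte-style updates to the same block cost one disk write at flush time, which is exactly the win the slides attribute to delayed writes.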
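The page caching problem above can be made concrete by counting faults for the reference string abcabcdabce under an LRU policy. count_faults is an invented helper for this sketch, not code from the slides.

```python
from collections import OrderedDict

def count_faults(reference_string, num_frames):
    """Count page faults under LRU: on a fault, evict the least
    recently used page and load the new page into its frame."""
    frames = OrderedDict()   # head = least recently used page
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: now most recently used
        else:
            faults += 1                     # fault: bring the page in
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the LRU victim
            frames[page] = None
    return faults
```

For abcabcdabce, LRU incurs 8 faults with 3 frames and 5 with 4 frames; with frames for all 5 distinct pages, only the 5 cold misses remain. This is the fault count the OS tries to minimize by approximating the working set.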

