Princeton COS 318 - Memory-Mapped Files & Unified VM System


Memory-Mapped Files & Unified VM System
Vivek Pai

Mechanics
- Forgot to finish producer-consumer
- Today: memory-mapped files & unified VM

Allocating Memory
- Old days: manual tuning of memory sizes
- Benefits? Drawbacks?
[Diagram: a desktop OS and a server OS, each splitting memory among VM pages, an FS cache, and network buffers; the desktop is mostly VM pages with one FS-cache slice, while the server is mostly FS cache with a couple of VM slices]

Manual Memory Tuning
- Fixed-size allocations for VM and FS cache
- Done right, protects programs from each other:
  - Backing up the filesystem trashes only the FS cache
  - Large-memory programs don't compete with disk-bound programs
- Done poorly? Memory underutilized

What Is Main Memory?
- At some level, a cache for the disk
  - Permanent data is written back to the filesystem
  - Temporary data lives in main memory or swap
- Main memory is much faster than disk
- Consider one program that accesses lots of files and uses lots of memory
  - How do you optimize this program?
  - Could you view all accesses as page faults?

Consider Ages With Pages
- What happens if 5 FS pages are really active?
- What happens if relative demands change over time?
[Diagram: separate age lists for VM pages (ages 1, 1, 3, 5, 10, 20, 50, 100) and FS pages (ages 5, 3, 1, 1)]

Unified VM Systems
- Now what happens when a page is needed?
- What happens on disk backup?
- Did we have the same problem before?
[Diagram: a single age-ordered list mixing both kinds of pages: VM 1, VM 1, FS 1, FS 1, FS 3, VM 3, VM 5, FS 5, VM 10, VM 20, VM 50, VM 100]

Why Mmap?
- File pages are a lot like VM pages
- We don't load all of a process at once, so why load all of a file at once?
- Why copy a file just to access it?
- There's one good reason

Mmap Definition

    void *mmap(void *addr, size_t len, int prot, int flags,
               int fildes, off_t off);

- addr: where we want it mapped
- len: how much we want mapped
- prot: allow reading, writing, exec
- flags: is the mapping shared/private/anonymous? fixed or variable location? swap space reserved?
- fildes: the file being mapped
- off: starting offset in the file

Mmap Diagram
[Diagram: a process address space (code, data, heap, stack) with File A and File B mapped into regions between heap and stack]

Mmap Implications
- Number of VM regions increases
  - Was never really just code/text/heap/stack
  - Access/protection info needed on all regions
- Filesystem is no longer the sole way to access a file
  - Previously, access was via read() and write()
  - What if the same file is accessed via the filesystem and mmap?

Mmap Versus Read
- When read() completes:
  - All pages in the range were loaded at some point
  - A copy of the data is in the user's buffer
  - If the underlying file changes, the data in the user's buffer does not
- When mmap() completes:
  - The mapping of the file is complete
  - The virtual address space has been modified
  - There is no guarantee the file is in memory

Cost Comparison
- read():
  - All work (including disk I/O) is done before the call returns
  - No extra VM trickery needed
  - Contrast with write()
- mmap():
  - The inode is already in memory from open()
  - Establishing the mapping is relatively cheap
  - Pages are brought in only on access

Lazy Versus Eager
- Eager: do it right now
  - Benefit: low latency if you need it
  - Drawback: wasted work if you don't
- Lazy: do it at the last minute
  - Benefit: "pay as you go"
  - Drawback: extra work if you need it all

Double Buffering
[Diagram: with read(), the process holds its own copy of the file data alongside the FS-cache copy; with mmap(), the process maps the single FS-cache copy directly]

Sharing Memory
- Two processes map the same file shared
  - Both map it with the "shared" flag
- The same physical page is accessed by two processes at two virtual addresses
- What happens when that page is victimized (PTE mechanics)?
- Have we seen this somewhere else?

Reloading State
- Map a file at a fixed location
- Build data structures inside it
- Re-map it at program startup
- Benefits versus other approaches?

What Is a "Private" Mapping?
- The process specifies that its changes are not to be visible to other processes
- Modified pages look like ordinary VM pages:
  - Written to swap under memory pressure
  - Disposed of when the process exits

